-
In online social networks (OSN), the follower count of an account is a sign of its social influence. Some users expect to increase their follower count by following more accounts; in reality, however, more followings do not generate more followers. In this paper, we propose a two-player follow-unfollow game model and then introduce a factor for promoting cooperation. Based on the two-player follow-unfollow game, we create an evolutionary follow-unfollow game with more players to simulate a miniature social network. We design an algorithm and conduct the simulation. From the simulation, we find that our algorithm for the evolutionary follow-unfollow game is able to converge and produce a stable network. Results obtained with different values of the cooperation promotion factor show that the promotion factor increases the total connections in the network, especially by increasing the number of follow-follow connections.
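The two-player game above can be illustrated with a toy payoff function. This is a hedged sketch, not the paper's actual model: the payoff values `b` and `c` and the way the promotion factor `alpha` rewards mutual follows are our own illustrative assumptions.

```python
# Toy two-player follow/unfollow payoff (illustrative assumptions, not the
# paper's model). 'F' = follow, 'U' = unfollow.

def payoff(me, other, b=3.0, c=1.0, alpha=0.0):
    """Return my payoff: following costs c, gaining a follower yields b,
    and a mutual (follow-follow) link earns the cooperation bonus alpha."""
    p = 0.0
    if me == 'F':
        p -= c          # cost of maintaining a following
    if other == 'F':
        p += b          # benefit of gaining a follower
    if me == 'F' and other == 'F':
        p += alpha      # promotion factor rewards follow-follow connections
    return p
```

With `alpha = 0`, free-riding (`'U'` against `'F'`) beats mutual following; a large enough `alpha` reverses this, which is one intuitive way a promotion factor could increase follow-follow connections.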
-
Web applications are built to be accessible to everyone. Unfortunately, many web applications are inaccessible to people with special needs or disabilities. In this work, we show a methodology used to make web applications more accessible to a diverse group of people. The process includes two phases: evaluation and improvement. In the first phase, the Web Accessibility Barrier (WAB) score metric together with the Accessibility Failure Rate (AFR) metric are used to evaluate web applications. In the second phase, changes suggested by an accessibility checker tool are implemented in the software to improve the metric values and reach the target level of accessibility. The open-source chat application, Zulip, is used as a case study to show the effectiveness of this approach. © 2021 IEEE.
-
Our research objective is to compare the effectiveness of standard online learning methods versus the use of virtual reality in education in terms of student focus and information retention. Our proposed platform will deliver identical lesson plans in virtual reality and through standard online learning methods. Eye gaze tracking and a recall test will be used on both platforms to measure focus on the screen and retention, respectively. The ultimate goal of the project is to use this data to evaluate the effectiveness of VR as a digital learning environment. © 2021 IEEE.
-
The pervasive nature of long non-coding RNA (lncRNA) transcription in mammalian genomes has changed our protein-centric view of genomes. However, identifying lncRNAs remains an important task in discovering their functional roles across species. The rapid development of next-generation sequencing technology has opened the opportunity to discover many lncRNA transcripts, but the cost and time-consuming nature of transcriptomic verification techniques has kept the research community from focusing on lncRNA identification. To overcome these challenges, we developed LNCRI (Long Non-Coding RNA Identifier), a novel machine learning (ML)-based tool for the identification of lncRNA transcripts. We leveraged weighted k-mers, pseudo nucleotide composition, hexamer usage bias, Fickett score, open reading frame information, UTR regions, and HMMER score as the feature set to develop LNCRI. LNCRI outperformed other existing models in distinguishing lncRNA transcripts from protein-coding mRNA transcripts with high accuracy in human and mouse. LNCRI also outperformed the existing tools for cross-species prediction on chimpanzee, monkey, gorilla, orangutan, cow, pig, frog, and zebrafish. We applied the SHAP algorithm to demonstrate the importance of the most dominant features leveraged in the model. We believe our tool will support the research community in identifying lncRNA transcripts with high accuracy. The benchmark datasets and source code are available on GitHub: http://github.com/smusleh/LNCRI. © 2013 IEEE.
-
The students’ performance prediction (SPP) problem is a challenge that managers face at any institution. Collecting quantitative and qualitative educational data from many resources, such as exam centers, virtual courses, e-learning systems, and others, is not a simple task. Even after collecting data, we might face imbalanced data, missing data, biased data, and mixed data types such as strings, numbers, and letters. One of the most common challenges in this area is the large number of attributes (features); determining the most valuable features is needed to improve overall student performance. This paper proposes an evolutionary-based SPP model utilizing an enhanced form of the Whale Optimization Algorithm (EWOA) as a wrapper feature selection method to keep the most informative features and enhance the prediction quality. The proposed EWOA combines the Whale Optimization Algorithm (WOA) with the Sine Cosine Algorithm (SCA) and a Logistic Chaotic Map (LCM) to improve the overall performance of WOA. The SCA empowers the exploitation process inside WOA and minimizes the probability of getting stuck in local optima; the main idea is to enhance the worst half of the population in WOA using SCA. In addition, the LCM strategy is employed to control population diversity and improve the exploration process. We handle the imbalanced data using the Adaptive Synthetic (ADASYN) sampling technique and convert WOA to a binary variant by employing transfer functions (TFs) belonging to different families (S-shaped and V-shaped). Two real educational datasets are used, and five different classifiers are employed: Decision Trees (DT), k-Nearest Neighbors (k-NN), Naive Bayes (NB), Linear Discriminant Analysis (LDA), and LogitBoost (LB). The obtained results show that the LDA classifier is the most reliable with both datasets. In addition, the proposed EWOA outperforms other wrapper feature selection methods in the literature with the selected transfer functions.
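The binarization step described above can be sketched as follows. These are the classic S-shaped (sigmoid) and V-shaped (tanh-based) transfer functions commonly used to map a continuous metaheuristic position onto a feature-selection bit string; the specific TFs chosen in the paper may differ.

```python
import math
import random

def s_shaped(x):
    """Classic S-shaped (sigmoid) transfer function."""
    return 1.0 / (1.0 + math.exp(-x))

def v_shaped(x):
    """A common V-shaped transfer function."""
    return abs(math.tanh(x))

def binarize(position, tf=s_shaped, rng=random.random):
    """Select feature j (bit = 1) when the TF output exceeds a random draw."""
    return [1 if tf(x) > rng() else 0 for x in position]
```

A large positive component of the continuous position then almost always selects the corresponding feature, while a large negative one almost always drops it.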
-
Quadrotor UAVs are one of the most preferred types of small unmanned aerial vehicles, due to their modest mechanical structure and propulsion principle. However, the complex nonlinear dynamic behavior of these vehicles requires advanced stabilizing control, commonly realized with a Proportional Integral Derivative (PID) controller. Additionally, locating the appropriate gains for a model-based controller is relatively complex and demands a significant amount of time, as it depends on external perturbations and the dynamic modeling of the plant. Therefore, a method for tuning quadcopter PID parameters can save effort and time, and better control performance can be realized. Traditional methods for tuning quadcopter PIDs, such as Ziegler–Nichols (ZN), do not provide optimal control and might leave the system with potential instability and cause significant damage. One approach that alleviates the tough task of nonlinear control design is the use of meta-heuristics that permit appropriate control actions. This study presents PID controller tuning using meta-heuristic algorithms, namely Genetic Algorithms (GAs), the Crow Search Algorithm (CSA), and Particle Swarm Optimization (PSO), to stabilize quadcopter movements. These meta-heuristics tune the position and orientation PID controllers based on a fitness function proposed to reduce overshoot by predicting future paths. The obtained results confirmed the efficacy of the proposed controllers, based on GA, CSA, and PSO, in effectively and reliably controlling the flight of a quadcopter. Finally, the simulation results show that quadcopter movement control using PSO achieves impressive results compared to GA and CSA.
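A metaheuristic PID tuner needs a fitness function to score each candidate gain triple. As a hedged sketch (not the paper's plant or fitness function), the following simulates a step response on a toy first-order plant and penalizes tracking error and overshoot; the plant model, time constant, and penalty weight are all illustrative assumptions.

```python
def pid_fitness(kp, ki, kd, setpoint=1.0, dt=0.01, steps=2000, tau=0.5):
    """Lower is better: integral of absolute error plus an overshoot penalty,
    computed by simulating a discrete PID loop on the toy plant
    dy/dt = (u - y) / tau (an assumption for illustration)."""
    y, integ, prev_err, cost = 0.0, 0.0, setpoint, 0.0
    for _ in range(steps):
        err = setpoint - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv      # PID control law
        prev_err = err
        y += dt * (u - y) / tau                     # first-order plant update
        cost += abs(err) * dt                       # IAE term
        cost += 10.0 * max(0.0, y - setpoint)       # overshoot penalty (assumed weight)
    return cost
```

A GA, CSA, or PSO tuner would then search over `(kp, ki, kd)` vectors, minimizing `pid_fitness`; any stabilizing gains should score far below the do-nothing controller.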
-
Obstructive sleep apnea (OSA) is a well-known sleep ailment. OSA mostly occurs due to a shortage of oxygen in the human body, which causes several symptoms (i.e., low concentration, daytime sleepiness, and irritability). Discovering the existence of OSA at an early stage can save lives and reduce the cost of treatment. A computer-aided diagnosis (CAD) system can quickly detect OSA by examining electrocardiogram (ECG) signals. Reviewing ECGs through a visual procedure is challenging for physicians, time-consuming, expensive, and subjective. In general, automated detection of arrhythmia in ECG signals is a complex task due to the quantity of data and its clinical content. Moreover, ECG signals are usually affected by noise (e.g., patient movement and disturbances generated by electric devices or infrastructure), which reduces the quality of the collected data. Machine learning (ML) and deep learning (DL) have gained increasing interest in healthcare systems due to their ability to achieve excellent performance compared to traditional classifiers. In this work, we propose a CAD system to diagnose apnea events from ECGs in an automated way. The proposed system follows these steps: (1) remove noise from the ECG signal using a notch filter; (2) extract nine features from the ECG signal; and (3) apply thirteen ML models and four types of DL models for the diagnosis of sleep apnea. The experimental results show that the DL classifiers in our proposed approach perform well in detecting OSA. The proposed model achieves an accuracy of 86.25% in the validation stage.
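Step (1) above, notch filtering, can be sketched with a standard second-order (biquad) notch; the coefficient formulas are the widely used RBJ cookbook ones, and the center frequency, sampling rate, and Q below are illustrative assumptions, since the paper's exact filter design is not specified here.

```python
import math

def notch_coeffs(f0, fs, q=30.0):
    """Biquad notch coefficients (RBJ cookbook) for center frequency f0 Hz
    at sampling rate fs Hz, normalized so a[0] == 1."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [x / a[0] for x in b], [x / a[0] for x in a]

def filt(b, a, signal):
    """Direct-form I: y[n] = b0 x[n] + b1 x[n-1] + b2 x[n-2] - a1 y[n-1] - a2 y[n-2]."""
    y = []
    for n, x in enumerate(signal):
        acc = b[0] * x
        if n >= 1:
            acc += b[1] * signal[n - 1] - a[1] * y[n - 1]
        if n >= 2:
            acc += b[2] * signal[n - 2] - a[2] * y[n - 2]
        y.append(acc)
    return y
```

For example, a 50 Hz interference tone sampled at 500 Hz is almost completely removed once the filter's transient dies out, while tones far from 50 Hz pass nearly unchanged.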
-
Data classification is a challenging problem that is very sensitive to noise and high data dimensionality. Reducing model complexity can help improve the accuracy of the classification model. Therefore, in this research, we propose a novel feature selection technique based on a Binary Harris Hawks Optimizer with a Time-Varying Scheme (BHHO-TVS). The proposed BHHO-TVS adopts a time-varying transfer function that leverages the influence of the location vector to balance the exploration and exploitation power of the HHO. Eighteen well-known datasets from the UCI repository were utilized to show the significance of the proposed approach. The reported results show that BHHO-TVS outperforms BHHO with traditional binarization schemes as well as other binary feature selection methods such as the binary gravitational search algorithm (BGSA), binary particle swarm optimization (BPSO), the binary bat algorithm (BBA), the binary whale optimization algorithm (BWOA), and the binary salp swarm algorithm (BSSA). Compared with similar feature selection approaches introduced in previous studies, the proposed method achieves the best accuracy rates on 67% of the datasets.
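The idea of a time-varying transfer function can be sketched as follows: a control parameter `tau` shrinks over iterations, so the function is flat early (bit flips are likelier, favoring exploration) and steep late (favoring exploitation). The linear decay schedule, the tanh-based V-shape, and the bounds on `tau` are all illustrative assumptions, not necessarily the scheme used in BHHO-TVS.

```python
import math

def tau(t, t_max, tau_min=0.01, tau_max=4.0):
    """Linearly decaying control parameter (assumed schedule)."""
    return tau_max - (tau_max - tau_min) * t / t_max

def tv_v_shaped(x, t, t_max):
    """Time-varying V-shaped transfer function: same position component x
    yields a small flip probability early and a large one late."""
    return abs(math.tanh(x / tau(t, t_max)))
```

In a binary optimizer, `tv_v_shaped(x_j, t, t_max)` would be compared against a uniform random draw to decide whether to flip bit `j` at iteration `t`.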
-
The operation and planning of distribution grids require the joint processing of measurements from different grid locations. Since measurement devices in low- and medium-voltage grids lack precise clock synchronization, it is important for the data management platforms of distribution system operators to be able to account for the impact of nonideal clocks on measurement data. This paper formally introduces a metric termed the Additive Alignment Error to capture the impact of misaligned averaging intervals of electrical measurements. A trace-driven approach to retrieving this metric would be computationally costly for measurement devices, and it therefore requires an online estimation procedure in the data collection platform. To avoid transmitting high-resolution measurement data, this paper proposes and assesses an extension of a Markov-modulated process to model electrical traces, from which a closed-form matrix analytic formula for the Additive Alignment Error is derived. A trace-driven assessment confirms the accuracy of the model-based approach. In addition, the paper describes practical settings where the model can be utilized in data management platforms with significant reductions in the computational demands on measurement devices. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
-
This paper focuses on how zoom touchscreen gestures can be used to continuously authenticate and identify smartphone users. The zoom gesture is critically under-researched as a behavioral biometric despite the richness of data found in this gesture. Furthermore, analysing how the zoom gesture performs over time is a novel line of inquiry. Zoom samples from three different data collection sessions were sourced. In these sessions, each participant zoomed in and out on three images. Eighty-five features were extracted from each gesture. The classification models used were Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbor (KNN). The best authentication performance, an AUC of 0.937 and an EER of 10.6%, was achieved using the SVM classifier. The best identification performance, 65.5% accuracy, 69.6% precision, and 67.9% recall, was achieved using the RF classifier. In terms of stability over time, SVM proved to be the most stable classifier, with an AUC degradation of only 0.007 after two weeks had elapsed. This analysis shows that zoom gestures hold promise for use in continuous smartphone authentication and identification applications. © 2021 Elsevier Ltd
-
Authorship attribution identifies the true author of an unknown document. It plays a crucial role in plagiarism detection and blackmailer identification; however, existing studies on authorship attribution in Bengali are limited. In this paper, we propose an instance-based deep authorship attribution model, called DAAB, to identify authors in Bengali. Our DAAB model fuses features from convolutional neural networks with another set of features from an artificial neural network to learn the stylometry of an author for authorship attribution. Extensive experiments with three real benchmark datasets, Bengali-Quora and two online Bengali corpora, demonstrate the superiority of our authorship attribution model. © 2021 IEEE.
-
A multi-biometric verification system lowers verification errors by fusing information from multiple biometric sources. Information can be fused in parallel or serial modes. While parallel fusion gives higher accuracy, it may suffer from the serious problem of longer verification times. Serial fusion can alleviate this problem by allowing users to submit a subset of the available biometric characteristics. Unfortunately, several studies show that serial fusion may not reach the level of accuracy of parallel fusion. In this paper, we propose a fusion framework which combines the advantages of both parallel and serial fusion. The core of the framework is a new concept of a “confident reject region” which incurs nearly zero verification error. We evaluate our framework by performing experiments on two multi-biometric verification systems built with NIST biometric scores set release 1. The experimental results show that our framework achieves a lower equal error rate and takes a shorter verification time than standard parallel fusion. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
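The general shape of serial fusion with confidence regions can be sketched as follows. This is a generic illustration, not the paper's actual decision rule: the thresholds, the fallback rule, and treating the reject side symmetrically with the accept side are all assumptions.

```python
def serial_verify(scores, accept_thr=0.8, reject_thr=0.2):
    """Consult biometrics in order. Scores in the confident accept/reject
    regions decide immediately; scores in between defer to the next
    biometric. Returns (decision, number of biometrics consulted)."""
    for i, s in enumerate(scores):
        if s >= accept_thr:
            return 'accept', i + 1
        if s <= reject_thr:
            return 'reject', i + 1
        # uncertain region: ask the user for the next biometric
    # all biometrics used and still uncertain: fall back to a plain threshold
    return ('accept' if scores[-1] >= 0.5 else 'reject'), len(scores)
```

Confident samples are thus decided with one biometric (fast, as in serial fusion), while only borderline samples pay the cost of additional captures (accurate, as in parallel fusion).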
-
In this paper, we develop a new point-of-entry security measure for smartphone users. We devise a concept, the “Quad Swipe Pattern”, which comprises four swipes from a user in four directions and utilizes the user’s swipe behavior for authentication. The Quad Swipe Pattern overcomes several shortcomings present in current point-of-entry security measures. We performed several experiments to demonstrate the effectiveness of the Quad Swipe Pattern in smartphone user authentication, evaluating it with five machine learning classifiers, three datasets of different sizes, and five different fingers. In addition, we studied how fusion of information from multiple fingers and multiple classifiers can improve the performance of the Quad Swipe Pattern. All of our experimental results show significant promise for the Quad Swipe Pattern as a new point-of-entry security measure for smartphones. With a Neural Network model, the Quad Swipe Pattern achieves an accuracy of 99.7%, a False Acceptance Rate of 0.4%, and a False Rejection Rate of 0%. With a Support Vector Machine, it achieves an accuracy of 99.5%, a False Acceptance Rate of 0.4%, and a False Rejection Rate of 1.7%. With fusion of the two best fingers, the Quad Swipe Pattern demonstrates excellent performance with a zero Equal Error Rate.
-
Image reconstruction based on Electrical Capacitance Tomography (ECT) has been broadly applied in industrial applications. The goal of ECT-based image reconstruction is to locate the permittivity distribution of the dielectric substances along the cross-section based on the collected capacitance data. The ECT-based image reconstruction process faces three difficulties: (1) the relationship between capacitance measurements and permittivity distribution is nonlinear; (2) the capacitance measurements collected during image reconstruction are inadequate due to the limited number of electrodes; and (3) the reconstruction process is subject to noise, leading to an ill-posed problem. Hence, constructing an accurate algorithm for real images is critical to overcoming such restrictions. This paper presents novel image reconstruction methods using deep learning to solve the forward and inverse problems of the ECT system, generating high-quality images of conductive materials in the Lost Foam Casting (LFC) process. Here, Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) models were implemented to predict the distribution of metal filling for the ECT-based LFC process. The recurrent connections and the gating mechanism of the LSTM are capable of extracting the contextual information that repeatedly passes through the neural network while filtering out the noise caused by adverse factors. Experimental results showed that the presented ECT-LSTM-RNN model is highly reliable for industrial applications and can be utilized for other manufacturing processes. © 2013 IEEE.
-
Person detection is often critical for personal safety, property protection, and national security. Most person detection technologies implement unimodal classification, making predictions based on a single sensor data modality, most often vision. There are many ways to defeat unimodal person detectors, and many more reasons to ensure that technologies responsible for detecting the presence of a person are accurate and precise. In this paper, we design and implement a multimodal person detection system which can acquire data from multiple sensors and detect persons based on a variety of unimodal classifications and multimodal fusions. We present two methods of generating system-level predictions: (1) device perspectives, which makes a final decision based on multiple device-level predictions, and (2) system perspectives, which combines data samples from multiple devices into a single data sample and then makes a decision. Our experimental results show that system-level predictions from system perspectives are generally more accurate than those from device perspectives. We achieve an accuracy of 100%, a zero false positive rate, and a zero false negative rate with fusion of system-perspective motion and distance data. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC part of Springer Nature.
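The two system-level prediction methods above can be sketched generically. This is a hedged illustration of the two fusion shapes, not the paper's implementation: the majority-vote rule for device perspectives and simple feature concatenation for system perspectives are assumptions, and `classify` stands in for any trained detector.

```python
def device_perspectives(samples, classify):
    """Decision-level fusion: classify each device's sample separately,
    then take a majority vote over the per-device decisions (assumed rule)."""
    votes = [classify(s) for s in samples]
    return sum(votes) > len(votes) / 2

def system_perspectives(samples, classify):
    """Data-level fusion: concatenate all device samples into one feature
    vector and make a single decision on it."""
    merged = [x for s in samples for x in s]
    return classify(merged)
```

The difference is where fusion happens: device perspectives fuse after each device has already committed to a decision, while system perspectives let one classifier see all modalities at once.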
-
Cardiovascular disease (CVD) is reported to be the leading cause of mortality in Middle Eastern countries, including Qatar, yet no comprehensive study has been conducted on identifying Qatar-specific CVD risk factors. The objective of this case-control study was to develop a machine learning (ML) model distinguishing healthy individuals from people with CVD, which could ultimately reveal the list of potential risk factors associated with CVD in Qatar. To the best of our knowledge, this study considered the largest collection of biomedical measurements, spanning anthropometric measurements, clinical biomarkers, bioimpedance, spirometry, VICORDER readings, and behavioral factors, for the CVD group from Qatar Biobank (QBB). A CatBoost model achieved 93% accuracy, thereby outperforming the existing model for the same purpose. Interestingly, the proposed ML model built on the combined multimodal datasets outperformed the ML model built upon currently known CVD risk factors, emphasizing the importance of incorporating other clinical biomarkers into the CVD diagnosis plan. An ablation study on the multimodal QBB dataset revealed that physio-clinical and bioimpedance measurements have the most distinguishing power to classify these two groups, irrespective of the gender and age of the participants. Multiple feature subset selection techniques confirmed known CVD risk factors (blood pressure, lipid profile, smoking, sedentary lifestyle, and diabetes) and identified potential novel risk factors linked to CVD-related comorbidities such as renal disorder (e.g., creatinine, uric acid, homocysteine, albumin), atherosclerosis (intima-media thickness), a hypercoagulable state (fibrinogen), and liver function (e.g., alkaline phosphatase, gamma-glutamyl transferase). Moreover, including the proposed novel factors in the ML model yields better performance than the model with traditionally known CVD risk factors. The associations of the proposed risk factors and comorbidities need to be investigated in a clinical setup to better understand their role in CVD. © 2013 IEEE.
-
Poor security practices among smartphone users, such as the use of simple, easily guessed passcodes for logins, are a result of the effort required to memorize stronger ones. In this paper, we devise the concept of an “open code” biometric tap pad to authenticate smartphone users, which eliminates the need to memorize secret codes. A biometric tap pad consists of a grid of buttons, each labeled with a unique digit. The user attempting to log into the phone taps these buttons in a given sequence. He/she does not memorize this tap sequence; instead, the sequence is displayed on the screen. The focus here is on how the user types the sequence, and this typing behavior is used for authentication. An open code biometric tap pad has several advantages: (1) users do not need to memorize passcodes, (2) manufacturers do not need to include extra sensors, and (3) onlookers have no opportunity to practice shoulder-surfing. We designed three tap pads and incorporated them into an Android app. We evaluated the performance of these tap pads by experimenting with three sequence styles and five different fingers: two thumbs, two index fingers, and the “usual” finger. We collected data from 33 participants over two weeks and tested three machine learning algorithms: Support Vector Machine, Artificial Neural Network, and Random Forest. Experimental results show significant promise for open code biometric tap pads as a solution to the problem of weak smartphone security practices among a large segment of the population.
-
Diabetes is one of the leading fatal diseases globally, putting a huge burden on the global healthcare system. Early diagnosis of diabetes is hence of utmost importance and could save many lives. However, current techniques to determine whether a person has diabetes, or is at risk of developing it, rely primarily on clinical biomarkers. In this article, we propose a novel deep learning architecture to predict whether a person has diabetes from a photograph of his/her retina. Using a relatively small dataset, we develop a multi-stage convolutional neural network (CNN)-based model, DiaNet, that reaches an accuracy level of over 84% on this task and, in doing so, successfully identifies the regions of the retina images that contribute to its decision-making process, as corroborated by medical experts in the field. To the best of our knowledge, this is the first study to highlight the distinguishing capability of retinal images for diabetes patients in the Qatari population. Comparing the performance of DiaNet against existing clinical-data-based machine learning models, we conclude that retinal images contain sufficient information to distinguish the Qatari diabetes cohort from the control group. In addition, our study reveals that retinal images may contain prognostic markers for diabetes and other comorbidities like hypertension and ischemic heart disease. The results lead us to believe that the inclusion of retinal images in the clinical setup for the diagnosis of diabetes is warranted in the near future.
-
The performance of any meta-heuristic algorithm depends highly on the settings of its dependent parameters, and different parameter settings may lead to different outcomes. An optimal parameter setting should support the algorithm in achieving a convincing level of performance or optimality across a range of optimization problems. This paper presents a novel enhancement of the salp swarm algorithm (SSA), referred to as the enhanced SSA (ESSA). In ESSA, the following enhancements are proposed: first, a new position updating process; second, a new dominant parameter different from that used in SSA; and third, a novel lifetime convergence method for tuning the dominant parameter of ESSA using ESSA itself, to enhance its convergence performance. These enhancements augment the exploration and exploitation capabilities of SSA to achieve optimal global solutions: the dominant parameter of ESSA is updated iteratively through the evolutionary process so that the positions of the search agents are updated accordingly. These improvements help ESSA avoid premature convergence and efficiently find the global optimum for many real-world optimization problems. The efficiency of ESSA was verified by testing it on several basic benchmark test functions, and a comparative performance analysis between ESSA and other meta-heuristic algorithms was performed. Statistical tests evidenced the significance of the results obtained by ESSA. The efficacy of ESSA in solving real-world problems and applications is also demonstrated with five well-known engineering design problems and two real industrial problems. The comparative results show that ESSA delivers better performance and convergence than SSA and other meta-heuristic algorithms.
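For context, the baseline SSA position update that ESSA modifies can be sketched as follows; the ESSA-specific updating rule and its self-tuned dominant parameter are not reproduced here. `F` is the food source (best solution so far), and `c1 = 2 * exp(-(4l/L)^2)` is the dominant parameter of the original SSA, with `l` the current iteration and `L` the iteration budget.

```python
import math
import random

def ssa_step(swarm, F, lb, ub, l, L, rng=random.random):
    """One iteration of baseline SSA: the leading half of the chain moves
    around the food source F; each follower moves to the midpoint between
    itself and the previous salp (using pre-update positions, a simplifying
    assumption). Positions are clamped to [lb, ub] per dimension."""
    c1 = 2 * math.exp(-((4 * l / L) ** 2))   # dominant parameter of SSA
    new = []
    for i, x in enumerate(swarm):
        if i < len(swarm) // 2:              # leader salps
            pos = [F[j] + (1 if rng() >= 0.5 else -1) * c1 *
                   ((ub[j] - lb[j]) * rng() + lb[j]) for j in range(len(x))]
        else:                                # follower salps
            pos = [(x[j] + swarm[i - 1][j]) / 2 for j in range(len(x))]
        new.append([min(max(p, lb[j]), ub[j]) for j, p in enumerate(pos)])
    return new
```

Because `c1` decays with the iteration counter, leaders roam widely early (exploration) and shrink toward `F` late (exploitation); ESSA's contribution is, in part, to tune this dominant parameter adaptively rather than by a fixed schedule.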