Your search
Results: 291 resources
-
In this paper, we present a Neighborhood Search Genetic Algorithm (NSGA) for mobile robot path planning. GAs have been used successfully in a variety of path planning problems because they can search the space of all possible paths and provide the optimal one. However, the convergence of GAs can be slow compared to traditional techniques that rely on local search. We propose a hybrid approach that combines the advantages of GAs and local search algorithms. The GA creates a multi-waypoint path that allows a mobile robot to navigate through static obstacles and find the optimal collision-free path to the target location. The proposed NSGA has been examined on four path planning case studies of varying complexity, and the performance of the enhanced GA has been compared with the A-star (A∗) algorithm, a standard GA, and the particle swarm optimization (PSO) algorithm. The obtained results show that the proposed approach achieves good results compared to the other algorithms. © 2019 ACM.
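A GA needs a fitness function to rank candidate waypoint paths; the abstract does not give the paper's exact formulation, so the sketch below assumes a common one: total path length plus a heavy penalty for any waypoint that falls inside a (circular) obstacle.

```python
import math

def path_fitness(waypoints, obstacles, penalty=1000.0):
    """Score a candidate waypoint path: total Euclidean length plus a
    heavy penalty for each waypoint inside a circular obstacle.
    Lower is better. `obstacles` holds (cx, cy, radius) tuples."""
    length = sum(math.dist(a, b) for a, b in zip(waypoints, waypoints[1:]))
    collisions = sum(
        1
        for (x, y) in waypoints
        for (cx, cy, r) in obstacles
        if math.dist((x, y), (cx, cy)) < r
    )
    return length + penalty * collisions

# A detour around the obstacle at (5, 5) beats a shorter path through it.
obs = [(5, 5, 1.0)]
safe = [(0, 0), (5, 8), (10, 0)]
blocked = [(0, 0), (5, 5), (10, 0)]
assert path_fitness(safe, obs) < path_fitness(blocked, obs)
```

A GA would evolve the intermediate waypoints while keeping the start and target fixed, selecting paths with the lowest fitness.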
-
With growing access to technology in the medical domain, an increasing volume of medical data is recorded. The size and complexity of these data make the discovery of meaningful, beneficial patterns more challenging, a problem that has attracted numerous researchers around the world. Statistical methods have been employed to handle medical data for diagnosis, but they are less capable of dealing with such massive and complex datasets. To address this, we propose a process for classifying medical data that combines feature selection with classification using several supervised learning techniques. Binary Brain Storm Optimization (BBSO), a population-based search approach that simulates the process of electing the best idea (solution) among others, is used for feature selection. We evaluated six classifiers: Naive Bayes, K-Nearest Neighbor, Support Vector Machine, Linear Discriminant Analysis, Decision Tree, and Random Forest. Five datasets from the UCI Machine Learning Repository (Breast Cancer, Diabetes, Heart Disease, Chronic Kidney, and SPECT) serve as benchmark test data. The performance of BBSO is evaluated by classification accuracy on these datasets across the various classifiers. Experimental results show that the proposed approach improves classification performance for better medical diagnosis. © 2019 ACM.
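BBSO here acts as a wrapper feature selector: each candidate binary feature mask is scored by the accuracy a classifier achieves using only the selected features. A minimal sketch of such a wrapper fitness (the 1-NN classifier, the alpha weight, and the tiny dataset are illustrative assumptions, not the paper's setup):

```python
import math

def one_nn_accuracy(X_train, y_train, X_test, y_test, mask):
    """1-nearest-neighbour accuracy using only features whose mask bit is 1."""
    idx = [i for i, m in enumerate(mask) if m]
    if not idx:
        return 0.0
    def dist(a, b):
        return math.sqrt(sum((a[i] - b[i]) ** 2 for i in idx))
    correct = 0
    for xq, yq in zip(X_test, y_test):
        pred = min(zip(X_train, y_train), key=lambda p: dist(p[0], xq))[1]
        correct += pred == yq
    return correct / len(y_test)

def wrapper_fitness(mask, X_train, y_train, X_test, y_test, alpha=0.99):
    """Wrapper-style fitness: reward accuracy, lightly reward dropping features."""
    acc = one_nn_accuracy(X_train, y_train, X_test, y_test, mask)
    return alpha * acc + (1 - alpha) * (1 - sum(mask) / len(mask))

# Feature 0 separates the classes; feature 1 is pure noise.
X_tr, y_tr = [(0.0, 5.0), (1.0, 0.0)], [0, 1]
X_te, y_te = [(0.1, 0.0), (0.9, 5.0)], [0, 1]
assert (wrapper_fitness((1, 0), X_tr, y_tr, X_te, y_te)
        > wrapper_fitness((0, 1), X_tr, y_tr, X_te, y_te))
```

The optimizer then searches over bit masks, keeping those with the highest fitness.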
-
Keystroke dynamics has been used as a form of one-time user authentication and continuous verification, especially for securing cyberspace. In this paper, we present the idea of using keystroke dynamics as a second-layer authentication mechanism in web applications. We show that this method can authenticate a user with high accuracy and can serve as an alternative to the CAPTCHA tests, security questions, and image selections in use today. We have developed a working web-based platform in a browser environment that enforces the proposed second-layer security. We performed penetration test experiments by launching a total of 598,500 impostor and genuine authentication attempts and obtained an Equal Error Rate (EER) of 10.5%. © 2019 IEEE.
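The Equal Error Rate reported above is the operating point where the false accept rate equals the false reject rate. A minimal sketch of how an EER can be computed from genuine and impostor score lists by a threshold sweep (the paper's exact estimation procedure is not specified here):

```python
def equal_error_rate(genuine, impostor):
    """Estimate the EER by sweeping a decision threshold over all observed
    scores (higher score = stronger match) and returning the mean of FAR
    and FRR at the threshold where they are closest."""
    best_gap, eer = float("inf"), 1.0
    for t in sorted(set(genuine) | set(impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

assert equal_error_rate([0.9, 0.8], [0.1, 0.2]) == 0.0  # perfect separation
assert abs(equal_error_rate([0.9, 0.8, 0.7, 0.2],
                            [0.1, 0.3, 0.4, 0.6]) - 0.25) < 1e-9
```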
-
The growth in the use of smart wireless devices over the last few decades has given rise to indoor localization services (ILS). Indoor localization is the process of determining a user's location in an indoor environment. It has been widely studied due to its applications in public settlement planning, health care, disaster management, location-based services (LBS), and the Internet of Things (IoT). The ILS problem can be formulated as a learning problem using Wi-Fi technology: measured Wi-Fi signal strength can indicate the distribution of users across various indoor locations. A classification model with high accuracy can be developed using machine learning, and artificial neural networks are among its most successful techniques. In this article, we present our initial idea of using a Cascaded Layered Recurrent Neural Network (L-RNN) to classify user location in an indoor environment. Several neural network models were trained, and the best performance attained is reported. The experimental results show that the presented L-RNN model is highly accurate for indoor localization and can be utilized in many applications. © 2019 IEEE.
-
Web applications are built to be accessible to everyone. Unfortunately, many are inaccessible to people with special needs or disabilities. In this work, we present a methodology for making web applications more accessible to a diverse group of people. The process includes two phases: evaluation and improvement. In the first phase, the Web Accessibility Barrier (WAB) score and the Accessibility Failure Rate (AFR) metrics are used to evaluate web applications. In the second phase, changes suggested by an accessibility checker tool are implemented in the software to improve the metric values and reach the target level of accessibility. The open-source chat application Zulip is used as a case study to show the effectiveness of this approach. © 2021 IEEE.
-
Our research objective is to compare the effectiveness of standard online learning methods with the use of virtual reality in education, in terms of student focus and information retention. Our proposed platform will present identical lesson plans in virtual reality and through standard online learning methods. Eye-gaze tracking and a recall test will be used on both platforms to measure on-screen focus and retention, respectively. The ultimate goal of the project is to use these data to evaluate the effectiveness of VR as a digital learning environment. © 2021 IEEE.
-
The pervasive nature of long non-coding RNA (lncRNA) transcription in mammalian genomes has changed our protein-centric view of genomes, and identifying lncRNAs is an important step toward discovering their functional roles in species. The rapid development of next-generation sequencing technology has created the opportunity to discover many lncRNA transcripts. However, the cost and time-consuming nature of transcriptomic verification techniques has kept much of the research community from focusing on lncRNA identification. To overcome these challenges, we developed LNCRI (Long Non-Coding RNA Identifier), a novel machine learning (ML)-based tool for identifying lncRNA transcripts. We leveraged weighted k-mers, pseudo-nucleotide composition, hexamer usage bias, Fickett score, open reading frame information, UTR regions, and HMMER score as the feature set for LNCRI. LNCRI outperformed existing models in distinguishing lncRNA transcripts from protein-coding mRNA transcripts with high accuracy in human and mouse. It also outperformed existing tools in cross-species prediction on chimpanzee, monkey, gorilla, orangutan, cow, pig, frog, and zebrafish. We applied the SHAP algorithm to demonstrate the importance of the most dominant features used in the model. We believe our tool will help the research community identify lncRNA transcripts with high accuracy. The benchmark datasets and source code are available on GitHub: http://github.com/smusleh/LNCRI. © 2013 IEEE.
-
Crow Search Algorithm (CSA) is a promising meta-heuristic method developed based on the intelligent conduct of crows in nature. This algorithm lacks a good representation of its individuals' memory, and, as with many other meta-heuristics, it faces a problem in efficiently balancing exploration and exploitation. These defects may lead to early convergence to local optima. To cope with such issues, we proposed a Memory-based Hybrid CSA (MHCSA) using the Particle Swarm Optimization (PSO) algorithm. This hybridization reinforces the diversity ability of CSA and balances its search for promising solutions to achieve robust performance. The memory element of MHCSA is initialized with the best solutions (pbest) of PSO to exploit the most promising search areas, and the best positions of CSA's individuals are improved using both the best solution found so far (gbest) and the pbest of PSO. Another flaw of CSA is the use of fixed flight length and awareness probability to control its exploration and exploitation features, respectively. This issue was circumvented here by replacing these constants with adaptive functions that provide a better balance between exploration and exploitation over the course of iterations. The competence of MHCSA was revealed by testing it on seventy-three standard and computationally complex benchmark functions, and its applicability was substantiated by solving seven engineering design problems. The results showed that MHCSA eliminates the problem of early convergence and further improves the balance of exploration and exploitation. Moreover, MHCSA ranked first among CSA, PSO, robust variants of CSA, and other strong competing methods in terms of accuracy and stability. © 2022, The Author(s), under exclusive licence to Springer Nature B.V.
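The abstract says MHCSA replaces CSA's fixed flight length and awareness probability with adaptive functions but does not give them; one plausible sketch (an assumption, not the paper's exact functions) is a linear decay over iterations, shifting the search from exploration toward exploitation:

```python
def adaptive_params(t, T, fl_max=2.5, fl_min=0.5, ap_max=0.3, ap_min=0.05):
    """Linearly decay flight length (fl) and awareness probability (ap)
    from exploratory to exploitative values as iteration t approaches T.
    The bounds are illustrative defaults."""
    frac = t / T
    fl = fl_max - (fl_max - fl_min) * frac
    ap = ap_max - (ap_max - ap_min) * frac
    return fl, ap

fl0, ap0 = adaptive_params(0, 100)
flT, apT = adaptive_params(100, 100)
assert (fl0, ap0) == (2.5, 0.3)  # start: long, exploratory moves
assert abs(flT - 0.5) < 1e-12 and abs(apT - 0.05) < 1e-12  # end: fine-grained
```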
-
Modeling nonlinear industrial systems involves two key stages: selecting a model structure with a compact parameter list, and selecting an algorithm to estimate the parameter values. There is thus a need for a sufficiently adequate model of an industrial system's behavior to represent experimental data sets. The data collected from many industrial systems can exhibit high non-linearity and multiple constraints, and a comprehensive model of an industrial process is essential for model-based control. In this work, we explore the use of a proposed Enhanced Cuckoo Search (ECS) algorithm to address parameter estimation for both linear and nonlinear model structures of a real winding process. The performance of the developed models was compared with other mainstream meta-heuristics targeted at modeling the same process, as well as with models developed using conventional modeling methods. Several evaluation tests were performed to judge the efficiency of the ECS-based models, which showed superior performance in both training and testing over that achieved by the other modeling methods. © 2022, This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply.
-
Acute Lymphoblastic Leukemia (ALL) is a life-threatening type of cancer with a high mortality rate. Early detection of ALL can reduce fatality and improve the diagnosis plan for patients. In this study, we developed the ALL Detector (ALLD), a deep learning-based network that distinguishes ALL patients from healthy individuals based on microscopic images of blast cells. We evaluated multiple DL-based models; the ResNet-based model performed best, with 98% accuracy on the classification task. We also compared ALLD against state-of-the-art tools for the same purpose, and ALLD outperformed them all. We believe that ALLD will help pathologists diagnose ALL in its early stages and reduce the overall burden on clinical practice. © 2022 The authors and IOS Press.
-
E-commerce giants like Amazon rely on consumer reviews to let buyers inform other potential buyers about a product's pros and cons. While these reviews can be useful, they are less so when their number is large; no consumer can be expected to read hundreds or thousands of reviews to gain a better understanding of a product. As an aggregate representation of reviews, Amazon offers an average user rating represented by a 1- to 5-star score, but this score only represents how reviewers feel about a product without providing insight into why they feel that way. In this work, we propose an AI technique that generates an easy-to-read, concise summary of a product based on its reviews. It provides an overview of the different aspects reviewers emphasize and, crucially, how they feel about those aspects. Our methodology generates a list of the topics most mentioned by reviewers, conveys reviewer sentiment for each topic, and calculates an overall summary score that reflects reviewers' overall sentiment about the product. These sentiment scores adopt the same 1- to 5-star scale in order to remain familiar to Amazon users. © 2022, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
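The mapping from reviewer sentiment to the familiar 1- to 5-star scale can be sketched as a linear rescaling (assuming per-topic sentiment scores in [-1, 1]; the paper's exact aggregation may differ, e.g. in how topics are weighted):

```python
def to_star_score(sentiments):
    """Linearly rescale per-topic sentiments in [-1, 1] to 1-5 stars and
    average them into a single summary score."""
    stars = [1 + 2 * (s + 1) for s in sentiments]  # -1 -> 1 star, +1 -> 5 stars
    return round(sum(stars) / len(stars), 1)

assert to_star_score([0.0]) == 3.0        # neutral topic -> 3 stars
assert to_star_score([1.0, -1.0]) == 3.0  # one loved, one hated topic
assert to_star_score([1.0]) == 5.0
```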
-
Dual-energy X-ray absorptiometry (DXA) has traditionally been used to assess body composition, covering bone, fat, and muscle content. Cardiovascular disease (CVD) has deleterious effects on bone health and fat composition; therefore, early assessment of bone health and of fat and muscle composition would support timely diagnosis and treatment planning for CVD patients. In this study, we leveraged machine learning (ML)-based models to predict CVD from DXA, demonstrating that it can be considered an innovative approach for early detection of CVD. We applied state-of-the-art ML models to separate the CVD group from the non-CVD group; the proposed logistic regression-based model achieved nearly 80% accuracy. Overall, bone mineral density, fat content, muscle mass, and bone surface area measurements were elevated in the CVD group compared to the non-CVD group. An ablation study revealed greater discriminatory power for fat content and bone mineral density than for muscle mass and bone area. To the best of our knowledge, this is the first ML model to reveal the association between DXA measurements and CVD in the Qatari population. We believe this study will open new avenues for introducing DXA into the diagnosis and treatment planning of cardiovascular disease. © 2022 The authors and IOS Press.
-
The students' performance prediction (SPP) problem is a challenging one that managers face at any institution. Collecting quantitative and qualitative educational data from resources such as exam centers, virtual courses, and e-learning systems is not a simple task, and even after collection the data may be imbalanced, incomplete, or biased, and may mix types such as strings, numbers, and letters. One of the most common challenges in this area is the large number of attributes (features); identifying the most valuable features is needed to improve the prediction of students' performance. This paper proposes an evolutionary SPP model that uses an enhanced Whale Optimization Algorithm (EWOA) as a wrapper feature selection method to keep the most informative features and improve prediction quality. The proposed EWOA combines the Whale Optimization Algorithm (WOA) with the Sine Cosine Algorithm (SCA) and a Logistic Chaotic Map (LCM) to improve the overall performance of WOA. The SCA strengthens the exploitation process inside WOA and reduces the probability of getting stuck in local optima; the main idea is to enhance the worst half of the WOA population using SCA. The LCM strategy is employed to control population diversity and improve exploration. In addition, we handled imbalanced data using the Adaptive Synthetic (ADASYN) sampling technique and converted WOA to a binary variant using transfer functions (TFs) from different families (S-shaped and V-shaped). Two real educational datasets are used, and five classifiers are employed: Decision Trees (DT), k-Nearest Neighbors (k-NN), Naive Bayes (NB), Linear Discriminant Analysis (LDA), and LogitBoost (LB). The obtained results show that the LDA classifier is the most reliable on both datasets, and that the proposed EWOA with the selected transfer functions outperforms other wrapper feature selection methods in the literature.
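S-shaped and V-shaped transfer functions, mentioned above for binarizing WOA, map a continuous position/velocity component to a bit probability. A minimal sketch using one common representative of each family (the specific TFs used in the paper are not given here):

```python
import math
import random

def s_shaped(v):
    """S-shaped (sigmoid) TF: probability that the bit is set to 1."""
    return 1.0 / (1.0 + math.exp(-v))

def v_shaped(v):
    """V-shaped TF: probability that the bit is flipped."""
    return abs(math.tanh(v))

def binarize_s(position, rng=random.random):
    """Turn a continuous position vector into a bit string via the S-shaped TF."""
    return [1 if rng() < s_shaped(v) else 0 for v in position]

assert abs(s_shaped(0.0) - 0.5) < 1e-12  # undecided at the origin
assert v_shaped(0.0) == 0.0              # no flip for a zero step
assert binarize_s([100.0, -100.0]) == [1, 0]
```

The S-shaped family sets bits directly from the probability, while the V-shaped family is usually used to decide whether to flip the current bit.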
-
Quadrotor UAVs are one of the most popular types of small unmanned aerial vehicle due to their modest mechanical structure and propulsion principle. However, their complex nonlinear dynamics make stabilizing their movement with a Proportional Integral Derivative (PID) controller challenging. Moreover, finding appropriate gains for a model-based controller is relatively complex and time-consuming, as it depends on external perturbations and on the dynamic modeling of the plant. A method for tuning quadcopter PID parameters can therefore save effort and time and achieve better control performance. Traditional tuning methods, such as Ziegler–Nichols (ZN), do not provide optimal control and may leave the system unstable, causing significant damage. One approach that alleviates the difficult task of nonlinear control design is the use of meta-heuristics that permit appropriate control actions. This study presents PID controller tuning using meta-heuristic algorithms, namely Genetic Algorithms (GAs), the Crow Search Algorithm (CSA), and Particle Swarm Optimization (PSO), to stabilize quadcopter movements. These meta-heuristics tune the PID controller for position and orientation using a fitness function designed to reduce overshoot by predicting future paths. The obtained results confirm the efficacy of the proposed controller in reliably controlling quadcopter flight with GA, CSA, and PSO; among them, PSO produced the best control results.
-
Obstructive sleep apnea (OSA) is a well-known sleep ailment. OSA occurs mostly due to a shortage of oxygen in the body, which causes several symptoms (e.g., poor concentration, daytime sleepiness, and irritability). Detecting OSA at an early stage can save lives and reduce the cost of treatment. A computer-aided diagnosis (CAD) system can quickly detect OSA by examining electrocardiogram (ECG) signals. Reviewing ECGs visually is challenging for physicians: it is time-consuming, expensive, and subjective. In general, automated detection of arrhythmia in ECG signals is a complex task due to the quantity of data and its clinical content. Moreover, ECG signals are usually affected by noise (e.g., patient movement and disturbances generated by electric devices or infrastructure), which reduces the quality of the collected data. Machine learning (ML) and deep learning (DL) have gained increased interest in health care due to their ability to achieve excellent performance compared to traditional classifiers. In this work, we propose a CAD system that diagnoses apnea events from ECG automatically. The proposed system (1) removes noise from the ECG signal using a notch filter, (2) extracts nine features from the ECG signal, and (3) applies thirteen ML models and four types of DL models to diagnose sleep apnea. The experimental results show that the DL classifiers in our approach detect OSA well; the proposed model achieves an accuracy of 86.25% in the validation stage.
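Step (1), notch filtering, removes narrowband interference such as 50/60 Hz powerline noise from the ECG. A self-contained sketch using a standard biquad notch (RBJ audio-EQ cookbook form) and a direct-form IIR filter; the paper's exact filter design is not specified, so the center frequency and Q below are illustrative:

```python
import math

def notch_coeffs(f0, fs, Q=30.0):
    """Biquad notch coefficients (RBJ audio-EQ cookbook) centered at f0 Hz."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * Q)
    b = [1.0, -2 * math.cos(w0), 1.0]
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [c / a[0] for c in b], [c / a[0] for c in a]

def filt(b, a, x):
    """Direct-form I IIR filtering (a[0] is assumed normalized to 1)."""
    y = []
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(3) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, 3) if n - k >= 0)
        y.append(acc)
    return y

# A pure 60 Hz tone sampled at 360 Hz is almost fully suppressed.
b, a = notch_coeffs(60, 360)
x = [math.sin(2 * math.pi * 60 * n / 360) for n in range(3600)]
y = filt(b, a, x)
tail = y[1800:]  # skip the filter's transient
rms = math.sqrt(sum(v * v for v in tail) / len(tail))
assert rms < 0.05
```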
-
Data classification is a challenging problem that is highly sensitive to noise and to the high dimensionality of the data. Reducing model complexity can help improve the accuracy of the classification model. Therefore, in this research we propose a novel feature selection technique based on a Binary Harris Hawks Optimizer with a Time-Varying Scheme (BHHO-TVS). The proposed BHHO-TVS adopts a time-varying transfer function applied to the location vector to balance the exploration and exploitation power of the HHO. Eighteen well-known datasets from the UCI repository were used to show the significance of the proposed approach. The reported results show that BHHO-TVS outperforms BHHO with traditional binarization schemes as well as other binary feature selection methods such as the binary gravitational search algorithm (BGSA), binary particle swarm optimization (BPSO), the binary bat algorithm (BBA), the binary whale optimization algorithm (BWOA), and the binary salp swarm algorithm (BSSA). Compared with similar feature selection approaches from previous studies, the proposed method achieves the best accuracy rates on 67% of the datasets.
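A time-varying transfer function makes the binarization probability curve change over iterations. The sketch below is one common formulation (the steepness schedule and phi bounds are illustrative assumptions, not necessarily BHHO-TVS's exact scheme):

```python
import math

def s_shaped_tv(v, t, T, phi_max=4.0, phi_min=0.5):
    """Time-varying S-shaped TF: the steepness control phi shrinks from
    phi_max to phi_min over iterations t = 0..T, so early probabilities
    stay near 0.5 (exploration) and later ones saturate toward 0 or 1
    (exploitation)."""
    phi = phi_max - (phi_max - phi_min) * (t / T)
    return 1.0 / (1.0 + math.exp(-v / phi))

assert abs(s_shaped_tv(0.0, 0, 100) - 0.5) < 1e-12
# The same location component yields a sharper decision late in the run.
assert s_shaped_tv(2.0, 100, 100) > s_shaped_tv(2.0, 0, 100)
```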
-
The operation and planning of distribution grids require the joint processing of measurements from different grid locations. Since measurement devices in low- and medium-voltage grids lack precise clock synchronization, it is important for the data management platforms of distribution system operators to be able to account for the impact of nonideal clocks on measurement data. This paper formally introduces a metric termed the Additive Alignment Error to capture the impact of misaligned averaging intervals of electrical measurements. A trace-driven approach for retrieving this metric would be computationally costly for measurement devices, so an online estimation procedure in the data collection platform is required. To avoid transmitting high-resolution measurement data, this paper proposes and assesses an extension of a Markov-modulated process to model electrical traces, from which a closed-form matrix analytic formula for the Additive Alignment Error is derived. A trace-driven assessment confirms the accuracy of the model-based approach. In addition, the paper describes practical settings where the model can be utilized in data management platforms with significant reductions in the computational demands on measurement devices. © 2021 by the authors. Licensee MDPI, Basel, Switzerland.
-
This paper focuses on how zoom touchscreen gestures can be used to continuously authenticate and identify smartphone users. The zoom gesture is critically under-researched as a behavioral biometric despite the richness of data it provides, and analysing how the zoom gesture performs over time is a novel line of inquiry. Zoom samples from three data collection sessions were sourced; in each session, participants zoomed in and out on three images, and eighty-five features were extracted from each gesture. The classification models used were Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbor (KNN). The best authentication performance, an AUC of 0.937 and an EER of 10.6%, was achieved with the SVM classifier. The best identification performance, 65.5% accuracy, 69.6% precision, and 67.9% recall, was achieved with the RF classifier. In terms of stability over time, SVM proved the most stable classifier, with an AUC degradation of only 0.007 after two weeks had elapsed. This analysis shows that zoom gestures are promising for continuous smartphone authentication and identification applications. © 2021 Elsevier Ltd
-
Authorship attribution identifies the true author of an unknown document and plays a crucial role in plagiarism detection and blackmailer identification; however, existing studies on authorship attribution in Bengali are limited. In this paper, we propose an instance-based deep authorship attribution model, called DAAB, to identify authors in Bengali. DAAB fuses features from convolutional neural networks with features from an artificial neural network to learn an author's stylometry for authorship attribution. Extensive experiments on three real benchmark datasets, Bengali-Quora and two online Bengali corpora, demonstrate the superiority of our authorship attribution model. © 2021 IEEE.
-
A multi-biometric verification system lowers verification errors by fusing information from multiple biometric sources. Information can be fused in parallel or serial mode. While parallel fusion gives higher accuracy, it suffers from longer verification times. Serial fusion can alleviate this problem by allowing users to submit only a subset of the available biometric characteristics; unfortunately, several studies show that serial fusion may not reach the accuracy of parallel fusion. In this paper, we propose a fusion framework that combines the advantages of both parallel and serial fusion. The core of the framework is a new concept of a “confident reject region,” which incurs nearly zero verification error. We evaluate our framework through experiments on two multi-biometric verification systems built with NIST biometric scores set release 1. The experimental results show that our framework achieves a lower equal error rate and a shorter verification time than standard parallel fusion. © 2021, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
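The serial part of such a framework can be sketched as a cascade with per-stage confident accept/reject regions, where uncertain scores defer to the next biometric. The thresholds here are illustrative, not the paper's learned "confident reject region":

```python
def serial_fusion(scores, accept_ts, reject_ts):
    """Consult matchers one at a time: accept or reject as soon as a score
    lands in a confident region, otherwise ask for the next biometric.
    Returns (decision, number_of_matchers_used)."""
    for i, s in enumerate(scores):
        if s >= accept_ts[i]:
            return "accept", i + 1
        if s <= reject_ts[i]:
            return "reject", i + 1
    # Every matcher was uncertain: decide on the last score's midpoint.
    mid = (accept_ts[-1] + reject_ts[-1]) / 2
    return ("accept" if scores[-1] >= mid else "reject"), len(scores)

assert serial_fusion([0.95], [0.9], [0.2]) == ("accept", 1)  # early, fast accept
assert serial_fusion([0.5, 0.1], [0.9, 0.9], [0.2, 0.2]) == ("reject", 2)
assert serial_fusion([0.5, 0.5], [0.9, 0.9], [0.2, 0.2]) == ("reject", 2)
```

Confident early decisions are what shorten the average verification time relative to always-parallel fusion.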
Explore
Department
- Computer Science
- Chemistry (1)
- History (1)
- Mathematics (1)
- Physics (6)
- Psychology (2)
- Public Health (1)
Resource type
- Book (12)
- Book Section (11)
- Conference Paper (123)
- Journal Article (132)
- Report (13)
Publication year
- Between 1900 and 1999 (53)
- Between 2000 and 2026 (238)
- Between 2000 and 2009 (35)
- Between 2010 and 2019 (87)
- Between 2020 and 2026 (116)