Your search: 291 resources
-
In this paper, we develop a new point-of-entry security measure for smartphone users. We devise a concept, the “Quad Swipe Pattern”, which comprises four swipes from a user in four directions and utilizes the user’s swipe behavior for authentication. The Quad Swipe Pattern overcomes several shortcomings present in current point-of-entry security measures. We performed several experiments to demonstrate the effectiveness of the Quad Swipe Pattern in smartphone user authentication. We evaluated the Quad Swipe Pattern using five machine learning classifiers, three datasets of different sizes, and five different fingers. In addition, we studied how fusion of information from multiple fingers and multiple classifiers can improve the performance of the Quad Swipe Pattern. All of our experimental results show significant promise of the Quad Swipe Pattern as a new point-of-entry security measure for smartphones. With a Neural Network model, the Quad Swipe Pattern achieves an accuracy of 99.7%, a False Acceptance Rate of 0.4%, and a False Rejection Rate of 0%. With a Support Vector Machine, it achieves an accuracy of 99.5%, a False Acceptance Rate of 0.4%, and a False Rejection Rate of 1.7%. With fusion of the two best fingers, the Quad Swipe Pattern demonstrates excellent performance with a zero Equal Error Rate.
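The FAR, FRR, and Equal Error Rate figures quoted above can be computed from classifier scores. The sketch below is not the paper's code; the score arrays and the threshold-sweep EER are the standard textbook definitions:

```python
import numpy as np

def far_frr(genuine, impostor, threshold):
    """False Acceptance Rate and False Rejection Rate at a score threshold.
    Scores at or above the threshold are accepted."""
    genuine = np.asarray(genuine, dtype=float)
    impostor = np.asarray(impostor, dtype=float)
    far = np.mean(impostor >= threshold)   # impostors wrongly accepted
    frr = np.mean(genuine < threshold)     # genuine users wrongly rejected
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds and return the operating point where FAR
    and FRR are closest; the EER is their average there."""
    candidates = np.unique(np.concatenate([genuine, impostor]))
    best = min(candidates,
               key=lambda t: abs(np.subtract(*far_frr(genuine, impostor, t))))
    far, frr = far_frr(genuine, impostor, best)
    return (far + frr) / 2.0
```

With perfectly separable genuine and impostor scores, the sweep finds a threshold where both error rates vanish, giving the zero EER reported above.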
-
Electrical Capacitance Tomography (ECT) is broadly applied to image reconstruction for industrial applications. The goal of ECT-based image reconstruction is to recover the permittivity distribution of the dielectric substances across the cross-section from the collected capacitance data. In the ECT-based image reconstruction process: (1) the relationship between capacitance measurements and permittivity distribution is nonlinear, (2) the capacitance measurements collected during image reconstruction are inadequate due to the limited number of electrodes, and (3) the reconstruction process is subject to noise, leading to an ill-posed problem. Hence, constructing an accurate algorithm for real images is critical to overcoming such restrictions. This paper presents novel image reconstruction methods using Deep Learning for solving the forward and inverse problems of the ECT system for generating high-quality images of conductive materials in the Lost Foam Casting (LFC) process. Here, Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) models were implemented to predict the distribution of metal filling for the ECT-based LFC process. The recurrent connection and the gating mechanism of the LSTM are capable of extracting the contextual information that repeatedly passes through the neural network while filtering out the noise caused by adverse factors. Experimental results showed that the presented ECT-LSTM-RNN model is highly reliable for industrial applications and can be utilized for other manufacturing processes. © 2013 IEEE.
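The gating mechanism credited with filtering noise can be illustrated by writing a single LSTM step out in NumPy. This is a generic textbook cell, not the paper's ECT-LSTM-RNN, and all dimensions are illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent
    weights, b: (4H,) bias; gate order here: input, forget, cell, output."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[0 * H:1 * H])   # input gate: admit new information
    f = sigmoid(z[1 * H:2 * H])   # forget gate: discard stale context/noise
    g = np.tanh(z[2 * H:3 * H])   # candidate cell update
    o = sigmoid(z[3 * H:4 * H])   # output gate
    c_new = f * c + i * g         # gated memory carries context across steps
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Feeding a sequence of capacitance measurements through such steps lets the cell state accumulate context while the forget gate attenuates noisy inputs.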
-
In this paper, we provide a consistent, inexpensive, and easy-to-use graphical user interface (GUI) smartphone application named Sleep Apnea Screener (SAS) that can diagnose Obstructive Sleep Apnea (OSA) based on demographic data such as gender, age, height, BMI, neck circumference, and waist size, allowing a tentative diagnosis of OSA without the need for overnight tests. The developed smartphone application can diagnose sleep apnea using a model trained with 620 samples collected from a sleep center in Corpus Christi, TX. Two machine learning classifiers (i.e., Logistic Regression (LR) and Support Vector Machine (SVM)) were used to diagnose OSA. Our preliminary results show that at-home OSA screening is indeed possible, and that our application is an effective method for covering large numbers of undiagnosed cases.
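As a rough illustration of the LR classifier mentioned above, here is a minimal gradient-descent logistic regression in NumPy. The toy data and training loop are assumptions for the sketch, not the SAS model or its 620-sample dataset:

```python
import numpy as np

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain gradient-descent logistic regression on standardized features
    (e.g. age, BMI, neck circumference); returns weights and bias."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability
        grad = p - y                              # cross-entropy gradient
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(X, w, b):
    """Threshold the predicted probability at 0.5."""
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)
```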
-
Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012 – achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also present experiments that provide insight into what the network learns, revealing a rich hierarchy of image features. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.
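Detection metrics like mAP rest on intersection-over-union between predicted and ground-truth boxes. A minimal IoU sketch, using the standard definition rather than code from the R-CNN release:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    PASCAL VOC counts a detection as correct when IoU with a matching
    ground-truth box exceeds 0.5."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)   # intersection corners
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter) if inter else 0.0
```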
-
This paper will discuss Internet-based data collection and analysis utilizing a Windows 95 and a UNIX hosted system. Forms on the World Wide Web (WWW) that illustrate the use of this technology in medical research, conference registration, and patient care will be highlighted. Some of the details involved with creating data collection forms will be presented. The paper concludes with the recommendation that Health Telematics curricula should include a unit on creating Internet-based data collection forms.
-
Promoter regions of protein-coding genes are gradually becoming well understood, yet no comparable studies exist for the promoters of long non-coding RNA (lncRNA) genes, which have emerged as potential global regulators in multiple cellular processes and different human diseases. To understand the difference in the transcriptional regulation pattern of these genes, we previously proposed a machine learning based model to classify the promoters of protein-coding genes and lncRNA genes. In this study, we present DeepCNPP (deep coding non-coding promoter predictor), an improved model based on a deep learning (DL) framework to classify the promoters of lncRNA genes and protein-coding genes. We used a convolution neural network (CNN) based deep network to classify the promoters of these two broad categories of human genes. Our computational model, built upon the sequence information only, was able to classify these two groups of human promoters with 83.34% accuracy and outperformed the existing model. Further analysis and interpretation of the output from the DeepCNPP architecture will enable us to understand the difference in the transcription regulatory pattern of these two groups of genes.
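A CNN classifier built "upon the sequence information only" needs the DNA encoded numerically. A common choice for such models (the abstract does not specify the exact encoding) is one-hot encoding, one channel per nucleotide:

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq):
    """One-hot encode a promoter sequence for a CNN input: shape (len, 4),
    one channel per nucleotide; unknown bases (e.g. N) become all-zero rows."""
    m = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        j = BASES.find(base)
        if j >= 0:
            m[i, j] = 1.0
    return m
```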
-
Human genes often, through alternative splicing of pre-messenger RNAs, produce multiple mRNAs and protein isoforms that may have similar or completely different functions. Identification of splice sites is, therefore, crucial to understanding the gene structure and the variants of mRNA and protein isoforms produced from the primary RNA transcripts. Although many computational methods have been developed to detect splice sites in humans, this remains a substantially challenging problem, and further improvement of computational models is still foreseeable. Accordingly, we developed DeepDSSR (deep donor splice site recognizer), a novel deep learning based architecture for predicting human donor splice sites. The proposed method, built upon a publicly available and highly imbalanced benchmark dataset, is comparable with the leading deep learning based methods for detecting human donor splice sites. Performance evaluation metrics show that DeepDSSR outperformed the existing deep learning based methods. Future work will improve the predictive capabilities of our model, and we will build a model for the prediction of acceptor splice sites.
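A donor splice site recognizer typically scans for the canonical GT dinucleotide and classifies a fixed window around each occurrence; most windows are negatives, which is one reason such benchmark data are highly imbalanced. The window sizes below are illustrative, not DeepDSSR's:

```python
def donor_candidates(seq, upstream=7, downstream=9):
    """Collect fixed-width windows centred on each canonical GT donor
    dinucleotide, returned as (position, window) pairs; windows that would
    run off either end of the sequence are skipped."""
    seq = seq.upper()
    out = []
    for i in range(upstream, len(seq) - downstream - 1):
        if seq[i:i + 2] == "GT":
            out.append((i, seq[i - upstream:i + 2 + downstream]))
    return out
```

Each window would then be encoded (e.g. one-hot) and scored by the trained classifier.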
-
Person detection is often critical for personal safety, property protection, and national security. Most person detection technologies implement unimodal classification, making predictions based on a single sensor data modality, which is most often vision. There are many ways to defeat unimodal person detectors, and many more reasons to ensure technologies responsible for detecting the presence of a person are accurate and precise. In this paper, we design and implement a multimodal person detection system which can acquire data from multiple sensors and detect persons based on a variety of unimodal classifications and multimodal fusions. We present two methods of generating system-level predictions: (1) device perspectives, which makes a final decision based on multiple device-level predictions, and (2) system perspectives, which combines data samples from multiple devices into a single data sample and then makes a decision. Our experimental results show that system-level predictions from system perspectives are generally more accurate than system-level predictions from device perspectives. We achieve an accuracy of 100%, a zero false positive rate, and a zero false negative rate with fusion of system perspectives motion and distance data. © 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC part of Springer Nature.
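The two prediction strategies can be sketched directly: decision-level fusion for device perspectives and sample concatenation for system perspectives. The majority-vote rule and the toy classifier are assumptions for illustration, not the paper's exact fusion logic:

```python
import numpy as np

def device_perspective(preds):
    """Decision-level fusion: each device votes with its own prediction
    (0 = no person, 1 = person); majority wins, ties favour detection."""
    preds = np.asarray(preds)
    return int(preds.sum() * 2 >= len(preds))

def system_perspective(samples, classifier):
    """Feature-level fusion: concatenate the raw samples from every device
    into one vector and classify that single fused sample."""
    fused = np.concatenate([np.ravel(s) for s in samples])
    return classifier(fused)
```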
-
Cardiovascular disease (CVD) is reported to be the leading cause of mortality in Middle Eastern countries, including Qatar, but no comprehensive study has been conducted to identify Qatar-specific CVD risk factors. The objective of this case-control study was to develop a machine learning (ML) model distinguishing healthy individuals from people having CVD, which could ultimately reveal the list of potential risk factors associated with CVD in Qatar. To the best of our knowledge, this study considered the largest collection of biomedical measurements, representing the anthropometric measurements, clinical biomarkers, bioimpedance, spirometry, VICORDER readings, and behavioral factors of the CVD group from Qatar Biobank (QBB). The CatBoost model achieved 93% accuracy, thereby outperforming the existing model for the same purpose. Interestingly, the proposed ML model combining multimodal datasets outperformed the ML model built upon currently known risk factors for CVD, emphasizing the importance of incorporating other clinical biomarkers into the CVD diagnosis plan. The ablation study on the multimodal dataset from QBB revealed that physio-clinical and bioimpedance measurements have the most distinguishing power to classify these two groups irrespective of the gender and age of the participants. Multiple feature subset selection techniques confirmed known CVD risk factors (blood pressure, lipid profile, smoking, sedentary life, and diabetes), and identified potential novel risk factors linked to CVD-related comorbidities such as renal disorder (e.g., creatinine, uric acid, homocysteine, albumin), atherosclerosis (intima media thickness), hypercoagulable state (fibrinogen), and liver function (e.g., alkaline phosphatase, gamma-glutamyl transferase). Moreover, the inclusion of the proposed novel factors into the ML model provides better performance than the model with traditional known risk factors for CVD.
The associations of the proposed risk factors and comorbidities should be investigated in a clinical setup to better understand their role in CVD. © 2013 IEEE.
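Feature subset selection of the kind described can be approximated very simply. The correlation-ranking sketch below, with hypothetical feature names, is a deliberately plain stand-in rather than the paper's method; it surfaces strongly separating biomarkers but not the interactions a model like CatBoost can exploit:

```python
import numpy as np

def rank_features(X, y, names):
    """Rank features by absolute Pearson correlation with the binary
    case/control label; X is (samples, features), y is 0/1."""
    y = np.asarray(y, dtype=float)
    scores = []
    for j, name in enumerate(names):
        r = np.corrcoef(X[:, j], y)[0, 1]   # point-biserial correlation
        scores.append((abs(r), name))
    return [name for _, name in sorted(scores, reverse=True)]
```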
-
Poor security practices among smartphone users, such as the use of simple, easily guessed passcodes for logins, are a result of the effort required to memorize stronger ones. In this paper, we devise the concept of an “open code” biometric tap pad to authenticate smartphone users, which eliminates the need to memorize secret codes. A biometric tap pad consists of a grid of buttons, each labeled with a unique digit. The user attempting to log into the phone will tap these buttons in a given sequence. He/she will not memorize this tap sequence. Instead, the sequence will be displayed on the screen. The focus here is how the user types the sequence. This typing behavior is used for authentication. An open code biometric tap pad has several advantages: (1) users do not need to memorize passcodes, (2) manufacturers do not need to include extra sensors, and (3) onlookers have no chance to practice shoulder-surfing. We designed three tap pads and incorporated them into an Android app. We evaluated the performance of these tap pads by experimenting with three sequence styles and five different fingers: two thumbs, two index fingers, and the “usual” finger. We collected data from 33 participants over two weeks. We tested three machine learning algorithms: Support Vector Machine, Artificial Neural Network, and Random Forest. Experimental results show significant promise of open code biometric tap pads as a solution to the problem of weak smartphone security practices used by a large segment of the population.
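Typing behavior on a tap pad is usually captured as timing features. The sketch below derives hold and flight times from press/release timestamps; the feature names are illustrative, not the paper's feature set:

```python
def tap_features(events):
    """Turn one tap sequence into behavioural features. `events` is a
    time-ordered list of (press_time, release_time) pairs in seconds."""
    holds = [r - p for p, r in events]              # dwell time per tap
    flights = [events[i + 1][0] - events[i][1]      # gap between taps
               for i in range(len(events) - 1)]
    return {
        "mean_hold": sum(holds) / len(holds),
        "mean_flight": sum(flights) / len(flights) if flights else 0.0,
        "total_time": events[-1][1] - events[0][0],
    }
```

Feature vectors like this, one per login attempt, are what classifiers such as SVM or Random Forest would consume.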
-
Diabetes is one of the leading fatal diseases globally, putting a huge burden on the global healthcare system. Early diagnosis of diabetes is, hence, of utmost importance and could save many lives. However, current techniques to determine whether a person has diabetes or has the risk of developing diabetes are primarily reliant upon clinical biomarkers. In this article, we propose a novel deep learning architecture to predict whether a person has diabetes from a photograph of his/her retina. Using a relatively small-sized dataset, we develop a multi-stage convolutional neural network (CNN)-based model, DiaNet, that can reach an accuracy level of over 84% on this task and, in doing so, successfully identifies the regions of the retina images that contribute to its decision-making process, as corroborated by medical experts in the field. To the best of our knowledge, this is the first study that highlights the distinguishing capability of retinal images for diabetes patients in the Qatari population. Comparing the performance of DiaNet against the existing clinical data-based machine learning models, we conclude that the retinal images contain sufficient information to distinguish the Qatari diabetes cohort from the control group. In addition, our study reveals that retinal images may contain prognosis markers for diabetes and other comorbidities like hypertension and ischemic heart disease. The results lead us to believe that the inclusion of retinal images into the clinical setup for the diagnosis of diabetes is warranted in the near future.
-
Roads should always be in a reliable condition and maintained regularly. One of the problems that should be handled well is the pavement crack problem. This is a challenging problem that faces road engineers, since maintaining roads in a stable condition is needed for both drivers and pedestrians. Many methods have been proposed to handle this problem to save time and cost. In this paper, we propose a two-stage method to detect pavement cracks based on Principal Component Analysis (PCA) and a Convolutional Neural Network (CNN) to solve this classification problem. We employed PCA to extract the most significant features with different numbers of PCA components. The proposed approach was trained using the Mendeley Asphalt Crack dataset, which contains 400 images of road cracks with a 480×480 resolution. The obtained results show how PCA helped in speeding up the learning process of the CNN.
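The PCA stage of the two-stage method can be sketched as an SVD-based projection of flattened images onto the directions of greatest variance. This is a generic PCA implementation, not the authors' pipeline, and the component count is whatever the downstream classifier is tuned for:

```python
import numpy as np

def pca_features(X, n_components):
    """Project rows of X (flattened images) onto their top principal
    components; returns the reduced features plus the fitted components
    and mean, so unseen images can be transformed the same way."""
    mean = X.mean(axis=0)
    Xc = X - mean                      # centre before decomposition
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]     # orthonormal directions of variance
    return Xc @ components.T, components, mean
```

Reducing each image to a few dozen components before the CNN stage is what shrinks the input and speeds up training.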
-
In urban planning and transportation management, the centrality characteristics of urban streets are vital measures to consider. Centrality can help in understanding the structural properties of dense traffic networks that affect both human life and activity in cities. Many cities classify urban streets to provide stakeholders with a group of street guidelines for possible new rehabilitation such as sidewalks, curbs, and setbacks. Transportation research always considers street networks as a connection between different urban areas. The street functionality classification defines the role of each element of the urban street network (USN). Some potential factors such as land use mix, accessible service, design goal, and administrators’ policies can affect the movement pattern of urban travelers. In this study, nine centrality measures are used to classify the urban roads in four cities evaluating the structural importance of street segments. In our work, a Stacked Denoising Autoencoder (SDAE) predicts a street’s functionality, then logistic regression is used as a classifier. Our proposed classifier can differentiate between four different classes adopted from the U.S. Department of Transportation (USDT): principal arterial road, minor arterial road, collector road, and local road. The SDAE-based model showed that regular grid configurations with repeated patterns are more influential in forming the functionality of road networks compared to those with less regularity in their spatial structure.
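Centrality measures of the kind used here are graph computations on the street network. As one concrete example (closeness centrality in hops, not necessarily one of the paper's nine measures), using breadth-first search on an adjacency dict:

```python
from collections import deque

def closeness(graph, node):
    """Closeness centrality on an unweighted, undirected street graph:
    inverse of the average shortest-path distance (in hops) from `node`
    to every reachable intersection."""
    dist = {node: 0}
    queue = deque([node])
    while queue:                       # breadth-first search
        u = queue.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    total = sum(dist.values())
    return (len(dist) - 1) / total if total else 0.0
```

On a path graph a–b–c, the middle intersection b scores 1.0 while the endpoints score 2/3, matching the intuition that central segments reach the rest of the network in fewer hops.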
-
Weaknesses in smartphone security pose a severe privacy threat to users. Currently, smartphones are secured through methods such as passwords, fingerprint scanners, and facial recognition cameras. To explore new methods and strengthen smartphone security, we developed a capacitive swipe-based user authentication and identification technique. A swipe is a gesture that a user performs throughout the usage of a smartphone. Our methodology focuses on using the capacitive touchscreen to capture the user's swipe. While the user swipes, a series of capacitive frames is captured for each swipe. We developed an algorithm to process this series of capacitive frames pertaining to the swipe. While different swipes may contain different numbers of capacitive frames, our algorithm normalizes the frames by constructing the same number of frames for every swipe. After applying the algorithm, we transform the normalized frames into grayscale images. We apply principal component analysis (PCA) to these images to extract principal components, which are then used as features to authenticate/identify the user. We tested random forest (RF) and support vector machine (SVM) algorithms as classifiers. For authentication, the performance of SVM (tested with left swipes) was more promising than RF, yielding a maximum accuracy of 79.88% with an FAR and FRR of 15.84% and 50%, respectively. SVM (tested with right swipes) produced our maximum identification accuracy at 57.81%, along with an FAR and FRR of 0.60% and 42.18%, respectively. © 2020 IEEE.
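The frame-normalization step described above, producing the same number of frames for every swipe, can be sketched as linear resampling along the time axis. The target frame count is an assumption, not the paper's value:

```python
import numpy as np

def normalize_frames(frames, target=16):
    """Resample a variable-length sequence of capacitive frames to a fixed
    count by linear interpolation along the time axis, so every swipe
    yields the same input size. `frames` has shape (n_frames, h, w)."""
    frames = np.asarray(frames, dtype=float)
    n = len(frames)
    pos = np.linspace(0, n - 1, target)        # fractional frame positions
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    w = (pos - lo)[:, None, None]              # interpolation weights
    return (1 - w) * frames[lo] + w * frames[hi]
```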
-
The performance of any meta-heuristic algorithm depends highly on the setting of the dependent parameters of the algorithm. Different parameter settings for an algorithm may lead to different outcomes. An optimal parameter setting should support the algorithm in achieving a convincing level of performance or optimality in solving a range of optimization problems. This paper presents a novel enhancement method for the salp swarm algorithm (SSA), referred to as enhanced SSA (ESSA). In this ESSA, the following enhancements are proposed: First, a new position updating process was proposed. Second, a new dominant parameter different from that used in SSA was presented in ESSA. Third, a novel lifetime convergence method for tuning the dominant parameter of ESSA using ESSA itself was presented to enhance the convergence performance of ESSA. These enhancements to SSA were proposed in ESSA to augment its exploration and exploitation capabilities to achieve optimal global solutions, in which the dominant parameter of ESSA is updated iteratively through the evolutionary process of ESSA so that the positions of the search agents of ESSA are updated accordingly. These improvements on SSA through ESSA help it avoid premature convergence and efficiently find the global optimum solution for many real-world optimization problems. The efficiency of ESSA was verified by testing it on several basic benchmark test functions. A comparative performance analysis between ESSA and other meta-heuristic algorithms was performed. Statistical tests confirmed the significance of the results obtained by ESSA. The efficacy of ESSA in solving real-world problems and applications is also demonstrated with five well-known engineering design problems and two real industrial problems. The comparative results show that ESSA imparts better performance and convergence than SSA and other meta-heuristic algorithms.
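For context, the baseline SSA that ESSA enhances can be sketched in a few lines. This is the standard leader/follower update with the usual dominant parameter c1, not the ESSA variant; the follower update is vectorized for brevity, so each follower averages with its predecessor's previous-iteration position:

```python
import numpy as np

def ssa_minimize(f, lb, ub, n_salps=30, iters=200, seed=0):
    """Minimal baseline salp swarm algorithm: the leader explores around
    the best-so-far food position while followers drift toward their
    predecessor; the food position only ever improves."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    X = rng.uniform(lb, ub, size=(n_salps, dim))
    food = min(X, key=f).copy()
    for l in range(1, iters + 1):
        c1 = 2 * np.exp(-(4 * l / iters) ** 2)   # dominant SSA parameter
        for j in range(dim):                     # leader position update
            c2, c3 = rng.random(), rng.random()
            step = c1 * ((ub[j] - lb[j]) * c2 + lb[j])
            X[0, j] = food[j] + step if c3 >= 0.5 else food[j] - step
        X[1:] = (X[1:] + X[:-1]) / 2             # follower chain update
        X = np.clip(X, lb, ub)
        best = min(X, key=f)
        if f(best) < f(food):
            food = best.copy()
    return food, f(food)
```

On a simple sphere function, the shrinking c1 shows the exploration-to-exploitation handover that ESSA's parameter tuning is designed to improve.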