Your search
Results: 291 resources
-
A multi-stage biometric verification system serially activates its verifiers and improves the performance-cost trade-off by allowing users to submit a subset of the available biometrics. At the heart of a verifier in multi-stage systems lies the concept of a ‘reject option’, where a reject region is used to identify a poor-quality test sample. If the match score falls inside the reject region, no binary (genuine/impostor) decision is made in the current stage and the verifier in the next stage is activated. Recent studies have demonstrated the significant promise of the ‘symmetric rejection method’ in choosing a suitable reject region for multi-stage verification systems. In this paper, we delve into the symmetric rejection method to gain more insight into its error-reduction capabilities. Specifically, we develop a theory that mathematically proves that the symmetric rejection method reduces both the false accept rate and the false reject rate. We then empirically validate our theory. Results show that the symmetric rejection method significantly reduces both error rates. © 2022, The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature.
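The staged ‘reject option’ logic described above can be sketched in plain Python. The thresholds and the two-stage setup here are purely illustrative; the paper's symmetric rejection method for choosing the reject region is not reproduced.

```python
def stage_decision(score, t_low, t_high):
    """One verifier stage with a reject option.

    Scores below t_low are called impostor, above t_high genuine;
    scores inside [t_low, t_high] (the reject region) are deferred.
    """
    if score < t_low:
        return "impostor"
    if score > t_high:
        return "genuine"
    return "reject"  # defer: activate the next stage's verifier

def cascade(scores, regions):
    """Run verifier stages serially; each stage sees its own match score."""
    for score, (t_low, t_high) in zip(scores, regions):
        decision = stage_decision(score, t_low, t_high)
        if decision != "reject":
            return decision
    return "reject"  # every stage deferred
```

A sample whose first-stage score falls inside the reject region is simply passed on, so only ambiguous samples pay the cost of the later verifiers.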
-
Obstructive sleep apnea syndrome (OSAS) is a pervasive disorder, with an incidence estimated at 5–14 percent among adults aged 30–70 years. It carries significant morbidity and mortality risk from cardiovascular disease, including ischemic heart disease, atrial fibrillation, and cerebrovascular disease, as well as risks related to excessive daytime sleepiness. The gold standard for diagnosis of OSAS is the polysomnography (PSG) test, which requires overnight evaluation in a sleep laboratory and expensive infrastructure, rendering it unsuitable for mass screening and diagnosis. Alternatives such as home sleep testing require patients to wear diagnostic instruments overnight, but accuracy remains suboptimal and access continues to be a barrier for many. Hence, sleep apnea remains significantly underdiagnosed and under-recognized in the community, with at least one study suggesting that 80–90% of middle-aged adults with moderate to severe sleep apnea remain undiagnosed. Recently, we have seen a surge in applications of artificial intelligence and neural networks in healthcare diagnostics, and several studies have examined their application to the diagnosis of OSAS. Signals used in these analyses include the electrocardiogram (ECG), photoplethysmography (PPG), peripheral oxygen saturation (SpO2), and audio signals. A different approach is to apply machine learning to demographic variables, standard clinical variables, and physical findings to synthesize predictive models with high accuracy for triaging high-risk patients for sleep testing. The current paper reviews this latter approach and identifies knowledge gaps that may serve as potential avenues for future research.
-
This paper reports a two-part study examining the relationship between fear of missing out (FoMO) and maladaptive behaviors in college students. The project used a cross-sectional design to examine whether college student FoMO predicts maladaptive behaviors across a range of domains (e.g., alcohol and drug use, academic misconduct, illegal behavior). Participants (N = 472) completed hard-copy questionnaire packets assessing trait FoMO levels, along with questions pertaining to unethical and illegal behavior while in college. Part 1 used traditional statistical analyses (i.e., hierarchical regression modeling) to identify relationships between FoMO, demographic variables (socioeconomic status, living situation, and gender), and the behavioral outcomes of interest. Part 2 sought to quantify the predictive power of FoMO and the demographic variables from Part 1 through the convergent approach of supervised machine learning. Results from Part 1 indicate that college student FoMO is indeed related to many diverse maladaptive behaviors spanning the legal and illegal spectrum. Part 2, using techniques such as recursive feature elimination (RFE) and principal component analysis (PCA) and models such as logistic regression, random forest, and support vector machine (SVM), showcased the predictive power of machine learning. Class membership for these behaviors (offender vs. non-offender) was predicted at rates well above baseline (e.g., 50% at baseline vs. 87% accuracy for academic misconduct with just three input variables). This study demonstrates FoMO’s relationships with these behaviors and shows how machine learning can provide predictive insights that are not possible through the inferential statistical modeling approaches typically employed in psychology and, more broadly, the social sciences. 
Research in the social sciences stands to gain from regularly using traditional statistical approaches in tandem with machine learning.
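As a minimal illustration of the kind of supervised classifier used in Part 2, here is a logistic regression trained from scratch by gradient descent. The toy data, labels, and learning rate are invented; the study itself used library implementations together with RFE and PCA.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Per-sample gradient descent on the cross-entropy loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # derivative of cross-entropy w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Toy "offender vs. non-offender" data with two input variables,
# linearly separable by construction (hypothetical values).
X = [(0, 0), (0, 1), (1, 0), (1, 1), (0.2, 0.1), (0.9, 0.8)]
y = [0, 1, 1, 1, 0, 1]
w, b = train_logistic(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

On separable data like this the training accuracy climbs well above the majority-class baseline, which mirrors the above-baseline prediction rates the study reports.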
-
Online markets offer sellers access to buyers’ information and, thus, the potential to alter prices and products accordingly. In light of this, we undertook an empirical analysis to test for individualization on Amazon.com, collecting data from individuals recruited to shop for household items. Our results indicate evidence of individualization of search results and net prices (via coupons). Contrary to expectations, we found that demographic, geolocation, and account information play an insignificant role in the individualization of search results. We therefore conclude that individualization is based on more dynamic information, e.g., online browsing behavior. This highlights that sellers’ need for (and use of) buyer information goes beyond the simple information accessible from buyers’ accounts to a more rigorous monitoring of buyers’ online behavior.
-
Diabetes mellitus (DM) and osteoporosis/osteopenia affect millions of people globally and are major health conditions in several countries, including Qatar. Bone mineral density (BMD) is a widely accepted indicator for diagnosing osteoporosis (OP) and osteopenia (OPN), and the best method for determining BMD and OP/OPN risk is dual-energy X-ray absorptiometry (DXA). Because the risk of osteoporosis-related fracture may increase for people with diabetes, it is necessary to develop a system that supports the early detection of OP/OPN in diabetic patients. In this study, we analyzed a Qatari diabetic cohort of 500 subjects, of whom 68 were OP/OPN (target) subjects and 432 were without osteoporosis/osteopenia (control). The objective of this study is to develop an ML model to distinguish diabetic OP/OPN patients from diabetic non-OP/non-OPN subjects based on bone health indicators from full-body DXA scan measurements. In our experiments, the AdaBoost model performed best at classifying the target group against the control group. Results based on 10-fold cross-validation indicate that the proposed ML model distinguished the target group from the control group with 80% sensitivity and 96% specificity. To the best of our knowledge, this is the first ML-based approach to detect the early onset of OP/OPN in a diabetic cohort from Qatar. Our analyses revealed higher levels of lean mass, fat mass, and bone mass in the control group compared to the target group. Higher levels of BMC and BMD in different body parts in the control group compared to the osteoporosis/osteopenia group indicate the protective effect of obesity on bone health in the Qatari diabetic cohort. 
Moreover, higher values of anthropometric measurements in the troch, lumbar spine (L1, L2, L3, L4), pelvis, and other body parts in the control group indicate that the WHO guideline can be applied to the Qatari diabetic cohort for the early detection of OP/OPN based on the proposed ML model. Further research on OP/OPN in diabetic patients is warranted to confirm the role of DM in bone health.
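The sensitivity and specificity figures reported above (80% and 96%) are defined on the target (OP/OPN) and control classes respectively. A minimal sketch of how they are computed from predictions:

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN) on the target class (label 1);
    specificity = TN/(TN+FP) on the control class (label 0)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

With a heavily imbalanced cohort like this one (68 target vs. 432 control), reporting both metrics rather than raw accuracy is what makes the classifier's behavior on the minority class visible.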
-
Breastfeeding has health benefits for both infants and mothers, yet Black mothers and infants are less likely to receive these benefits. Despite research showing no difference in breastfeeding intentions by race or ethnicity, inequities in breastfeeding rates persist, suggesting that Black mothers face unique barriers to meeting their breastfeeding intentions. The aim of this study is to identify barriers and facilitators that Black women perceive as important determinants of exclusively breastfeeding their children for at least 3 months after birth. Utilizing a Barrier Analysis approach, we conducted six focus group discussions, hearing from Black mothers who exclusively breastfed for 3 months and those who did not. Transcripts were coded starting with a priori parent codes based on theory-derived determinants mapped onto the Socioecological Model; themes were analysed for differences between groups. Facilitators found to be important specifically for women who exclusively breastfed for 3 months include self-efficacy, lactation support, appropriate lactation supplies, support of mothers and partners, prior knowledge of breastfeeding, strong intention before birth and perceptions of breastfeeding as money-saving. Barriers that arose more often among those who did not exclusively breastfeed for 3 months include inaccessible lactation support and supplies, difficulties with pumping, latching issues and perceptions of breastfeeding as time-consuming. Lack of access to and knowledge of breastfeeding laws and policies, as well as negative cultural norms or stigma, were important barriers across groups. This study supports the use of the Socioecological Model to design multicomponent interventions to increase exclusive breastfeeding outcomes for Black women.
-
Maintaining roads in excellent condition is critical to safe driving and is an obligation of both transportation and regulatory maintenance authorities. For a safe driving environment, it is essential to inspect road surfaces frequently for defects or degradation. This process is labor-intensive and requires significant expertise, making visual examination of road cracks challenging; computer vision and robotics tools can therefore be employed to support this mission. This research presents our initial idea of simulating an Autonomous Robot System (ARS) to perform pavement assessments. The ARS for crack inspection is a camera-equipped mobile robot (i.e., an Android phone) that collects images of the road. The proposed system is simulated using an mBot robot fitted with an Android phone that gathers video streams to be processed on a server running a pre-trained Convolutional Neural Network (CNN) that can recognize the presence of cracks. The proposed CNN model attained 99.0% accuracy in training and 97.5% in testing. The results of this research are suitable for application with a commercial mobile robot as an autonomous platform for pavement inspections. © 2022 Little Lion Scientific.
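The core operation of the CNN used for crack recognition is the 2D convolution. A plain-Python sketch of a single valid convolution (the actual network architecture and weights are not described in the abstract and are not reproduced here):

```python
def conv2d(image, kernel):
    """Valid 2D convolution (no padding, stride 1) of a 2D list by a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out
```

A horizontal-difference kernel such as `[[1, -1]]` responds strongly at the boundary of a dark crack column, which is the kind of local feature the early layers of a crack-detection CNN learn.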
-
Economic load dispatch (ELD) is a challenging optimization problem: minimizing the total cost of thermally generated power while satisfying a set of equality and inequality constraints. To solve it, we need to maximize the power network load under several operational constraints while minimizing both the cost of power generation and the losses in network transmission. Traditional optimization methods, such as linear programming, have been used to solve such problems, and meta-heuristic search algorithms have shown encouraging performance on various real-life engineering problems. This paper provides a comprehensive comparison of nine meta-heuristic search algorithms, including Genetic Algorithms (GAs), Particle Swarm Optimization (PSO), Crow Search Algorithm (CSA), Differential Evolution (DE), Salp Swarm Algorithm (SSA), Harmony Search (HS), Sine Cosine Algorithm (SCA), Multi-Verse Optimizer (MVO), and the Moth-Flame Optimization Algorithm (MFO), for solving the economic load dispatch problem. Our results demonstrate that meta-heuristic search algorithms (i.e., CSA and DE) find the optimal power set for each power station. The computed power levels fulfill the supply needs while maintaining both minimum power costs and minimum power losses in transmission.
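The ELD objective the compared algorithms minimize is typically a quadratic fuel-cost model per generator plus a penalty for violating the power-balance equality constraint. The sketch below uses a plain random search as a stand-in for the meta-heuristics (the coefficients, bounds, and penalty weight are illustrative):

```python
import random

def fuel_cost(p, coeffs):
    """Standard quadratic fuel-cost model: a + b*P + c*P^2 per unit."""
    return sum(a + b * pi + c * pi * pi for pi, (a, b, c) in zip(p, coeffs))

def penalized_cost(p, coeffs, demand, k=1000.0):
    """Penalize violation of the power-balance equality sum(P) = demand."""
    return fuel_cost(p, coeffs) + k * abs(sum(p) - demand)

def random_search(coeffs, bounds, demand, iters=5000, seed=0):
    """Naive stand-in for CSA/DE: sample within the inequality bounds."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        p = [rng.uniform(lo, hi) for lo, hi in bounds]
        c = penalized_cost(p, coeffs, demand)
        if c < best_cost:
            best, best_cost = p, c
    return best, best_cost
```

The meta-heuristics in the paper replace the blind sampling with guided search (mutation, crossover, swarm moves), but the objective and constraint handling have this same shape.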
-
We incorporate deep learning techniques into capacitive images of body parts (ear, four fingers, and thumb) to improve the performance of user authentication on smartphones. Using a capacitive touchscreen as an image sensor has several advantages: it is less sensitive to poor illumination conditions, occlusions, and pose variations, and it does not need additional hardware such as an iris or fingerprint scanner. The use of capacitive images for user authentication is not new; however, the performance, especially the false reject rates (FRRs), of state-of-the-art capacitive image-based systems is poor. In this paper, we focus on improving this performance by leveraging deep learning. Deep learning techniques have demonstrated spectacular performance in previous physical-biometrics research; to our knowledge, however, their effectiveness is still unexplored in capacitive touchscreen-based user authentication. To bridge this research gap, we devise a multi-modal deep learning model, UASNet, and compare its performance with a large set of uni- and multi-modal baselines. Using the UASNet, we achieve an accuracy of 99.77%, an EER of 0.48%, and an FRR of 1.19% at an FAR of 0.06%.
-
Cardiovascular diseases (CVD) are the leading cause of death worldwide. People affected by CVDs may go undiagnosed until the occurrence of a serious event such as stroke, heart attack, or myocardial infarction. In Qatar, there is a lack of studies focusing on CVD diagnosis based on non-invasive methods such as retinal imaging or dual-energy X-ray absorptiometry (DXA). In this study, we aimed to diagnose CVD using a novel approach integrating information from retinal images and DXA data. We considered an adult Qatari cohort of 500 participants from Qatar Biobank (QBB), with equal numbers of participants in the CVD and control groups. We designed a case-control study with a novel multi-modal approach (combining data from two modalities, DXA and retinal images) and propose a deep learning (DL)-based technique to distinguish the CVD group from the control group. Uni-modal models based on retinal images and DXA data achieved 75.6% and 77.4% accuracy, respectively. The multi-modal model showed an improved accuracy of 78.3% in classifying the CVD group and the control group. We used gradient class activation maps (GradCAM) to highlight the areas of the retinal images that most influenced the decisions of the proposed DL model. The model focused mostly on the centre of the retinal images, where signs of CVD such as hemorrhages were present, indicating that it can identify and make use of certain prognostic markers for hypertension and ischemic heart disease. From the DXA data, we found higher values of bone mineral density, fat content, muscle mass, and bone area across the majority of body parts in the CVD group compared to the control group, indicating better bone health in the Qatari CVD cohort. This method based on DXA scans and retinal images demonstrates major potential for the early detection of CVD in a fast and relatively non-invasive manner.
-
In the last decade, a wide range of machine learning approaches have been proposed and tested to model highly nonlinear manufacturing processes. However, improving the performance of such models is challenging due to the complexity and high dimensionality of manufacturing processes in general. In this paper, we propose bidirectional echo state reservoir networks (Bi-ESNs) trained using the support vector machine privileged information method (SVM+) to model a winding machine process. The proposed model is applied, tested, and compared with models reported in the literature, including the classical ESN with linear regression, an ESN with a linear SVM readout, genetic programming, a feedforward neural network with backpropagation, a radial basis function network, an adaptive neural fuzzy inference system, and a local linear wavelet neural network. Our results show that Bi-ESNs trained with SVM+ are promising, providing better generalization performance than the other models.
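The reservoir part of an ESN is a fixed random recurrent network whose states are later fed to a trained readout. A minimal sketch of the state update and the bidirectional variant (reservoir size, weight scales, and input shape are arbitrary choices here; the paper's SVM+ readout is not reproduced):

```python
import math
import random

def make_weights(n_in, n_res, rng, scale=0.5):
    """Fixed random input and recurrent weights (never trained)."""
    w_in = [[rng.uniform(-scale, scale) for _ in range(n_in)] for _ in range(n_res)]
    w = [[rng.uniform(-scale, scale) / n_res for _ in range(n_res)]
         for _ in range(n_res)]
    return w_in, w

def run_reservoir(seq, w_in, w):
    """Leakless tanh state update: x(t+1) = tanh(W_in u(t) + W x(t))."""
    n_res = len(w_in)
    x = [0.0] * n_res
    states = []
    for u in seq:
        x = [math.tanh(sum(w_in[i][j] * u[j] for j in range(len(u))) +
                       sum(w[i][k] * x[k] for k in range(n_res)))
             for i in range(n_res)]
        states.append(x)
    return states

def bidirectional_states(seq, w_in, w):
    """Bi-ESN: concatenate forward-pass and time-aligned backward-pass states."""
    fwd = run_reservoir(seq, w_in, w)
    bwd = run_reservoir(list(reversed(seq)), w_in, w)[::-1]
    return [f + b for f, b in zip(fwd, bwd)]
```

The readout (linear regression, linear SVM, or SVM+ in the paper) is then trained on these concatenated states, which is what lets the bidirectional variant use both past and future context of the winding process signal.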
-
In online social networks (OSNs), follower count is a sign of the social influence of an account. Some users expect to increase their follower count by following more accounts; in reality, however, more followings do not generate more followers. In this paper, we propose a two-player follow-unfollow game model and then introduce a factor for promoting cooperation. Based on the two-player game, we create an evolutionary follow-unfollow game with more players to simulate a miniature social network, design an algorithm, and conduct simulations. From the simulations, we find that our algorithm for the evolutionary follow-unfollow game converges and produces a stable network. Results obtained with different values of the cooperation promotion factor show that the factor increases the total number of connections in the network, especially by increasing the number of follow-follow connections.
-
The Internet contains large amounts of adult content. With only a few taps, or mis-taps, an under-aged user can be exposed to age-inappropriate content. Currently, this can be avoided by creating age-restricted profiles or restricting users to child-friendly applications (apps). However, these existing measures are time-consuming, laborious, and require a higher level of technical literacy than many parents have. We believe a better solution is a browser or app that automatically detects the user's age and then applies appropriate content filters. For such a browser or app to be developed, we must first establish that age estimation can be performed with an acceptable rate of error. To that end, we created an Android app that collects biometric touchscreen data from elementary school, middle school, high school, and university students. Touch samples were collected from participants aged 5 to 61 on both smartphones and tablets. We focused exclusively on zoom-in and zoom-out touchscreen data samples, because we found the zoom gesture to be rich with data and heavily used in the most popular applications. Furthermore, we identify a niche within the current research landscape: no other machine learning experiments have leveraged the zoom gesture for age estimation. We collected a total of 41,911 zoom data samples and extracted 90 features from each. Those features were then used to train and test six regressors and six classifiers to build a method that can accurately estimate the user's age from touchscreen behavior. The best regressors achieved mean absolute errors (MAEs) of 2.27 and 2.54 years for smartphones and tablets, respectively; the best classifiers achieved accuracies of 90% and 91% for smartphones and tablets, respectively. 
Given these results, we believe that not only is touch-based age estimation viable, but developing a child-safe browser or a parental control app with this underlying technology is a worthwhile endeavor. © 2022 Elsevier Ltd
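To make the pipeline concrete, here is a sketch of extracting a few features from a two-finger zoom gesture trace. The feature names and the trace format are hypothetical; the study extracts 90 features per sample, none of which are specified in the abstract.

```python
import math

def zoom_features(track):
    """track: list of (t, x1, y1, x2, y2) samples for one zoom gesture.

    Returns a small dict of illustrative features derived from the
    distance between the two fingertips over time.
    """
    dists = [math.hypot(x2 - x1, y2 - y1) for _, x1, y1, x2, y2 in track]
    duration = track[-1][0] - track[0][0]
    spread_change = dists[-1] - dists[0]  # >0 zoom-in spread, <0 pinch
    mean_speed = abs(spread_change) / duration if duration else 0.0
    return {"duration": duration,
            "spread_change": spread_change,
            "mean_speed": mean_speed}
```

Feature vectors like this, one per gesture, are what the regressors and classifiers would consume; gesture duration and spread dynamics plausibly differ between small children's hands and adults', which is what makes the gesture informative for age estimation.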
-
Following Mandelbrot's fractal theory, it was found that the fractal dimension of medical images can be obtained using the concept of fractional Brownian motion. An estimation method for determining the fractal dimension based on fractional Brownian motion is discussed, with two applications: 1) classification; 2) edge enhancement and detection. For classification, a normalized fractional Brownian motion feature vector is defined from this estimation method. It represents the normalized average absolute intensity difference of pixel pairs on a surface at different scales. The feature vector uses relatively few data items to represent the statistical characteristics of the medical image surface and is invariant to linear intensity transformation. By calculating normalized fractional Brownian motion feature vectors for five different ultrasonic image surfaces, it was found that normal and abnormal ultrasonic liver images can be classified from the differences between their feature vectors. For edge enhancement and detection, a transformed image is obtained by calculating the fractal dimension of each pixel over the whole medical image, where the value for each pixel is computed from the 7 x 7 pixel block centered on it. Preliminary results using projection radiographs suggest that the fractal-based image transformation holds promise as an edge enhancement and preprocessing algorithm that does not increase noise in the way that gradient operators do. © 1989 IEEE
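For fractional Brownian motion, the expected absolute intensity difference scales as a power law of the pixel separation, E|ΔI| ∝ (Δr)^H, and the surface fractal dimension is D = 3 − H. A sketch of this estimation on a 1D intensity profile (the scales and the profile are illustrative; the paper works with full 2D image surfaces):

```python
import math

def mean_abs_diff(profile, scale):
    """Average absolute intensity difference of pixel pairs at a given scale."""
    pairs = [(profile[i + scale], profile[i])
             for i in range(len(profile) - scale)]
    return sum(abs(a - b) for a, b in pairs) / len(pairs)

def fractal_dimension(profile, max_scale=4):
    """Estimate H as the slope of log E|dI| vs. log(scale); return D = 3 - H."""
    xs = [math.log(s) for s in range(1, max_scale + 1)]
    ys = [math.log(mean_abs_diff(profile, s)) for s in range(1, max_scale + 1)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    return 3.0 - slope  # D = 3 - H for an intensity surface
```

A perfectly smooth ramp has H = 1 and hence D = 2 (an ordinary smooth surface); rougher textures give smaller H and larger D, which is the quantity the feature vector summarizes across scales.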
-
Quantitative analysis is important because it is not subjective and does not have the potential for variation from one observer to another. A description is given of how statistical hypothesis testing can be used to select the quantitative descriptors best capable of distinguishing between normal and abnormal liver texture. Information is also presented on how both parametric and nonparametric discriminant analysis can be applied to determine how well the quantitative analysis compares with the qualitative diagnosis supplied for each case studied.
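One standard hypothesis-testing tool for ranking descriptors by how well they separate two groups is the two-sample t statistic (the abstract does not name the specific test used; Welch's form below is only one common choice):

```python
import math

def t_statistic(a, b):
    """Welch two-sample t statistic for one texture descriptor measured
    on normal cases (a) and abnormal cases (b); larger |t| suggests a
    descriptor that separates the groups better."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)
```

Ranking candidate descriptors by |t| (or the associated p-value) and keeping the top ones is the selection step; the retained descriptors then feed the discriminant analysis.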
-
While cooccurrence matrices have been shown to be helpful in quantifying image texture, the amount of data associated with them can rapidly become unmanageable, because a separate cooccurrence matrix can be calculated for each displacement vector chosen. Here, a method is discussed for choosing the direction of the displacement vector based on the most dominant edge obtained from gradient analysis. The anatomy of the liver is also used to suggest the most important intersample spacing for constructing cooccurrence matrices in the evaluation of diffuse liver disease.
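To see why each displacement vector yields its own matrix, here is a minimal grey-level cooccurrence matrix for one displacement (the four-level quantization and the sample image are illustrative):

```python
def cooccurrence(image, d=(0, 1), levels=4):
    """Grey-level cooccurrence matrix for one displacement vector (dr, dc).

    m[i][j] counts pixel pairs where the first pixel has level i and the
    pixel displaced by d has level j.
    """
    m = [[0] * levels for _ in range(levels)]
    dr, dc = d
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m
```

With G grey levels, each displacement produces a G x G matrix, so picking the displacement direction from the dominant gradient edge, as proposed above, is what keeps the data volume manageable.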
-
Spectral analysis of Doppler ultrasound is known to yield valuable information for assessing the state of circulation in the peripheral blood vessels. In the past, raw Doppler data have been input directly into a dedicated spectrum analyzer or, more recently, transformed on a microcomputer with the fast Fourier technique. Here, the fast Hartley technique is used to transform these data. The Hartley transform has the advantage of being a purely real-numbered transform; for real Doppler data it is therefore not only conceptually more straightforward, but also requires less computer memory, is simpler to calculate, and is better suited to large-scale integration implementation. © 1988 IEEE
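The discrete Hartley transform uses the real-valued cas kernel, cas(t) = cos(t) + sin(t), in place of the complex exponential. A direct O(N²) sketch of the transform itself (the fast Hartley technique in the paper is the FFT-like fast algorithm for this same transform):

```python
import math

def dht(x):
    """Discrete Hartley transform: H[k] = sum_n x[n] * cas(2*pi*k*n/N),
    with cas(t) = cos(t) + sin(t). Real input in, real output out."""
    n_pts = len(x)
    return [sum(x[n] * (math.cos(2 * math.pi * k * n / n_pts) +
                        math.sin(2 * math.pi * k * n / n_pts))
                for n in range(n_pts))
            for k in range(n_pts)]
```

Because input and output are both real, an N-point DHT needs half the storage of a complex N-point DFT of the same real signal, which is the memory advantage cited above.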
-
Recent developments in image digitization have made possible a more quantitative analysis of ultrasonic imagery of the liver, which could lead to a more sensitive method for detecting changes in liver texture as an aid in the diagnosis of liver disease. The approach described is the statistical analysis of one-dimensional intensity (gray-level) histograms obtained from B-mode ultrasonic images. First-order statistical parameters are used to characterize the location, variability, skewness, and kurtosis of the histograms. One typical normal study and one typical abnormal study are presented to show the type of results that have been obtained.
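The four first-order parameters named above can be computed directly from the pixel intensities (equivalently, from the gray-level histogram). A minimal sketch, with moment definitions chosen here as the common population-moment forms:

```python
import math

def first_order_stats(values):
    """Mean (location), variance (variability), skewness, and kurtosis
    of a set of gray-level values."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    sd = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in values) / (n * sd ** 3) if sd else 0.0
    kurt = sum((v - mean) ** 4 for v in values) / (n * var ** 2) if var else 0.0
    return {"mean": mean, "variance": var, "skewness": skew, "kurtosis": kurt}
```

Comparing these four numbers between a normal and an abnormal B-mode study is exactly the kind of compact summary the histogram approach provides.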
-
A new type of RC op-amp oscillator has been designed. For amplitude stabilization, diodes are added in the feedback of the linear circuit. A model has been developed for the nonlinear element, which affects the frequency of oscillation. The model can be used to design the oscillator for different frequencies and to calculate frequency and amplitude sensitivity with respect to the parameters of the system.
-
A personal computer applications course has been developed as a follow-up to an introductory programming course for non-computer-science majors. The primary objective of the course is to introduce the major personal computer application areas: operating system use, word processing, spreadsheet programming, database management, and communications. For each area, there will be a discussion of its use and related problems; students will use a representative application, and a comparison will be made with other systems. The course will be taught using Apple IIe or Commodore 64 computers. A course outline has been created and approved, and the course will be offered for the first time in the Spring of 1984. Budget considerations, the practical difficulties involved in students using copyrighted software, and a desire to have students leave with software they can take with them make it attractive to use public-domain software when possible. Current research is directed toward finding and documenting public-domain software for use in this course. The principal sources being investigated are the program libraries of personal computer users' groups and educational cooperatives.
Explore
Department
- Computer Science
- Chemistry (1)
- History (1)
- Mathematics (1)
- Physics (6)
- Psychology (2)
- Public Health (1)
Resource type
- Book (12)
- Book Section (11)
- Conference Paper (123)
- Journal Article (132)
- Report (13)
Publication year
- Between 1900 and 1999 (53)
- Between 2000 and 2026 (238)
- Between 2000 and 2009 (35)
- Between 2010 and 2019 (87)
- Between 2020 and 2026 (116)