Results: 295 resources
-
Weaknesses in smartphone security pose a severe privacy threat to users. Currently, smartphones are secured through methods such as passwords, fingerprint scanners, and facial recognition cameras. To explore new methods and strengthen smartphone security, we developed a capacitive swipe-based user authentication and identification technique. Swiping is a gesture that users perform throughout their use of a smartphone. Our methodology uses the capacitive touchscreen to capture the user's swipe: while the user swipes, a series of capacitive frames is captured for each swipe. We developed an algorithm to process this series of capacitive frames. Because different swipes may contain different numbers of capacitive frames, our algorithm normalizes them by constructing the same number of frames for every swipe. After applying the algorithm, we transform the normalized frames into grayscale images. We apply principal component analysis (PCA) to these images to extract principal components, which are then used as features to authenticate or identify the user. We tested random forest (RF) and support vector machine (SVM) algorithms as classifiers. For authentication, the performance of SVM (tested with left swipes) was more promising than RF, yielding a maximum accuracy of 79.88% with a false acceptance rate (FAR) of 15.84% and a false rejection rate (FRR) of 50%. SVM (tested with right swipes) produced our maximum identification accuracy of 57.81%, with an FAR of 0.60% and an FRR of 42.18%. © 2020 IEEE.
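A minimal sketch of this kind of pipeline, assuming the normalized swipe frames have already been flattened into fixed-length grayscale vectors; the array names, grid size, and parameters below are illustrative, not taken from the paper:

```python
# Hypothetical PCA + SVM pipeline for swipe-based identification.
# X: one row per swipe, each a flattened stack of normalized grayscale
# capacitive frames; y: user labels. Shapes are illustrative only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 10 * 27 * 15))  # 200 swipes, 10 frames of a 27x15 capacitance grid
y = rng.integers(0, 5, size=200)     # 5 enrolled users

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Project onto the leading principal components, then classify with an SVM.
model = make_pipeline(PCA(n_components=20), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print("identification accuracy:", model.score(X_test, y_test))
```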
-
The performance of any meta-heuristic algorithm depends heavily on the settings of its dependent parameters; different parameter settings may lead to different outcomes. An optimal parameter setting should help the algorithm achieve a convincing level of performance, or optimality, across a range of optimization problems. This paper presents a novel enhancement of the salp swarm algorithm (SSA), referred to as the enhanced SSA (ESSA). ESSA introduces three enhancements: first, a new position-updating process; second, a new dominant parameter different from the one used in SSA; and third, a novel lifetime convergence method that tunes ESSA's dominant parameter using ESSA itself to improve convergence. These enhancements augment SSA's exploration and exploitation capabilities so that it can reach globally optimal solutions: the dominant parameter of ESSA is updated iteratively through the evolutionary process, and the positions of the search agents are updated accordingly. These improvements help ESSA avoid premature convergence and efficiently find the global optimum for many real-world optimization problems. The efficiency of ESSA was verified by testing it on several basic benchmark test functions, and a comparative performance analysis between ESSA and other meta-heuristic algorithms was performed. Statistical tests confirmed the significance of the results obtained by ESSA. The efficacy of ESSA in solving real-world problems and applications is also demonstrated on five well-known engineering design problems and two real industrial problems. The comparative results show that ESSA delivers better performance and convergence than SSA and other meta-heuristic algorithms.
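For context, the baseline SSA (Mirjalili et al.'s original formulation, not the paper's ESSA variant) moves its leader salp around the food source F and lets each follower average its position with its predecessor. A common statement of those updates, in the usual notation, is:

```latex
x_j^1 =
\begin{cases}
F_j + c_1\big((ub_j - lb_j)\,c_2 + lb_j\big), & c_3 \ge 0.5,\\
F_j - c_1\big((ub_j - lb_j)\,c_2 + lb_j\big), & c_3 < 0.5,
\end{cases}
\qquad
x_j^i = \tfrac{1}{2}\big(x_j^i + x_j^{i-1}\big)\ (i \ge 2),
\qquad
c_1 = 2\,e^{-(4l/L)^2}
```

Here c₂ and c₃ are uniform random numbers and l is the current of L iterations. The coefficient c₁, which shifts SSA from exploration toward exploitation as iterations progress, is the natural candidate for the kind of dominant parameter that ESSA tunes adaptively, though the abstract does not pin down which parameter it replaces.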
-
Meta-heuristic search algorithms have been used successfully to solve a variety of problems in engineering, science, business, and finance. Meta-heuristic algorithms share common features: they are population-based approaches that use a set of tuning parameters to evolve new solutions based on the natural behavior of creatures. In this paper, we present a novel nature-inspired search optimization algorithm called the capuchin search algorithm (CapSA) for solving constrained and global optimization problems. The key inspiration of CapSA is the dynamic behavior of capuchin monkeys. The basic optimization characteristics of this new algorithm are designed by modeling the social actions of capuchins during wandering and foraging over trees and riverbanks in forests while searching for food sources. Among the foraging behaviors of capuchins implemented in this algorithm are leaping, swinging, and climbing. Leaping is an effective mechanism that capuchins use to jump from tree to tree. The other foraging mechanisms, swinging and climbing, allow capuchins to move small distances over trees, tree branches, and the extremities of tree branches. These locomotion mechanisms eventually lead to feasible solutions of global optimization problems. The proposed algorithm is benchmarked on 23 well-known benchmark functions, as well as on several challenging and computationally costly engineering problems. A broad comparative study demonstrates the efficacy of CapSA over several prominent meta-heuristic algorithms in terms of optimization precision and statistical test analysis. Overall, the results show that CapSA yields more precise solutions with a higher convergence rate than competitive meta-heuristic methods. © 2020, Springer-Verlag London Ltd., part of Springer Nature.
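The framing here (a population of agents evolved under tuned parameters) follows the generic metaheuristic template sketched below. This is a schematic skeleton of that template only; the position update is a placeholder, not CapSA's actual leaping, swinging, and climbing rules, which the paper defines from capuchin locomotion models:

```python
# Generic population-based metaheuristic skeleton (schematic; the move rule
# is a placeholder, not CapSA's leaping/swinging/climbing updates).
import numpy as np

def optimize(f, dim, n_agents=30, iters=200, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (n_agents, dim))   # initial population
    best = min(pop, key=f).copy()
    for t in range(iters):
        step = 2 * np.exp(-(4 * t / iters) ** 2) # shrink steps: explore -> exploit
        # Placeholder move: random perturbation directed at the best agent.
        pop += step * rng.uniform(-1, 1, pop.shape) * (best - pop)
        np.clip(pop, lb, ub, out=pop)
        cand = min(pop, key=f)
        if f(cand) < f(best):
            best = cand.copy()
    return best

sphere = lambda x: float(np.sum(x ** 2))         # a standard benchmark function
print(optimize(sphere, dim=5))
```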
-
A multistage biometric verification system uses multiple biometrics and/or multiple biometric verifiers to generate a verification decision. The core of a multistage biometric verification system is the reject option, which allows a stage to withhold a genuine/impostor decision when it is not confident enough. This paper studies the effectiveness of symmetric rejection for multistage biometric verification systems. The symmetric rejection method determines the reject region by symmetrically rejecting equal proportions of genuine and impostor scores. The applicability of a multistage biometric verification system depends on how secure and user-convenient it is, which is measured by the performance–cost trade-off. This paper analyzes the performance–cost trade-off of the symmetric rejection method through extensive experiments performed on two biometric databases: (1) the publicly available NIST database and (2) a keystroke database. In addition, the symmetric rejection method is empirically compared with two existing rejection methods: (1) the sequential probability ratio test-based method, which uses score fusion, and (2) Marcialis et al.'s method, which does not. Results demonstrate the strong effect of the symmetric rejection method in creating a secure and user-convenient multistage biometric verification system.
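One plausible reading of the symmetric rule is to bracket a defer region between two score thresholds chosen so that roughly the same proportion r of genuine and impostor scores lands in it. A sketch under that assumption (the paper's exact construction may differ, and the data below is synthetic):

```python
# Sketch of a symmetric reject region for one verification stage: the stage
# defers when a match score falls between two thresholds chosen so that
# roughly the same proportion r of genuine and impostor scores is deferred.
import numpy as np

def symmetric_thresholds(genuine_scores, impostor_scores, r=0.10):
    t_upper = np.quantile(genuine_scores, r)       # ~r of genuine fall below
    t_lower = np.quantile(impostor_scores, 1 - r)  # ~r of impostor fall above
    return t_lower, t_upper                        # assumes t_lower < t_upper

def stage_decision(score, t_lower, t_upper):
    if score >= t_upper:
        return "accept"   # confidently genuine
    if score <= t_lower:
        return "reject"   # confidently impostor
    return "defer"        # pass the claim to the next stage

rng = np.random.default_rng(1)
genuine = rng.normal(0.8, 0.1, 1000)   # synthetic score distributions
impostor = rng.normal(0.3, 0.1, 1000)
print(symmetric_thresholds(genuine, impostor))
```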
-
Image creation and retention are growing at an exponential rate. Individuals produce more images today than ever before, and these images often contain family members. In this paper, we develop a framework to detect or identify family members in a face image dataset. The ability to identify family in a dataset of images could have a critical impact on finding lost and vulnerable children, identifying terror suspects, analyzing social media interactions, and other practical applications. We evaluated our framework by performing experiments on two facial image datasets, Y-Face and KinFaceW, comprising 37 and 920 images, respectively. We tested two feature extraction techniques, principal component analysis (PCA) and histograms of oriented gradients (HOG), and three machine learning algorithms: k-means, agglomerative hierarchical clustering, and k-nearest neighbors. We achieved promising results, with a maximum detection rate of 94.59% using k-means, 89.18% with agglomerative clustering, and 77.42% using k-nearest neighbors. © 2020 World Scientific Publishing Company.
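A minimal sketch of one such combination (HOG features clustered with k-means), using scikit-image and scikit-learn; the image data, sizes, and parameters are illustrative placeholders, not the paper's settings:

```python
# Illustrative HOG + k-means pipeline for grouping face images.
import numpy as np
from skimage.feature import hog
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
images = rng.random((37, 64, 64))   # stand-in for 37 grayscale face crops

# Extract one HOG descriptor per image.
features = np.array([
    hog(im, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for im in images
])

# Cluster descriptors; ideally one cluster collects the family members.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)
```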
-
Ozone is a toxic gas whose chemical makeup is distinct from that of molecular oxygen. Breathing ozone can have severe effects on human health, especially for people with asthma: it can cause long-lasting damage to the lungs, trigger heart attacks, and may lead to death. Forecasting ozone concentration levels and related pollutant attributes is critical for developing sophisticated environmental safety policies. In this paper, we present three artificial neural network (ANN) models to forecast the daily ozone (O3), coarse particulate matter (PM10), and fine particulate matter (PM2.5) concentrations in a highly polluted city in the Republic of China. The proposed models are (1) a recurrent multilayer perceptron (RMLP), (2) a recurrent fuzzy neural network (RFNN), and (3) a hybridization of the RFNN with the grey wolf optimizer (GWO), referred to as the RMLP-ANN, RFNN, and RFNN-GWO models, respectively. The performance of the proposed models is compared with that of conventional models previously reported in the literature, and the comparative results showed that the proposed models performed very well. The RFNN-GWO model produced superior results in modeling O3, PM10, and PM2.5 compared with the RMLP-ANN and RFNN models. © 2020, Springer Nature B.V.
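As a rough sketch of the recurrent-forecasting setup, here is a plain one-step-ahead Keras SimpleRNN on synthetic data; it stands in for the RMLP idea only, and the paper's fuzzy layers and GWO tuning are not reproduced:

```python
# Minimal one-step-ahead recurrent forecaster (schematic stand-in for the
# RMLP model; the fuzzy layers and GWO tuning are not shown).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 40, 1200)) + 0.1 * rng.standard_normal(1200)  # stand-in for daily O3

window = 7  # use the past week to predict the next day
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(16, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[-1:], verbose=0))
```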
-
Retweeting is an important way of propagating information on Twitter. In this paper, we investigate the sentiment correlation between regular tweets and retweets. We anticipate that our investigation sheds light on how the sentiment of regular tweets impacts retweets of different sentiments. We propose a method for measuring the sentiment of tweets, and we categorize Twitter users into groups by several criteria: follower count, betweenness centrality, a combination of follower count and betweenness centrality, and tweet volume. We then calculate the sentiment correlation for the different groups to examine the factors that influence retweeting a message with a certain sentiment. We find that users with higher betweenness centrality and higher tweet volume tend to exhibit a higher sentiment correlation. Users with a medium-level follower count show the highest sentiment correlation compared to those with low-level and high-level follower counts. Combining the two factors, we discover that, specifically at low-level betweenness centrality, users with a medium-level follower count have the highest sentiment correlation. Our final observation is that correlation coefficients differ across user types. Our study of sentiment correlation provides instructive information for modeling information propagation in human society. © 2020, Springer-Verlag GmbH Austria, part of Springer Nature.
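A small sketch of the core measurement: bucket users into follower-count bands, then compute a Pearson correlation between tweet and retweet sentiment per band. The data, band edges, and sentiment values are placeholders, not the study's:

```python
# Illustrative per-group sentiment correlation between tweets and retweets.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
followers = rng.lognormal(5, 2, 500)                              # per-user follower counts
tweet_sent = rng.uniform(-1, 1, 500)                              # sentiment of the original tweet
retweet_sent = 0.6 * tweet_sent + 0.4 * rng.uniform(-1, 1, 500)   # sentiment of its retweets

# Split users into low / medium / high follower bands at the terciles.
bands = np.digitize(followers, np.quantile(followers, [1 / 3, 2 / 3]))
for band, name in enumerate(["low", "medium", "high"]):
    mask = bands == band
    r, _ = pearsonr(tweet_sent[mask], retweet_sent[mask])
    print(f"{name}-follower users: r = {r:.2f}")
```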
-
Alzheimer's disease (AD) is a neurodegenerative disease that impairs thinking, memory, and behavior. AD is a major public health problem among the elderly in developed and developing countries. With the growth of AD around the world, there is a need to further expand our understanding of the roles that different clinical measurements can play in the diagnosis of AD. In this work, we propose a machine learning-based technique to distinguish control subjects with no cognitive impairment, AD subjects, and subjects with mild cognitive impairment (MCI), a condition often seen as a precursor of AD. We evaluated several machine learning (ML) techniques and found that gradient-boosted decision trees achieved the highest performance, with above 84% classification accuracy. We also determined the importance of the features (clinical biomarkers) contributing to the proposed multi-class classification system. Further investigation of these biomarkers will pave the way for better treatment plans for AD patients. © 2020 The authors and IOS Press.
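A schematic version of that setup with scikit-learn's gradient boosting on placeholder data, including the feature-importance readout the abstract mentions; the features and labels here are synthetic stand-ins, not the study's clinical biomarkers:

```python
# Schematic three-class classifier (control / MCI / AD) with gradient-boosted
# trees and a feature-importance readout.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((300, 8))      # 8 stand-in clinical biomarkers
y = rng.integers(0, 3, 300)   # 0 = control, 1 = MCI, 2 = AD

clf = GradientBoostingClassifier(random_state=0).fit(X, y)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

# Which biomarkers drive the decision?
for i in np.argsort(clf.feature_importances_)[::-1]:
    print(f"biomarker_{i}: {clf.feature_importances_[i]:.3f}")
```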
-
The aptitudes and abilities required for the position of programmer within the computer industry have yet to be fully studied, and their interrelationships are not yet known. Although the industry is relatively new, a substantial amount of research in the areas of personnel selection, evaluation, and job requirements has been undertaken. Yet these studies have confined themselves primarily to the use of interest scales and aptitude and achievement tests as overall predictors of on-the-job success, rather than to the study of the cognitive factors pertinent to the tasks of which programming is composed. In a study by Deutsch and Shea, Inc. (1963), the relationship between the programmer and the computer is seen as analogous to that of the mahout and his elephant. As with the mahout, the programmer uses his intelligence, skills, and abilities to control and guide a powerful and flexible, yet non-intelligent, tool in the performance of specific finite operations that contribute to the completion of more complex tasks. It is the programmer who, when presented with a problem from science, engineering, or business, must work out a solution. John and Miller (1957) state that all problems have two general parts: the specific components involved (i.e., data, etc.) and the relationships, which are the orderings of or changes to those components.
-
The paper presents methods of space allocation applicable to architectural design. These techniques have been developed over the past twenty years and are presented here in such a way that they may also be applied to other disciplines. Four categories are presented, identifying variations in the dimensioning of the elements (either unit dimension or variable dimension) and variations in the shape of the boundary (either a simple rectangle or a multi-faceted boundary).
-
The use of gradient operators for image enhancement has been widely reported in the literature, but they have not been used routinely in the medical arena, particularly for the most common radiographic plain-film procedure, the chest radiograph. Gradient operators such as the Sobel and Roberts operators not only enhance image edges but also tend to enhance noise. Overall, the Sobel operator was found to be superior to the Roberts operator in edge enhancement. A theoretical explanation for the superior performance of the Sobel operator was developed based on analyzing the x and y Sobel masks as linear filters. Applying pillbox, Gaussian, or median filtering prior to a gradient operator reduced noise; the pillbox and Gaussian filters were much more computationally efficient than the median filter, with approximately equal effectiveness in noise reduction. © 1988 IEEE
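The described pipeline (smooth first, then take gradients) looks roughly like this with SciPy; the smoothing sigma and the test image are illustrative choices:

```python
# Pre-smooth, then apply the Sobel operator and form the gradient magnitude.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
image = rng.random((128, 128))   # stand-in for a chest radiograph

smoothed = ndimage.gaussian_filter(image, sigma=1.5)  # suppress noise first
gx = ndimage.sobel(smoothed, axis=1)                  # x-direction Sobel mask
gy = ndimage.sobel(smoothed, axis=0)                  # y-direction Sobel mask
edges = np.hypot(gx, gy)                              # gradient magnitude
print(edges.max())
```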
-
Temporal analysis has been applied to a sequence of cloud top pressure (CTP) images and cloud optical thickness (TAU) images stored in the International Satellite Cloud Climatology Project (ISCCP) D1 database located at the NASA Goddard Institute for Space Studies (GISS). Each pixel in the D1 data set has a resolution of 2.5 degrees, or 280 kilometers. These images were collected in consecutive three-hour intervals for the entire month of April 1989. The primary objective of this project was to develop a sequence of storm tracks from the satellite images to follow the formation, progression, and dissipation of storm systems over time. Composite images were created by projecting ahead in time and substituting the first available valid pixel for missing data, and a variety of CTP and TAU cut-off values were used to identify regions of interest. Region correspondences were determined from one time frame to another, yielding the coordinates of storm centers. These tracks were compared to storm tracks computed from sea level pressure data obtained from the National Meteorological Center (NMC) for the same time period. The locations of sea level storm centers indicate whether storms have occurred in a region and can help determine the presence or absence of storms in a general geographic area.
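A compact sketch of the region-extraction step (threshold a CTP frame, label connected regions, take centroids as candidate storm centers) using SciPy; the cut-off value and data are placeholders, not the study's calibrated choices:

```python
# Extract candidate storm regions from one CTP frame: threshold, label
# connected components, and record region centroids.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
ctp = rng.uniform(100, 1000, (12, 144))   # stand-in CTP frame (millibars)

mask = ctp <= 440                         # low cloud-top pressure = high, deep cloud
labels, n = ndimage.label(mask)           # connected regions of interest
centers = ndimage.center_of_mass(mask, labels, range(1, n + 1))
print(f"{n} regions; first centers: {centers[:3]}")
```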
-
The primary objective of this project is to define a methodology to depict the motion of deep convective cloud systems as observed from satellite imagery. These clouds are defined as clusters of pixels with Cloud Top Pressure (CTP) <= 440 millibars and Cloud Optical Thickness (TAU) >= 23, which are high in the atmosphere and sufficiently thick to produce significant rainfall. Clouds are one of the major factors in understanding the earth's climate: evaluating cloud motion is important for understanding atmospheric dynamics, and visualizations are vital because they provide a good way to observe change. CTP and TAU values were collected for April 1989 from the International Satellite Cloud Climatology Project low-resolution database for the northern latitudes between 30 and 60 degrees. Each of the 240 CTP and 240 TAU images consists of 12 rows and 144 columns, with each pixel representing a 280 km square on the globe, collected in three-hour intervals. Individual images were color-coded according to land, sea, and clouds before being put into motion. Six animations were produced, starting with the original images, progressing to include daily composite images, and culminating in a collage. Animations of the original images have the advantage of relatively short intervals between still frames but contain many undefined pixels, which are eliminated in the composites. The results of this project can serve as an example of how to improve the visualization of time-varying image sequences.
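A minimal sketch of animating such a frame sequence with Matplotlib; the frames are random placeholders, and the land/sea/cloud color-coding and compositing steps are omitted:

```python
# Animate a sequence of cloud frames as an image stack.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(0)
frames = rng.uniform(100, 1000, (240, 12, 144))   # stand-in 3-hourly CTP frames

fig, ax = plt.subplots()
im = ax.imshow(frames[0], aspect="auto")

def update(t):
    im.set_data(frames[t])   # swap in the next 3-hour frame
    ax.set_title(f"frame {t}")
    return (im,)

anim = FuncAnimation(fig, update, frames=len(frames), interval=100)
anim.save("clouds.gif", writer="pillow")
```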
-
The objective of this study is to compare statistical and unsupervised neural network techniques for determining correspondences between storm system regions extracted from sequences of satellite images. The analysis was applied to the International Satellite Cloud Climatology Project (ISCCP) low-resolution D1 database for selected storm systems during the period April 5-9, 1989. Cloud top pressure was used to delineate regions of interest, and cloud optical thickness combined with spatial location was used to track regions throughout a given time sequence. The ability of the k-nearest neighbor classifier and of self-organizing maps to determine correspondences between storm regions was assessed. The two techniques generally yielded similar associations between regions of interest throughout the time sequence. Differences in final tracking results between the two techniques occurred primarily as a result of differences in the collections of points from a region in a time step t₂ that corresponded to a region in an earlier time step t₁. The tracking results were also compared to the results obtained at the NASA Goddard Institute for Space Studies using sea level pressure data from the National Meteorological Center (NMC). For the storm systems investigated in this study, the storm tracks exhibited the same general tracking behavior, with expected variations between cloud system storm centers and low sea level pressure centers.
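A toy version of the correspondence step: match each region's feature vector (spatial location plus optical thickness) at time t₂ to its nearest region at t₁. The feature choice and data are illustrative, and the self-organizing-map variant is not shown:

```python
# Toy nearest-neighbor correspondence between storm regions in consecutive
# frames, using (row, col, mean TAU) feature vectors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
regions_t1 = rng.random((5, 3))                                # 5 regions at time t1
regions_t2 = regions_t1 + 0.05 * rng.standard_normal((5, 3))   # drifted positions at t2

nn = NearestNeighbors(n_neighbors=1).fit(regions_t1)
dist, idx = nn.kneighbors(regions_t2)
for j, (i, d) in enumerate(zip(idx.ravel(), dist.ravel())):
    print(f"region {j} at t2 <- region {i} at t1 (distance {d:.3f})")
```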
-
A view of interactions in the undergraduate classroom is presented from several perspectives. Topics discussed include class perceptions of teacher as facilitator/authority/leader, grades versus performance appraisals, mixed-gender interactions, and subtle forms of cultural variations.
-
Accurate identification and tracking of synoptic-scale storm systems in the northern midlatitudes is important for understanding the structure and movement of the midlatitude cloud field, which plays a major role in climate change. In this paper, a hybrid neural network/genetic algorithm (NN/GA) approach is presented that analyzes the behavior of storm systems from one time frame to the next. The goal of the hybrid algorithm is to improve classifier output by reducing the number of infeasible solutions using constraint optimization techniques. The input to the hybrid algorithm is the output of a traditional backpropagation neural network. The hybrid NN/GA analyzes the backpropagation network's output for logical consistency and changes the classification results based on the strength of the neural network classifications and the satisfaction of logical constraints. The results are compared with classification results obtained using linear discriminant analysis, the k-nearest neighbor rule, and backpropagation neural network techniques.
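A toy sketch of the repair idea: a tiny genetic algorithm searches over relabelings of the network's outputs, scoring each candidate by its summed class confidence minus a penalty for violating a logical constraint. The constraint, scoring, and data here are invented for illustration and are not the paper's:

```python
# Toy GA that repairs classifier outputs: candidates are label vectors,
# fitness = summed network confidence minus a penalty for violating an
# invented rule ("a class-0 region may not be followed by class 2").
import numpy as np

rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=8)   # stand-in NN outputs: 8 regions, 3 classes

def fitness(labels):
    conf = probs[np.arange(len(labels)), labels].sum()
    violations = np.sum((labels[:-1] == 0) & (labels[1:] == 2))  # illustrative constraint
    return conf - 1.0 * violations

pop = rng.integers(0, 3, (20, 8))           # initial population of labelings
for _ in range(100):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the fittest half
    children = parents[rng.integers(0, 10, 10)].copy()
    mut = rng.random(children.shape) < 0.1           # point mutations
    children[mut] = rng.integers(0, 3, mut.sum())
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(best, fitness(best))
```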