-
The primary goal of this research was to investigate the ability of quantitative variables to confirm qualitative improvements of the deconvolution algorithm as a preprocessing step in evaluating micro CT bone density images. The analysis of these types of images is important because they are necessary to evaluate various countermeasures used to reduce or potentially reverse bone loss experienced by some astronauts when exposed to extended weightlessness during space travel. Nine low-resolution (17.5 micron) CT bone density image sequences, ranging from 85 to 88 images per sequence, were processed under three preprocessing treatment groups: no preprocessing, preprocessing with a deconvolution algorithm, and preprocessing with a Gaussian filter. The quantitative parameters investigated consisted of the Bone Volume to Total Volume Ratio, the Structure Model Index, Fractal Dimension, Bone Area Ratio, Bone Thickness Ratio, Euler's Number, and the Measure of Enhancement. Trends found in these quantitative variables appear to corroborate the visual improvements observed in the past and suggest which quantitative parameters may be capable of distinguishing between groups that experience bone loss and those that do not.
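As an illustration of the simplest of the parameters listed above, the following is a minimal sketch of computing a Bone Volume to Total Volume (BV/TV) ratio from a thresholded micro CT image stack using NumPy; the threshold value, array shapes, and synthetic data are assumptions for illustration and are not taken from the study.

```python
import numpy as np

def bone_volume_fraction(ct_stack, bone_threshold=120):
    """Compute BV/TV for a micro CT image stack.

    ct_stack: 3-D array (slices, rows, cols) of grayscale intensities.
    bone_threshold: hypothetical intensity above which a voxel counts as bone.
    """
    bone_voxels = np.count_nonzero(ct_stack > bone_threshold)
    total_voxels = ct_stack.size
    return bone_voxels / total_voxels

# Example on a synthetic 88-slice stack of random intensities.
stack = np.random.randint(0, 256, size=(88, 128, 128))
print(f"BV/TV = {bone_volume_fraction(stack):.3f}")
```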
-
This paper describes a collaborative project conducted by the Computer Science Department at Southern Connecticut State University and NASA's Goddard Institute for Space Studies (GISS). Animations of output from a climate simulation model used at GISS to predict rainfall and circulation have been produced for West Africa from June to September 2002. These early results have assisted scientists at GISS in evaluating the accuracy of the RM3 climate model when compared to similar results obtained from satellite imagery. The results presented below will be refined to better meet the needs of GISS scientists and will be expanded to cover other geographic regions for a variety of time frames.
-
We defined a set of quantifiable features for authorship categorization. We performed our experiments on public domain literature: all books analyzed were obtained in plain text format through Project Gutenberg's online repository of classic books. We tested three machine learning algorithms with our features: Artificial Neural Network, Naïve Bayes Classifier, and Support Vector Machine. We found that certain features, such as punctuation and various suffixes, result in higher accuracy. In addition, the Support Vector Machine classifier repeatedly produces higher accuracies than the other classifiers and seems to be a far superior method of classification for authorship categorization. © 2016 IEEE.
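A minimal sketch of the kind of pipeline the abstract describes, assuming scikit-learn and a toy in-memory corpus; the specific features (punctuation counts and a few suffix frequencies), the example passages, and the author labels are illustrative assumptions rather than the authors' exact setup.

```python
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

PUNCTUATION = [",", ";", ":", "!", "?"]   # illustrative punctuation features
SUFFIXES = ["ing", "ly", "tion", "ness"]  # illustrative suffix features

def extract_features(text):
    """Punctuation and suffix frequencies per 1000 characters (illustrative features)."""
    words = text.lower().split()
    scale = 1000.0 / max(len(text), 1)
    punct = [text.count(p) * scale for p in PUNCTUATION]
    suff = [sum(w.endswith(s) for w in words) * scale for s in SUFFIXES]
    return punct + suff

# Toy corpus: (passage, author) pairs stand in for full Project Gutenberg texts.
corpus = [
    ("It was the best of times; it was the worst of times.", "A"),
    ("Call me Ishmael. Some years ago, never mind how long precisely...", "B"),
    ("A truth universally acknowledged, surely, is rarely questioned!", "A"),
    ("There now is your insular city of the Manhattoes, belted round by wharves.", "B"),
]
X = [extract_features(text) for text, _ in corpus]
y = [label for _, label in corpus]

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)
print(clf.predict([extract_features("Some years ago, never mind how long, I went to sea.")]))
```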
-
We give the theoretical foundation for finding a reject region which gives the minimum equal error rate in serial fusion based biometric verification. Given a user-specified tolerance of x percent genuine score reject rate, we prove that there exists a unique reject region inside which the false alarm rate and impostor pass rate curves overlap, and this reject region gives the minimum equal error rate. Our theory leads to new algorithms for finding reject regions, which have two key advantages over the state-of-the-art: (1) the algorithms allow the system administrator to control the proportion of genuine scores that a reject region can erroneously reject and (2) the algorithms determine reject regions directly from the scores, without the need to estimate score distributions. Our proofs do not rely on data belonging to any particular distribution, which makes them applicable to a wide range of biometric modalities including face, finger, iris, speech, gait, and keystrokes. © 2016 IEEE.
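For context, the equal error rate referenced above is conventionally located by sweeping a decision threshold over raw genuine and impostor scores; the sketch below shows that standard computation on synthetic scores and is not the paper's reject-region algorithm.

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate the EER by sweeping a decision threshold over the scores.

    genuine, impostor: 1-D arrays of match scores (higher = more likely genuine).
    """
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])  # false accept rate
    frr = np.array([(genuine < t).mean() for t in thresholds])    # false reject rate
    i = np.argmin(np.abs(far - frr))                              # point where curves meet
    return (far[i] + frr[i]) / 2, thresholds[i]

# Synthetic scores: genuine scores tend to be higher than impostor scores.
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 500)
impostor = rng.normal(0.4, 0.1, 500)
eer, threshold = equal_error_rate(genuine, impostor)
print(f"EER = {eer:.3f} at threshold {threshold:.3f}")
```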
-
Smartphones, while providing users ease of access to sensitive information on the go, also present severe security risks if an attacker is able to gain access to them. To strengthen user authentication and identification on a smartphone, we develop a biometric authentication and identification system which uses the capacitive touchscreen featured in all current smartphones. Our methodology focuses on using the touchscreen as a sensor to capture the image of a user's ear, thumb, or four fingers. We extract the capacitive raw data from the touched body part to obtain a capacitive image, and then use it to capture geometric features (e.g., length and width of a finger) and principal components. After that, we experiment with Support Vector Machine (SVM) and Random Forest (RF) classifiers to verify and also identify each user. We achieved a maximum authentication accuracy of 98.84% using four fingers with SVM, and a maximum identification accuracy of 97.61% using four fingers with RF. © 2016 IEEE.
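A minimal sketch of the geometric-feature step described above, assuming the capacitive raw data has already been captured as a 2-D frame; the touch threshold and the bounding-box length, width, and area features are illustrative simplifications, not the authors' exact measurements.

```python
import numpy as np

def touch_geometry(capacitive_image, touch_threshold=30):
    """Extract simple length/width features from one capacitive frame.

    capacitive_image: 2-D array of raw capacitance values.
    touch_threshold: hypothetical value separating touched from untouched cells.
    """
    touched = capacitive_image > touch_threshold
    rows, cols = np.nonzero(touched)
    if rows.size == 0:
        return {"length": 0, "width": 0, "area": 0}
    return {
        "length": int(rows.max() - rows.min() + 1),  # extent along one axis
        "width": int(cols.max() - cols.min() + 1),   # extent along the other axis
        "area": int(touched.sum()),                  # number of touched cells
    }

# Synthetic 15x27 frame with a rectangular "thumb" contact.
frame = np.zeros((15, 27))
frame[4:11, 8:14] = 80
print(touch_geometry(frame))  # {'length': 7, 'width': 6, 'area': 42}
```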
-
Search task difficulty has been attracting much research attention in recent years, mostly regarding its relationship with searchers' behaviors and the prediction of task difficulty from search behaviors. However, it remains unknown what makes searchers feel the difficulty. A study involving 48 undergraduate students was conducted to explore this question. Each participant was given 4 search tasks that were carefully designed following a task classification scheme. Questionnaires were used to elicit participants' ratings of task difficulty and the reasons for those ratings. Based on the collected difficulty reasons, a coding scheme was developed, which covered various aspects of task, user, and user-task interaction. Difficulty reasons were then categorized following this scheme. Results showed that searchers reported some common reasons leading to task difficulty in different tasks, but most of the difficulty reasons varied across tasks. In addition, task difficulty had some common reasons between searchers with low and high levels of topic knowledge, although there were also differences in the top task difficulty reasons between high- and low-knowledge users. These findings further our understanding of search task difficulty, the relationship between task difficulty and task type, and that between task difficulty and knowledge level. The findings can also be helpful in designing tasks for information search experiments, and have implications for search system design both in general and for personalization based on task type and searchers' knowledge. (C) 2014 Elsevier Ltd. All rights reserved.
-
Data dissemination protocols govern interaction and exchange of data among nodes in a distributed system. An understanding of data transfer protocols provides insight into efficient middleware management. Due to their simplicity, scalability, and fault tolerance, gossip-based protocols are researched widely as an effective communication strategy. The Shuffle protocol presented in [1] is an example of a decentralized, gossip-based data transfer protocol used to spread information in a wireless network via probabilistic exchange of data. This paper presents an asynchronous variant of the Shuffle protocol and a system model that captures variability in data transmission times. This transmission time variability is inherent in dynamic networks, where such algorithms are typically deployed. A simulation-based analysis of the protocol's performance behavior is presented. Results show the effects of transmission variability on data replication and its coverage. Also examined is the relationship between available storage and the performance of the protocol, expressed using measures such as propagation time and work. © 2015 IEEE.
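The following is a deliberately simplified, synchronous-round gossip simulation in the spirit of the probabilistic exchange described above; the node count, round count, and fully connected topology are assumptions for illustration, and the asynchronous timing model of the paper is not reproduced.

```python
import random

def simulate_shuffle(num_nodes=50, rounds=15, seed=1):
    """Toy gossip spread: each round, every informed node pushes its item to one random peer."""
    random.seed(seed)
    informed = {0}                              # node 0 starts with the data item
    history = []
    for r in range(rounds):
        for node in list(informed):
            peer = random.randrange(num_nodes)  # pick a random peer (fully connected for simplicity)
            informed.add(peer)                  # probabilistic exchange replicates the item
        coverage = len(informed) / num_nodes
        history.append(coverage)
        print(f"round {r + 1:2d}: coverage = {coverage:.2f}")
    return history

simulate_shuffle()
```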
-
Manufacturers of CPUs publish a document that contains information about the processor, including a list of registers, the function of each register, the size of the data bus, the size of the address bus, and the list of instructions that can be executed by the CPU. Each CPU has a known instruction set that a programmer can use to write assembly language programs. Instruction sets are specific to each type of processor; for example, Pentium processors use a different instruction set than ARM processors. Writing a program using the instructions of a processor is called assembly language programming, and the function of an assembler is to convert assembly language into machine code (binary) that the CPU can understand.
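As a toy illustration of what an assembler does, the sketch below maps made-up mnemonics to made-up opcodes; the encoding is entirely hypothetical and does not correspond to any real instruction set.

```python
# Hypothetical 2-byte encoding: a 1-byte opcode followed by a 1-byte operand.
OPCODES = {"LOAD": 0x01, "STORE": 0x02, "ADD": 0x03, "HALT": 0xFF}

def assemble(source):
    """Translate toy assembly text into a list of machine-code bytes."""
    machine_code = []
    for line in source.strip().splitlines():
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1], 0) if len(parts) > 1 else 0
        machine_code += [OPCODES[mnemonic], operand]
    return machine_code

program = """
LOAD 0x10
ADD 0x11
STORE 0x12
HALT
"""
print([hex(b) for b in assemble(program)])
# ['0x1', '0x10', '0x3', '0x11', '0x2', '0x12', '0xff', '0x0']
```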
-
ARM offers a variety of processor cores based on their target applications. The Cortex-A series is a family of high-performance processors for open operating systems; the Cortex-A50 is a 64-bit processor. Applications of the Cortex-A series include smartphones, netbooks, digital TVs, and e-book readers.
-
The data transfer instructions are used to move data between memory and registers: the Load (LDR) instruction transfers data from memory to a register, and the Store (STR) instruction transfers data from a register to memory, as modeled in the sketch below.
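A minimal Python model of these two instructions, assuming a small register file and a flat byte-addressable memory; the register names, addresses, and values are hypothetical, and addressing modes are ignored.

```python
# Simplified machine state: 8 general-purpose registers and 256 bytes of memory.
registers = {f"R{i}": 0 for i in range(8)}
memory = [0] * 256

def ldr(rd, address):
    """LDR: load the value stored at a memory address into register rd."""
    registers[rd] = memory[address]

def str_(rd, address):
    """STR: store the value held in register rd into memory (named str_ to avoid shadowing Python's str)."""
    memory[address] = registers[rd]

memory[0x20] = 42
ldr("R1", 0x20)       # R1 <- memory[0x20]
registers["R1"] += 1  # some computation in the register
str_("R1", 0x24)      # memory[0x24] <- R1
print(registers["R1"], memory[0x24])  # 43 43
```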
-
The Advanced RISC Machine (ARM) was developed by the Acorn company. ARM is a leading supplier of microprocessor designs worldwide; ARM develops the core CPU, and thousands of suppliers add more functional units to the core. ARM uses two instruction types called Thumb and Thumb-2: Thumb instructions are 16 bits and Thumb-2 instructions are 32 bits, and currently most ARM processors use 32-bit instructions.
-
The basic components of an Integrated Circuit (IC) are logic gates, which are made of transistors. In a digital system there are three basic logic operations, called AND, OR, and NOT.
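These three operations can be written directly as Boolean functions; the short sketch below prints their truth tables.

```python
def AND(a, b):
    return a and b

def OR(a, b):
    return a or b

def NOT(a):
    return not a

# Truth tables for the three basic logic operations.
print(" a b | AND OR")
for a in (False, True):
    for b in (False, True):
        print(f" {a:d} {b:d} |  {AND(a, b):d}   {OR(a, b):d}")

print(" a | NOT")
for a in (False, True):
    print(f" {a:d} |  {NOT(a):d}")
```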