Your search
Results: 295 resources
-
Spectral analysis of Doppler ultrasound has long been known to yield valuable information for assessing the state of circulation in the peripheral blood vessels. In the past, the raw Doppler data have been directly input into a dedicated spectrum analyzer or, more recently, transformed on a microcomputer with the fast Fourier technique. Here the fast Hartley technique is used to transform these data. The Hartley transform has the advantage of being a purely real-valued transform; therefore, for real Doppler data, it is not only conceptually more straightforward but also requires less computer memory, is simpler to calculate, and is better suited to large-scale integration implementation. © 1988 IEEE
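The purely real transform this abstract refers to can be sketched directly from its definition. This is a minimal O(N²) form for illustration only (the paper uses the fast algorithm, and the function name here is ours, not the paper's):

```python
import math

def dht(x):
    """Discrete Hartley transform, direct O(N^2) form.

    H[k] = sum_n x[n] * cas(2*pi*k*n/N), where cas(t) = cos(t) + sin(t).
    All arithmetic is real-valued, unlike the complex-valued DFT.
    """
    n_pts = len(x)
    return [
        sum(x[n] * (math.cos(2 * math.pi * k * n / n_pts)
                    + math.sin(2 * math.pi * k * n / n_pts))
            for n in range(n_pts))
        for k in range(n_pts)
    ]
```

Because every intermediate value is real, an N-point DHT needs half the storage of an N-point complex FFT buffer, which is the memory advantage the abstract mentions.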
-
Recent developments in image digitization have made possible a more quantitative analysis of ultrasonic imagery of the liver, which could lead to a more sensitive method for detecting changes in liver texture as an aid in the diagnosis of liver disease. The approach described is the statistical analysis of one-dimensional intensity (gray-level) histograms obtained from B-mode ultrasonic images. First-order statistical parameters are used to characterize the location, variability, skewness, and kurtosis of the histograms. One typical normal study and one typical abnormal study are presented to show the type of results that have been obtained.
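The four first-order parameters named above can be computed from the gray-level sample underlying a histogram. A minimal sketch, assuming population (biased) moment conventions and an illustrative function name:

```python
import math

def histogram_stats(gray_levels):
    """First-order statistics of a gray-level sample: mean (location),
    standard deviation (variability), skewness, and kurtosis."""
    n = len(gray_levels)
    mean = sum(gray_levels) / n
    var = sum((g - mean) ** 2 for g in gray_levels) / n
    sd = math.sqrt(var)
    # Standardized third and fourth central moments.
    skew = sum((g - mean) ** 3 for g in gray_levels) / (n * sd ** 3)
    kurt = sum((g - mean) ** 4 for g in gray_levels) / (n * var ** 2)
    return {"mean": mean, "sd": sd, "skewness": skew, "kurtosis": kurt}
```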
-
A new type of RC op-amp oscillator has been designed. For amplitude stabilization, diodes are added in the feedback path of the linear circuit. A model has been developed for the nonlinear element, which affects the frequency of oscillation. The model can be used to design the oscillator for different frequencies and to calculate frequency and amplitude sensitivities with respect to the parameters of the system.
-
A personal computer applications course has been developed. This course is a follow-up to an introductory programming course for non-computer science majors. The primary objective of the course is to introduce the major personal computer applications areas: operating system use, word processing, spreadsheet programming, data base management, and communications. For each area, there will be a discussion of its use and related problems. Students will use a representative application, and a comparison will be made with other systems. The course will be taught using Apple IIe's or Commodore 64 computers. A course outline has been created and approved. The course will be offered for the first time in the Spring of 1984. Budget considerations, the practical difficulties involved with students using copyrighted software, and a desire to have students leave with software they can take with them, make it attractive to use public domain software when possible. Current research is directed towards finding and documenting public domain software for use in this course. Principal sources being investigated are the program libraries of personal computer users groups and educational cooperatives.
-
Pattern recognition techniques for cloud type and cloud amount classification were applied to digital infrared SMS-1 data. The cloud classification results were used in a numerical radiation model to determine solar radiation during Phase III of the GARP Atlantic Tropical Experiment. In order to assess the effects on radiation computations of cloud information derived from both satellite and ship data, cloud analyses based on both data sources were prepared for input into the numerical radiation model. -from Authors
-
The backpropagation method is modified by replacing the sigmoid function with a sinusoidal function. The learning law is also modified. The modified procedure shows great improvement over the original BP in terms of the number of neurons and the learning time. © 1992 IEEE.
-
New systolic architectures are proposed for the computation of the Fourier transform based on the generation of the coefficients of the transform during the computation. These architectures require fewer input/output pins on the chip. The new architectures are also extremely modular and cascadable, and thus amenable to efficient VLSI implementation. The VLSI complexity of the architectures is compared with that of existing parallel architectures. © 1992 IEEE.
-
The increasing prevalence of multiprocessor and distributed systems in modern society is making it imperative to introduce the underlying principles of parallel/distributed computing to students at the undergraduate level. In order to meet the needs of our students for training in this critical area, the Computer Science Department at Southern Connecticut State University (SCSU) is currently in the process of implementing a curricular and laboratory development project that integrates key concepts and practical experiences in parallel computing throughout the undergraduate curriculum. The goal of this project is to build a strong foundation in parallel computing which would optionally culminate in advanced, senior-level specialized courses in parallel computing and/or senior research projects. This paper describes the laboratory facility we developed to support instruction in parallel and distributed computing and the parallel computing modules which were incorporated into three of our core undergraduate courses: data structures, operating systems, and programming languages. The laboratory facility enables us to provide our students with "hands-on" experiences in shared memory, distributed memory, and network parallelism. The modules and laboratory exercises give students the opportunity to experiment with a wide array of software and hardware environments and to gain a systematic exposure to the principles and techniques of parallel programming.
-
For the TREC 2007 conference, the CRM114 team considered three non-Bayesian methods of spam filtration in the CRM114 framework - an SVM based on the "hyperspace" feature==document paradigm, a bit-entropy matcher, and substring compression based on LZ77. As a calibration yardstick, we used the well-tested and widely used CRM114 OSB Markov random field system (basically unchanged since 2003). The results show that the SVM is about a factor of two to three more accurate at spam filtering than the OSB system, that substring compression is somewhat more accurate than OSB, and that bit entropy is somewhat less accurate, for the TREC 2007 test sets.
-
There have been a large number of projects based on the Distributed Object Oriented (DOO) approach for solving complex problems in various scientific fields. The mismatch problem is one of the most important problems facing DOO systems, arising when the initial design of the DOO application does not give the best class distribution. In such a case, the DOO software may need to be restructured. In this paper, we propose a methodology for efficiently restructuring the DOO software classes to be mapped on a distributed system consisting of a set of nodes. The proposed methodology consists of two phases. The first phase introduces a recursive graph clustering technique to partition the OO system into subsystems with low coupling. The second phase is concerned with mapping the generated partitions to the set of available machines in the target distributed architecture. A simulation evaluation was carried out for a set of randomly generated DOO software designs. The results were then compared with those of the K-Partitioning algorithm in terms of the overall inter-class communication cost. © 2008 IEEE.
-
In this paper we describe an information system that we have designed for students and researchers to conduct atmospheric studies using data that they have collected from multiple atmospheric instruments including two laser radar (lidar) systems. The lidar systems available for research include a monostatic Micro Pulse Lidar System and a bistatic imaging CLidar system. Complementary instruments for data analysis and ground truth specification include a nephelometer, sunphotometers and a weather station. Information structures within the system allow users to 1) label, describe and archive raw and derived datasets from multiple atmospheric instruments with associated metadata using NetCDF format, 2) link together coincident and co-located datasets from different instruments and 3) identify owner and verify user access rights of raw and derived datasets. Data analysis software tools have been developed in MATLAB to characterize and remove instrument artifacts based on experimental lidar studies, to analyze clear sky data to determine variability in atmospheric aerosol content over time and altitude, and to investigate cloud and aerosol patterns.
-
In real-time software systems, meeting deadlines is crucial. Software engineers face many challenges in modeling object-oriented software systems to handle complex real-time constraints. Accurate estimation of execution time is a key criterion for a precise scheduling decision. This paper presents an object-oriented performance model that analyzes the behavior of the real-time objects' tasks whose executions are controlled by a scheduler. Each task is subject to a time/utility function (TUF) that determines the accrued utility of the task according to its completion time. The scheduling scheme uses both the estimated time generated by the object-oriented performance model and the TUF of each task in the object-oriented system in order to maximize the total accrued utility. In addition, we implemented a software tool to conduct an experimental study showing the effectiveness of our approach. © 2011 IEEE.
-
This book provides a hands-on approach to learning ARM assembly language with the use of a TI microcontroller. The book starts with an introduction to computer architecture and then discusses number systems and digital logic. The text covers ARM Assembly Language, ARM Cortex Architecture and its components, and Hardware Experiments using the TI LM3S1968. Written for those interested in learning embedded programming using an ARM Microcontroller. © Springer International Publishing Switzerland 2015.
-
Exfiltrating sensitive information from smartphones has become one of the most significant security threats. We have built a system to identify HTTP-based information exfiltration of malicious Android applications. In this paper, we discuss the method to track the propagation of sensitive information in Android applications using static taint analysis. We have studied the leaked information, the destinations to which information is exfiltrated, and their correlations with types of sensitive information. The analysis results based on 578 malicious Android applications have revealed that a significant portion of these applications are interested in identity-related sensitive information. The vast majority of malicious applications leak multiple types of sensitive information. We have also identified that servers associated with three country codes, CN, US, and SG, are most active in collecting sensitive information. The analysis results have also demonstrated that a wide range of non-default ports are used by suspicious URLs. © 2018 IEEE.
-
We demonstrate that a nonzero strangeness contribution to the spacelike electromagnetic form factor of the nucleon is evidence for a strange-antistrange asymmetry in the nucleon's light-front wave function, thus implying different nonperturbative contributions to the strange and antistrange quark distribution functions. A recent lattice QCD calculation of the nucleon strange quark form factor predicts that the strange quark distribution is more centralized in coordinate space than the antistrange quark distribution, and thus the strange quark distribution is more spread out in light-front momentum space. We show that the lattice prediction implies that the difference between the strange and antistrange parton distribution functions, s(x) − s̄(x), is negative at small x and positive at large x. We also evaluate the strange quark form factor and s(x) − s̄(x) using a baryon-meson fluctuation model and a novel nonperturbative model based on light-front holographic QCD. This procedure leads to a Veneziano-like expression for the form factor, which depends exclusively on the twist of the hadron and the properties of the Regge trajectory of the vector meson which couples to the quark current in the hadron. The holographic structure of the model allows us to introduce unambiguously quark masses in the form factors and quark distributions while preserving the hard scattering counting rule at large Q² and the inclusive counting rule at large x. Quark masses modify the Regge intercept which governs the small-x behavior of quark distributions, therefore modifying their small-x singular behavior. Both nonperturbative approaches provide descriptions of the strange-antistrange asymmetry and intrinsic strangeness in the nucleon consistent with the lattice QCD result. © 2018 authors. Published by the American Physical Society.
-
The internet has changed the way that many people access written works. Books and articles, of various lengths, in several formats can be bought and accessed online, both legally and illegally. Automating the process of determining the authorship of posted texts would help combat online piracy of copyrighted text and plagiarism. In addition, authorship identification could help detect fraudulent email messages from dangerous sources and combat cyberattacks by identifying authentic sources. We experiment with several machine learning algorithms on a limited set of public domain literature to identify the most efficient method of authorship identification using the least amount of samples. Different-sized data sets are created by 5 predefined rounds of random sampling of 1500-word blocks from a total of 28 textbooks from a corpus of 7 authors. Traditional methods of authorship identification, such as Naive Bayes, Artificial Neural Network, and Support Vector Machine, are implemented in addition to a modern Deep Learning Neural Network for classification. Thirteen stylometric features are extracted, ranging from character-based and word-based to syntactic features. Our model consistently showed that Support Vector Machine outperforms the other classification methods. © 2020
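As a rough illustration of the character-, word-, and sentence-level stylometry this abstract describes, a hedged sketch; these four features and their names are ours for illustration, not the paper's thirteen:

```python
import re

def stylometric_features(text):
    """A small illustrative subset of character- and word-based
    stylometric features computed from a block of text."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_words = len(words)
    return {
        # Word-based: average word length and lexical diversity.
        "avg_word_len": sum(len(w) for w in words) / max(n_words, 1),
        "type_token_ratio": len({w.lower() for w in words}) / max(n_words, 1),
        # Syntactic proxies: sentence length and punctuation density.
        "avg_sentence_len": n_words / max(len(sentences), 1),
        "punct_density": sum(c in ",;:!?." for c in text) / max(len(text), 1),
    }
```

Vectors like this one, computed per 1500-word block, are what a Naive Bayes, SVM, or neural classifier would be trained on.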
-
Biologists and bioinformaticians heavily rely on data portals and repositories accessible through web applications. While they mostly agree that the data is valuable, they find the interfaces hard to use and non-intuitive. In this paper we present a user-centered design of a database for the classification and annotation of major and minor introns in various species. Our design is based on surveying and interviewing minor intron researchers and comparing the features of existing databases. In addition to its ease of use, the proposed database, the Major and Minor Intron Annotation Database (MMIAD), offers high flexibility in querying and downloading subsets of information that interest the user, in multiple commonly used file formats. © Proceedings of the 14th IADIS International Conference Interfaces and Human Computer Interaction 2020, IHCI 2020 and Proceedings of the 13th IADIS International Conference Game and Entertainment Technologies 2020, GET 2020 - Part of the 14th Multi Conference on Computer Science and Information Systems, MCCSIS 2020. All rights reserved.
-
Adult content on the Internet may be accessed by children with only a few keystrokes. While separate child-safe accounts may be established, a better approach could be incorporating automatic age estimation capability into the browser. We envision a safer browsing experience by implementing child-safe browsers combined with Internet content rating similar to the film industry's. Before such a browser could be created, it was necessary to test the age estimation module to see whether acceptable error rates are achievable. We created an Android application for collecting biometric touch data, specifically tapping data. We arranged sessions with an elementary school, a middle school, a high school, and a university and collected samples from 262 user sessions (ages 5 to 61). From the tapping data, feature vectors were constructed, which were used to train and test 14 regressors and classifiers. Results for regression show best mean absolute errors of 3.451 and 3.027 years, respectively, for phones and tablets. Results for classification show best accuracies of 73.63% and 82.28%, respectively, for phones and tablets. These results demonstrate that age estimation, and hence a child-safe browser, is feasible and a worthwhile objective. © 2020 IEEE.
-
Traditional keyboards remain the input device of choice for typing-heavy environments. When attached to sensitive data, security is a major concern. To continuously authenticate users in these environments, use of keystroke dynamics can be a preferred choice. An integral part of user enrollment in a keystroke based continuous authentication system is the writing instruction (prompt) given to the users, to use as a basis for their improvised writing. There are many prompts possible, and they directly impact the performance of authentication systems. Hence, prompts should be designed carefully, and with purpose. In this paper, we bridge the gap between cognitive psychology and computer science and attempt to influence the mental state of the users to acquire a better authentication performance. We compare two kinds of writing prompts, creative and factual, for generating reference samples. In addition, we perform two robustness tests: robustness to dissimilar writing style (e.g., creative reference and factual test) and robustness to surface (e.g., hard surface reference and soft surface test). We collect data from thirty participants in four weekly sessions. We experiment with three features: key interval, key press, and key hold latencies. We use Relative (R) measure to generate the match score between the reference and test samples. Results show that creative writing consistently performs better than the factual one. Both writing prompts perform well with dissimilar style in testing, i.e., continuous authentication is found robust to writing style. Also, we find that the surface (hard or soft) used in testing need not match that used for the reference, thus continuous authentication is also surface robust. © 2020 IEEE.
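The three timing features this abstract names (key interval, key press, and key hold latencies) can be derived from per-keystroke press/release timestamps. A minimal sketch under our own naming conventions, not necessarily the paper's exact definitions:

```python
def keystroke_latencies(events):
    """Timing features from (press_time, release_time) tuples,
    one per keystroke, in chronological order.

    hold     : release - press of the same key (key hold latency)
    interval : next press - previous release (key interval latency;
               may be negative when strokes overlap)
    press    : press-to-press latency between consecutive keys
    """
    holds = [r - p for p, r in events]
    intervals = [events[i + 1][0] - events[i][1]
                 for i in range(len(events) - 1)]
    presses = [events[i + 1][0] - events[i][0]
               for i in range(len(events) - 1)]
    return {"hold": holds, "interval": intervals, "press": presses}
```

A continuous authentication system would compare the distributions of these latencies in a test sample against those in the user's reference sample, e.g. with the relative (R) measure the paper uses.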
Explore
Department
- Computer Science
- Chemistry (1)
- History (1)
- Mathematics (1)
- Physics (6)
- Psychology (2)
- Public Health (1)
Resource type
- Book (12)
- Book Section (11)
- Conference Paper (126)
- Journal Article (133)
- Report (13)
Publication year
- Between 1900 and 1999 (53)
- Between 2000 and 2026 (242)
- Between 2000 and 2009 (35)
- Between 2010 and 2019 (87)
- Between 2020 and 2026 (120)