-
In this paper we have discussed different types of JDBC drivers in the context of a two-tier client/server model. However, it is entirely possible to use them to develop a multi-tier client/server application. The integration of web servers with database servers via Java applets and JDBC drivers is useful for teaching database programming and web-based application development. The applet that we developed, along with our experience of configuring the JDBC and Java environment, was used in a database course. Students built more complicated database/web applications on top of this sample applet. Future extensions of our work may involve the following items:
• the security implications of using JDBC drivers in a multiple, heterogeneous DBMS environment
• the possible interaction of JDBC with firewalls and proxy servers
• the evaluation of JDBC drivers in the context of real-world applications, especially their reliability and performance.
-
The objective of this research is to automate the classification of clouds from satellite images, providing a method for studying their properties over time. Analysis was applied to the International Satellite Cloud Climatology Project (ISCCP) low resolution (2.5 degrees per pixel) database for January 1987. Our approach differs from earlier studies by taking advantage of cloud top pressure and optical thickness from the ISCCP database, providing more accurate measures of cloud height with less dependency on the sun's angle of illumination. A total of 365 regions of interest (ROI), each classified as Storm or Non-Storm, were used in the analysis. The algorithms used were a Backpropagation Artificial Neural Network and Nearest Neighbor Pattern Classification. Each ROI was assigned an identification number between 1 and 365. One third of the ROIs were randomly selected for testing using a random number generator, and the remaining ROIs were assigned to the training set. This process was repeated 29 times, resulting in a mean classification error of 5.76% for the nearest neighbor algorithm and 3.97% for the backpropagation neural network.
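The repeated random-split evaluation described above can be sketched as follows. The four feature vectors and labels are invented stand-ins for the 365 labelled ROIs (the real features would be cloud-top pressure and optical thickness), and the split logic is an illustration, not the paper's code.

```python
import math
import random

def nearest_neighbor(train, query):
    """Return the label of the training sample closest to query (Euclidean)."""
    return min(train, key=lambda sample: math.dist(sample[0], query))[1]

# Invented 2-D stand-ins for the ROI feature vectors.
rois = [((0.1, 0.2), "Non-Storm"), ((0.2, 0.1), "Non-Storm"),
        ((0.8, 0.9), "Storm"), ((0.9, 0.8), "Storm")]

def mean_test_error(data, trials=29, test_fraction=1 / 3, seed=0):
    """Repeat a random test/train split `trials` times and average the error."""
    rng = random.Random(seed)
    errors = []
    for _ in range(trials):
        shuffled = rng.sample(data, len(data))          # random permutation
        n_test = max(1, round(len(data) * test_fraction))
        test, train = shuffled[:n_test], shuffled[n_test:]
        wrong = sum(nearest_neighbor(train, x) != label for x, label in test)
        errors.append(wrong / len(test))
    return sum(errors) / len(errors)
```

On this tiny well-separated toy set the mean error is zero; the 5.76% figure reported above comes from the real 365-ROI data.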
-
Cloud analyses provide information which is vital to the detection, understanding and prediction of meteorological trends and environmental changes. This paper compares statistical, neural network and genetic algorithm methods for recognition and tracking of midlatitude storm clouds in sequences of low-resolution cloud-top pressure data sets. Regions of interest are identified and tracked from one image frame to the next consecutive frame in an eight-frame sequence. Classification techniques are used to determine the relationships between regions of interest in consecutive time frames. A genetic algorithm procedure is then used to revise classifier outputs to ensure that consistency constraints are not violated. © 1997 Elsevier Science B.V.
-
Backpropagation neural networks are applied to the problem of characterization of ultrasonic image texture to detect abnormalities in tissue texture which are indicative of liver disease. Twenty-one texture features were extracted from regions of interest in digitized ultrasonic images. A feature subset, identified by a stepwise selection process, formed the sample input to the networks together with the physician-supplied diagnosis. The classification performance of the backpropagation network is evaluated using a jackknife testing procedure. The performance of the networks is compared with results obtained from linear discriminant analysis and logistic regression techniques. © Springer-Verlag Berlin Heidelberg 1995.
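The jackknife (leave-one-out) testing procedure mentioned above can be sketched as below. A 1-nearest-neighbour rule stands in for the trained network, and the feature vectors are invented, not the 21 texture features from the study.

```python
import math

def jackknife_accuracy(data, classify):
    """Leave-one-out: hold out each sample once, train on the rest."""
    correct = 0
    for i, (features, label) in enumerate(data):
        train = data[:i] + data[i + 1:]          # all samples but the held-out one
        correct += (classify(train, features) == label)
    return correct / len(data)

def one_nn(train, query):
    """Stand-in classifier: label of the nearest training sample."""
    return min(train, key=lambda s: math.dist(s[0], query))[1]

# Invented two-class toy data (not the ultrasonic texture features).
samples = [((0.1, 0.1), "normal"), ((0.2, 0.0), "normal"),
           ((0.9, 0.8), "abnormal"), ((1.0, 0.9), "abnormal")]
```

The appeal of the jackknife here is that every sample is used for both training and testing without any sample ever testing a model it helped train.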
-
A nested case-control study was conducted to investigate whether an excess of pancreatic cancer, identified in a cohort mortality study with follow-up from 1946 through 1988, was associated with potential workplace exposures at a New Jersey plastics manufacturing and research and development facility. The study population included 28 male pancreatic cancer cases and 140 randomly selected controls, matched on year of birth and at risk (alive) at the time of the case's death. Using plant work history records, department assignments for the two groups were compared according to duration and time since first assignment. Workers assigned to a work area that processed vinyl resins and polyethylene (PE) were shown to be at increased risk. Men assigned more than 16 years to this department had a significantly increased risk ratio of 7.15 (95% confidence interval [CI]: 1.28–40.1). No excess was seen with shorter-duration assignments. Seven of the nine cases began working in this area in the 1940s. Average latency was 32 years, and all but three cases worked 20 years or more in this unit. Over the study period, significant exposure-related process changes occurred, in addition to the use of numerous chemical additives. Although vinyl and PE processing operations could not be analyzed separately, the pancreatic cancer excess is more likely to be related to vinyl processing. Identification of a causative agent or combination of agents would require investigations with more detailed exposure information. Copyright © 1995 Wiley Periodicals, Inc., A Wiley Company
-
Over the past several years we have been interested in the supervised classification of ultrasonic images of the liver based on quantitative texture features. Our most recent efforts are concerned with the inclusion of features computed from Markov random fields. After adding four such features to our existing model containing 17 features, we employed stepwise discriminant analysis to identify the features that could best discriminate among 184 previously classified normal and abnormal ultrasonic images. Three of the four features derived from Markov random field models were identified by stepwise discriminant analysis as good discriminators, along with 6 existing features. From these results we constructed a backpropagation neural network with an input layer consisting of 9 nodes. We found that this new model yielded slightly better results when compared to earlier models. Our most recent results yielded a sensitivity of 81%, a specificity of 77% and an overall accuracy of 79%.
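A minimal backpropagation network of the kind described above might be sketched as follows. This one has 2 inputs, 2 hidden nodes and 1 output, trained on an invented linearly separable toy task rather than the 9 selected texture features; architecture, learning rate and epochs are all illustrative assumptions.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_backprop(data, hidden=2, epochs=3000, lr=0.5, seed=1):
    """One-hidden-layer sigmoid network trained by plain gradient descent."""
    rng = random.Random(seed)
    n_in = len(data[0][0])
    # Weight rows carry a trailing bias term.
    w1 = [[rng.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(hidden)]
    w2 = [rng.uniform(-1, 1) for _ in range(hidden + 1)]

    def forward(x):
        h = [sigmoid(sum(w * v for w, v in zip(row, list(x) + [1.0]))) for row in w1]
        y = sigmoid(sum(w * v for w, v in zip(w2, h + [1.0])))
        return h, y

    for _ in range(epochs):
        for x, target in data:
            h, y = forward(x)
            delta_out = (y - target) * y * (1 - y)       # output-layer error term
            for j in range(hidden):                       # backpropagate to hidden layer
                delta_h = delta_out * w2[j] * h[j] * (1 - h[j])
                for k in range(n_in):
                    w1[j][k] -= lr * delta_h * x[k]
                w1[j][n_in] -= lr * delta_h               # bias weight
            for j in range(hidden):
                w2[j] -= lr * delta_out * h[j]
            w2[hidden] -= lr * delta_out                  # output bias
    return lambda x: forward(x)[1]

# Toy task: output simply follows the first input.
toy = [((0.0, 0.0), 0.0), ((0.0, 1.0), 0.0), ((1.0, 0.0), 1.0), ((1.0, 1.0), 1.0)]
predict = train_backprop(toy)
```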
-
Constraint-based spatial reasoning problems frequently arise in the area of military mission planning. In this domain, mission planners employ complex criteria, which may include numeric and optimization constraints in addition to logical constraints and rules, to develop engineering construction and resource deployment plans. Automated planning aid systems for the military must have the capability to represent the various types of constraints used in human decision-making and must be able to provide efficient and optimal or near optimal solutions to the resulting constraint satisfaction problems. This paper describes a methodology for transforming constraint satisfaction problems into nonlinear optimization problems and for solving the resulting optimization problems using a hybrid neural network/genetic algorithm procedure. The method is applied to illustrative problems which employ different types of constraints for determination of future construction sites. The results of the experiments demonstrate the potential of this methodology for finding feasible and optimal solutions to nonlinear optimization problems. © 1994, Elsevier B.V.
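As an illustration of the transformation described above, the sketch below recasts two invented spatial constraints (a site must lie in a bounded region and keep a minimum distance from a hazard) as quadratic penalty terms added to a placement cost. The constraints, coordinates and penalty weight are assumptions for illustration, not the paper's formulation.

```python
def penalty(x, y):
    """Quadratic penalty: zero when all constraints hold, positive otherwise."""
    p = 0.0
    p += max(0.0, -x) ** 2 + max(0.0, x - 10.0) ** 2      # 0 <= x <= 10
    p += max(0.0, -y) ** 2 + max(0.0, y - 10.0) ** 2      # 0 <= y <= 10
    dist_sq = (x - 5.0) ** 2 + (y - 5.0) ** 2
    p += max(0.0, 9.0 - dist_sq) ** 2                     # >= 3 units from hazard at (5, 5)
    return p

def objective(x, y, weight=100.0):
    """Placement cost plus weighted constraint violations; minimising this
    unconstrained function approximates the constrained problem."""
    cost = (x - 6.0) ** 2 + (y - 2.0) ** 2                # prefer a site near (6, 2)
    return cost + weight * penalty(x, y)
```

Any descent method (the paper uses a hybrid neural network/genetic algorithm procedure) can then be applied to the unconstrained `objective`; feasible points incur no penalty, so the constrained optimum is recovered when the weight is large enough.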
-
In the past, the fractal dimension has often been computed using a stochastic approach based on a random walk process, which has been found to be very time consuming. More recently, mathematical morphology has been used to compute the fractal dimension in a more timely fashion. This paper describes how the fractal dimension computed using mathematical morphology can be used in the texture analysis of ultrasonic imagery. The discriminatory ability of the fractal dimension as a pattern recognition feature is evaluated and compared to more traditional parameters. This analysis includes comparisons with statistical features in which each parameter is treated as an independent variable and in which interactions between those variables are evaluated. Pattern recognition techniques include Stepwise Discriminant Analysis, Linear Discriminant Analysis, and Nearest Neighbor Analysis in addition to Backpropagation Neural Network Classifiers. Our results identify the fractal dimension as one of the most important parameters for distinguishing between normal and abnormal livers. In this study, consisting of 186 images, a significant statistical difference was found for both the mean and standard deviation of the fractal dimension between the normal and abnormal groups using parametric and nonparametric statistical techniques. © 1993 SPIE. All rights reserved.
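One common morphological estimator of fractal dimension dilates the pixel set with structuring elements of growing radius and fits the slope of log-area against log-radius (the Minkowski "sausage" idea). The sketch below is one such estimator, not necessarily the exact procedure of the paper; a straight line should come out near dimension 1 and a filled patch closer to 2.

```python
import math

def dilate(points, r):
    """Dilate a set of integer pixels with a (2r+1) x (2r+1) square element."""
    return {(x + dx, y + dy) for (x, y) in points
            for dx in range(-r, r + 1) for dy in range(-r, r + 1)}

def minkowski_dimension(points, radii=(1, 2, 4, 8)):
    """Estimate D = 2 - slope of log A(r) versus log r (least squares)."""
    xs = [math.log(r) for r in radii]
    ys = [math.log(len(dilate(points, r))) for r in radii]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    return 2.0 - slope

line = {(i, 0) for i in range(200)}                       # 1-D structure
patch = {(i, j) for i in range(30) for j in range(30)}    # filled 2-D structure
```

The small radii and image sizes here introduce boundary bias, so the estimates only approach the ideal values of 1 and 2.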
-
Hybrid knowledge bases (HKBs), proposed by Nerode and Subrahmanian, provide a uniform theoretical framework for dealing with the mixed data types and multiple reasoning modes required for solving logical deployment problems. Algorithms based on mixed integer linear programming techniques have been developed for the syntactic subset of HKBs corresponding to function-free Prolog-like logic programs. In this study, we examine the ability of neural networks to solve a more comprehensive set of problems expressed within the hybrid knowledge base framework. The objective of this research is to design and implement a nonlinear optimization procedure for solving extended logic programs with neural networks. We focus upon two types of extensions which are typically required in the formulation of logical deployment problems. The first type of extension, which we shall refer to as a Type I extension, consists of embedding numerical and geometric constraints into logic programs. The second type of extension, which we shall call a Type II extension, consists of incorporating optimization problems into logic clauses. © 1993 SPIE. All rights reserved.
-
One of the major problems in the development of computer-assisted systems for geologic mapping is how to individualize the system to meet user needs. Ideally, the system should be responsive to specifications of desired types of output structures. Also, the system should be able to incorporate the user's knowledge of regional characteristics into the feature extraction/selection and classification components. Automatic techniques for classification of remote sensing data typically require relatively large, labeled training sets which are well-organized with respect to the desired mapping between input and output patterns. The present paper focuses on the feature extraction/selection component of the system. Kohonen self-organizing feature maps in conjunction with image processing procedures for linear feature extraction are used for exploratory data analysis, feature selection, and construction of exemplar patterns. The results of training Kohonen feature maps with different pattern sets and different feature combinations provide insight into the nature of pattern relationships which enables the user to develop sets of positive and negative training patterns for the classification component. © 1992 SPIE. All rights reserved.
-
A new multi-threshold Perceptron capable of handling both binary and analog input is presented and discussed. The modified Perceptron replaces the sigmoid function with a sinusoidal function. A computer program has been developed to simulate the behavior of a network utilizing the modified Perceptron. Both the XOR and Parity Check problems were solved using a single-layer network utilizing this modified Perceptron. Based on the results obtained from the simulation, the modified Perceptron is capable of solving problems (such as XOR) that cannot be solved using a single layer of the classical Perceptron. Also, a network utilizing this modified Perceptron requires fewer iterations to converge to a solution than a multi-layer Perceptron network using back propagation.
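To see why a sinusoidal activation changes what a single unit can compute, consider the sketch below: one unit thresholds a sinusoid of its weighted input sum. With unit weights and a bias of -0.5 it computes XOR, and in fact n-bit odd parity, which a single classical threshold unit cannot. The specific activation and weights are a plausible reading of the abstract, not necessarily the paper's exact construction.

```python
import math

def sin_perceptron(inputs, weights, bias):
    """Single unit: threshold a sinusoid of the weighted sum
    instead of a monotone sigmoid/step function."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if math.sin(math.pi * s) > 0 else 0

def xor(x1, x2):
    # sin(pi * (x1 + x2 - 0.5)) is positive exactly when x1 + x2 is odd.
    return sin_perceptron((x1, x2), (1.0, 1.0), -0.5)
```

Because the sinusoid is periodic, the same unit separates every odd input sum from every even one, whereas a monotone activation can only carve the input space with a single hyperplane.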
-
The purpose of this study was to compare the classification capabilities of the backpropagation algorithm and linear discriminant analysis for detecting liver metastasis using image texture features obtained from ultrasonic images of the liver. Twenty-one quantitative parameters were obtained from 134 regions of interest of equal size. The images were collected by the same radiologist on the same imager with the controls adjusted for variations in patient body size so as to produce images of consistent quality. Quantitative features were divided so that 13 were first-order statistics, 6 were second-order statistics, and 2 were image gradient parameters. The same features were processed by both the backpropagation algorithm and linear discriminant analysis using 'jack-knife' testing, and the results of each computer-generated classification were compared to the supplied diagnosis in an effort to determine which method could best identify patterns. For this particular application, the backpropagation neural network was found to have slightly superior classification results (87%) compared to linear discriminant analysis (83%).
-
In this paper the abilities of two common statistical discriminant analysis procedures are compared with those of two commercial neural network software packages. The major objective of this study was to determine which of the procedures could best discriminate between normal and abnormal ultrasonic liver textures. The same set of features was input into both statistical discriminant analysis procedures and both neural network models. Preliminary results have found the Restricted Coulomb Energy (RCE) neural network model to have a testing accuracy of 90.6%, which is approximately 10% better than any of the other techniques investigated. © 1991.
-
Backpropagation neural networks have been developed for detection of geological lineaments in the Landsat Thematic Mapper (TM) imagery of the Canadian Shield using edge images as input and digitized lineament maps as the desired output. Lineament detection is a challenging problem for traditional image processing and pattern recognition techniques. Many linear features observable in geological image data do not represent lineaments, and the presence and extent of lineaments must be inferred from contextual information. In order to compare the ability of neural networks and conventional classifiers to recognize lineaments prior to performing edge/line element grouping operations, various gradient and curvature features are extracted from the image data set. Selected features from this group formed the inputs to backpropagation neural networks, linear discriminant classifiers, and nearest-neighbor classifiers. The neural network results were compared with the results obtained using conventional classifiers for sample training and test sets. The trained neural network was then applied to the edge image to mask out those edge points which had been classified as non-lineament points.
-
The primary objective of this research is the development and testing of neural network models for two fundamental computer vision tasks: edge/line detection and texture analysis. In order to test the ability of the neural network models to detect patterns in images we used both remote sensing data and medical imagery. Neural network models for edge and line detection were used to detect geological lineaments in Landsat data. Neural network models for the analysis of image texture variations were used on ultrasonic images to distinguish patients with normal liver scans from patients with diffuse liver disease.
-
In scientific imaging, it is crucial to obtain precise images to facilitate accurate observations for the given application. However, the imaging equipment used to acquire such images often introduces error into the observed image. Therefore, there is a fundamental need to remove the error associated with these images in order to facilitate accurate observations. This study investigates the effectiveness of an image processing technique utilizing an iterative deconvolution algorithm to remove error from micro-CT images. This technique is applied to several sets of in-vivo micro-CT scans of mice, and its effectiveness is evaluated by qualitative comparison of the resultant thresholded binary images to thresholded binary images produced by more conventional image processing techniques, namely Gaussian filtering and straight thresholding. Results of this study suggest that iterative deconvolution as a pre-processing step produces superior qualitative results compared to the more conventional methods tested. The groundwork for future quantitative verification is motivated. ©2005 IEEE.
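The abstract does not name the iterative deconvolution algorithm; one common choice is the Richardson-Lucy multiplicative update, sketched below in 1-D with an invented symmetric point-spread function. A blurred point source should be re-concentrated at its original position.

```python
def correlate(signal, kernel):
    """'Same'-size sliding dot product with zero padding at the edges."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += w * signal[idx]
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iters=200):
    """Multiplicative update: est <- est * correlate(observed / blurred, psf).
    For the symmetric PSF used here, correlation equals convolution."""
    est = [1.0] * len(observed)            # flat non-negative initial estimate
    for _ in range(iters):
        blurred = correlate(est, psf)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        correction = correlate(ratio, psf)
        est = [e * c for e, c in zip(est, correction)]
    return est

# A point source blurred by a symmetric 3-tap PSF.
psf = [0.25, 0.5, 0.25]
truth = [0.0] * 4 + [1.0] + [0.0] * 4
observed = correlate(truth, psf)
restored = richardson_lucy(observed, psf)
```

After the iterations the spread-out intensity is pulled back toward the central sample, which is the sharpening effect the thresholding step then benefits from.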
-
The primary goal of this research was to provide image processing support to aid in the identification of those subjects most affected by bone loss when exposed to weightlessness and to provide insight into the causes of the large variability. Past research has demonstrated that genetically distinct strains of mice exhibit different degrees of bone loss when subjected to simulated weightlessness. Bone loss is quantified by in vivo computed tomography (CT) imaging. The first step in evaluating bone density is to segment gray-scale images into separate regions of bone and background. Two of the most common methods for implementing image segmentation are thresholding and edge detection. Thresholding is generally considered the simplest segmentation process; a threshold can be obtained by having a user visually select one using a sliding scale. This is a highly subjective process with great potential for variation from one observer to another. One way to reduce inter-observer variability is to have several users independently set the threshold and average their results, but this is a very time-consuming process. A better approach is to apply an objective adaptive technique such as the Ridler/Calvard method. In our study we concluded that thresholding was better than edge detection, and that pre-processing these images with an iterative deconvolution algorithm prior to adaptive thresholding yields superior visualization when compared with images that have not been pre-processed or that have been pre-processed with a filter.
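The Ridler/Calvard (isodata) scheme mentioned above repeatedly sets the threshold to the midpoint of the two class means until it stops moving, which removes the observer from the loop entirely. The sketch below works on a flat list of pixel intensities; the bimodal values are invented, not CT data.

```python
def ridler_calvard_threshold(pixels, tol=0.5):
    """Iterative (isodata) threshold selection: t converges to the
    midpoint of the means of the two classes it induces."""
    t = sum(pixels) / len(pixels)           # start from the global mean
    while True:
        lo = [p for p in pixels if p <= t]  # background candidates
        hi = [p for p in pixels if p > t]   # bone candidates
        if not lo or not hi:                # degenerate (single-class) image
            return t
        t_new = (sum(lo) / len(lo) + sum(hi) / len(hi)) / 2.0
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Invented bimodal "image": dark background near 10, bright bone near 200.
pixels = [10] * 60 + [200] * 40
```

For a cleanly bimodal histogram like this one the iteration settles at the midpoint between the two class means, here 105.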