  • The objective of this research is to automate the classification of the temporal behavior of storm cloud systems based on measurements derived from consecutive satellite images. The motivation behind this study is to develop improved descriptions of cloud dynamics which can be used in general circulation models for prediction of global climate change. Analysis was applied to the International Satellite Cloud Climatology Project (ISCCP) low resolution cloud top pressure database for the first six days in April 1989. A total of 296 midlatitude storm cloud components were tracked between consecutive 3-hour time frames. For each pair of components, temporal correspondence events were classified as 1) direct, 2) merge, 3) split, or 4) reject. The reject class, which was used primarily to categorize pairs of unrelated systems, also included storm cloud system dissipation and creation. Statistical, neural network, and evolutionary techniques were developed for finding solutions to the storm cloud correspondence problem. The evolutionary techniques applied to the problem consisted of 1) a constraint-handling hybrid evolutionary technique and 2) a genetic local search algorithm. The results demonstrate the potential of evolutionary techniques to yield meteorologically feasible solutions, given appropriate constraints, to the two-frame storm tracking problem. © 1998 SPIE. All rights reserved.
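
As an illustration of the four correspondence classes named above, here is a minimal overlap-based sketch in Python. It is a heuristic stand-in operating on assumed labeled component images, not the statistical, neural, or evolutionary matchers the paper develops; per the abstract, zero-overlap pairs (including dissipated or newly created systems) all fall into reject.

```python
import numpy as np

def correspondence_events(labels_t, labels_t1):
    """Classify each pair of labeled storm components from two consecutive
    frames as direct / merge / split / reject via pixel overlap.
    labels_t, labels_t1: integer component images, 0 = background."""
    events = {}
    ids_t = [a for a in np.unique(labels_t) if a != 0]
    ids_t1 = [b for b in np.unique(labels_t1) if b != 0]
    for a in ids_t:
        for b in ids_t1:
            overlap = np.logical_and(labels_t == a, labels_t1 == b).sum()
            if overlap == 0:
                events[(a, b)] = "reject"   # unrelated (or dissipated/new)
                continue
            # components feeding b from frame t, and fed by a in frame t+1
            feeds_b = len({x for x in np.unique(labels_t[labels_t1 == b]) if x})
            from_a = len({x for x in np.unique(labels_t1[labels_t == a]) if x})
            if feeds_b > 1:
                events[(a, b)] = "merge"    # several t components -> one
            elif from_a > 1:
                events[(a, b)] = "split"    # one t component -> several
            else:
                events[(a, b)] = "direct"   # one-to-one continuation
    return events
```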

  • In the past, the fractal dimension has often been computed using a stochastic approach based on a random walk process, which has been found to be very time consuming. More recently, mathematical morphology has been used to compute the fractal dimension in a more timely fashion. This paper describes how the fractal dimension computed using mathematical morphology can be used in the texture analysis of ultrasonic imagery. The discriminatory ability of the fractal dimension as a pattern recognition feature is evaluated and compared to more traditional parameters. This analysis includes comparisons with statistical features in which each parameter is treated as an independent variable and in which interactions between those variables are evaluated. Pattern recognition techniques include Stepwise Discriminant Analysis, Linear Discriminant Analysis, and Nearest Neighbor Analysis in addition to Backpropagation Neural Network Classifiers. Our results identify the fractal dimension as one of the most important parameters for distinguishing between normal and abnormal livers. In this study, consisting of 186 images, a statistically significant difference was found for both the mean and standard deviation of the fractal dimension between the normal and abnormal groups using parametric and nonparametric statistical techniques. © 1993 SPIE. All rights reserved.
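
The morphological route to the fractal dimension mentioned above is usually some variant of the "blanket" (Minkowski cover) method. Below is a generic sketch under assumed parameters (flat square structuring elements, scales 1-7); the abstract does not give the paper's exact structuring elements or scale range.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def morphological_fractal_dimension(img, scales=range(1, 8)):
    """Blanket-method fractal dimension of a gray-level texture: dilate and
    erode the intensity surface with growing flat structuring elements,
    measure the blanket area A(r), and fit D = 2 - slope of
    log A(r) vs log r."""
    img = img.astype(np.float64)
    areas = []
    for r in scales:
        size = 2 * r + 1
        upper = grey_dilation(img, size=(size, size))
        lower = grey_erosion(img, size=(size, size))
        areas.append((upper - lower).sum() / (2.0 * r))
    slope, _ = np.polyfit(np.log(list(scales)), np.log(areas), 1)
    return 2.0 - slope
```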

  • Temporal analysis has been applied to a sequence of cloud top pressure (CTP) images and cloud optical thickness (TAU) images stored in the International Satellite Cloud Climatology Project (ISCCP) D1 database located at the NASA Goddard Institute for Space Studies (GISS). Each pixel in the D1 data set has a resolution of 2.5 degrees or 280 kilometers. These images were collected in consecutive three-hour intervals for the entire month of April 1989. The primary objective of this project was to develop a sequence of storm tracks from the satellite images to follow the formation, progression and dissipation of storm systems over time. Composite images were created by projecting ahead in time and substituting the first available valid pixel for missing data, and a variety of CTP and TAU cut-off values were used to identify regions of interest. Region correspondences were determined from one time frame to another, yielding the coordinates of storm centers. These tracks were compared to storm tracks computed from sea level pressure data obtained from the National Meteorological Center (NMC) for the same time period. The locations of sea level storm centers indicate whether storms have occurred anywhere in a region and are helpful in determining the presence or absence of storms in a general geographic area.
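
A minimal sketch of the compositing step described above: missing pixels are filled with the first valid value found by projecting ahead through later frames. The (T, H, W) array layout and the missing-data sentinel are assumptions; the real ISCCP D1 fill convention may differ.

```python
import numpy as np

def composite_missing(frames, invalid=-999.0):
    """Fill missing pixels in each frame with the first valid value found
    in later frames. frames: (T, H, W) array; invalid: assumed sentinel
    marking missing data."""
    frames = frames.copy()
    for t in range(frames.shape[0]):
        missing = frames[t] == invalid
        for t2 in range(t + 1, frames.shape[0]):
            if not missing.any():
                break
            valid_ahead = frames[t2] != invalid
            take = missing & valid_ahead       # fillable at this frame
            frames[t][take] = frames[t2][take]
            missing &= ~valid_ahead
    return frames
```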

  • Over the past several years we have been interested in the supervised classification of ultrasonic images of the liver based on quantitative texture features. Our most recent efforts are concerned with the inclusion of features computed from Markov random fields. After adding four such features to our existing model containing 17 features, we employed stepwise discriminant analysis to identify the features that could best discriminate among 184 previously classified normal and abnormal ultrasonic images. Three of the four features derived from Markov random field models were identified by stepwise discriminant analysis as good discriminators, along with 6 existing features. From these results we constructed a backpropagation neural network with an input layer consisting of 9 nodes. We found that this new model yielded slightly better results when compared to earlier models. Our most recent results yielded a sensitivity of 81%, a specificity of 77% and an overall accuracy of 79%.
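
A sketch of the 9-input backpropagation classifier described above, using scikit-learn as a stand-in for the original implementation. Only the 9-node input layer comes from the abstract; the hidden-layer size, scaling, and training settings here are assumptions.

```python
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one column per selected feature (3 Markov-random-field features plus
# 6 earlier texture features); y: 0 = normal liver, 1 = abnormal liver.
clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(9,), max_iter=2000, random_state=0),
)
# clf.fit(X_train, y_train); clf.score(X_test, y_test)
```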

  • The objective of this research is to automate the classification of clouds from satellite images, providing a method for studying their properties over time. Analysis was applied to the International Satellite Cloud Climatology Project (ISCCP) low resolution (2.5 degrees per pixel) database for January 1987. Our approach differs from earlier studies by taking advantage of cloud top pressure and optical thickness from the ISCCP database, providing more accurate measures of cloud height with less dependency on the sun's angle of illumination. A total of 365 regions of interest (ROI), each classified as Storm or Non-Storm, were used in the analysis. The algorithms used were Backpropagation Artificial Neural Network and Nearest Neighbor Pattern Classification. Each ROI was assigned an identification number between 1 and 365. One third of the ROIs were randomly selected for testing using a random number generator and the remaining ROIs were assigned to the training set. This process was repeated 29 times, resulting in a mean classification error of 5.76% for the nearest neighbor algorithm and 3.97% for the backpropagation neural network.
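
The evaluation protocol above (29 random one-third test / two-thirds training splits, with the mean error reported) can be sketched as follows. The use of scikit-learn and k = 1 for the nearest neighbor classifier are assumptions, not details from the abstract.

```python
import numpy as np
from sklearn.model_selection import ShuffleSplit
from sklearn.neighbors import KNeighborsClassifier

def mean_error(X, y, repeats=29, test_frac=1/3, seed=0):
    """Mean classification error over repeated random splits.
    X: feature array for the 365 ROIs; y: Storm / Non-Storm labels."""
    splitter = ShuffleSplit(n_splits=repeats, test_size=test_frac,
                            random_state=seed)
    errors = []
    for train, test in splitter.split(X):
        knn = KNeighborsClassifier(n_neighbors=1).fit(X[train], y[train])
        errors.append(1.0 - knn.score(X[test], y[test]))
    return float(np.mean(errors))
```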

  • One of the major problems in the development of computer-assisted systems for geologic mapping is how to individualize the system to meet user needs. Ideally, the system should be responsive to specifications of desired types of output structures. Also, the system should be able to incorporate the user's knowledge of regional characteristics into the feature extraction/selection and classification components. Automatic techniques for classification of remote sensing data typically require relatively large, labeled training sets which are well-organized with respect to the desired mapping between input and output patterns. The present paper focuses on the feature extraction/selection component of the system. Kohonen self-organizing feature maps in conjunction with image processing procedures for linear feature extraction are used for explorative data analysis, feature selection, and construction of exemplar patterns. The results of training Kohonen feature maps with different pattern sets and different feature combinations provide insight into the nature of pattern relationships, which enables the user to develop sets of positive and negative training patterns for the classification component. © 1992 SPIE. All rights reserved.
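
For reference, a minimal Kohonen self-organizing feature map of the kind used above for explorative data analysis. The grid size, learning rate, and neighborhood schedule are assumed values, not the paper's settings.

```python
import numpy as np

def train_som(data, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 2-D Kohonen map on row vectors in `data` (N, d)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, data.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w),
                                  indexing="ij"), axis=-1)
    n_steps, step = epochs * len(data), 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            frac = step / n_steps
            lr = lr0 * (1 - frac)                   # decaying learning rate
            sigma = sigma0 * (1 - frac) + 0.5       # shrinking neighborhood
            # best-matching unit: node whose weight vector is closest to x
            bmu = np.unravel_index(
                np.argmin(((weights - x) ** 2).sum(-1)), (h, w))
            # pull the BMU and its Gaussian neighborhood toward x
            d2 = ((coords - np.array(bmu)) ** 2).sum(-1)
            g = lr * np.exp(-d2 / (2 * sigma ** 2))
            weights += g[..., None] * (x - weights)
            step += 1
    return weights
```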

  • The primary goal of this research was to investigate the ability of quantitative variables to confirm qualitative improvements of the deconvolution algorithm as a preprocessing step in evaluating micro CT bone density images. The analysis of these types of images is important because they are necessary to evaluate various countermeasures used to reduce or potentially reverse bone loss experienced by some astronauts when exposed to extended weightlessness during space travel. Nine low resolution (17.5 microns) CT bone density image sequences, ranging from 85 to 88 images per sequence, were processed with three preprocessing treatment groups consisting of no preprocessing, preprocessing with a deconvolution algorithm, and preprocessing with a Gaussian filter. The quantitative parameters investigated consisted of Bone Volume to Total Volume Ratio, the Structure Model Index, Fractal Dimension, Bone Area Ratio, Bone Thickness Ratio, Euler's Number and the Measure of Enhancement. Trends found in these quantitative variables appear to corroborate the visual improvements observed in the past and suggest which quantitative parameters may be capable of distinguishing between groups that experience bone loss and others that do not.
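
Of the parameters listed above, the Bone Volume to Total Volume ratio is simple enough to sketch directly; the input is assumed to be an already binarized (bone vs. background) image stack. The other measures (Structure Model Index, Euler's number, etc.) need dedicated morphometry tools.

```python
import numpy as np

def bv_tv(binary_stack):
    """Bone Volume / Total Volume: fraction of voxels classified as bone
    in a binarized micro-CT image sequence."""
    binary_stack = np.asarray(binary_stack, dtype=bool)
    return binary_stack.sum() / binary_stack.size

# Per-sequence values can then be compared across the three preprocessing
# groups: ratios = [bv_tv(seq) for seq in sequences]
```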

  • Backpropagation neural networks have been developed for detection of geological lineaments in the Landsat Thematic Mapper (TM) imagery of the Canadian Shield using edge images as input and digitized lineament maps as the desired output. Lineament detection is a challenging problem for traditional image processing and pattern recognition techniques. Many linear features observable in geological image data do not represent lineaments, and the presence and extent of lineaments must be inferred from contextual information. In order to compare the ability of neural networks and conventional classifiers to recognize lineaments prior to performing edge/line element grouping operations, various gradient and curvature features are extracted from the image data set. Selected features from this group formed the inputs to backpropagation neural networks, linear discriminant classifiers, and nearest-neighbor classifiers. The neural network results were compared with the results obtained using conventional classifiers for sample training and test sets. The trained neural network was then applied to the edge image to mask out those edge points which had been classified as non-lineament points.
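
The abstract does not spell out its gradient and curvature features, but a generic per-pixel gradient feature extractor of the kind fed to such classifiers might look like the following Sobel-based sketch (the smoothing scale is an assumption).

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def gradient_features(img, sigma=1.0):
    """Per-pixel gradient magnitude and orientation features from a
    smoothed image; returns an (H, W, 2) feature array."""
    smoothed = gaussian_filter(img.astype(np.float64), sigma)
    gx = sobel(smoothed, axis=1)          # horizontal gradient
    gy = sobel(smoothed, axis=0)          # vertical gradient
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)
    return np.stack([magnitude, orientation], axis=-1)
```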

  • The primary goal of this research was to provide image processing support to aid in the identification of those subjects most affected by bone loss when exposed to weightlessness and to provide insight into the causes of the large variability. Past research has demonstrated that genetically distinct strains of mice exhibit different degrees of bone loss when subjected to simulated weightlessness. Bone loss is quantified by in vivo computed tomography (CT) imaging. The first step in evaluating bone density is to segment gray scale images into separate regions of bone and background. Two of the most common methods for implementing image segmentation are thresholding and edge detection. Thresholding is generally considered the simplest segmentation process; it can be performed by having a user visually select a threshold using a sliding scale, but this is a highly subjective process with great potential for variation from one observer to another. One way to reduce inter-observer variability is to have several users independently set the threshold and average their results, but this is a very time consuming process. A better approach is to apply an objective adaptive technique such as the Riddler-Calvard method. In our study we concluded that thresholding was better than edge detection, and that pre-processing these images with an iterative deconvolution algorithm prior to adaptive thresholding yields superior visualization when compared with images that have not been pre-processed or images that have been pre-processed with a filter.
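
The Riddler-Calvard method named above is the classic iterative intermeans threshold: start from the global mean, then repeatedly move the threshold to the midpoint of the foreground and background means until it stops changing. A textbook sketch, with an assumed convergence tolerance:

```python
def iterative_threshold(img, tol=0.5):
    """Riddler-Calvard style intermeans threshold for a gray-level image
    (a NumPy array)."""
    t = img.mean()                        # initial guess: global mean
    while True:
        fg = img[img > t]
        bg = img[img <= t]
        if fg.size == 0 or bg.size == 0:  # degenerate split; stop here
            return t
        t_new = 0.5 * (fg.mean() + bg.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```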

  • Nanoparticles, particles with a diameter of 1-100 nanometers (nm), are of interest in many applications including device fabrication, quantum computing, and sensing because their size may give them properties that are very different from bulk materials. Further advancement of nanotechnology cannot be obtained without an increased understanding of nanoparticle properties such as size (diameter) and size distribution, frequently evaluated using transmission electron microscopy (TEM). In the past, these parameters have been obtained from digitized TEM images by manually measuring and counting many of these nanoparticles, a task that is highly subjective and labor intensive. More recently, computer-based particle analysis has emerged as an objective alternative by counting and measuring objects in a binary image. This paper will describe the procedures used to preprocess a set of gray scale TEM images so that they could be correctly thresholded into binary images, allowing a more accurate assessment of the size and frequency (size distribution) of nanoparticles. Several preprocessing methods including pseudo flat field correction and rolling ball background correction were investigated, with the rolling ball algorithm yielding the best results. Examples of particle analysis will be presented for different types of materials and different magnifications. In addition, a method based on the results of particle analysis for identifying and removing small noise particles will be discussed. This filtering technique identifies the locations of small particles in the binary image and removes them without affecting the size of larger particles.
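
The noise-particle filter described at the end of this abstract can be sketched as connected-component size filtering. The pixel-count cutoff below is an assumed parameter, and scikit-image's remove_small_objects offers the same operation ready-made.

```python
import numpy as np
from scipy.ndimage import label

def remove_small_particles(binary, min_pixels=20):
    """Drop connected components smaller than min_pixels from a binary
    particle image, leaving larger particles untouched."""
    labeled, n = label(binary)
    sizes = np.bincount(labeled.ravel())  # pixel count per component label
    keep = sizes >= min_pixels
    keep[0] = False                       # background label stays off
    return keep[labeled]                  # boolean image of kept particles
```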

  • Thresholding is an image processing procedure used to convert an image consisting of gray level pixels into a black and white binary image. One application of thresholding is particle analysis. Once foreground objects are separated from the background, a quantitative analysis that characterizes the number, size and shape of particles is obtained, which can then be used to evaluate a series of nanoparticle samples. Numerous thresholding techniques exist, differing primarily in how they deal with variations in noise, illumination and contrast. In this paper, several popular thresholding algorithms are qualitatively and quantitatively evaluated on transmission electron microscopy (TEM) and atomic force microscopy (AFM) images. Initially, six thresholding algorithms were investigated: Otsu, Riddler-Calvard, Kittler, Entropy, Tsai and Maximum Likelihood. The Riddler-Calvard algorithm was not included in the quantitative analysis because it did not produce acceptable qualitative results for the images in the series. Two quantitative measures were used to evaluate these algorithms: one based on comparing object area before and after thresholding, the other on comparing object diameter. For AFM images the Kittler algorithm yielded the best results, followed by the Entropy and Maximum Likelihood techniques. The Tsai algorithm yielded the top results for TEM images, followed by the Entropy and Kittler methods.
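
As an example of the algorithms compared above, here is a histogram-based sketch of Otsu's method, which picks the gray level maximizing between-class variance. The bin count is an assumed parameter.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's threshold: gray level maximizing between-class variance."""
    hist, edges = np.histogram(img.ravel(), bins=bins)
    p = hist.astype(np.float64) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                    # background class weight
    w1 = 1.0 - w0                        # foreground class weight
    mu0 = np.cumsum(p * centers)         # unnormalized background mean
    mu_total = mu0[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_total * w0 - mu0) ** 2 / (w0 * w1)
    return centers[np.nanargmax(between)]
```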

  • The use of Transmission Electron Microscopy (TEM) to characterize the microstructure of a material continues to grow in importance as technological advancements become increasingly more dependent on nanotechnology [1]. Since nanoparticle properties such as size (diameter) and size distribution are often important in determining potential applications, a particle analysis is often performed on TEM images. Traditionally done manually, this has the potential to be labor intensive, time consuming, and subjective [2]. To resolve these issues, automated particle analysis routines are becoming more widely accepted within the community [3]. When using such programs, it is important to compare their performance in terms of functionality and cost. The primary goal of this study was to apply one such software package, ImageJ, to grayscale TEM images of nanoparticles with known size. A secondary goal was to compare this popular open-source general-purpose image processing program to two commercial software packages. After a brief investigation of performance and price, ImageJ was identified as the software best suited for the particle analysis conducted in the study. While many ImageJ functions were used, the ability to break agglomerations that occur in specimen preparation into separate particles using a watershed algorithm was particularly helpful [4]. © 2009 SPIE-IS&T.
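
The watershed separation of agglomerated particles mentioned above can be reproduced outside ImageJ. Here is a distance-transform watershed sketched with scikit-image rather than ImageJ's own binary watershed; the peak footprint is an assumed parameter.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_agglomerates(binary):
    """Separate touching particles in a binary TEM mask: seed a watershed
    from local maxima of the distance transform, one seed per particle."""
    distance = ndi.distance_transform_edt(binary)
    peaks = peak_local_max(distance, labels=binary.astype(int),
                           footprint=np.ones((3, 3)))
    markers = np.zeros_like(distance, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary)
```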
