-
This paper describes a collaborative project conducted by the Computer Science Department at Southern Connecticut State University and NASA's Goddard Institute for Space Studies (GISS). Animations of output from a climate simulation model used at GISS to predict rainfall and circulation have been produced for West Africa from June to September 2002. These early results have assisted scientists at GISS in evaluating the accuracy of the RM3 climate model when compared with similar results obtained from satellite imagery. The results presented here will be refined to better meet the needs of GISS scientists and will be expanded to cover other geographic regions over a variety of time frames.
-
Nanoparticles are of interest in many applications since their decreased size may give them properties that are very different from those of the bulk material. Nanoparticle properties such as size (diameter) and size distribution are often evaluated using transmission electron microscopy (TEM). These parameters can be obtained more easily from digitized TEM images by mapping particle signal to black and background pixels to white, a process known as thresholding, and then performing a particle analysis. The goal of this study was to compare the ability of several popular thresholding algorithms to segment TEM images. Performance of the thresholding algorithms was evaluated through qualitative and quantitative measures. Results show that the choice of thresholding algorithm strongly affects the results obtained from particle analysis. © 2007 Materials Research Society.
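The threshold-then-count workflow this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the threshold value, the assumption that particles are darker than the background (as in bright-field TEM), and the equivalent-circular-diameter measure are all illustrative choices.

```python
import numpy as np
from scipy import ndimage

def particle_analysis(image, threshold):
    """Threshold a grayscale image (particles assumed darker than the
    background) and measure each connected particle."""
    binary = image < threshold                      # thresholding: particle -> True
    labels, count = ndimage.label(binary)           # connected-component labeling
    areas = ndimage.sum(binary, labels, index=np.arange(1, count + 1))
    diameters = 2.0 * np.sqrt(areas / np.pi)        # equivalent circular diameter
    return count, areas, diameters

# Synthetic example: two dark square "particles" on a bright background
img = np.full((64, 64), 200.0)
img[10:20, 10:20] = 50.0   # particle of area 100 px
img[40:45, 40:45] = 60.0   # particle of area 25 px
count, areas, diameters = particle_analysis(img, threshold=128)
```

As the abstract notes, the measured areas and diameters depend directly on where the threshold falls, which is why the choice of thresholding algorithm matters.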
-
In scientific imaging, it is crucial to obtain precise images to facilitate accurate observations for the given application. However, the imaging equipment used to acquire such images often introduces error into the observed image, so there is a fundamental need to remove this error in order to facilitate accurate observations. This study investigates the effectiveness of an image processing technique utilizing an iterative deconvolution algorithm to remove error from micro-CT images. The technique is applied to several sets of in-vivo micro-CT scans of mice, and its effectiveness is evaluated by qualitative comparison of the resulting thresholded binary images to those produced by more conventional image processing techniques, namely Gaussian filtering and straight thresholding. Results suggest that iterative deconvolution as a pre-processing step produces superior qualitative results compared with the more conventional methods tested, and they lay the groundwork for future quantitative verification. ©2005 IEEE.
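The abstract does not name the iterative deconvolution algorithm used; Richardson-Lucy is one widely used scheme of this kind and serves as a stand-in sketch here. The Gaussian point-spread function, iteration count, and synthetic point-source test are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iterations=30):
    """Iteratively re-estimate the underlying image given the observed
    (blurred) image and the instrument's point-spread function (PSF)."""
    estimate = np.full_like(observed, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iterations):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a point source with a small Gaussian PSF, then deconvolve it
x = np.arange(-3, 4, dtype=float)
psf = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 2.0)
psf /= psf.sum()
truth = np.zeros((32, 32))
truth[16, 16] = 1.0
observed = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

The restored image concentrates the blurred signal back toward the original point, which is the qualitative sharpening effect the study evaluates before thresholding.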
-
There is an acute and well-documented need for image processing of microscopy data in materials science, for example in characterizing the structure/property relationship of a given materials system. In our work, image processing has been used as a framework for conducting interdisciplinary, team-based research that effectively integrates programs within the Center for Research on Interface Structures and Phenomena (CRISP) Materials Research Science and Engineering Center (MRSEC), e.g. research experiences for undergraduates (REU), research experiences for teachers (RET), and high school fellowships. This research resulted from a five-year-long collaboration between CRISP and the Physics and Computer Science Departments at Southern Connecticut State University (SCSU). This paper focuses on the implementation of team-based research experiences as a vehicle for interdisciplinary science and education. Representative results from several of the studies are presented and discussed. © 2011 Materials Research Society.
-
The primary goal of this research was to investigate the ability of quantitative variables to confirm qualitative improvements from the deconvolution algorithm as a preprocessing step in evaluating micro-CT bone density images. The analysis of these images is important because they are needed to evaluate countermeasures used to reduce, or potentially reverse, the bone loss experienced by some astronauts during extended weightlessness in space travel. Nine low-resolution (17.5 micron) CT bone density image sequences, ranging from 85 to 88 images per sequence, were processed with three preprocessing treatment groups: no preprocessing, preprocessing with a deconvolution algorithm, and preprocessing with a Gaussian filter. The quantitative parameters investigated were the Bone Volume to Total Volume ratio, the Structure Model Index, Fractal Dimension, Bone Area Ratio, Bone Thickness Ratio, Euler's Number, and the Measure of Enhancement. Trends in these quantitative variables appear to corroborate the visual improvements observed in the past and suggest which parameters may be capable of distinguishing between groups that experience bone loss and those that do not.
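Of the parameters listed, the Bone Volume to Total Volume ratio (BV/TV) is the simplest to state: it is the fraction of voxels classified as bone in a thresholded image sequence. A minimal sketch on synthetic data (the stack shape and voxel counts are illustrative, not from the study):

```python
import numpy as np

def bone_volume_fraction(binary_stack):
    """BV/TV: fraction of voxels classified as bone (True) in a
    thresholded CT image stack."""
    return binary_stack.sum() / binary_stack.size

# Synthetic thresholded "sequence" of 4 images, each 4x4:
# 8 bone voxels out of 64 total
stack = np.zeros((4, 4, 4), dtype=bool)
stack[:2, :2, :2] = True
bvtv = bone_volume_fraction(stack)   # 8 / 64 = 0.125
```

Because BV/TV is computed directly from the thresholded voxels, any preprocessing step that changes the segmentation (deconvolution, Gaussian filtering, or none) changes this ratio, which is what makes it a candidate for quantifying the qualitative differences between treatment groups.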
-
The primary goal of this research was to provide image processing support to aid in identifying the subjects most affected by bone loss when exposed to weightlessness and to provide insight into the causes of the large variability. Past research has demonstrated that genetically distinct strains of mice exhibit different degrees of bone loss when subjected to simulated weightlessness. Bone loss is quantified by in-vivo computed tomography (CT) imaging. The first step in evaluating bone density is to segment grayscale images into separate regions of bone and background. Two of the most common methods for image segmentation are thresholding and edge detection. Thresholding is generally considered the simplest segmentation process; it can be implemented by having a user visually select a threshold using a sliding scale, but this is a highly subjective process with great potential for variation from one observer to another. One way to reduce inter-observer variability is to have several users independently set the threshold and average their results, but this is very time consuming. A better approach is to apply an objective adaptive technique such as the Ridler-Calvard method. In our study we concluded that thresholding was better than edge detection, and that pre-processing these images with an iterative deconvolution algorithm prior to adaptive thresholding yields superior visualization compared with images that have not been pre-processed or that have been pre-processed with a filter.
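The Ridler-Calvard method recommended above is an iterative ("isodata") threshold selection: the threshold is repeatedly moved to the midpoint of the two class means until it stabilizes, removing the observer from the loop. A minimal sketch (the convergence tolerance and the synthetic two-level data are illustrative assumptions):

```python
import numpy as np

def ridler_calvard_threshold(image, tol=0.5):
    """Iterative (isodata) threshold selection: move the threshold to
    the midpoint of the foreground and background means until it
    changes by less than tol."""
    t = image.mean()                      # initial guess: global mean
    while True:
        fg = image[image > t]             # provisional bone pixels
        bg = image[image <= t]            # provisional background pixels
        new_t = 0.5 * (fg.mean() + bg.mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

# Bimodal synthetic data: background at gray level 50, bone at 200
data = np.array([50.0] * 100 + [200.0] * 100)
t = ridler_calvard_threshold(data)        # converges to 125.0
```

Because the result depends only on the image histogram, every observer (and every run) gets the same threshold, which is exactly the inter-observer variability reduction the abstract argues for.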
-
Nanoparticles, particles with a diameter of 1-100 nanometers (nm), are of interest in many applications including device fabrication, quantum computing, and sensing because their size may give them properties that are very different from those of bulk materials. Further advancement of nanotechnology cannot be achieved without an increased understanding of nanoparticle properties such as size (diameter) and size distribution, frequently evaluated using transmission electron microscopy (TEM). In the past, these parameters have been obtained from digitized TEM images by manually measuring and counting many of these nanoparticles, a task that is highly subjective and labor intensive. More recently, computer imaging particle analysis has emerged as an objective alternative that counts and measures objects in a binary image. This paper describes the procedures used to preprocess a set of grayscale TEM images so that they could be correctly thresholded into binary images, allowing a more accurate assessment of the size and frequency (size distribution) of nanoparticles. Several preprocessing methods, including pseudo flat field correction and rolling ball background correction, were investigated, with the rolling ball algorithm yielding the best results. Examples of particle analysis are presented for different types of materials and different magnifications. In addition, a method based on the results of particle analysis for identifying and removing small noise particles is discussed. This filtering technique identifies the locations of small particles in the binary image and removes them without affecting the size of the other, larger particles.
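The size-based noise-particle filter described at the end of this abstract can be sketched with connected-component labeling: identify each particle, measure its area, and drop components below a cutoff without touching larger particles. This is an illustrative stand-in, not the paper's implementation; the `min_area` cutoff and the synthetic mask are assumptions.

```python
import numpy as np
from scipy import ndimage

def remove_small_particles(binary, min_area):
    """Remove connected components smaller than min_area pixels from a
    binary image, leaving larger particles entirely untouched."""
    labels, count = ndimage.label(binary)
    areas = ndimage.sum(binary, labels, index=np.arange(1, count + 1))
    keep = np.zeros(count + 1, dtype=bool)   # keep[0] is the background
    keep[1:] = areas >= min_area
    return keep[labels]                      # remap labels to keep/drop

# Synthetic binary image: one real particle plus one noise pixel
mask = np.zeros((32, 32), dtype=bool)
mask[5:15, 5:15] = True    # large particle, area 100
mask[25, 25] = True        # single-pixel noise "particle"
cleaned = remove_small_particles(mask, min_area=5)
```

Because the filter operates per component rather than per pixel (unlike, say, erosion), the surviving particles keep their exact original sizes, matching the behavior the abstract describes.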
-
Thresholding is an image processing procedure used to convert an image consisting of gray-level pixels into a black and white binary image. One application of thresholding is particle analysis: once foreground objects are separated from the background, a quantitative analysis characterizing the number, size, and shape of particles is obtained, which can then be used to evaluate a series of nanoparticle samples. Numerous thresholding techniques exist, differing primarily in how they deal with variations in noise, illumination, and contrast. In this paper, several popular thresholding algorithms are qualitatively and quantitatively evaluated on transmission electron microscopy (TEM) and atomic force microscopy (AFM) images. Initially, six thresholding algorithms were investigated: Otsu, Ridler-Calvard, Kittler, Entropy, Tsai, and Maximum Likelihood. The Ridler-Calvard algorithm was not included in the quantitative analysis because it did not produce acceptable qualitative results for the images in the series. Two quantitative measures were used to evaluate the remaining algorithms: one based on comparing object area and the other on comparing object diameter before and after thresholding. For AFM images the Kittler algorithm yielded the best results, followed by the Entropy and Maximum Likelihood techniques. The Tsai algorithm yielded the best results for TEM images, followed by the Entropy and Kittler methods.
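Of the algorithms compared, Otsu's method is probably the most widely known; a minimal histogram-based sketch is shown below for orientation. The bin count and the synthetic two-level data are illustrative choices, and this is a generic implementation, not code from the paper.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: choose the threshold that maximizes the
    between-class variance of the gray-level histogram."""
    hist, edges = np.histogram(values, bins=nbins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w1 = np.cumsum(hist)                       # class-1 (below) weight
    w2 = w1[-1] - w1                           # class-2 (above) weight
    csum = np.cumsum(hist * centers)
    m1 = csum / np.maximum(w1, 1e-12)          # class means
    m2 = (csum[-1] - csum) / np.maximum(w2, 1e-12)
    between = w1 * w2 * (m1 - m2) ** 2         # between-class variance
    # ties resolve to the lowest bin; the last bin (w2 == 0) is excluded
    return centers[np.argmax(between[:-1])]

# Bimodal example: two gray levels, 50 and 200
data = np.array([50.0] * 100 + [200.0] * 100)
t = otsu_threshold(data)                       # falls between the two modes
```

The other algorithms in the comparison (Kittler, Entropy, Tsai, Maximum Likelihood) optimize different histogram criteria in the same spirit, which is why they disagree on noisy or low-contrast images.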
-
The use of Transmission Electron Microscopy (TEM) to characterize the microstructure of a material continues to grow in importance as technological advancements become increasingly dependent on nanotechnology [1]. Since nanoparticle properties such as size (diameter) and size distribution are often important in determining potential applications, a particle analysis is often performed on TEM images. Traditionally done manually, this has the potential to be labor intensive, time consuming, and subjective [2]. To resolve these issues, automated particle analysis routines are becoming more widely accepted within the community [3]. When using such programs, it is important to compare their performance in terms of functionality and cost. The primary goal of this study was to apply one such software package, ImageJ, to grayscale TEM images of nanoparticles of known size. A secondary goal was to compare this popular open-source, general-purpose image processing program to two commercial software packages. After a brief investigation of performance and price, ImageJ was identified as the software best suited for the particle analysis conducted in this study. While many ImageJ functions were used, the ability to break agglomerations that occur during specimen preparation into separate particles using a watershed algorithm was particularly helpful [4]. © 2009 SPIE-IS&T.
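The marker-based watershed splitting that ImageJ performs on touching particles can be sketched in Python as well: seed a marker at each peak of the distance transform, then grow regions until they meet. This sketch uses scipy's image-foresting-transform watershed rather than ImageJ's implementation, and the disk geometry and peak-detection window are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def split_touching_particles(mask):
    """Split touching particles in a binary mask: seed one marker per
    distance-transform peak, then grow them with a watershed."""
    dist = ndimage.distance_transform_edt(mask)
    # Seeds: local maxima of the distance map inside the mask
    peaks = (dist == ndimage.maximum_filter(dist, size=5)) & mask
    markers, _ = ndimage.label(peaks)
    markers[~mask] = -1                       # background marker
    # The watershed floods low cost first, so invert the distance map
    cost = (dist.max() - dist).astype(np.uint8)
    labels = ndimage.watershed_ift(cost, markers.astype(np.int16))
    labels[~mask] = 0
    return labels

# Two overlapping disks ("agglomerated" particles) in one binary blob
yy, xx = np.mgrid[0:32, 0:40]
blob = ((yy - 16) ** 2 + (xx - 12) ** 2 <= 64) | \
       ((yy - 16) ** 2 + (xx - 24) ** 2 <= 64)
labels = split_touching_particles(blob)       # disk centers get distinct labels
```

Without this step, the agglomeration would be counted as a single oversized particle, skewing the measured size distribution.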