-
The problem of characterizing the relationship between packet size and network delay has received little attention in the field. Research in that area has been limited to either simulation studies or empirical observations detached from analytic traffic modeling. From a queueing viewpoint, it is simple to show that packet size, traffic burstiness, and queueing delay are interrelated, which necessitates a more careful study. We present a traffic model of a router fed by ON/OFF-type sources with heavy-tailed burst sizes; this model is consistent with the evidence that Web traffic is heavy-tailed. The analysis cases considered establish a quantitative characterization of the complex relationship among packet payload and header sizes, traffic burstiness, and router queueing delay. © 2004 IEEE.
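A minimal simulation sketch of this kind of model, with all parameters illustrative rather than taken from the paper: an ON/OFF source whose ON (burst) durations are Pareto-distributed feeds a FIFO queue whose per-packet service time depends on payload and header size, and the waiting time follows the Lindley recursion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper).
LINK_RATE = 1e6                # bits/s
PAYLOAD, HEADER = 1000, 40     # bytes per packet
PKT_RATE_ON = 200              # packets/s while the source is ON
ALPHA = 1.5                    # Pareto shape < 2 => heavy-tailed bursts
MIN_ON, MEAN_OFF = 0.02, 0.05  # seconds

service = 8 * (PAYLOAD + HEADER) / LINK_RATE   # transmission time per packet

# Packet arrival times from alternating heavy-tailed ON / exponential OFF periods.
arrivals, t = [], 0.0
while len(arrivals) < 100_000:
    on = MIN_ON * (1 + rng.pareto(ALPHA))      # Pareto-distributed ON duration
    n_pkts = max(1, int(on * PKT_RATE_ON))
    arrivals.extend(t + np.sort(rng.uniform(0, on, n_pkts)))
    t += on + rng.exponential(MEAN_OFF)

inter = np.diff(np.array(arrivals))

# Lindley recursion for the FIFO waiting time of each packet.
w = np.zeros(len(inter) + 1)
for i, a in enumerate(inter):
    w[i + 1] = max(0.0, w[i] + service - a)

print(f"mean queueing delay: {w.mean() * 1e3:.3f} ms")
```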
-
We present a genetic algorithm for heuristically solving a cost minimization problem on communication networks with threshold-based discounting. The network model assumes that any two nodes can communicate and offers incentives to combine flow from different sources: there is a prescribed threshold on every link, and if the total flow on a link is greater than the threshold, the cost of this flow is discounted by a factor α. A heuristic algorithm based on a genetic strategy is developed and applied to a benchmark set of problems. The results are compared with earlier branch-and-bound results obtained with the CPLEX® solver. For larger data instances we were able to obtain improved solutions using less CPU time, confirming the effectiveness of our heuristic approach. Copyright © 2003, Lawrence Erlbaum Associates, Inc.
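A toy sketch of the core ingredients — the discounted cost function and a mutation-only evolutionary loop standing in for the full GA. The network, candidate paths, demands, and parameters below are invented for illustration; they are not one of the paper's benchmark instances.

```python
import random

random.seed(1)

# Illustrative instance: route each commodity over one of several candidate
# paths; a link whose total flow exceeds its threshold gets factor ALPHA.
ALPHA = 0.5
links = {"ab": (1.0, 10), "ac": (1.2, 8), "cb": (0.8, 12)}  # cost/unit, threshold
paths = {0: [["ab"], ["ac", "cb"]], 1: [["ac"], ["ab"]]}    # candidate paths
demand = {0: 7, 1: 6}

def cost(genome):
    """Total network cost of a routing choice, with threshold discounting."""
    flow = {l: 0.0 for l in links}
    for k, choice in enumerate(genome):
        for l in paths[k][choice]:
            flow[l] += demand[k]
    total = 0.0
    for l, (c, thr) in links.items():
        f = flow[l]
        total += c * f * (ALPHA if f > thr else 1.0)  # discount above threshold
    return total

def mutate(g):
    """Reassign one commodity to a random candidate path."""
    k = random.randrange(len(g))
    g = list(g)
    g[k] = random.randrange(len(paths[k]))
    return tuple(g)

# Minimal (mu + lambda)-style evolutionary loop.
pop = [tuple(random.randrange(len(paths[k])) for k in paths) for _ in range(20)]
for _ in range(100):
    pop += [mutate(random.choice(pop)) for _ in range(20)]
    pop = sorted(set(pop), key=cost)[:20]

best = pop[0]
print("best routing:", best, "cost:", round(cost(best), 2))
```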
-
Temporal and spatial analysis was applied to a sequence of cloud top pressure (CTP) images and cloud optical thickness (TAU) images, and a storm tracking algorithm was proposed. A sequence of storm tracks was developed from the satellite images. Composite images were created by projecting ahead in time and substituting the first valid pixel for missing data, and a variety of CTP and TAU cut-off values were used to identify regions of interest. Region correspondences were determined from one time frame to the next, which yielded the storm center coordinates. The resulting tracks were compared to storm tracks computed from sea level pressure data by matching the results first in time and then in spatial distance.
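A small numpy sketch of the compositing step as described (the frame stack and cut-off value are invented for illustration): missing pixels in the current frame are filled with the first valid pixel found when projecting ahead in time, and a CTP cut-off then flags candidate storm regions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stack of CTP frames (time, lat, lon); NaN marks missing data.
frames = rng.uniform(200.0, 1000.0, size=(4, 8, 8))
frames[rng.random(frames.shape) < 0.3] = np.nan

def composite(frames, t):
    """Fill missing pixels in frame t with the first valid pixel found
    when projecting ahead in time."""
    out = frames[t].copy()
    for later in frames[t + 1:]:
        mask = np.isnan(out)
        out[mask] = later[mask]
    return out

comp = composite(frames, 0)

# Apply a CTP cut-off (illustrative value) to mark candidate storm regions.
CTP_CUTOFF = 400.0
regions = comp < CTP_CUTOFF
print("candidate storm pixels:", int(regions.sum()))
```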
-
The objective of this study is to compare geometric-based and evolutionary techniques for tracking storm systems from sequences of satellite images. Analysis was applied to the International Satellite Cloud Climatology Project low-resolution D1 database for selected storm systems during September 1988. During this time period there were two exceptionally long tracks of major hurricane systems, Hurricanes Gilbert and Helene. Cloud top pressure and cloud optical thickness were used to identify storm systems. The ability of the geometric-based and evolutionary techniques to generate tracks through storm regions was assessed. Differences in final tracking results between the two techniques resulted not only from differences in methodology but also from differences in the type of preprocessed input used by each technique. Tracking results were compared to results disseminated by the Colorado State/Tropical Prediction Center and maintained by the National Hurricane Center in Miami, Florida. For the hurricanes investigated in this study, both techniques were able to generate tracks that followed most or at least some portions of the hurricanes. The evolutionary algorithm was generally able to maintain good continuity along the tracks but, with no knowledge of overall region movement, was unable to discern which of two possible directions was best to pursue in cases where there were two or more equally close storm system components. The geometric method was able to maintain a smooth track close to the course of the hurricane, except for confusion primarily at the beginning and/or end of tracks.
-
Several working or experimental management systems use expert-system techniques for fault management purposes. Although effort in the area is still growing, most expert fault management systems have been built on an ad hoc, unstructured basis, simply transferring the knowledge of a human expert into an automated system. However, to meet future challenges, a theoretical foundation for fault management must be established, aiming to bridge the gap between working systems and research and to provide a general structured model easily expandable to future networks. In this paper an algorithm is proposed to simplify the set of clustered alarms. The algorithm is based on techniques widely used in the logic design field to simplify switching functions. The performance of the proposed algorithm is analyzed and the results are compared to those of a traditional algorithm.
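The abstract does not spell out the algorithm, but the switching-function analogy suggests a Quine-McCluskey-style merging step; a hedged sketch of that idea, with each clustered alarm encoded as a tuple of binary conditions and '-' as a don't-care:

```python
def merge(p, q):
    """Merge two alarm patterns differing in exactly one position, else None."""
    diff = [i for i, (a, b) in enumerate(zip(p, q)) if a != b]
    if len(diff) != 1:
        return None
    i = diff[0]
    return p[:i] + ('-',) + p[i + 1:]

def simplify(patterns):
    """Repeat pairwise merging until no further reduction is possible."""
    patterns = set(patterns)
    while True:
        merged, used = set(), set()
        items = sorted(patterns)
        for i, p in enumerate(items):
            for q in items[i + 1:]:
                m = merge(p, q)
                if m:
                    merged.add(m)
                    used.update((p, q))
        if not merged:
            return patterns
        patterns = (patterns - used) | merged

# Three clustered alarms over three binary conditions.
alarms = {('1', '0', '1'), ('1', '1', '1'), ('1', '1', '0')}
print(simplify(alarms))   # e.g. {('1', '-', '1'), ('1', '1', '-')}
```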
-
A quality assurance system is essential for the credibility and structured growth of anaesthesiology-based transoesophageal echocardiography (TEE) programmes. We have developed software (Q/A Kappa), comprising a 400-line source code, capable of directly reporting kappa correlation coefficient values, using external reviewer interpretations as the 'gold standard', and thereby allowing systematic assessment of the validity of intraoperative echocardiographic interpretation. This paper presents an assessment of the validity of 240 intraoperative anaesthesiologists' echocardiographic interpretations and, in addition, the results of field testing of this prototypical software. Data, derived from consecutive cardiac surgery patients, consisted of standardized two-dimensional transoesophageal echocardiographic, colour flow and Doppler imaging sequences. Intraoperative and off-line 'gold standard' TEE interpretations were compared for 19 fields or variables using the Q/A Kappa program. The kappa correlation coefficients were highly variable and dependent on the examination field, ranging from 0.08 for apical regional wall motion scores to 1.00 for tricuspid regurgitation grade, left atrial measurement, aortic valve anatomy, and left ventricular long-axis and short-axis global function. The correlation coefficients were also operator dependent. These data (480 interpretations) were also entered manually into the equation used to calculate the kappa correlation coefficient. The relationship between Q/A Kappa-derived values and manually calculated values was highly significant (p < 0.001; r = 1.0). The implications and possible explanations of the results for particular examination fields are discussed. This study also demonstrates successful seamless functioning of this software program from data entry, through segmentation into tables, to valid statistical analysis. These findings suggest that it is practical to provide sophisticated continuous quality improvement TEE data on a routine basis.
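For reference, Cohen's kappa corrects observed agreement for chance agreement: κ = (p_o − p_e)/(1 − p_e). A minimal sketch of the calculation for one examination field; the category labels and ratings below are hypothetical, not data from the study.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical interpretations for one field (e.g., tricuspid regurgitation
# grade): intraoperative anaesthesiologist vs. external reviewer.
intraop  = ['mild', 'none', 'moderate', 'mild', 'none', 'severe']
reviewer = ['mild', 'none', 'mild',     'mild', 'none', 'severe']
print(f"kappa = {cohens_kappa(intraop, reviewer):.2f}")
```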
-
During the software lifecycle, the software structure is subject to many changes in order to fulfill the customer's requirements. In Distributed Object Oriented systems, software engineers face many challenges in solving the software-hardware mismatch problem, in which the software structure does not match the customer's underlying hardware. A major design problem of Object Oriented software systems is the efficient distribution of software classes among the different nodes in the system while maintaining two features: low coupling and high software quality. In this paper, we present a new methodology for efficiently restructuring Distributed Object Oriented software systems to improve the overall system performance and to solve the software-hardware mismatch problem. Our method has two main phases. In the first phase, we use the hierarchical clustering method to restructure the target software application; as a result, all the possible clustering solutions that could be applied to the target software application are generated. In the second phase, we decide on the best-fit clustering solution according to the customer hardware organization.
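A hedged sketch of the two phases under simple assumptions (the coupling matrix and node count are invented, and the paper's actual quality measures are not reproduced): hierarchical clustering over a coupling-derived distance generates every candidate clustering as a cut of the dendrogram, and the cut whose cluster count matches the customer's hardware is selected.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical pairwise coupling between 5 classes (higher = more coupled).
coupling = np.array([
    [0, 9, 1, 0, 2],
    [9, 0, 2, 1, 0],
    [1, 2, 0, 8, 7],
    [0, 1, 8, 0, 6],
    [2, 0, 7, 6, 0],
], dtype=float)

# Phase 1: turn coupling into a distance and build the full hierarchy;
# every cut of this dendrogram is one candidate clustering solution.
distance = coupling.max() - coupling
np.fill_diagonal(distance, 0.0)
tree = linkage(squareform(distance), method='average')

# Phase 2: pick the clustering whose cluster count matches the customer's
# hardware, e.g. 2 available nodes.
N_NODES = 2
assignment = fcluster(tree, t=N_NODES, criterion='maxclust')
print("class -> node:", assignment)   # e.g. [1 1 2 2 2]
```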
-
The primary purpose of this study was to investigate the feasibility of using simulated data from the United Kingdom Meteorological Office (UKMO) global climate model to serve as boundary values for the regional model RM3, which has been used by NASA to make predictions about climate dynamics in West Africa. In the past, historical data have been used successfully as boundary data, but this approach limits outcomes to past time periods. The advantage of using the UKMO data is its potential to provide input boundary data for future time periods, resulting in future regional predictions. This study has provided NASA scientists with graphical and statistical summaries, including visual animations, that provide qualitative and quantitative information necessary for evaluating whether the UKMO data can be used as a driving force for the RM3 model. One definite conclusion of this investigation is that both spatial and temporal interpolation of the UKMO results will be necessary to make them compatible with the RM3 model.
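A minimal sketch of the two interpolations named in the conclusion; the grid spacings, domain, variable, and time spacing are illustrative, not the actual UKMO or RM3 specifications.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

rng = np.random.default_rng(0)

# Hypothetical coarse global-model field on a 2.5-degree grid.
lat_c = np.arange(0.0, 30.1, 2.5)
lon_c = np.arange(-20.0, 20.1, 2.5)
field_t0 = rng.normal(300, 5, (lat_c.size, lon_c.size))  # e.g. temperature, K
field_t6 = field_t0 + rng.normal(0, 1, field_t0.shape)   # snapshot 6 h later

# Spatial interpolation onto a finer regional 0.5-degree grid.
interp = RegularGridInterpolator((lat_c, lon_c), field_t0)
lat_f = np.arange(5.0, 25.01, 0.5)
lon_f = np.arange(-15.0, 15.01, 0.5)
pts = np.array(np.meshgrid(lat_f, lon_f, indexing='ij')).reshape(2, -1).T
field_fine = interp(pts).reshape(lat_f.size, lon_f.size)

# Temporal interpolation: linear blend between 6-hourly snapshots at t = 3 h.
w = 3.0 / 6.0
field_t3 = (1 - w) * field_t0 + w * field_t6
print(field_fine.shape, field_t3.shape)
```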
-
Software restructuring techniques offer solutions to the software-hardware mismatch problem, in which the software structure does not match the available hardware platform. In Distributed Object Oriented (DOO) systems, software engineers face many challenges in solving this problem. One important aspect of DOO software systems is the efficient distribution of software classes among the different nodes while maintaining low coupling and high software quality. In this paper, we present a new methodology for efficiently restructuring DOO software systems to improve performance and to solve the software-hardware mismatch problem. In our method, we use the hierarchical clustering technique to select the classes to be grouped together and, according to the customer hardware organization, we pick the level of the hierarchy that has the appropriate number of clusters to be allocated to the set of available nodes in the customer's distributed system.
-
The use of Transmission Electron Microscopy (TEM) to characterize the microstructure of a material continues to grow in importance as technological advancements become increasingly dependent on nanotechnology [1]. Since nanoparticle properties such as size (diameter) and size distribution are often important in determining potential applications, a particle analysis is often performed on TEM images. Traditionally done manually, this has the potential to be labor intensive, time consuming, and subjective [2]. To resolve these issues, automated particle analysis routines are becoming more widely accepted within the community [3]. When using such programs, it is important to compare their performance in terms of functionality and cost. The primary goal of this study was to apply one such software package, ImageJ, to grayscale TEM images of nanoparticles of known size. A secondary goal was to compare this popular open-source general-purpose image processing program to two commercial software packages. After a brief investigation of performance and price, ImageJ was identified as the software best suited for the particle analysis conducted in the study. While many ImageJ functions were used, the ability to break agglomerations that occur during specimen preparation into separate particles using a watershed algorithm was particularly helpful [4]. © 2009 SPIE-IS&T.
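ImageJ's binary watershed itself is not reproduced here; the sketch below shows the same distance-transform-plus-watershed idea using scikit-image on a synthetic pair of overlapping disks standing in for an agglomerated particle pair (all shapes and parameters invented):

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

# Synthetic binary image: two overlapping disks (an "agglomerate").
yy, xx = np.mgrid[0:80, 0:80]
binary = ((xx - 30)**2 + (yy - 40)**2 < 15**2) | \
         ((xx - 50)**2 + (yy - 40)**2 < 15**2)

# Peaks of the distance transform become markers; watershed on the negated
# distance splits the agglomerate along its "neck".
distance = ndi.distance_transform_edt(binary)
coords = peak_local_max(distance, labels=binary, min_distance=10)
markers = np.zeros_like(binary, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = watershed(-distance, markers, mask=binary)
print("particles found:", labels.max())   # expect 2
```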
-
A web-based lidar experimentation and data analysis system (LEDAS) was developed, with support from a National Science Foundation award, to support resource sharing of lidar equipment, datasets, and data analysis routines, and collaboration among members of the Connecticut State University System (CSUS) Lidar Collaboratory. The system allows users at different geographical locations to conduct remote sensing research and education over the Web through remote access and control of a single shared lidar system and web-based data analysis. Users need not have any specialized instrumentation or software at their institutions, thereby making real remote sensing research available to students and faculty from institutions that may not have the internal budgets for such facilities. An original structure providing basic functionality was developed and implemented. This paper describes the second-generation data analysis system, which provides significant new enhancements and capabilities. © 2008 IEEE.
-
Nanoparticles, particles with a diameter of 1-100 nanometers (nm), are of interest in many applications including device fabrication, quantum computing, and sensing because their decreased size may give rise to certain properties that are very different from those exhibited by bulk materials. Further advancement of nanotechnology cannot be realized without an increased understanding of nanoparticle properties such as size (diameter) and size distribution. Frequently, these parameters are evaluated using numerous imaging modalities including transmission electron microscopy (TEM) and atomic force microscopy (AFM). In the past, these parameters have been obtained from digitized images by manually measuring and counting many of these nanoparticles, a task that is highly subjective and labor intensive. Recently, computer imaging particle analysis routines that count and measure objects in a binary image [1] have emerged as an objective and rapid alternative to manual techniques. In this paper a procedure is described that can be used to preprocess a set of grayscale images so that they are correctly thresholded into binary images prior to a particle analysis, ultimately resulting in a more accurate assessment of the size and frequency (size distribution) of nanoparticles. Particle analysis was performed on two types of calibration samples imaged using AFM and TEM. Additionally, the results of particle analysis can be used to identify and remove small noise particles from the image. This filtering technique is based on identifying the location of small particles in the binary image, assessing their size, and removing them without affecting the size of other larger particles.
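A small sketch of the noise-filtering idea described at the end (the image and area cut-off are invented): label connected components, measure each component's area, and drop those below a minimum size, leaving larger particles untouched.

```python
import numpy as np
from scipy import ndimage as ndi

rng = np.random.default_rng(0)

# Synthetic binary image: two large particles plus scattered pixel noise.
binary = np.zeros((64, 64), dtype=bool)
binary[10:20, 10:20] = True            # large particle
binary[40:52, 30:42] = True            # large particle
binary_noisy = binary | (rng.random(binary.shape) < 0.01)

# Label connected components, measure their areas, and keep only those
# above a minimum size; large particles are unaffected.
labels, n = ndi.label(binary_noisy)
sizes = ndi.sum_labels(binary_noisy, labels, index=np.arange(1, n + 1))
MIN_AREA = 20                          # illustrative cut-off, in pixels
keep = np.isin(labels, np.flatnonzero(sizes >= MIN_AREA) + 1)
print(f"components before: {n}, after: {ndi.label(keep)[1]}")
```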
-
Nanoparticles, particles with a diameter of 1-100 nanometers (nm), are of interest in many applications including device fabrication, quantum computing, and sensing because their size may give them properties that are very different from those of bulk materials. Further advancement of nanotechnology cannot be achieved without an increased understanding of nanoparticle properties such as size (diameter) and size distribution, frequently evaluated using transmission electron microscopy (TEM). In the past, these parameters have been obtained from digitized TEM images by manually measuring and counting many of these nanoparticles, a task that is highly subjective and labor intensive. More recently, computer imaging particle analysis has emerged as an objective alternative by counting and measuring objects in a binary image. This paper describes the procedures used to preprocess a set of grayscale TEM images so that they could be correctly thresholded into binary images, allowing a more accurate assessment of the size and frequency (size distribution) of nanoparticles. Several preprocessing methods, including pseudo flat field correction and rolling ball background correction, were investigated, with the rolling ball algorithm yielding the best results. Examples of particle analysis are presented for different types of materials and different magnifications. In addition, a method based on the results of particle analysis for identifying and removing small noise particles is discussed. This filtering technique is based on identifying the location of small particles in the binary image and removing them without affecting the size of other larger particles.
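scikit-image's restoration.rolling_ball (scikit-image ≥ 0.19) implements the same rolling-ball idea as ImageJ's background subtraction; a hedged sketch on a synthetic image with uneven illumination, where the radius and image are illustrative and dark particles require the usual invert/estimate/subtract/invert trick:

```python
import numpy as np
from skimage import restoration

# Synthetic grayscale "TEM" image: dark particles on a slowly varying
# background gradient (uneven illumination).
yy, xx = np.mgrid[0:128, 0:128]
image = 100 + 0.5 * xx                 # slow horizontal gradient
image[30:40, 30:40] -= 60              # dark particle
image[80:95, 70:85] -= 60              # dark particle

# Rolling-ball background estimation targets bright features, so invert,
# estimate the background, subtract it, and invert back.
inverted = image.max() - image
est_bg = restoration.rolling_ball(inverted, radius=25)
corrected = image.max() - (inverted - est_bg)
print(f"corrected range: {corrected.min():.1f} .. {corrected.max():.1f}")
```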
-
Thresholding is an image processing procedure used to convert an image consisting of gray-level pixels into a black and white binary image. One application of thresholding is particle analysis. Once foreground objects are separated from the background, a quantitative analysis characterizing the number, size, and shape of particles is obtained, which can then be used to evaluate a series of nanoparticle samples. Numerous thresholding techniques exist, differing primarily in how they deal with variations in noise, illumination, and contrast. In this paper, several popular thresholding algorithms are qualitatively and quantitatively evaluated on transmission electron microscopy (TEM) and atomic force microscopy (AFM) images. Initially, six thresholding algorithms were investigated: Otsu, Riddler-Calvard, Kittler, Entropy, Tsai, and Maximum Likelihood. The Riddler-Calvard algorithm was not included in the quantitative analysis because it did not produce acceptable qualitative results for the images in the series. Two quantitative measures were used to evaluate these algorithms: one based on comparing object area, the other object diameter, before and after thresholding. For AFM images the Kittler algorithm yielded the best results, followed by the Entropy and Maximum Likelihood techniques. The Tsai algorithm yielded the top results for TEM images, followed by the Entropy and Kittler methods.
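Several of the algorithms named have counterparts in scikit-image: Otsu as threshold_otsu and Riddler-Calvard as the isodata method, with Yen shown as a representative entropy-based method (it may differ from the paper's exact "Entropy" variant); Kittler, Tsai, and Maximum Likelihood have no direct scikit-image equivalent. A sketch of an area-based comparison on a synthetic image whose foreground area is known:

```python
import numpy as np
from skimage import filters

rng = np.random.default_rng(0)

# Synthetic grayscale image with a known foreground area for comparison.
image = rng.normal(60, 10, (128, 128))      # noisy background
image[40:80, 40:80] += 90                   # bright square, area 1600 px
true_area = 40 * 40

# Candidate algorithms available in scikit-image; area relative to ground
# truth serves as the quantitative measure.
methods = {
    'Otsu': filters.threshold_otsu,
    'Riddler-Calvard (isodata)': filters.threshold_isodata,
    'Yen (entropy-based)': filters.threshold_yen,
    'Li': filters.threshold_li,
}
for name, fn in methods.items():
    t = fn(image)
    area = int((image > t).sum())
    print(f"{name:28s} t={t:7.2f} area={area:5d} "
          f"(error {100 * (area - true_area) / true_area:+.1f}%)")
```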
-
This paper describes how the topic of Google hacking was incorporated into a graduate course on web security offered in the Fall of 2005. It begins with an overview of Google hacking: what it is, how it is used, and, most importantly, how to defend against it. The paper then describes a series of exercises that students must complete, providing hands-on experience with Google hacking strategies, techniques, and countermeasures. Copyright 2007 ACM.
-
We consider cluster systems with multiple nodes where each server is prone to run tasks at a degraded level of service due to some software or hardware fault. The cluster serves tasks generated by remote clients, which are potentially queued at a dispatcher. We present an analytic queueing model of such systems, represented as an M/MMPP/1 queue, and derive and analyze exact numerical solutions for the mean and tail probabilities of the queue-length distribution. The analysis shows that the distribution of the repair time is critical for these performability metrics. Additionally, in the case of high-variance repair times, the model reveals so-called blow-up points, at which the performance characteristics change dramatically. Since this blow-up behavior is sensitive to changes in model parameters, it is critical for system designers to be aware of the conditions under which it occurs. Finally, we present simulation results that demonstrate the robustness of this qualitative blow-up behavior under several model variations. © 2007 IEEE.
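The exact M/MMPP/1 analysis is not reproduced here; as a stand-in, a continuous-time Markov chain simulation of a single server that alternates between a normal and a degraded service rate, with all rates illustrative rather than from the paper:

```python
import random

random.seed(0)

# Illustrative rates: Poisson arrivals; the server alternates between a
# normal and a degraded state, each with its own exponential service rate
# and exponential sojourn time.
LAM = 0.8                                    # arrival rate
MU = {'normal': 1.5, 'degraded': 0.4}        # state-dependent service rates
SWITCH = {'normal': 0.01, 'degraded': 0.1}   # state-change (repair) rates

t, t_end, n, state = 0.0, 200_000.0, 0, 'normal'
area = 0.0                                   # integral of queue length over time
while t < t_end:
    rates = {'arrive': LAM, 'switch': SWITCH[state]}
    if n > 0:
        rates['depart'] = MU[state]
    total = sum(rates.values())
    dt = random.expovariate(total)           # time to the next event
    area += n * dt
    t += dt
    u, acc = random.uniform(0, total), 0.0   # pick the event proportionally
    for ev, r in rates.items():
        acc += r
        if u <= acc:
            break
    if ev == 'arrive':
        n += 1
    elif ev == 'depart':
        n -= 1
    else:
        state = 'degraded' if state == 'normal' else 'normal'

print(f"mean queue length: {area / t:.2f}")
```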
-
Pipelining is a suitable architecture for applications that are naturally divided into stages. Modern applications tend to be Object Oriented (OO), and within the OO context there are many interactions among different objects that result in many communication activities. Besides the feed-forward communication activities, many bypassing activities are generated in the pipeline structure. In this paper, we present a performance model that analyzes and evaluates the execution and communication times of OO software running on a pipeline architecture. The model captures both feed-forward and bypassing communication. We use the model to restructure the target software to achieve better performance. The restructuring algorithm has two phases: the first phase is concerned with maximizing throughput; the second aims to minimize latency and fully exploit the system resources. © 2006 - IOS Press and the authors. All rights reserved.
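The paper's model itself is not reproduced; a simplified stand-in under the usual pipelining assumptions (all times invented) shows the two quantities the phases optimize: steady-state throughput is limited by the busiest stage including the communication it must issue, and latency is the sum along the feed-forward path.

```python
# Hypothetical per-stage compute times and communication costs (seconds)
# for an OO application mapped onto a 4-stage pipeline; "bypass" edges
# model object interactions that skip intermediate stages.
stage = [2.0, 5.0, 3.0, 4.0]
forward = [0.5, 0.5, 0.5]            # stage i -> i+1
bypass = {(0, 2): 0.7, (1, 3): 0.6}  # stage i -> j, with j > i+1

def throughput(stage, forward, bypass):
    """Steady-state rate, limited by the busiest stage including the
    communication it issues each iteration."""
    load = list(stage)
    for i, c in enumerate(forward):
        load[i] += c
    for (i, _), c in bypass.items():
        load[i] += c
    return 1.0 / max(load)

def latency(stage, forward):
    """End-to-end time for one item along the feed-forward path."""
    return sum(stage) + sum(forward)

print(f"throughput = {throughput(stage, forward, bypass):.3f} items/s")
print(f"latency    = {latency(stage, forward):.1f} s")
```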