-
Work in the area of Google Hacking resulted in two presentations, two publications, a grant of $2,500, and the incorporation of results into two courses: Web Security (CSC553) and Information Security (CSC453).
-
"This thorough primer examines the technological aspects of networking through a practical approach. Readers will gain knowledge of local area networks (LANs), wide area networks (WANs), the Internet, wireless LANs, wireless MANs, voice over IP (VoIP), as well as asynchronous transfer mode (ATM) and network security. Introductory chapters on foundational topics, such as data communications and computer architecture give readers the knowledge base they need to understand more complex networking concepts. This book effectively utilizes a practical approach to networking rather than a strict focus on theory or math, and requires no prior background in communications technology."--BOOK JACKET.
-
For the TREC 2007 conference, the CRM114 team considered three non-Bayesian methods of spam filtration in the CRM114 framework: an SVM based on the "hyperspace" feature = document paradigm, a bit-entropy matcher, and substring compression based on LZ77. As a calibration yardstick, we used the well-tested and widely used CRM114 OSB Markov random field system (basically unchanged since 2003). The results show that the SVM is about a factor of two to three more accurate at spam filtering than the OSB system, that substring compression is somewhat more accurate than OSB, and that bit entropy is somewhat less accurate on the TREC 2007 test sets.
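To make the substring-compression idea concrete, the following is a minimal sketch of compression-based classification using Python's zlib as the compressor and toy corpora; it illustrates the general approach of scoring a message by how much extra space it costs to compress alongside each class's training text, not CRM114's actual LZ77 implementation.

```python
import zlib

def compressed_size(text: str) -> int:
    """Length of the zlib-compressed byte string."""
    return len(zlib.compress(text.encode("utf-8"), 9))

def classify(message: str, ham_corpus: str, spam_corpus: str) -> str:
    """Assign the class whose training corpus compresses the message best.

    The extra bytes needed to compress (corpus + message) over the corpus
    alone approximate how well the corpus's substrings predict the message.
    """
    ham_cost = compressed_size(ham_corpus + message) - compressed_size(ham_corpus)
    spam_cost = compressed_size(spam_corpus + message) - compressed_size(spam_corpus)
    return "ham" if ham_cost <= spam_cost else "spam"

# Toy usage with made-up corpora.
ham = "meeting agenda attached please review the budget figures"
spam = "click here to claim your free prize winner act now"
print(classify("you are a winner claim your prize", ham, spam))  # likely "spam"
```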
-
There have been a large number of projects based on the Distributed Object Oriented (DOO) approach for solving complex problems in various scientific fields. The mismatch problem is one of the most important problems facing DOO systems: the initial design of the DOO application does not yield the best class distribution. In such a case, the DOO software may need to be restructured. In this paper, we propose a methodology for efficiently restructuring DOO software classes so that they can be mapped onto a distributed system consisting of a set of nodes. The proposed methodology consists of two phases. The first phase introduces a recursive graph clustering technique to partition the OO system into subsystems with low coupling. The second phase maps the generated partitions to the set of available machines in the target distributed architecture. A simulation evaluation was carried out for a set of randomly generated DOO software designs, and the results were compared with those of the K-Partitioning algorithm in terms of overall inter-class communication cost. © 2008 IEEE.
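The first phase can be illustrated with a hedged sketch: assuming the class structure is available as a weighted coupling graph (nodes are classes, edge weights approximate inter-class communication), recursive bisection with the Kernighan-Lin heuristic from networkx produces low-coupling subsystems. The graph and size limit below are illustrative assumptions, not the paper's data or exact clustering technique.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

def recursive_partition(graph: nx.Graph, max_size: int):
    """Recursively bisect a class-coupling graph until every subsystem has at
    most `max_size` classes. Edge weights model inter-class communication, so
    the bisection heuristic keeps heavily coupled classes together."""
    if graph.number_of_nodes() <= max_size:
        return [set(graph.nodes)]
    left, right = kernighan_lin_bisection(graph, weight="weight", seed=0)
    return (recursive_partition(graph.subgraph(left).copy(), max_size)
            + recursive_partition(graph.subgraph(right).copy(), max_size))

# Hypothetical coupling graph: classes A..F with weighted communication edges.
g = nx.Graph()
g.add_weighted_edges_from([
    ("A", "B", 9), ("B", "C", 8), ("A", "C", 7),   # tightly coupled cluster
    ("D", "E", 9), ("E", "F", 8), ("D", "F", 7),   # another cluster
    ("C", "D", 1),                                  # weak link between them
])
print(recursive_partition(g, max_size=3))
```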
-
In scientific imaging, it is crucial to obtain precise images to facilitate accurate observations for the given application. However, the imaging equipment used to acquire such images often introduces error into the observed image. Therefore, there is a fundamental need to remove the error associated with these images in order to facilitate accurate observations. This study investigates the effectiveness of an image processing technique utilizing an iterative deconvolution algorithm to remove error from micro-CT images. This technique is applied to several sets of in-vivo micro-CT scans of mice, and its effectiveness is evaluated by qualitative comparison of the resultant thresholded binary images to thresholded binary images produced by more conventional image processing techniques, namely Gaussian filtering and straight thresholding. Results of this study suggest that iterative deconvolution as a pre-processing step produces superior qualitative results compared to the more conventional methods tested. The groundwork for future quantitative verification is motivated. © 2005 IEEE.
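The abstract does not name the specific iterative deconvolution scheme; Richardson-Lucy is a common choice for this kind of restoration, and the sketch below is a hedged illustration on a synthetic phantom with an assumed Gaussian-like point spread function, not the authors' pipeline.

```python
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(image, psf, n_iter=30):
    """Iterative (Richardson-Lucy) deconvolution of a blurred, noisy image.

    image : observed 2-D array (non-negative)
    psf   : assumed point spread function of the scanner
    """
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = convolve(estimate, psf, mode="reflect")
        ratio = image / np.maximum(blurred, 1e-12)      # avoid divide-by-zero
        estimate *= convolve(ratio, psf_flipped, mode="reflect")
    return estimate

# Toy usage: blur a synthetic "bone" phantom with a Gaussian-like PSF, then restore.
rng = np.random.default_rng(0)
phantom = np.zeros((64, 64)); phantom[24:40, 24:40] = 1.0
k = np.array([1, 4, 6, 4, 1], float)
psf = np.outer(k, k); psf /= psf.sum()
observed = convolve(phantom, psf, mode="reflect") + 0.01 * rng.random((64, 64))
restored = richardson_lucy(observed, psf)
```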
-
The primary goal of this research was to provide image processing support to aid in the identification of those subjects most affected by bone loss when exposed to weightlessness and to provide insight into the causes of the large variability. Past research has demonstrated that genetically distinct strains of mice exhibit different degrees of bone loss when subjected to simulated weightlessness. Bone loss is quantified by in vivo computed tomography (CT) imaging. The first step in evaluating bone density is to segment gray scale images into separate regions of bone and background. Two of the most common methods for implementing image segmentation are thresholding and edge detection. Thresholding is generally considered the simplest segmentation process and can be carried out by having a user visually select a threshold using a sliding scale. This is a highly subjective process with great potential for variation from one observer to another. One way to reduce inter-observer variability is to have several users independently set the threshold and average their results, but this is a very time-consuming process. A better approach is to apply an objective adaptive technique such as the Ridler/Calvard method. In our study we concluded that thresholding was better than edge detection, and that pre-processing these images with an iterative deconvolution algorithm prior to adaptive thresholding yields superior visualization when compared with images that have not been pre-processed or that have been pre-processed with a filter.
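As a concrete illustration of the Ridler/Calvard (isodata) approach, here is a minimal sketch applied to a synthetic gray-scale slice; the intensities, noise model, and tolerance are illustrative assumptions, not the study's data.

```python
import numpy as np

def ridler_calvard_threshold(image, tol=0.5):
    """Ridler/Calvard (isodata) adaptive threshold selection.

    Starting from the global mean, iteratively move the threshold to the
    midpoint of the mean intensities of the two classes it induces
    (background vs. bone) until it stops changing.
    """
    t = image.mean()
    while True:
        below, above = image[image <= t], image[image > t]
        if below.size == 0 or above.size == 0:
            return t
        t_new = 0.5 * (below.mean() + above.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Toy usage on a synthetic slice: bright "bone" region on a dark background.
rng = np.random.default_rng(1)
slice_ = rng.normal(40, 10, (128, 128))                 # background
slice_[40:90, 40:90] = rng.normal(180, 15, (50, 50))    # bone region
t = ridler_calvard_threshold(slice_)
binary = slice_ > t                                     # segmented bone mask
```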
-
The problem of characterizing the relationship between packet size and network delay has received little attention in the field. Research in that area has been limited to either simulation studies or empirical observations that are detached from analytic traffic modeling. From a queueing viewpoint, it is simple to show that packet size, traffic burstiness, and queueing delay are inter-related, which necessitates a more careful study. We present a traffic model of a router fed by ON/OFF-type sources with heavy-tailed burst sizes. The traffic model considered is consistent with the evidence that Web traffic is heavy-tailed. The analysis cases that are considered establish a quantitative characterization of the complex relationship among packet payload and header sizes, traffic burstiness, and router queueing delay. © 2004 IEEE.
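To make the setting concrete, below is a hedged toy simulation of a single ON/OFF source with Pareto-distributed (heavy-tailed) burst sizes feeding a FIFO link; the rates, tail index, header/payload sizes, and single-source topology are illustrative assumptions, not the paper's analytic model.

```python
import random

def simulate(num_bursts=20000, payload=1000, header=40,
             link_rate=1.25e6, alpha=1.5, mean_off=0.01, seed=0):
    """Mean per-packet delay for an ON/OFF source with Pareto burst sizes.

    link_rate is in bytes/second; the source's peak rate is assumed to be
    four times the link rate, so queueing builds up during bursts.
    """
    rng = random.Random(seed)
    pkt_bytes = payload + header
    service = pkt_bytes / link_rate              # link transmission time per packet
    inter_arrival = pkt_bytes / (4 * link_rate)  # packet spacing at the source's peak rate
    t = 0.0                                      # arrival clock
    link_free_at = 0.0                           # time the link finishes its backlog
    total_delay, total_pkts = 0.0, 0
    for _ in range(num_bursts):
        t += rng.expovariate(1.0 / mean_off)     # OFF (idle) period
        u = 1.0 - rng.random()                   # uniform in (0, 1]
        n_pkts = max(1, int(1.0 / u ** (1.0 / alpha)))  # Pareto burst size, scale 1
        for _ in range(n_pkts):
            start = max(t, link_free_at)         # wait behind earlier packets
            link_free_at = start + service
            total_delay += link_free_at - t      # queueing + transmission delay
            total_pkts += 1
            t += inter_arrival                   # next packet in the burst
    return total_delay / total_pkts

print("mean per-packet delay (s):", simulate())
```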
-
We present a genetic algorithm for heuristically solving a cost minimization problem applied to communication networks with threshold-based discounting. The network model assumes that every two nodes can communicate and offers incentives to combine flow from different sources. Namely, there is a prescribed threshold on every link, and if the total flow on a link is greater than the threshold, the cost of this flow is discounted by a factor α. A heuristic algorithm based on a genetic strategy is developed and applied to a benchmark set of problems. The results are compared with earlier branch-and-bound results obtained with the CPLEX® solver. For larger data instances we were able to obtain improved solutions using less CPU time, confirming the effectiveness of our heuristic approach. Copyright © 2003, Lawrence Erlbaum Associates, Inc.
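A hedged sketch of the discounted cost model and a genetic search follows; the tiny network, demands, candidate paths, thresholds, GA parameters, and the discount factor ALPHA are all illustrative assumptions, and a real instance would route flow over a full graph rather than a fixed candidate-path list.

```python
import random

ALPHA = 0.5                                    # discount factor (assumed)
COST = {("a", "b"): 3, ("b", "c"): 2, ("a", "c"): 6, ("b", "d"): 4, ("c", "d"): 3}
THRESHOLD = {e: 5 for e in COST}               # per-link discount threshold

# Each demand: (source, dest, volume, list of candidate paths as edge lists).
DEMANDS = [
    ("a", "d", 4, [[("a", "b"), ("b", "d")], [("a", "c"), ("c", "d")]]),
    ("a", "c", 3, [[("a", "c")], [("a", "b"), ("b", "c")]]),
    ("b", "d", 4, [[("b", "d")], [("b", "c"), ("c", "d")]]),
]

def total_cost(choice):
    """choice[i] selects a candidate path for demand i; flows are summed per
    link, and links whose flow exceeds the threshold are billed at ALPHA."""
    flow = {e: 0.0 for e in COST}
    for (_, _, vol, paths), k in zip(DEMANDS, choice):
        for e in paths[k]:
            flow[e] += vol
    return sum(COST[e] * f * (ALPHA if f > THRESHOLD[e] else 1.0)
               for e, f in flow.items())

def genetic_search(pop_size=30, generations=200, p_mut=0.2, seed=0):
    """Elitist GA over per-demand path choices (one gene per demand)."""
    rng = random.Random(seed)
    n_opts = [len(d[3]) for d in DEMANDS]
    pop = [[rng.randrange(n) for n in n_opts] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_cost)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, len(n_opts))
            child = a[:cut] + b[cut:]               # one-point crossover
            if rng.random() < p_mut:                # mutation: reroute one demand
                i = rng.randrange(len(n_opts))
                child[i] = rng.randrange(n_opts[i])
            children.append(child)
        pop = elite + children
    best = min(pop, key=total_cost)
    return best, total_cost(best)

print(genetic_search())
```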
-
Temporal and spatial analysis was applied to a sequence of cloud top pressure (CTP) images and cloud optical thickness (TAU) images, and a storm tracking algorithm was proposed. A sequence of storm tracks was developed from the satellite images. Composite images were created by projecting ahead in time and substituting the first valid pixel for missing data, and a variety of CTP and TAU cut-off values were used to identify regions of interest. Region correspondences were determined from one time frame to the next, which yielded the storm center coordinates. The obtained tracks were compared to storm tracks computed from sea level pressure data by matching the results first in time and then in spatial distance.
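The core of such a tracker, thresholding a CTP field, labeling contiguous regions, and linking region centroids across frames, can be sketched as follows; the cut-off value, synthetic frames, and nearest-centroid matching rule are illustrative assumptions rather than the paper's composite-image procedure.

```python
import numpy as np
from scipy import ndimage

def storm_centers(ctp, cutoff):
    """Label contiguous regions where cloud top pressure is below `cutoff`
    (deep convection) and return their centroids as (row, col) tuples."""
    mask = ctp < cutoff
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))

def link_frames(centers_t0, centers_t1, max_dist=10.0):
    """Match each storm center in frame t to the nearest center in frame t+1,
    forming track segments; unmatched regions start or end tracks."""
    links = []
    for i, c0 in enumerate(centers_t0):
        dists = [np.hypot(c0[0] - c1[0], c0[1] - c1[1]) for c1 in centers_t1]
        if dists and min(dists) <= max_dist:
            links.append((i, int(np.argmin(dists))))
    return links

# Toy example: two synthetic CTP frames (hPa); low values mark a storm that moves.
frame0 = np.full((40, 40), 900.0); frame0[10:14, 10:14] = 300.0
frame1 = np.full((40, 40), 900.0); frame1[12:16, 13:17] = 300.0
c0, c1 = storm_centers(frame0, 500.0), storm_centers(frame1, 500.0)
print(link_frames(c0, c1))   # [(0, 0)] -> the same storm tracked across frames
```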
-
The objective of this study is to compare geometric-based and evolutionary techniques for tracking storm systems from sequences of satellite images. Analysis was applied to the International Satellite Cloud Climatology Project low-resolution D1 database for selected storm systems during the month of September 1988. During this time period there were two exceptionally long tracks of major hurricane systems, Hurricanes Gilbert and Helene. Cloud top pressure and cloud optical thickness were used to identify storm systems. The ability of the geometric-based and evolutionary techniques to generate tracks through storm regions was assessed. Differences in final tracking results between the two techniques resulted not only from the differences in methodology but also from differences in the type of preprocessed input used by each technique. Tracking results were compared to results disseminated by the Colorado State/Tropical Prediction Center and maintained by the National Hurricane Center in Miami, Florida. For the hurricanes investigated in this study, both techniques were able to generate tracks that followed most or at least some portions of the hurricanes. The evolutionary algorithm was in general able to maintain good continuity along the tracks but, with no knowledge of overall region movement, was unable to discern which of two possible directions would be best to pursue when there were two or more equally close storm system components. The geometric method was able to maintain a smooth track close to the course of the hurricane, except for confusion primarily at the beginning and/or end of tracks.
-
Several working or experimental management systems use expert systems techniques for fault management purposes. Although effort in the area is still growing, most of the expert fault management systems developed so far were built on an ad-hoc and unstructured basis, simply transferring the knowledge of the human expert into an automated system. However, to meet future challenges, a theoretical foundation for fault management must be established, aiming to bridge the gap between working systems and research and to provide a general structured model easily expandable to future networks. In this paper an algorithm is proposed to simplify the set of clustered alarms. This algorithm is based on techniques widely used in the logic design field to simplify switching functions. The performance of the proposed algorithm is analyzed and the results are compared to those of a traditional algorithm.
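To illustrate what simplifying clustered alarms in the style of switching functions can look like, here is a minimal sketch of the term-combining step used in Quine-McCluskey minimization, applied to alarm patterns encoded as bit strings; the encoding and example patterns are assumptions, and the paper's actual algorithm may differ in its details.

```python
from itertools import combinations

def combine(a, b):
    """Merge two alarm terms (strings over '0', '1', '-') that differ in
    exactly one fixed position, as in switching-function minimization."""
    diffs = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diffs) == 1 and a[diffs[0]] != "-" and b[diffs[0]] != "-":
        i = diffs[0]
        return a[:i] + "-" + a[i + 1:]
    return None

def simplify(terms):
    """Repeatedly combine alarm terms until no further reduction is possible,
    mirroring the prime-implicant generation step of Quine-McCluskey."""
    terms = set(terms)
    while True:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            m = combine(a, b)
            if m is not None:
                merged.add(m); used.update((a, b))
        if not merged:
            return sorted(terms)
        terms = (terms - used) | merged

# Toy alarm clusters over four alarm variables (1 = alarm raised, 0 = not raised).
print(simplify(["1010", "1011", "1110", "1111"]))   # -> ['1-1-']
```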
-
A quality assurance system is essential for the credibility and structured growth of anaesthesiology-based transoesophageal echocardiography (TEE) programmes. We have developed software (Q/A Kappa), with a 400-line source code, capable of directly reporting kappa correlation coefficient values, using external reviewer interpretations as the 'gold standard', and thereby allowing systematic assessment of the validity of intraoperative echocardiographic interpretation. This paper presents an assessment of the validity of 240 intraoperative anaesthesiologists' echocardiographic interpretations and, in addition, the results of field testing of this prototypical software. Data, derived from consecutive cardiac surgery patients, consisted of standardized two-dimensional transoesophageal echocardiographic, colour flow and Doppler imaging sequences. Intraoperative and off-line 'gold standard' TEE interpretations were compared for 19 fields or variables using the Q/A Kappa program. The kappa correlation coefficients were highly variable and dependent on the examination field, ranging from 0.08 for apical regional wall motion scores to 1.00 for tricuspid regurgitation grade, left atrial measurement, aortic valve anatomy, and left ventricular long-axis and short-axis global function. The correlation coefficients were also operator dependent. These data (480 interpretations) were also manually entered into the equation for the kappa correlation coefficient. The relationship between Q/A Kappa-derived values and manually calculated values was highly significant (p < 0.001; r = 1.0). The implications and possible explanations of the results for particular examination fields are discussed. This study also demonstrates successful seamless functioning of this software program from data entry, through segmentation into tables, to valid statistical analysis. These findings suggest that it is practical to provide sophisticated continuous quality improvement TEE data on a routine basis.
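For reference, the kappa statistic the software reports can be computed as in the hedged sketch below (Cohen's kappa from two raters' categorical readings); the grade labels and toy data are illustrative, and Q/A Kappa's own implementation is not described in the abstract.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa between intraoperative and 'gold standard' readings.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and
    p_e the agreement expected by chance from each rater's marginal frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum(freq_a[c] / n * freq_b[c] / n for c in categories)
    return (p_o - p_e) / (1 - p_e) if p_e != 1 else 1.0

# Toy example: tricuspid regurgitation grades from two readers (illustrative data).
intraop = ["none", "mild", "mild", "moderate", "severe", "mild"]
offline = ["none", "mild", "moderate", "moderate", "severe", "mild"]
print(round(cohens_kappa(intraop, offline), 2))
```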
-
During the software lifecycle, the software structure is subject to many changes in order to fulfill the customer's requirements. In Distributed Object Oriented systems, software engineers face many challenges in solving the software-hardware mismatch problem, in which the software structure does not match the customer's underlying hardware. A major design problem of Object Oriented software systems is the efficient distribution of software classes among the different nodes in the system while maintaining two features: low coupling and high software quality. In this paper, we present a new methodology for efficiently restructuring Distributed Object Oriented software systems to improve overall system performance and to solve the software-hardware mismatch problem. Our method has two main phases. In the first phase, we use the hierarchical clustering method to restructure the target software application. As a result, all the possible clustering solutions that could be applied to the target software application are generated. In the second phase, we decide on the best-fit clustering solution according to the customer's hardware organization.
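A hedged sketch of the first phase follows: agglomerative hierarchical clustering over a hypothetical class-coupling matrix using SciPy, where cutting the dendrogram at different cluster counts enumerates candidate class distributions. The class names, coupling values, and the coupling-to-distance conversion are illustrative assumptions, not the paper's method in detail.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical coupling matrix: entry (i, j) counts messages exchanged
# between classes i and j (higher = more tightly coupled).
classes = ["Parser", "Lexer", "AST", "Renderer", "Canvas", "Widget"]
coupling = np.array([
    [0, 9, 7, 1, 0, 0],
    [9, 0, 6, 0, 0, 0],
    [7, 6, 0, 2, 0, 1],
    [1, 0, 2, 0, 8, 7],
    [0, 0, 0, 8, 0, 9],
    [0, 0, 1, 7, 9, 0],
], dtype=float)

# Turn coupling (a similarity) into a distance so that tightly coupled
# classes end up close together, then cluster hierarchically.
distance = coupling.max() - coupling
np.fill_diagonal(distance, 0.0)
tree = linkage(squareform(distance, checks=False), method="average")

# Cutting the dendrogram at k clusters gives one candidate distribution of
# classes per number of target nodes; the second phase would pick the best fit.
for k in (2, 3):
    labels = fcluster(tree, t=k, criterion="maxclust")
    groups = {c: [n for n, l in zip(classes, labels) if l == c] for c in set(labels)}
    print(k, groups)
```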
-
The primary purpose of this study was to investigate the feasibility of using simulated data from the United Kingdom Meteorological Office (UKMO) global climate model to serve as boundary values for the regional model RM3, which has been used by NASA to make predictions about climate dynamics in West Africa. In the past, historical data have been used successfully as boundary data, but this approach limits outcomes to time periods in the past. The advantage of using the UKMO data is its potential to provide input boundary data for future time periods, resulting in future regional predictions. This study has provided NASA scientists with graphical and statistical summaries, including visual animations, that supply the qualitative and quantitative information necessary for evaluating whether the UKMO data can be used as a driving force for the RM3 model. One definite conclusion of this investigation is that both spatial and temporal interpolation of the UKMO results will be necessary in order to make them compatible with the RM3 model.
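The kind of interpolation the conclusion calls for can be sketched as follows: bilinear regridding of a coarse global field onto a finer regional grid, plus linear interpolation in time between boundary snapshots. The grids, field values, and 6-hourly interval below are illustrative assumptions, not the actual UKMO or RM3 configurations.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Coarse "global model" grid (illustrative resolution and values).
glat = np.linspace(0.0, 20.0, 9)       # 2.5-degree spacing
glon = np.linspace(-20.0, 10.0, 13)
field_t0 = np.random.default_rng(0).normal(300.0, 2.0, (glat.size, glon.size))
field_t6 = field_t0 + 1.5              # the same field six hours later

# Finer "regional model" grid over West Africa (illustrative).
rlat = np.linspace(4.0, 16.0, 49)      # 0.25-degree spacing
rlon = np.linspace(-18.0, 8.0, 105)
rlat2d, rlon2d = np.meshgrid(rlat, rlon, indexing="ij")
points = np.column_stack([rlat2d.ravel(), rlon2d.ravel()])

def regrid(field):
    """Bilinear spatial interpolation from the coarse grid to the regional grid."""
    interp = RegularGridInterpolator((glat, glon), field)
    return interp(points).reshape(rlat.size, rlon.size)

def boundary_at(hours):
    """Linear temporal interpolation between two 6-hourly boundary snapshots."""
    w = hours / 6.0
    return (1.0 - w) * regrid(field_t0) + w * regrid(field_t6)

hourly_boundaries = [boundary_at(h) for h in range(7)]   # 0h .. 6h boundary fields
```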