Your search
Results: 4,523 resources
-
The Ecology of Education: Knowledge Systems for Sustainable Development and Sustainability
Research in knowledge systems for sustainable development (KSSD) seeks to determine how science and technology can be put into effective action at a local level. Teachers in education for sustainability attempt to achieve the same goal. KSSD research has indicated that success is context driven, that panaceas are inappropriate and that knowledge systems at best provide solutions in evolution. In this paper, we describe a teaching framework that we are developing to support KSSD researchers and teachers in education for sustainability based in ecology of education. While a need for ecology in education may be apparent, there is concurrently an equally important need for ecology of education. We argue that one cannot teach ecology in education adequately without an appropriate ecology of education. This paper first explains why teachers need to know this and then describes how teachers of education for sustainability can implement and assess this approach in the classroom.
-
The purpose of the present study was to explore the themes that counselor education doctoral students perceive as influencing their experience. The results of an exploratory qualitative study in which counselor education doctoral students provided their perceptions of what helped or hindered their progress are presented. Themes identified as both positively and negatively influencing their experiences were departmental culture, mentoring, academics, support systems, and personal issues. Recommendations are provided for counselor educators to consider in their work with doctoral students. © 2009 by the American Counseling Association. All rights reserved.
-
Despite living in disadvantaged urban communities experiencing social and economic hardships, many children emerge with positive outcomes. Social-emotional competence and social support were hypothesized to have strong influences on academic trajectories during the critical period of academic skill acquisition. Participants were 282 third-grade students from six elementary schools in a Northwestern urban community. Beyond the importance of prior levels of academic competence, considerable variance in end-of-year academic outcomes was predicted by initial levels of academic social-emotional competence and improvements in social-emotional competence and perceived teacher support over the course of the year. Noteworthy is that findings were strongest for African-American students, but methodological caveats regarding research with underachieving minority youth were discussed. The findings suggest that school psychologists and others designing interventions to improve achievement of disadvantaged students should address social-emotional competencies and classroom climate, especially teacher support of students.
-
During the software lifecycle, the software structure is subject to many changes in order to fulfill the customer's requirements. In Distributed Object Oriented systems, software engineers face many challenges to solve the software-hardware mismatch problem in which the software structure does not match the customer's underlying hardware. A major design problem of Object Oriented software systems is the efficient distribution of software classes among the different nodes in the system while maintaining two features: low coupling and high software quality. In this paper, we present a new methodology for efficiently restructuring Distributed Object Oriented software systems to improve the overall system performance and to solve the software-hardware mismatch problem. Our method has two main phases. In the first phase, we use the hierarchical clustering method to restructure the target software application. As a result, all the possible clustering solutions that could be applied to the target software application are generated. In the second phase, we decide on the best-fit clustering solution according to the customer hardware organization.
-
The primary purpose of this study was to investigate the feasibility of using simulated data from the United Kingdom Meteorological Office (UKMO) global climate mathematical model to serve as boundary values for a regional model RM3 which has been used by NASA to make predictions about climate dynamics in West Africa. In the past, historical data has been used successfully as boundary data but this approach limits outcomes to time periods in the past. The advantage of using the UKMO data is its potential to provide input boundary data for future time periods resulting in future regional predictions. This study has provided NASA scientists with graphical and statistical summaries including visual animations that provide qualitative and quantitative information necessary for evaluating whether the UKMO data can be used as a driving force for the RM3 model. One definite conclusion of this investigation is that both spatial and temporal interpolation of UKMO results will be necessary in order to make its results compatible with the RM3 model.
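The abstract above concludes that both spatial and temporal interpolation of the UKMO output are needed to make it compatible with the RM3 grid and timestep. The following is a minimal, hypothetical sketch of those two operations; the function names, grids and values are invented for illustration and are not NASA's actual code.

```python
# Illustrative interpolation helpers for regridding coarse model output.
# All names and sample values are invented, not from the RM3/UKMO codebase.

def interp_time(t, t0, t1, v0, v1):
    """Linear temporal interpolation between two model snapshots at t0 and t1."""
    w = (t - t0) / (t1 - t0)
    return v0 + w * (v1 - v0)

def interp_bilinear(x, y, x0, x1, y0, y1, q00, q10, q01, q11):
    """Bilinear spatial interpolation within one coarse grid cell.

    q00..q11 are the field values at the four corners of the cell.
    """
    wx = (x - x0) / (x1 - x0)
    wy = (y - y0) / (y1 - y0)
    top = q00 + wx * (q10 - q00)
    bot = q01 + wx * (q11 - q01)
    return top + wy * (bot - top)

# Example: a value halfway between two 12-hourly snapshots, and at the
# centre of a unit grid cell.
v = interp_time(6.0, 0.0, 12.0, 10.0, 20.0)             # -> 15.0
q = interp_bilinear(0.5, 0.5, 0, 1, 0, 1, 1, 2, 3, 4)   # -> 2.5
```

In practice a regional model would apply the temporal step first to align timesteps, then the spatial step at every fine-grid point, but the two building blocks are as above.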
-
The software restructuring techniques present solutions for the software-hardware mismatch problem in which the software structure does not match the available hardware platform. In Distributed Object Oriented (DOO) systems, software engineers face many challenges to solve the software-hardware mismatch problem. One important aspect of DOO software systems is the efficient distribution of software classes among the different nodes while maintaining low coupling and high software quality. In this paper, we present a new methodology for efficiently restructuring DOO software systems to improve performance and to solve the software-hardware mismatch problem. In our method, we use the hierarchical clustering technique to select the classes to be grouped together and, according to the customer hardware organization, we pick the level of the hierarchy that has the appropriate number of clusters to be allocated to the set of available nodes in the customer distributed system.
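The two abstracts above describe the same idea: cluster classes hierarchically by coupling, then pick the hierarchy level whose cluster count matches the available nodes. Here is a small sketch of that idea, assuming average-linkage clustering over a class "distance" matrix (low distance = tight coupling); the class names, distances and quality metrics are invented, not the papers' actual data.

```python
# Illustrative agglomerative (hierarchical) clustering over a class
# distance matrix. Class names and distances are invented examples.

def hierarchical_cluster(names, dist):
    """Average-linkage agglomerative clustering.

    Returns every level of the hierarchy, from all-singletons down to one
    cluster, so a level with the desired cluster count (e.g. one cluster
    per available hardware node) can be chosen afterwards.
    """
    clusters = [[n] for n in names]
    idx = {n: i for i, n in enumerate(names)}
    levels = [[c[:] for c in clusters]]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Average pairwise distance between the two clusters.
                d = sum(dist[idx[a]][idx[b]]
                        for a in clusters[i] for b in clusters[j])
                d /= len(clusters[i]) * len(clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]   # merge closest pair
        del clusters[j]
        levels.append([c[:] for c in clusters])
    return levels

# Four classes; A-B and C-D are tightly coupled (small distance).
names = ["A", "B", "C", "D"]
dist = [[0.0, 0.1, 0.9, 0.8],
        [0.1, 0.0, 0.7, 0.9],
        [0.9, 0.7, 0.0, 0.2],
        [0.8, 0.9, 0.2, 0.0]]
levels = hierarchical_cluster(names, dist)
# With two hardware nodes, pick the level that has exactly two clusters.
two_nodes = [lvl for lvl in levels if len(lvl) == 2][0]
```

For a real system the distance matrix would be derived from measured inter-class communication, and each candidate level would additionally be scored against the papers' quality metrics before allocation.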
-
The use of Transmission Electron Microscopy (TEM) to characterize the microstructure of a material continues to grow in importance as technological advancements become increasingly more dependent on nanotechnology [1]. Since nanoparticle properties such as size (diameter) and size distribution are often important in determining potential applications, a particle analysis is often performed on TEM images. Traditionally done manually, this has the potential to be labor intensive, time consuming, and subjective [2]. To resolve these issues, automated particle analysis routines are becoming more widely accepted within the community [3]. When using such programs, it is important to compare their performance, in terms of functionality and cost. The primary goal of this study was to apply one such software package, ImageJ, to grayscale TEM images of nanoparticles with known size. A secondary goal was to compare this popular open-source general purpose image processing program to two commercial software packages. After a brief investigation of performance and price, ImageJ was identified as the software best suited for the particle analysis conducted in the study. While many ImageJ functions were used, the ability to break agglomerations that occur in specimen preparation into separate particles using a watershed algorithm was particularly helpful [4]. © 2009 SPIE-IS&T.
-
A web-based lidar experimentation and data analysis system (LEDAS) was developed, with support from a National Science Foundation award, to support resource sharing of lidar equipment, datasets and data analysis routines and collaboration between members of the Connecticut State University System (CSUS) Lidar Collaboratory. The system allows users at different geographical locations to conduct remote sensing research and education over the Web through remote access and control of a single shared lidar system and web-based data analysis. Users need not have any specialized instrumentation or software at their institutions, thereby making real remote sensing research available to students and faculty from institutions which may not have the internal budgets for such facilities. An original structure providing basic functionality was developed and implemented. This paper describes the second generation data analysis system which provides significant new enhancements and capabilities. © 2008 IEEE.
-
Nanoparticles, particles with a diameter of 1-100 nanometers (nm), are of interest in many applications including device fabrication, quantum computing, and sensing because their decreased size may give rise to certain properties that are very different from those exhibited by bulk materials. Further advancement of nanotechnology cannot be realized without an increased understanding of nanoparticle properties such as size (diameter) and size distribution. Frequently, these parameters are evaluated using numerous imaging modalities including transmission electron microscopy (TEM) and atomic force microscopy (AFM). In the past, these parameters have been obtained from digitized images by manually measuring and counting many of these nanoparticles, a task that is highly subjective and labor intensive. Recently, computer imaging particle analysis routines that count and measure objects in a binary image [1] have emerged as an objective and rapid alternative to manual techniques. In this paper a procedure is described that can be used to preprocess a set of gray scale images so that they are correctly thresholded into binary images prior to a particle analysis, ultimately resulting in a more accurate assessment of the size and frequency (size distribution) of nanoparticles. Particle analysis was performed on two types of calibration samples imaged using AFM and TEM. Additionally, results of particle analysis can be used for identifying and removing small noise particles from the image. This filtering technique is based on identifying the location of small particles in the binary image, assessing their size, and removing them without affecting the size of other larger particles.
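The noise-filtering step described at the end of the abstract above (find small particles in the binary image and remove them without touching larger ones) can be sketched as a connected-component size filter. The binary "image" below is an invented example, and this plain-Python flood fill stands in for whatever labeling routine the papers actually used.

```python
# Size-based noise filtering on a binary image: label 4-connected
# foreground regions, then erase any region smaller than min_area.
# The example image is invented for illustration.

def remove_small_particles(img, min_area):
    """Zero out 4-connected foreground components with area < min_area."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and not seen[sy][sx]:
                stack, comp = [(sy, sx)], []
                seen[sy][sx] = True
                while stack:                      # flood fill one component
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) < min_area:          # noise: erase it
                    for y, x in comp:
                        img[y][x] = 0
    return img

binary = [[1, 1, 0, 0, 1],
          [1, 1, 0, 0, 0],
          [0, 0, 0, 0, 0],
          [0, 1, 0, 1, 1]]
remove_small_particles(binary, 3)
# The three regions smaller than 3 pixels are erased; the 2x2 particle stays.
```

Because only whole components below the area cutoff are cleared, the pixels of larger particles are untouched, which is exactly the property the abstract emphasizes.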
-
Nanoparticles, particles with a diameter of 1-100 nanometers (nm), are of interest in many applications including device fabrication, quantum computing, and sensing because their size may give them properties that are very different from bulk materials. Further advancement of nanotechnology cannot be obtained without an increased understanding of nanoparticle properties such as size (diameter) and size distribution frequently evaluated using transmission electron microscopy (TEM). In the past, these parameters have been obtained from digitized TEM images by manually measuring and counting many of these nanoparticles, a task that is highly subjective and labor intensive. More recently, computer imaging particle analysis has emerged as an objective alternative by counting and measuring objects in a binary image. This paper will describe the procedures used to preprocess a set of gray scale TEM images so that they could be correctly thresholded into binary images. This allows for a more accurate assessment of the size and frequency (size distribution) of nanoparticles. Several preprocessing methods including pseudo flat field correction and rolling ball background correction were investigated with the rolling ball algorithm yielding the best results. Examples of particle analysis will be presented for different types of materials and different magnifications. In addition, a method based on the results of particle analysis for identifying and removing small noise particles will be discussed. This filtering technique is based on identifying the location of small particles in the binary image and removing them without affecting the size of other larger particles.
-
Thresholding is an image processing procedure used to convert an image consisting of gray level pixels into a black and white binary image. One application of thresholding is particle analysis. Once foreground objects are separated from the background, a quantitative analysis that characterizes the number, size and shape of particles is obtained which can then be used to evaluate a series of nanoparticle samples. Numerous thresholding techniques exist differing primarily in how they deal with variations in noise, illumination and contrast. In this paper, several popular thresholding algorithms are qualitatively and quantitatively evaluated on transmission electron microscopy (TEM) and atomic force microscopy (AFM) images. Initially, six thresholding algorithms were investigated: Otsu, Riddler-Calvard, Kittler, Entropy, Tsai and Maximum Likelihood. The Riddler-Calvard algorithm was not included in the quantitative analysis because it did not produce acceptable qualitative results for the images in the series. Two quantitative measures were used to evaluate these algorithms. One is based on comparing object area the other on diameter before and after thresholding. For AFM images the Kittler algorithm yielded the best results followed by the Entropy and Maximum Likelihood techniques. The Tsai algorithm yielded the top results for TEM images followed by the Entropy and Kittler methods.
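Of the algorithms compared in the abstract above, Otsu's method is the most widely known: it picks the threshold that maximizes the between-class variance of the gray-level histogram. Here is a compact sketch of it; the 8-level bimodal histogram is an invented toy example, not data from the study.

```python
# Otsu's method: choose the threshold maximizing between-class variance
# of the histogram. The sample histogram below is invented.

def otsu_threshold(hist):
    """Return threshold t maximizing between-class variance.

    Pixels with gray level <= t are background, > t are foreground.
    """
    total = sum(hist)
    grand_sum = sum(i * h for i, h in enumerate(hist))  # sum of all levels
    best_t, best_var = 0, -1.0
    w0 = cum = 0.0
    for t in range(len(hist) - 1):
        w0 += hist[t]                 # background pixel count so far
        cum += t * hist[t]            # background gray-level sum so far
        w1 = total - w0               # foreground pixel count
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum / w0                 # background mean
        m1 = (grand_sum - cum) / w1   # foreground mean
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal histogram over 8 gray levels: dark background, bright particles.
hist = [40, 30, 5, 1, 1, 6, 25, 35]
t = otsu_threshold(hist)   # lands in the valley between the two modes
```

The other algorithms in the comparison (Kittler, entropy-based, Tsai, maximum likelihood) differ mainly in the criterion optimized over the same histogram, which is why they can rank differently on AFM versus TEM images.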
-
This paper provides a description of how the topic of Google hacking was incorporated into a graduate course on web security which was offered in the Fall of 2005. It begins by providing an overview of Google hacking and describes what it is, how it is used, and most importantly how to defend against it. The paper then describes a series of exercises that students must complete providing them with hands-on Google hacking strategies, techniques and countermeasures. Copyright 2007 ACM.
-
We consider cluster systems with multiple nodes where each server is prone to run tasks at a degraded level of service due to some software or hardware fault. The cluster serves tasks generated by remote clients, which are potentially queued at a dispatcher. We present an analytic queueing model of such systems, represented as an M/MMPP/1 queue, and derive and analyze exact numerical solutions for the mean and tail-probabilities of the queue-length distribution. The analysis shows that the distribution of the repair time is critical for these performability metrics. Additionally, in the case of high-variance repair times, the model reveals so-called blow-up points, at which the performance characteristics change dramatically. Since this blow-up behavior is sensitive to a change in model parameters, it is critical for system designers to be aware of the conditions under which it occurs. Finally, we present simulation results that demonstrate the robustness of this qualitative blow-up behavior towards several model variations. © 2007 IEEE.
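The effect the abstract above analyzes, a server that sometimes runs in a degraded phase, can be illustrated with a deliberately simplified simulation. This sketch draws the phase independently per task (a simplification of the Markov-modulated MMPP process in the paper) and uses Lindley's recursion for waiting times; all rates are invented, and the paper's exact M/MMPP/1 analysis and repair-time distributions are not reproduced here.

```python
import random

# Simplified sketch: a single server whose service rate is either normal
# or degraded, with the phase drawn i.i.d. per task (NOT a true MMPP).
# All rates below are invented illustration values.

def mean_wait(lam, mu_normal, mu_degraded, p_degraded, n, seed=1):
    """Mean waiting time via Lindley's recursion W' = max(0, W + S - A)."""
    rng = random.Random(seed)
    w, total = 0.0, 0.0
    for _ in range(n):
        mu = mu_degraded if rng.random() < p_degraded else mu_normal
        s = rng.expovariate(mu)    # service time in the current phase
        a = rng.expovariate(lam)   # interarrival time to the next task
        w = max(0.0, w + s - a)
        total += w
    return total / n

healthy = mean_wait(0.8, 2.0, 2.0, 0.0, 100_000)  # never degraded
faulty = mean_wait(0.8, 2.0, 1.0, 0.5, 100_000)   # degraded half the time
# Mean waits grow markedly once the degraded phase is in play.
```

A faithful reproduction would keep the server in each phase for an exponentially distributed sojourn and model explicit repair times, which is where the paper's blow-up behavior with high-variance repairs comes from.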
-
Pipelining is a suitable architecture for applications that are naturally divided into stages. Recently, applications tend to be Object Oriented (OO). Within the context of OO, there are many interactions among different objects that result in many communication activities. Besides the feed-forward communication activities, many bypassing activities are generated in the pipeline structure. In this paper, we present a performance model that analyzes and evaluates the execution and communication times of OO software that runs on a pipeline architecture. The model realizes both the feed-forward and the bypassing communication. We utilize the model to restructure the target software to achieve better performance. The restructuring algorithm has two phases; the first phase is concerned with maximizing the throughput. The second phase aims to minimize the latency and fully exploit the system resources. © 2006 - IOS Press and the authors. All rights reserved.
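The two quantities the restructuring phases above target can be stated simply for an idealized linear pipeline: throughput is limited by the slowest stage, and latency is the sum of all stage times. The sketch below uses invented stage times and ignores the bypassing communication that the paper's full model accounts for.

```python
# Idealized linear-pipeline metrics; stage times are invented examples
# and the paper's bypassing-communication terms are omitted.

def pipeline_metrics(stage_times):
    """Return (throughput, latency) for a linear pipeline.

    throughput = 1 / max stage time (the bottleneck stage);
    latency    = sum of stage times for one item to traverse the pipe.
    """
    bottleneck = max(stage_times)
    return 1.0 / bottleneck, sum(stage_times)

# Rebalancing work out of the bottleneck stage (analogous to phase 1 of
# the restructuring) raises throughput without changing total work here.
before = pipeline_metrics([2.0, 5.0, 3.0])  # throughput 0.2,  latency 10.0
after = pipeline_metrics([3.0, 4.0, 3.0])   # throughput 0.25, latency 10.0
```

This separation is why the algorithm needs two phases: balancing stages improves throughput, while latency only drops when stage work is actually reduced or merged.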
-
The primary goal of this research was to investigate the ability of quantitative variables to confirm qualitative improvements of the deconvolution algorithm as a preprocessing step in evaluating micro CT bone density images. The analysis of these types of images is important because they are necessary to evaluate various countermeasures used to reduce or potentially reverse bone loss experienced by some astronauts when exposed to extended weightlessness during space travel. Nine low resolution (17.5 microns) CT bone density image sequences, ranging from between 85 to 88 images per sequence, were processed with three preprocessing treatment groups consisting of no preprocessing, preprocessing with a deconvolution algorithm and preprocessing with a Gaussian filter. The quantitative parameters investigated consisted of Bone Volume to Total Volume Ratio, the Structured Model Index, Fractal Dimension, Bone Area Ratio, Bone Thickness Ratio, Euler's Number and the Measure of Enhancement. Trends found in these quantitative variables appear to corroborate the visual improvements observed in the past and suggest which quantitative parameters may be capable of distinguishing between groups that experience bone loss and others that do not.
-
This paper describes a collaborative project conducted by the Computer Science Department at Southern Connecticut State University and NASA's Goddard Institute for Space Science (GISS). Animations of output from a climate simulation math model used at GISS to predict rainfall and circulation have been produced for West Africa from June to September 2002. These early results have assisted scientists at GISS in evaluating the accuracy of the RM3 climate model when compared to similar results obtained from satellite imagery. The results presented below will be refined to better meet the needs of GISS scientists and will be expanded to cover other geographic regions for a variety of time frames.
-
UNLABELLED: The purpose of this study was to determine if there was a relationship between fundamental frequency (Fo) and gender identification in standard esophageal (ES) or tracheoesophageal (TE) speakers. Twenty-three male and 20 female ES and TE speakers participated in this study. Recordings of these speakers reading the Rainbow Passage were played to 48 listeners who indicated perceived gender in a forced choice format. Fo was determined using PC-AUDED [Boston University (1991). Using PC-AUDED, Audio-editor and analyses program for the study of periodic segments. Boston: Boston University]. Seventy-nine percent of the speakers were identified correctly for gender. No significant difference was found between the number of male and female or TE and ES speakers identified correctly. A significant correlation was found between Fo and correct gender identification for the female speakers only. Results suggest that Fo plays a part in gender identification for female ES and TE speakers; however, other factors may also be important cues for gender identification in these speakers., LEARNING OUTCOMES: As a result of this activity, the participant will be able to: (1) describe the relationship between Fo and gender identification for male and female standard esophageal (ES) and tracheoesophageal (TE) speakers (2) discuss other variables that may influence gender identification in ES and TE speakers.
-
Twenty-nine youth with autism spectrum disorders and 26 with typical development between 12 and 18 years of age were engaged in structured interviews (ADOS). The interviews were videotaped and rated for atypical conversational behaviors by trained raters, using the Pragmatic Rating Scale (Landa et al. Psychol Med 22:245-254, 1992). The ASD group was divided into AS and HFA/PDD-NOS subgroups. Significant differences were found among groups on approximately one-third of the PRS items. These items involved primarily the management of topics and information, reciprocity, intonation, and gaze management. The only differences to reach significance between the AS and HFA/PDD-NOS group were a greater tendency for overly formal speech on the part of the AS group, and more difficulty with gaze management on the part of the group with HFA/PDD-NOS. The implications of these findings for understanding and treating conversational deficits in ASD are discussed., (C) Plenum Publishing Corporation 2009. All Rights Reserved.
-
Thirty-seven children 15-25 months of age received clinical diagnoses of autism spectrum disorder (ASD) and were re-evaluated two years later. All subjects were judged to have retained a diagnosis of ASD at the follow-up evaluation. Communication scores for the group as a whole during the first visit were significantly lower than nonverbal IQ. However, by the second visit, verbal and nonverbal scores were no longer significantly different. The group was divided into two subgroups, based on expressive language (EL) outcome at the second visit. The two groups were similar in the second year of life in terms of expressive communication skills and autistic symptoms, except for a trend toward more stereotypic and repetitive behavior in the worse outcome group. By the second visit, however, the groups differed significantly on all standard measures of expression and reception, as well as on autistic symptomatology and nonverbal IQ. When assessed during their second year, children who ended up in the better outcome group showed higher average nonverbal cognitive level, receptive language (RL) scores, number of sounds and words produced, use of symbolic play schemes, and response to joint attention bids. Regression analysis revealed that the variables for which significant differences between the two outcome groups in their second year of life were found provided significant prediction of EL outcome at age four. Stepwise regression identified RL and presence of stereotypic and repetitive behavior at the first visit as significantly associated with EL outcome. Implications of these findings for early identification and intervention are discussed.
Explore
Resource type
- Blog Post (5)
- Book (858)
- Book Section (483)
- Conference Paper (211)
- Dataset (1)
- Document (5)
- Encyclopedia Article (1)
- Journal Article (2,655)
- Magazine Article (11)
- Preprint (5)
- Presentation (1)
- Report (269)
- Thesis (17)
- Web Page (1)
Publication year
- Between 1900 and 1999 (981)
- Between 1910 and 1919 (1)
- 1916 (1)
- Between 1930 and 1939 (5)
- Between 1940 and 1949 (3)
- Between 1950 and 1959 (15)
- Between 1960 and 1969 (68)
- Between 1970 and 1979 (185)
- Between 1980 and 1989 (210)
- Between 1990 and 1999 (494)
- Between 2000 and 2026 (3,531)
- Between 2000 and 2009 (719)
- Between 2010 and 2019 (1,779)
- Between 2020 and 2026 (1,033)
- Unknown (11)