  • With the number of new confirmed COVID-19 cases continuously increasing, many health experts worry about the possibility of a ‘second wave’ outbreak, which could cause more deaths and hit economies even harder. This article looks at the experiences of three Asia-Pacific countries in fighting COVID-19 and discusses whether it is a wise decision to reopen America at this time.

  • This paper discusses the correlation between the SAT and the Math Inventory test. Many school districts have adopted the Math Inventory as a tool to measure student growth from kindergarten through high school. The Math Inventory is a computer-administered test that gives students math problems spanning from counting to high-school-level math. When completed, students receive a quantile measure, much like a Lexile score for reading skill. The purpose of this study is to determine whether success on the Math Inventory is a good indicator of performing well on the SAT. For most high schools around the United States, objectives and lessons are aligned with those of the SAT. The goal of high school teachers is for students to excel on the SAT so that they can go to college, which means the tests used in middle school should be aligned with that goal. If the Math Inventory is not, then it might not be a good use of school time and resources. Data from the 2017-2018 school year were analyzed from ten different high schools in an urban school district to determine the correlation between Math Inventory scores and the math scores/subscores of the SAT/PSAT. The value of the Pearson correlation coefficient suggests a moderate positive relationship between these two variables. © 2021, International Journal of Information and Education Technology. All rights reserved.
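
    A minimal sketch of the kind of correlation analysis described above, using pandas and SciPy; the file name and column names (math_inventory, sat_math) are hypothetical placeholders, not the study's actual data layout.

      import pandas as pd
      from scipy import stats

      # Hypothetical file and column names; the study's real data layout is not specified here.
      df = pd.read_csv("district_scores_2017_2018.csv")

      # Pearson's r between Math Inventory quantile scores and SAT math scores.
      r, p_value = stats.pearsonr(df["math_inventory"], df["sat_math"])
      print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")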

  • Vulnerabilities need to be detected and removed from software. Although previous studies have demonstrated the usefulness of employing prediction techniques in deciding about vulnerabilities of software components, improving the effectiveness of these prediction techniques remains a challenging research question. This paper employs a technique based on a deep neural network with rectified linear units, trained with stochastic gradient descent and batch normalization, for predicting vulnerable software components. The features are defined as continuous sequences of tokens in source code files. A statistical feature selection algorithm is then employed to reduce the feature and search space. We evaluated the proposed technique on several Java Android applications, and the results demonstrated that the proposed technique can predict vulnerable classes, i.e., software components, with high precision, accuracy, and recall.
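
    As a rough illustration of the network family described (fully connected layers with rectified linear units and batch normalization, trained by stochastic gradient descent), a PyTorch sketch follows; the layer sizes, token-feature dimension, and training loop are assumptions, not the paper's reported configuration.

      import torch
      import torch.nn as nn

      # Hypothetical sizes; the paper's exact architecture and token features are not given here.
      n_features, n_hidden = 500, 128

      model = nn.Sequential(
          nn.Linear(n_features, n_hidden),
          nn.BatchNorm1d(n_hidden),
          nn.ReLU(),
          nn.Linear(n_hidden, n_hidden),
          nn.BatchNorm1d(n_hidden),
          nn.ReLU(),
          nn.Linear(n_hidden, 2),        # vulnerable vs. non-vulnerable class
      )

      optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
      loss_fn = nn.CrossEntropyLoss()

      def train_step(x_batch, y_batch):
          """One SGD step on a mini-batch of token-derived feature vectors."""
          optimizer.zero_grad()
          loss = loss_fn(model(x_batch), y_batch)
          loss.backward()
          optimizer.step()
          return loss.item()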

  • In order to reduce students' test anxiety, collaborative testing has been suggested as an evaluation strategy. However, few studies have focused on how testing groups are constructed, especially when an important factor, group diversity, is taken into consideration. In this paper we conduct a case study to assess the association between group diversity and test anxiety in collaborative testing. The results observed may indicate that: 1) around 20% of students suffered from test anxiety to some extent in either an individual test or a collaborative test; 2) collaborative testing could alleviate test anxiety, although the effect is not statistically significant; and 3) there exists a moderate positive correlation between group diversity and test anxiety in collaborative testing. The results of the study may suggest limiting group diversity in collaborative testing in order to alleviate test anxiety. © 2015 IEEE.

  • We introduce a novel application of feature ranking methods to the fault localization problem. We envision the problem of localizing the causes of failures as an instance of ranking a program's elements, where the elements are conceptualized as features. In this paper, we define features as the program's statements; in its fine-grained definition, however, the idea of a program's features can refer to any traits of programs. This paper proposes feature ranking-based algorithms. The algorithms analyze execution traces of both passing and failing test cases and extract bug signatures from the failing test cases. The proposed procedure extracts, from the bug signatures, possible combinations of program elements that are executed together. The feature ranking-based algorithms then order statements according to the suspiciousness of the combinations. When viewed as sequences, the combinations of program elements produced and traced in bug signatures can be used to reason about the longest common subsequence. The longest common subsequence of the bug signatures represents the statements executed in common by all failing test cases and thus provides a means for identifying statements that contain possible faults. Our evaluation indicates that the proposed feature-based fault localization outperforms existing fault localization ranking schemes. © 2017 World Scientific Publishing Company.
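
    The longest-common-subsequence step can be illustrated with a short Python sketch; the statement identifiers and failing-trace format below are invented for illustration, and the paper's actual ranking algorithms are not reproduced.

      from functools import reduce

      def lcs(a, b):
          """Longest common subsequence of two statement sequences (classic DP)."""
          m, n = len(a), len(b)
          dp = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
          for i in range(m):
              for j in range(n):
                  if a[i] == b[j]:
                      dp[i + 1][j + 1] = dp[i][j] + [a[i]]
                  else:
                      dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
          return dp[m][n]

      # Hypothetical failing-test traces as sequences of executed statement ids.
      failing_traces = [
          ["s1", "s3", "s4", "s7", "s9"],
          ["s1", "s2", "s3", "s7", "s9"],
          ["s1", "s3", "s5", "s7", "s9"],
      ]

      # Statements common to all failing executions are candidates for the fault.
      common = reduce(lcs, failing_traces)
      print(common)  # ['s1', 's3', 's7', 's9']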

  • Vulnerabilities need to be detected and removed from software. Although previous studies have demonstrated the usefulness of employing prediction techniques in deciding about vulnerabilities of software components, improving the accuracy and effectiveness of these prediction techniques remains a challenging research question. This paper proposes a hybrid technique based on combining N-gram analysis and feature selection algorithms for predicting vulnerable software components, where features are defined as continuous sequences of tokens in source code files, i.e., Java class files. Machine learning-based feature selection algorithms are then employed to reduce the feature and search space. We evaluated the proposed technique on several Java Android applications, and the results demonstrated that the proposed technique can predict vulnerable classes, i.e., software components, with high precision, accuracy, and recall. © 2015 IEEE.
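
    A hedged sketch of an N-gram-plus-feature-selection pipeline in scikit-learn; the n-gram range, the number of selected features, and the downstream classifier (naive Bayes here) are placeholder choices rather than the paper's exact configuration.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.feature_selection import SelectKBest, chi2
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import Pipeline

      # Hypothetical inputs: the token stream of each Java class file as a string, plus
      # a 0/1 label marking whether the class is known to be vulnerable.
      source_files = ["public class Foo { void bar ( ) { } }", "public class Baz { }"]
      labels = [1, 0]

      pipeline = Pipeline([
          # Token N-grams (2- to 4-grams here) as features, mirroring the N-gram analysis step.
          ("ngrams", CountVectorizer(analyzer="word", ngram_range=(2, 4), token_pattern=r"\S+")),
          # Feature selection to shrink the feature/search space; k is a placeholder value.
          ("select", SelectKBest(chi2, k=200)),
          # Any downstream classifier can be plugged in; naive Bayes is only an example.
          ("clf", MultinomialNB()),
      ])
      # pipeline.fit(source_files, labels)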

  • Due to the complex causality of failures and the special characteristics of test cases, faults in GUI (Graphical User Interface) applications are difficult to localize. This paper adapts feature selection algorithms to localize GUI-related faults in a given program. Features are defined as subsequences of the events executed. By employing statistical feature ranking techniques, the events can be ranked by their suspiciousness of being responsible for the faulty behavior. The statements in the source code implementing (handling) the underlying events are then ranked in order of suspiciousness. The evaluation of the proposed technique on several open-source Java projects verified the effectiveness of this feature selection-based fault localization technique for GUI applications. © 2014 IEEE.

  • Software components that are vulnerable to being exploited need to be identified and patched. Employing prevention techniques designed to detect vulnerable software components in early stages can significantly reduce the expenses associated with the software testing process and thus help build a more reliable and robust software system. Although previous studies have demonstrated the effectiveness of adapting prediction techniques to vulnerability detection, the feasibility of those techniques is limited mainly because of insufficient training data sets. This paper proposes a prediction technique targeting the early identification of potentially vulnerable software components. In the proposed scheme, the potentially vulnerable components are viewed as mislabeled data that may contain true but not yet observed vulnerabilities. The proposed hybrid technique combines the support vector machine algorithm and an ensemble learning strategy to better identify potentially vulnerable components. The proposed vulnerability detection scheme is evaluated using several Java Android applications. The results demonstrated that the proposed hybrid technique can identify potentially vulnerable classes with high precision and relatively acceptable accuracy and recall.
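
    One common way to combine a support vector machine with an ensemble strategy is bagging, sketched below with scikit-learn; the abstract does not specify the exact ensemble scheme, so the estimator settings and the bagging choice itself are assumptions.

      from sklearn.ensemble import BaggingClassifier
      from sklearn.svm import SVC

      # X: feature vectors extracted from the components; y: 0/1 vulnerability labels (not shown).
      # Bagging is used as one common ensemble strategy; the paper's exact combination of
      # SVM and ensemble learning may differ.
      model = BaggingClassifier(
          estimator=SVC(kernel="rbf", C=1.0),
          n_estimators=25,
          max_samples=0.8,
          random_state=0,
      )
      # model.fit(X, y); predictions = model.predict(X_new)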

  • Due to the considerable advantages of collaborative learning, group work is widely used in tertiary institutions. Previous studies demonstrated that group diversity has a positive influence on group work achievement. Therefore, an interesting question that arises is how to achieve maximum group diversity effectively and automatically, especially when the features to be considered are numerous and the number of students is large. In this paper we apply a multi-start algorithm, composed of a greedy constructive phase and a strategic oscillation improvement phase, to group students. We evaluated the technique in a small-scale case study. The results observed indicate that the multi-start algorithm-based grouping model is feasible. It significantly improved the overall and average student diversity within groups, and it also enhanced students' collaborative learning outcomes compared to a random grouping model. However, we did not find any evidence of a monotonic positive relationship between diversity and students' learning outcomes. © 2015 IEEE.
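
    A sketch of the greedy constructive phase only, under the assumption that within-group diversity is measured as the sum of pairwise Euclidean distances; the multi-start loop and the strategic oscillation improvement phase described in the paper are omitted, and the data shapes are invented.

      import numpy as np

      def greedy_diverse_groups(features, n_groups):
          """One greedy constructive pass: place each student in the group where they
          add the most diversity (sum of distances to the group's current members)."""
          n = len(features)
          cap = int(np.ceil(n / n_groups))          # keep group sizes balanced
          groups = [[] for _ in range(n_groups)]
          for s in np.random.permutation(n):        # a random order gives one "start"
              best_g, best_gain = None, -1.0
              for g, members in enumerate(groups):
                  if len(members) >= cap:
                      continue
                  gain = sum(np.linalg.norm(features[s] - features[m]) for m in members)
                  if gain > best_gain:
                      best_g, best_gain = g, gain
              groups[best_g].append(s)
          return groups

      # Hypothetical data: 30 students described by 5 standardized features.
      groups = greedy_diverse_groups(np.random.rand(30, 5), n_groups=6)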

  • Group work is widely used in tertiary institutions due to the considerable advantages of collaborative learning. Previous studies indicated that group diversity has a positive influence on group work achievement. Therefore, how to achieve diversity within a group effectively and automatically is an interesting question. In this paper we propose a novel clustering-based grouping model. The proposed technique first employs a balanced K-means algorithm to divide the students into several size-balanced clusters, such that the students within the same cluster are more similar (in some sense) to each other than to those in other clusters, and then adopts a one-sample-each-cluster strategy to construct the groups. We evaluated the proposed technique in two small-scale case studies. The results observed may indicate that the clustering-based grouping model is feasible and effective. © 2014 IEEE.
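
    A minimal sketch of the clustering-then-sampling idea; standard KMeans from scikit-learn stands in for the balanced K-means variant used in the paper (which enforces equal cluster sizes), and the student features are random placeholders.

      import numpy as np
      from sklearn.cluster import KMeans

      # Hypothetical data: 24 students, 4 features each; 6 clusters of similar students.
      X = np.random.rand(24, 4)
      n_clusters = 6

      # Standard KMeans is used here as a stand-in; the paper's balanced K-means
      # additionally forces equal cluster sizes, which sklearn does not provide.
      labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
      clusters = [list(np.where(labels == c)[0]) for c in range(n_clusters)]

      # One-sample-each-cluster strategy: each group takes one student from every
      # cluster, so group members are mutually dissimilar (i.e., the group is diverse).
      n_groups = max(len(c) for c in clusters)
      groups = [[c[g] for c in clusters if g < len(c)] for g in range(n_groups)]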

  • This paper deals with predator–prey dynamics from both individual and population perspectives. First, we build a discrete Markov model of predator–prey interactions at the individual level. By shortening the time gap, from discrete time to continuous time, and letting the number of individuals grow to infinity, a continuity equation for the predator–prey interactions is derived in the large-population regime. Then, from the leading terms of the continuity equation, that is, the mean-field equation, a qualitative analysis of the approximate model yields an asymptotically stable closed orbit or, put simply, the parameter conditions under which an equilibrium point exists. These qualitative conclusions show how individual-level microscopic interactions play out at the macroscopic group level, or can be treated as statistical averages over the underlying microscopic random models. This paper explores the accuracy and operability of a model constructed at the individual level, which differs from the traditional method of constructing population models directly via differential and difference equations. Therefore, by working with variables and data from individual behavior, it is possible to construct more accurate models of population dynamics. © 2014, Springer Science+Business Media Dordrecht (outside the USA).
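
    The abstract does not state the paper's mean-field equation explicitly; the classical Lotka–Volterra system below is shown only to illustrate the kind of macroscopic limit such a derivation typically yields.

      % Classical Lotka--Volterra form, shown only as an illustrative example of a
      % predator--prey mean-field limit; the paper's exact equations are not given here.
      % x(t): prey density, y(t): predator density; \alpha, \beta, \gamma, \delta > 0.
      \begin{aligned}
        \frac{dx}{dt} &= \alpha x - \beta x y,\\
        \frac{dy}{dt} &= \delta x y - \gamma y.
      \end{aligned}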

  • The primary goal of this study is to determine whether the datasets of positive COVID-19 test cases and CO2 emissions from Connecticut over the span of March 24, 2020 to October 31, 2021 are in any way correlated. With climate change a prominent issue facing the entire world today, it is important to explore methods of documenting past patterns of greenhouse gas emissions in order to inform decision making that could reduce future ones. Autoregressive integrated moving average (ARIMA) modeling is also implemented in this paper to provide forecasting based on CO2 emissions in CT starting from 2019. The most significant results from this paper are as follows: the CO2 emission data for the transportation sectors, including ground transportation, domestic aviation, and international aviation, and the weekly COVID-19 positive test case data have a strong relationship during the first 28 weeks of the pandemic, with a correlation of -86.34%. CO2 emissions experienced on average a -22.96% change between pre-pandemic and initial quarantine conditions, and at most a -44.48% change when comparing the pre-pandemic mean to the minimum value during the initial quarantine. Lastly, the ARIMA model found to have the lowest Akaike information criterion (AIC) was ARIMA(4,0,4). In conclusion, under a collective global pandemic and lockdown conditions, less traveling resulted in a correlated decrease in CO2 emissions. This means that concentrated efforts to reduce unnecessary travel could perhaps help mitigate carbon dioxide emissions as a more long-term solution to climate change, as opposed to the pandemic's short-term example.
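
    A short statsmodels sketch of fitting the ARIMA(4,0,4) order reported above; the CSV file, column name, and forecast horizon are hypothetical, and in practice the order would be chosen by comparing AIC across candidate (p, d, q) values.

      import pandas as pd
      from statsmodels.tsa.arima.model import ARIMA

      # Hypothetical weekly CO2 emission series for Connecticut; the study's actual
      # data source and preprocessing are not reproduced here.
      co2 = pd.read_csv("ct_co2_weekly.csv", index_col="week", parse_dates=True)["emissions"]

      # Fit the order reported as having the lowest AIC in the abstract, ARIMA(4,0,4).
      model = ARIMA(co2, order=(4, 0, 4)).fit()
      print(model.aic)
      forecast = model.forecast(steps=12)   # 12-week-ahead forecast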

  • Low graduation rates are a significant and growing problem in U.S. higher education systems. Although previous studies have demonstrated the usefulness of building statistical models for predicting students' graduation outcomes, advanced machine learning models promise to improve the effectiveness of these models and to home in on the “difference that makes a difference,” not only at the group level but also at the level of the individual student. In this paper we propose an ensemble support vector machine-based model for predicting students' graduation. Up to about 100 features, including a set of psychological-educational factors, were employed to construct the prediction model. We evaluated the proposed model using data taken from a state university's longitudinal cohort data sets for the incoming classes of students from 2011-2012 (n=350). The experimental results demonstrated the effectiveness of the model, with considerable accuracy, precision, and recall. This paper presents the results of analyses conducted to gauge the capability of a machine learning algorithm to predict on-time graduation while taking into consideration students' learning and development.

  • Agriculture ranks as one of the top contributors to global warming and nutrient pollution. Quantifying the life cycle environmental impacts of agricultural production provides a scientific foundation for forming effective remediation strategies. However, methods capable of accurately and efficiently calculating spatially explicit life cycle global warming (GW) and eutrophication (EU) impacts at the county scale over a geographic region are lacking. The objective of this study was to determine the most efficient and accurate model for estimating spatially explicit life cycle GW and EU impacts at the county scale, with corn production in the U.S. Midwest region as a case study. This study compared the predictive accuracies and efficiencies of five distinct supervised machine learning (ML) algorithms, testing various sample sizes and feature selections. The results indicated that the gradient boosting regression tree model built with approximately 4000 records of monthly weather features yielded the highest predictive accuracy, with cross-validation (CV) values of 0.8 for the life cycle GW impacts. The gradient boosting regression tree model built with nearly 6000 records of monthly weather features showed the highest predictive accuracy, with CV values of 0.87 for the life cycle EU impacts, across all modeling scenarios. Moreover, predictive accuracy was improved at the cost of simulation time: the gradient boosting regression tree model required the longest training time. The ML algorithms were shown to be one million times faster than the traditional process-based model while retaining high predictive accuracy. This indicates that ML can serve as a surrogate for process-based models to estimate life cycle environmental impacts across large geographic areas and timeframes.
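
    A hedged scikit-learn sketch of a gradient boosting regression tree evaluated by cross-validation, the model family the study found most accurate; the hyperparameters, feature matrix, and scoring choice are placeholders rather than the study's settings.

      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import cross_val_score

      # X: county-level monthly weather features; y: life cycle GW (or EU) impact values
      # used as training targets (not shown). Hyperparameters are placeholder choices.
      gbrt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05, max_depth=4)

      # Cross-validated R^2, comparable in spirit to the CV values quoted in the abstract.
      # scores = cross_val_score(gbrt, X, y, cv=5, scoring="r2")
      # print(scores.mean())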

  • To ensure the function of wireless sensor networks (WSNs), nodes that fail to forward packets must be localized efficiently and then fixed or replaced promptly. The state-of-the-art work frames lossy node localization in WSNs as an optimal sequential testing problem guided by end-to-end data. It combines both active and passive measurements to minimize the testing cost and the number of iterations. However, this hybrid approach has many limitations. Inspired by the success of coverage-based software debugging, and by the similarity between software debugging and lossy node localization, we propose a coverage-based lossy node detection approach for WSNs. Supported by established statistical theories, this approach greatly boosts performance. Experiments on randomly generated networks and deployed networks show that the proposed algorithm can significantly reduce the testing cost and the number of iterations, which are the two optimization goals of previous work. We expect to apply this approach to other diagnostic problems in WSNs. © 2001-2012 IEEE.
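
    To illustrate the coverage-based analogy with software debugging, the sketch below ranks nodes with the Ochiai suspiciousness formula, treating each end-to-end path as a "test" that covers the nodes it traverses; the paper's actual statistic and measurement model may differ, and the path data are invented.

      import math

      def ochiai_suspiciousness(paths, path_ok):
          """Rank nodes by a coverage-based score: a path either delivered its packets
          (passing) or was lossy (failing), and each path covers the nodes it traverses.
          Ochiai is used only as an illustrative analog from software debugging."""
          failed, passed = {}, {}
          total_failed = sum(1 for ok in path_ok if not ok)
          for nodes, ok in zip(paths, path_ok):
              counter = passed if ok else failed
              for n in nodes:
                  counter[n] = counter.get(n, 0) + 1
          scores = {}
          for n in set(passed) | set(failed):
              ef, ep = failed.get(n, 0), passed.get(n, 0)
              denom = math.sqrt(total_failed * (ef + ep))
              scores[n] = ef / denom if denom else 0.0
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

      # Hypothetical measurements: three routing paths and whether each delivered packets.
      ranking = ochiai_suspiciousness(
          [["a", "b", "c"], ["a", "d", "c"], ["a", "b", "e"]],
          [False, True, False],
      )  # node "b" ranks highest: covered by both lossy paths, never by a healthy one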

  • Life Cycle Assessment (LCA) is a foundational method for quantitative assessment of sustainability. Increasing data availability and rapid development of machine learning (ML) approaches offer new opportunities to advance LCA. Here, we review current progress and knowledge gaps in applying ML techniques to support LCA, and identify future research directions for LCA to better harness the power of ML. This review analyzes forty studies reporting quantitative assessment with a combination of LCA and ML methods. We found that ML approaches have been used for generating life cycle inventories, computing characterization factors, estimating life cycle impacts, and supporting life cycle interpretation. Most of the reviewed studies employed a single ML method, with artificial neural networks (ANNs) as the most frequently applied approach. Both supervised and unsupervised ML techniques were used in LCA studies. For studies using supervised ML, training datasets were derived from diverse sources, such as literature, lab experiments, existing databases, and model simulations. Over 70% of the reviewed studies trained ML models with fewer than 1500 samples. Although the reviewed studies showed that ML approaches help improve prediction accuracy, pattern discovery, and computational efficiency, multiple areas deserve further research. First, continuous data collection and compilation are needed to support more reliable ML and LCA modeling. Second, future studies should report sufficient details regarding the selection criteria for ML models and present model uncertainty analysis. Third, incorporating deep learning models into LCA holds promise to further improve life cycle inventory and impact assessment. Finally, the complexity of current environmental challenges calls for interdisciplinary collaborative research to achieve deep integration of ML into LCA to support sustainable development.

  • Human mesenchymal stem cells (hMSCs) are multipotent progenitor cells with the potential to differentiate into various cell types, including osteoblasts, chondrocytes, and adipocytes. These cells have been extensively employed in the field of cell-based therapies and regenerative medicine due to their inherent attributes of self-renewal and multipotency. Traditional approaches for assessing hMSC differentiation capacity have relied heavily on labor-intensive techniques, such as RT-PCR, immunostaining, and Western blot, to identify specific biomarkers. However, these methods are not only time-consuming and economically demanding, but also require the fixation of cells, resulting in the loss of temporal data. Consequently, there is an emerging need for a more efficient and precise approach to predict hMSC differentiation in live cells, particularly for osteogenic and adipogenic differentiation. In response to this need, we developed approaches that combine live-cell imaging with deep learning techniques, employing convolutional neural networks (CNNs) to classify osteogenic and adipogenic differentiation. Specifically, four notable pre-trained CNN models, VGG 19, Inception V3, ResNet 18, and ResNet 50, were adapted and tested for identifying adipogenic and osteogenic differentiated cells based on cell morphology changes. We rigorously evaluated the performance of these four models on binary and multi-class classification of differentiated cells at various time intervals, focusing on pivotal metrics such as accuracy, the area under the receiver operating characteristic curve (AUC), sensitivity, precision, and F1-score. Among these four models, ResNet 50 proved to be the most effective choice, with the highest accuracy (0.9572 for binary, 0.9474 for multi-class) and AUC (0.9958 for binary, 0.9836 for multi-class) in both classification tasks. Although VGG 19 matched the accuracy of ResNet 50 in both tasks, ResNet 50 consistently outperformed it in terms of AUC, underscoring its superior effectiveness in identifying differentiated cells. Overall, our study demonstrated the capability of a CNN approach to predict stem cell fate based on morphology changes, which will potentially provide insights for the application of cell-based therapy and advance our understanding of regenerative medicine.
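
    A minimal transfer-learning sketch with torchvision's pre-trained ResNet 50, the best-performing model reported above; the number of classes, the frozen-backbone strategy, and all training details are assumptions rather than the study's exact setup.

      import torch.nn as nn
      from torchvision import models

      # Hypothetical class count, e.g., undifferentiated vs. osteogenic vs. adipogenic
      # for the multi-class case; the study's exact label scheme is not reproduced here.
      n_classes = 3

      model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
      for p in model.parameters():          # freeze the pre-trained backbone
          p.requires_grad = False
      model.fc = nn.Linear(model.fc.in_features, n_classes)   # new classification head

      # The new head (and optionally later blocks) would then be trained on labeled
      # live-cell images of differentiating hMSCs.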

  • Background: Attenuation correction (AC) using CT transmission scanning enables the accurate quantitative analysis of dedicated cardiac SPECT. However, AC is challenging for SPECT-only scanners. We developed a deep learning-based approach to generate synthetic AC images from SPECT images without AC. Methods: CT-free AC was implemented using our customized Dual Squeeze-and-Excitation Residual Dense Network (DuRDN). 172 anonymized clinical hybrid SPECT/CT stress/rest myocardial perfusion studies were used in training, validation, and testing. Additional body mass index (BMI), gender, and scatter-window information were encoded as channel-wise input to further improve the network performance. Results: Quantitative and qualitative analysis based on image voxels and 17-segment polar map showed the potential of our approach to generate consistent SPECT AC images. Our customized DuRDN showed superior performance to conventional network design such as U-Net. The averaged voxel-wise normalized mean square error (NMSE) between the predicted AC images by DuRDN and the ground-truth AC images was 2.01 ± 1.01%, as compared to 2.23 ± 1.20% by U-Net. Conclusions: Our customized DuRDN facilitates dedicated cardiac SPECT AC without CT scanning. DuRDN can efficiently incorporate additional patient information and may achieve better performance compared to conventional U-Net. © 2021, American Society of Nuclear Cardiology.
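
    A small PyTorch sketch of a 3D squeeze-and-excitation block, the channel-gating element named in DuRDN; the full dual-SE residual dense architecture and the way BMI, gender, and scatter-window channels are injected are not reproduced, and the reduction ratio is a placeholder.

      import torch.nn as nn

      class SqueezeExcitation(nn.Module):
          """Channel-wise squeeze-and-excitation gating for 3D feature maps."""
          def __init__(self, channels, reduction=8):
              super().__init__()
              self.pool = nn.AdaptiveAvgPool3d(1)          # "squeeze" over the 3D volume
              self.fc = nn.Sequential(
                  nn.Linear(channels, channels // reduction),
                  nn.ReLU(inplace=True),
                  nn.Linear(channels // reduction, channels),
                  nn.Sigmoid(),                             # per-channel gate in [0, 1]
              )

          def forward(self, x):                             # x: (N, C, D, H, W) features
              n, c = x.shape[:2]
              w = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1, 1)
              return x * w                                  # "excitation": reweight channels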
