Your search
Results: 291 resources
-
In this paper, we address the resource and virtual machine instance hour minimization problem for directed-acyclic-graph-based deadline-constrained applications deployed on computer clouds. The allocated resources and instance hours on computer clouds must: (1) guarantee the satisfaction of a deadline constrained application's end-to-end deadline; (2) ensure that the number of virtual machine (VM) instances allocated to the application is minimized; (3) under the allocated number of VM instances, determine an application execution schedule that minimizes the application's makespan; and (4) under the chosen application execution schedule, determine a VM operation schedule, i.e., when a VM should be turned on or off, that minimizes the total VM instance hours needed to execute the application. We first give lower and upper bounds for the number of VM instances needed to guarantee the satisfaction of a deadline constrained application's end-to-end deadline. Based on the bounds, we develop a heuristic algorithm called the minimal slack time and minimal distance (MSMD) algorithm that finds the minimum number of VM instances needed to guarantee the application's deadline and schedules tasks on the allocated VM instances so that the application's makespan is minimized. Once the application execution schedule and the number of VM instances needed are determined, the proposed VM instance hour minimization (IHM) algorithm is applied to further reduce the instance hours needed by VMs to complete the application's execution. Our experimental results show that the MSMD algorithm can guarantee applications' end-to-end deadlines with fewer resources than the HEFT [32], MOHEFT [16], DBUS [9], QoS-base [40] and Auto-Scaling [25] heuristic scheduling algorithms in the literature. Furthermore, under allocated resources, the MSMD algorithm can, on average, reduce an application's makespan by 3.4 percent of its deadline.
In addition, with the IHM algorithm we can effectively reduce the application's execution instance hours compared with when IHM is not applied.
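The instance-hour objective above can be illustrated with a minimal sketch of the billing model the IHM algorithm reduces: given each VM's power-on and power-off times, the billed cost is the number of whole instance hours. The per-hour, ceiling-rounded billing is an assumption for illustration; the paper's IHM algorithm additionally decides the on/off times themselves.

```python
import math

def instance_hours(vm_intervals):
    """Total billed instance hours for a set of VMs.

    vm_intervals: list of (on_time, off_time) pairs in hours, one per VM
    power-on period. Billing is assumed to round each period up to a
    whole hour, as with typical per-instance-hour cloud pricing.
    """
    return sum(math.ceil(off - on) for on, off in vm_intervals)

# Three VMs running 2.2 h, 0.9 h, and 3.0 h respectively: billed 3 + 1 + 3 hours
print(instance_hours([(0, 2.2), (1.5, 2.4), (0, 3.0)]))  # → 7
```

Shortening or merging power-on periods (what a VM operation schedule does) directly reduces this sum.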
-
The periodic task set assignment problem in the context of multiple processors has been studied for decades. Different heuristic approaches have been proposed, such as the Best-Fit (BF), the First-Fit (FF), and the Worst-Fit (WF) task assignment algorithms. However, when processors are not dedicated but only periodically available to the task set, whether existing approaches still provide good performance, or whether there is a better task assignment approach in the new context, are research problems which, to the best of our knowledge, have not been studied by the real-time research community. In this paper, we present the Best-Harmonically-Fit (BHF) task assignment algorithm to assign periodic tasks on multiple periodic resources. By periodic resource we mean that for every fixed time interval, i.e., the period, the resource always provides the same amount of processing capacity to a given task set. Our formal analysis indicates that if a harmonic task set is also harmonic with a resource's period, the resource capacity can be fully utilized by the task set. Based on this analysis, we present the Best-Harmonically-Fit task assignment algorithm. The experimental results show that, on average, the BHF algorithm results in 53.26, 42.54, and 27.79 percent higher resource utilization rates than the Best-Fit Decreasing (BFD), the First-Fit Decreasing (FFD), and the Worst-Fit Decreasing (WFD) task assignment algorithms, respectively; but compared to the optimal resource utilization rate found by exhaustive search, it is about 11.63 percent lower.
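The harmonicity condition the BHF analysis relies on is easy to check: a task set is harmonic when every period divides every larger period. A minimal sketch (the helper names and the (wcet, period) task encoding are illustrative, not the paper's code):

```python
def is_harmonic(periods):
    """True if every period divides the next larger one (pairwise divisibility)."""
    ps = sorted(periods)
    return all(q % p == 0 for p, q in zip(ps, ps[1:]))

def utilization(tasks):
    """Total utilization of periodic tasks given as (wcet, period) pairs."""
    return sum(c / t for c, t in tasks)

print(is_harmonic([2, 4, 8, 16]))  # → True
print(is_harmonic([2, 3, 6]))      # → False
print(utilization([(1, 2), (1, 4)]))
```

A BHF-style heuristic would prefer assigning a task to a resource whose period is harmonic with the task's period, since such a pairing wastes no capacity.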
-
Cloud bursting is one of the key research topics in the cloud computing communities. A well-designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) on public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not a constant. It varies with physical resource utilization, such as CPU and I/O device utilization, at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare VM launching overhead values calculated from the model with overhead values measured on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe that, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for the cloud bursting process to minimize operational cost and resource waste.
-
In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality and business software product for value fulfillment is reliability. Estimating software reliability early during the software development life cycle saves time and money, as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of failures that may be encountered during the software testing process. In this paper, we explore the advantages of the Grey Wolf Optimization (GWO) algorithm in estimating the SRGM’s parameters with the objective of minimizing the difference between the estimated and the actual number of failures of the software system. We evaluated three different software reliability growth models: the Exponential Model (EXPM), the Power Model (POWM), and the Delayed S-Shaped Model (DSSM). In addition, we used three different datasets to conduct an experimental study in order to show the effectiveness of our approach.
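As a concrete sketch of the estimation problem, the Exponential Model's mean value function is m(t) = a(1 − e^(−bt)), and a metaheuristic such as GWO searches for (a, b) minimizing the gap between estimated and observed failures. The sum-of-squared-errors objective and the toy failure data below are illustrative assumptions:

```python
import math

def expm_failures(t, a, b):
    """Exponential SRGM mean value function: expected cumulative failures
    by time t, with a = total expected failures and b = detection rate."""
    return a * (1.0 - math.exp(-b * t))

def sse(params, data):
    """Objective a metaheuristic such as GWO would minimize: sum of squared
    differences between estimated and observed cumulative failures."""
    a, b = params
    return sum((expm_failures(t, a, b) - y) ** 2 for t, y in data)

# Hypothetical failure data: (test time, cumulative failures observed)
data = [(1, 10), (2, 18), (3, 24)]
print(sse((40.0, 0.3), data))
```

Each GWO wolf would encode a candidate (a, b) pair and be ranked by this objective.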
-
This research introduces the application of an innovative bio-inspired metaheuristic technique, termed the Crow Search Algorithm (CSA), to model a crucial industrial process: hot rolling manufacturing. Inspired by the foraging patterns of crows, the CSA has demonstrated its prowess in solving diverse optimization challenges. In this study, the CSA is harnessed to fine-tune the parameters of a simulation model that predicts the force exerted during a hot rolling procedure. The proposed model takes into consideration a range of influential factors, including the initial temperature (Ti), width (Ws), carbon equivalent (Ce), gauge (hi), draft (i), and roll diameter (R). The findings underscore the CSA's capability to deliver exceptional modeling performance characterized by swift convergence and high solution quality. By coupling the proposed model with the CSA, a robust and efficient avenue to optimize the hot rolling process emerges, with the potential for expansion into other manufacturing domains. The computational and simulation results demonstrated that the proposed CSA-based approach outperformed different meta-heuristic search algorithms, such as the Salp Swarm Algorithm (SSA), Dandelion Optimizer (DO), Particle Swarm Optimization (PSO), Gray Wolf Optimizer (GWO), and Moth-Flame Optimization (MFO), in all test cases. The CSA achieved the highest coefficient of determination (R2), equal to 0.97244, and the lowest mean squared error (MSE), equal to 1904.97, compared to its opponent algorithms. © 2024 IEEE.
-
Urban air pollution, a combination of industrial, traffic, forest-burning, and agricultural pollutants, significantly impacts human health, plants, and economic growth. Ozone exposure can lead to mortality, heart attacks, and lung damage, necessitating the creation of complex environmental safety regulations by forecasting ozone concentrations and associated pollutants. This study proposes a hybrid method, RFNN-GOA, combining a recurrent fuzzy neural network (RFNN) and the grasshopper optimization algorithm (GOA) to estimate and forecast the daily ozone (O3) in specific urban areas, specifically Kopački Rit and Osijek city in Croatia, aiming to improve air quality, human health, and ecosystems. Due to the intricate structure of atmospheric particles, modeling of O3 likely poses the biggest challenge in air pollution today. The dataset used by the proposed RFNN-GOA model for the prediction of O3 concentrations in each explored area consists of the following air pollutants: NO, NO2, CO, SO2, O3, PM10, and PM2.5; and five meteorological elements, including temperature, relative humidity, wind direction, wind speed, and pressure. The RFNN-GOA method optimizes the membership functions’ parameters and the rule premise, demonstrating robustness and reliability compared to other identifiers and indicating its superiority over competing methods. The RFNN-GOA method demonstrated superior accuracy in Osijek city and the Kopački Rit area, with variance-accounted-for (VAF) values of 91.135%, 83.676%, 87.807%, and 79.673%, compared to the RFNN method’s corresponding values of 85.682%, 80.687%, 80.808%, and 74.202% in both training and testing phases, respectively. This reveals that RFNN-GOA increased the average VAF in Osijek city and the Kopački Rit area by over 5% and 8%, respectively. © The Author(s), under exclusive licence to Springer Nature Switzerland AG 2024.
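The VAF criterion used above has a simple closed form, VAF = 100 · (1 − var(y − ŷ)/var(y)); a minimal sketch with hypothetical ozone readings:

```python
from statistics import pvariance

def vaf(actual, predicted):
    """Variance-accounted-for (%): 100 * (1 - var(residuals) / var(actual)).
    100% means the prediction explains all variance in the signal."""
    resid = [a - p for a, p in zip(actual, predicted)]
    return 100.0 * (1.0 - pvariance(resid) / pvariance(actual))

# Hypothetical daily O3 concentrations (actual vs. model output)
y  = [30.0, 45.0, 50.0, 42.0]
yh = [32.0, 44.0, 49.0, 40.0]
print(round(vaf(y, yh), 2))
```

A perfect model gives VAF = 100; the paper's comparison of RFNN-GOA vs. plain RFNN is a comparison of exactly this quantity.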
-
Barcode-less fruit recognition technology has revolutionized the checkout process by eliminating manual barcode scanning. This technology automatically identifies and adds fruit items to the purchase list, significantly reducing waiting times at the cash register. Faster checkouts enhance customer convenience and optimize operational efficiency for retailers. Adding barcodes to fruits requires using adhesives on the fruit surface, which may cause health hazards. Leveraging deep learning techniques for barcode-less fruit recognition brings valuable advantages to industries, including advanced automation, enhanced accuracy, and increased efficiency. These benefits translate into improved productivity, cost reduction, and superior quality control. This study introduces a Convolutional Neural Network (CNN) designed explicitly for automatic fruit recognition, even in challenging real-world scenarios. The proposed method assists fruit sellers in accurately identifying and distinguishing between different types of fruit that may exhibit similarities. A dataset that includes 44,406 images of different fruit types is used to train and test our technique. Employing a CNN, the developed model achieves an impressive classification accuracy of 97.4% during the training phase and 88.6% during the testing phase, showcasing its effectiveness in precise fruit recognition.
-
In this paper, we develop an indoor positioning system using smartphones. An indoor positioning system plays a vital role in indoor spaces such as home, office, university, airport, and hospital buildings by locating and tracking persons, devices, and assets. Our indoor positioning system is applicable in any indoor space that has smart devices such as smartphones, tablets, smartwatches, and robots with a Wi-Fi connection. We used the Wi-Fi-based fingerprinting technique to build our indoor positioning system because a Wi-Fi-based system can leverage existing Wi-Fi infrastructure and hence is cost effective. A major challenge in implementing a Wi-Fi-based fingerprinting technique is the missed access points (APs) problem. In this paper, we address this critical challenge by proposing a localization procedure called ‘cosine similarity + k-means clustering’. In this localization procedure, we leverage the k-means clustering algorithm to identify the wrong location estimates produced by the cosine similarity measure because of the missed APs problem. To evaluate the effectiveness of our proposed localization procedure, we collected data from three different scenarios, specifically home, office, and university, for creating the signal map and performing localization tests. Additionally, we tested both stationary and walking data. Our experimental results prove that our ‘cosine similarity + k-means clustering’ localization procedure is effective in mitigating the detrimental impact of missed APs and, consequently, significantly improves localization accuracy.
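The first stage of the ‘cosine similarity + k-means clustering’ procedure can be sketched as nearest-fingerprint matching under cosine similarity. The signal map, the encoding of a missed AP as RSSI 0, and the helper names below are illustrative assumptions, not the paper's implementation:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two RSSI fingerprint vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def locate(observed, signal_map):
    """Return the reference location whose stored fingerprint is most
    similar to the observed RSSI vector. A missed AP is encoded as 0,
    which is one source of the wrong estimates that the paper's k-means
    post-step is designed to catch."""
    return max(signal_map, key=lambda loc: cosine_similarity(observed, signal_map[loc]))

# Hypothetical signal map: location -> mean RSSI from three APs
signal_map = {"lobby": [80, 20, 5], "office": [10, 70, 60]}
print(locate([75, 25, 0], signal_map))  # → lobby
```

The paper's second stage then clusters the candidate estimates and discards outliers produced when several APs are missed at once.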
-
Methods for analyzing medical data advance significantly every day. An accurate data classification model can help determine a patient's disease and diagnose disease severity in the medical domain, thus easing doctors' treatment burdens. Nonetheless, medical data analysis presents challenges due to uncertainty, the correlations between various measurements, and the high dimensionality of the data. These challenges burden statistical classification models. Machine Learning (ML) and data mining approaches have proven effective in recent years in gaining a deeper understanding of the importance of these aspects. This research adopts a well-known supervised learning classification model named the Decision Tree (DT). A DT is a typical tree structure consisting of a root node, connected branches, and internal and terminal nodes. At each node, a decision is made, as in a rule-based system. This type of model helps researchers and physicians better diagnose a disease. To reduce the complexity of the proposed DT, we explored using a Feature Selection (FS) method to design a simpler diagnosis model with fewer factors. This helps reduce the data collection stage. A comparative analysis has been conducted between the developed DT and various other ML models, such as Logistic Regression (LR), Support Vector Machine (SVM), and Gaussian Naive Bayes (GNB), to demonstrate the effectiveness of the developed model. The DT model achieves a notable accuracy of 93.78% and an ROC value of 0.94, which beats the other compared algorithms. The developed DT model provided promising results and can help diagnose heart disease. © 2024, Zarka Private University. All rights reserved.
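A DT chooses the decision at each node by minimizing an impurity measure; a CART-style Gini impurity (an assumption for illustration — the abstract does not state the paper's split criterion) can be sketched as:

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of class labels: 1 - sum(p_k^2).
    0.0 means the node is pure (one class only)."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_impurity(left, right):
    """Weighted Gini impurity of a candidate binary split; the tree
    grows by picking the split that minimizes this value."""
    n = len(left) + len(right)
    return len(left) / n * gini(left) + len(right) / n * gini(right)

pure  = ["disease"] * 4
mixed = ["disease", "healthy", "disease", "healthy"]
print(gini(pure))                   # → 0.0
print(gini(mixed))                  # → 0.5
print(split_impurity(pure, mixed))  # → 0.25
```

Feature selection shrinks the pool of candidate splits, which is how the FS step yields a simpler tree with fewer collected factors.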
-
This chapter presents a Hybrid Whale Optimization Algorithm (HWOA) to tackle the stubborn problems of local optima traps and initialization sensitivity in the K-means clustering technique. This work was inspired by the popularity and robustness of meta-heuristic algorithms in providing compelling solutions, which has sparked several effective approaches and computational tools for challenging real-world problems. The Chameleon Swarm Algorithm (CSA) is embedded with the bubble-net mechanism of WOA to help the search agents of HWOA effectively explore and exploit each potential area of the search space, enhancing both the exploitation and exploration capabilities of the classic WOA. Additionally, the search agents of HWOA use a rotation mechanism to relocate to new spots outside of nearby areas to conduct global exploration. This process increases the search efficiency of WOA while also enhancing the diversity and intensity behavior of the search agents. These improvements increase HWOA's capacity for exploitation and broaden the range of search scopes and directions in performing clustering tasks. To assess the effectiveness of the proposed HWOA on clustering tasks, ten distinct datasets from the UCI repository, each with a different level of complexity, are used. According to the experimental findings, the proposed HWOA outperforms eight metaheuristic-based clustering algorithms and the conventional K-means clustering technique by a statistically significant margin in terms of the performance distance metric.
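The bubble-net mechanism borrowed from WOA is the logarithmic-spiral move toward the best solution found so far, X(t+1) = |X* − X| · e^(bl) · cos(2πl) + X*. A minimal sketch (parameter names follow the standard WOA description, not necessarily the chapter's notation):

```python
import math, random

def spiral_update(x, best, b=1.0):
    """WOA bubble-net move: spiral a search agent toward the current best.
    x, best: position vectors; b: logarithmic-spiral shape constant;
    l is drawn uniformly from [-1, 1] each move."""
    l = random.uniform(-1.0, 1.0)
    return [abs(bi - xi) * math.exp(b * l) * math.cos(2 * math.pi * l) + bi
            for xi, bi in zip(x, best)]

random.seed(0)
print(spiral_update([0.0, 0.0], [1.0, 1.0]))
```

In the clustering setting, each agent's position encodes a full set of candidate centroids, and the best agent is the one with the lowest clustering objective.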
-
Diabetes mellitus is a chronic disease affecting over 38.4 million adults worldwide. Unfortunately, 8.7 million of them are undiagnosed. Early detection and diagnosis of diabetes can save millions of lives. Significant benefits can be achieved if we have the means and tools for the early diagnosis and treatment of diabetes, since doing so can reduce the rate of cardiovascular disease and the mortality rate. It is urgently necessary to explore computational methods and machine learning for possible assistance in the diagnosis of diabetes to support physician decisions. This research utilizes machine learning to diagnose diabetes based on several selected features collected from patients. This research provides a complete process for data handling and pre-processing, feature selection, model development, and evaluation. Among the models tested, our results reveal that Random Forest performs best in accuracy (0.945). This emphasizes Random Forest’s efficiency in helping to precisely diagnose diabetes and reduce its risk.
-
This paper presents the first curricular landscape analysis of transfer pathways for computer science (CS) transfer students in the public higher education system in California, the largest and most complex higher education system in the United States. Drawing on data from 115 community colleges and 31 public universities in California, this study examines and compares computer science Bachelor's degree requirements, curriculum complexities, and both ideal and existing course articulation coverage between schools. We find considerable variation in the CS degree requirements across the system, particularly in the number of math courses required and the overall flexibility of the course requirements. Articulation agreements between community colleges and four-year schools have the potential to (and sometimes do) reduce the complexity of the degree for transfer students significantly, but articulation agreements are not consistently in place across the system. This research both suggests concrete action items and surfaces important areas of further exploration to create a more seamless process for transfer students to complete their CS Bachelor's degrees.
-
The structure of blood vessels in the retina is a crucial factor in identifying and forecasting various diseases, such as cardiovascular disease, diabetes, and eye diseases. Therefore, detecting the structure of blood vessels from retinal fundus images is a critical field of research in healthcare. This study employed a novel deep learning model to segment vessels for different diseases, including Glaucoma, Diabetic Retinopathy (DR), and Age-related Macular Degeneration (AMD). We considered multiple transfer learning-based models and discovered that the ResNet-based U-Net architecture was the most effective for vessel segmentation, achieving the highest Dice score: above 84% for disease-agnostic and 82%-84% for disease-specific conditions. We believe the proposed methodology will help advance the retinal vessel segmentation process and enhance the screening of diseases based on retinal fundus images in clinical settings of Qatar Biobank as well as other biobanks across the globe. © 2023 IEEE.
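The Dice score reported above compares a predicted vessel mask with the ground truth as 2|A ∩ B| / (|A| + |B|); a minimal sketch on flat binary masks:

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks (flat 0/1 lists):
    2 * |A ∩ B| / (|A| + |B|). 1.0 means a perfect segmentation."""
    inter = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

# Toy 4-pixel masks: 2 overlapping vessel pixels out of 3 + 2 positives
pred  = [1, 1, 0, 1]
truth = [1, 1, 0, 0]
print(dice_score(pred, truth))  # → 0.8
```

In practice the masks come from thresholding the U-Net's per-pixel probabilities and flattening the image.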
-
Forecasting the daily flows of rivers is a challenging task that has a significant impact on the environment, agriculture, and people's lives. This paper investigates the river flow forecasting problem using two types of Deep Neural Network (DNN) structures, Long Short-Term Memory (LSTM) and Layered Recurrent Neural Networks (L-RNN), for two rivers in the USA, the Black and Gila rivers. The datasets, collected over a period of seven years for the Black river (six years for training and one year for testing) and four years for the Gila river (three years for training and one year for testing), were used for our experiments. An order selection method based on the partial auto-correlation sequence was employed to determine the appropriate order for the proposed models in both cases. Mean squared error (MSE), root mean squared error (RMSE), and variance-accounted-for (VAF) were used to evaluate the developed models. The obtained results show that the proposed LSTM is able to produce an excellent model in each case study.
-
Design of the Proportional-Integral-Derivative (PID) controller for an industrial process represents a challenge due to process complexity and non-linearity. Traditional methods for PID controller tuning, such as Ziegler-Nichols (ZN), do not provide optimal gains and thus might leave the system in a potentially unstable condition, causing significant losses and damage to the system. This paper investigates the merits of evolutionary and swarm-based optimization algorithms in fine-tuning the parameters of a PID controller. Here, Genetic Algorithms (GAs) and the Particle Swarm Optimization (PSO) algorithm were utilized to optimize the PID controller for a DC motor system. Various fitness functions were provided to the presented algorithms to compute the performance of the controller. A new fitness function was proposed to achieve an outstanding control response for the DC motor system. Results demonstrate the efficacy of the proposed methods in improving the closed-loop system response.
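A fitness function for PID tuning typically integrates the tracking error of a simulated step response; a minimal sketch, assuming a first-order plant and the IAE criterion (both illustrative assumptions — the paper uses a DC motor model and proposes its own fitness):

```python
def pid_step_fitness(kp, ki, kd, dt=0.01, steps=500):
    """Simulate a unit-step response of a first-order plant
    (dy/dt = -y + u) under PID control and return the IAE
    (integral of absolute error). Lower is better."""
    y, integ, prev_err, iae = 0.0, 0.0, 1.0, 0.0
    for _ in range(steps):
        err = 1.0 - y                      # setpoint = 1
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += (-y + u) * dt                 # Euler step of the plant
        prev_err = err
        iae += abs(err) * dt
    return iae

# A GA or PSO would search (kp, ki, kd) to minimize this fitness
print(pid_step_fitness(5.0, 1.0, 0.1))
```

Each GA chromosome or PSO particle encodes one (kp, ki, kd) triple, and this scalar score drives selection or velocity updates.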
-
Image clustering is a hot topic that researchers have pursued extensively. There is always a need for a promising clustering technique due to its vital role in further image processing steps. This paper presents a compelling clustering approach for brain tumors and breast cancer in Magnetic Resonance Imaging (MRI). Driven by the superiority of nature-inspired algorithms in providing computational tools to deal with optimization problems, we propose the Flower Pollination Algorithm (FPA) and the Crow Search Algorithm (CSA) as the basis of a clustering method for brain tumors and breast cancer. The clustering results of CSA and FPA were evaluated using two apposite criteria and compared with the results of K-means, fuzzy c-means, and other metaheuristics when applied to cluster the same benchmark datasets. The CSA- and FPA-based clustering methods yielded encouraging results, significantly outperforming those obtained by K-means and fuzzy c-means and slightly surpassing those of other metaheuristic algorithms.
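A metaheuristic clusterer such as CSA or FPA encodes candidate centroids as a solution and minimizes an intra-cluster distance objective; a minimal sketch, assuming the common sum-of-squared-errors criterion (the paper's two evaluation criteria are not specified in the abstract):

```python
def sse_objective(points, centroids):
    """Clustering fitness a metaheuristic would minimize: total squared
    distance from each point to its nearest centroid."""
    total = 0.0
    for p in points:
        total += min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids)
    return total

# Two well-separated toy clusters and near-optimal centroids
pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
print(sse_objective(pts, [(0, 0.5), (10, 10.5)]))  # → 1.0
```

For MRI data the points would be pixel intensities or feature vectors, with each candidate solution holding one centroid per tissue class.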
-
Sleep is an essential part of health and longevity. As people grow older, the quality of their sleep becomes vital. Poor sleep quality can have negative physiological, psychological, and social impacts on the elderly population, causing a range of health problems including coronary heart disease, depression, anxiety, and loneliness. Early detection, proper diagnosis, and treatment of sleep disorders can be achieved by identifying sleep patterns through long-term sleep monitoring. Although many studies have developed sleep monitoring systems using non-invasive measures such as body temperature, pressure, or body movement signals, research on detecting sleep position changes using a depth camera is still limited. The present study is intended (1) to identify concerns about existing sleep monitoring systems based on a literature review and (2) to propose and develop a non-invasive sleep monitoring system using an infrared depth camera. For the literature review, various journal and conference papers were reviewed to understand the characteristics, tools, and algorithms of existing sleep monitoring systems. For the system development and validation, we collected sleep position data from two subjects (a 35-year-old man and an 84-year-old woman) during four-hour sleep sessions. A Kinect II depth sensor was used for data collection. We found that the averaged depth data is a useful measure for detecting the participants' positional changes during sleep.
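Positional-change detection from averaged depth frames can be sketched as thresholded frame differencing; the mean-absolute-difference statistic and the threshold below are illustrative assumptions, not the study's exact method:

```python
def position_change(prev_depth, curr_depth, threshold=30.0):
    """Flag a sleep-position change when the mean absolute difference
    between two averaged depth frames (flat lists of millimetre values)
    exceeds a threshold chosen for illustration."""
    mad = sum(abs(a - b) for a, b in zip(prev_depth, curr_depth)) / len(prev_depth)
    return mad > threshold

# Hypothetical 4-pixel averaged depth frames (mm) from a depth sensor
still  = [1500.0, 1510.0, 1490.0, 1505.0]
rolled = [1400.0, 1350.0, 1600.0, 1420.0]
print(position_change(still, still))   # → False
print(position_change(still, rolled))  # → True
```

Averaging several raw frames before differencing, as the study does, suppresses sensor noise so that only genuine posture changes cross the threshold.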