Search results (295 resources)
-
Across three online studies, we examined the relationship between the Fear of Missing Out (FoMO) and moral cognition and behavior. Study 1 (N = 283) examined whether FoMO influenced moral awareness, judgments, and recalled and predicted behavior for first-person moral violations in either higher or lower social settings. Study 2 (N = 821) examined these relationships in third-person judgments with varying agent identities in relation to the participant (agent = stranger, friend, or someone disliked). Study 3 (N = 604) examined how recalling activities that participants had either engaged in or missed out on influenced these relationships. Using the Rubin Causal Model, we created hypothetical randomized experiments from our real-world data, with treatment conditions for lower or higher FoMO (median split), matched for relevant covariates, and compared differences between FoMO groups on moral awareness, judgments, and several other behavioral outcomes. Using a randomization-based approach, we examined these relationships with Fisher tests and computed 95% Fisherian intervals for constant treatment effects consistent with the matched data and the hypothetical FoMO intervention. All three studies provide evidence that FoMO is robustly related to giving less severe judgments of moral violations. Moreover, those with higher FoMO reported a greater likelihood of having committed moral violations in the past, knowing people who have committed moral violations in the past, being more likely to commit them in the future, and knowing people who are likely to commit moral violations in the future.
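The randomization-based analysis the abstract describes can be illustrated with a minimal Fisher randomization test. This is a generic sketch, not the study's actual code: the severity ratings below are hypothetical, and the real analysis used matched covariates and Fisherian intervals rather than a bare two-group permutation p-value.

```python
import random
from statistics import mean

def fisher_test(group_a, group_b, n_perm=10_000, seed=0):
    """Randomization-based p-value for a difference in group means.

    Under the sharp null of no treatment effect, group labels are
    exchangeable, so we re-randomize labels and recompute the statistic.
    """
    rng = random.Random(seed)
    observed = mean(group_a) - mean(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = mean(pooled[:n_a]) - mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            count += 1
    return observed, count / n_perm

# Hypothetical severity ratings (1-7) for low- vs. high-FoMO groups.
low_fomo = [6, 7, 6, 5, 7, 6, 6, 7]
high_fomo = [4, 5, 4, 5, 3, 4, 5, 4]
obs, p = fisher_test(low_fomo, high_fomo)
```

A small (or zero) share of re-randomizations matching the observed gap yields a small p-value, consistent with the "less severe judgments" finding.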
-
Establishing an optimal datacenter selection policy within the cloud environment is paramount to maximizing the performance of cloud services. The service broker policy governs the selection of datacenters for user requests. In our research, we introduce an innovative approach that incorporates a genetic algorithm into the service broker policy to help cloud services identify the most suitable datacenters for specific userbases. The effectiveness of our proposed genetic algorithm was rigorously evaluated through experiments conducted on the CloudAnalyst platform. The results clearly indicate that our proposed algorithm surpasses existing service broker policies and prior work in this field in reducing response time and data processing time. The analysis of the results validates its efficacy and its potential for enhancing cloud service performance and reducing the overall cost of the cloud infrastructure.
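A genetic algorithm for datacenter selection might look like the following sketch. Everything here is an assumption for illustration: the latency matrix, population size, tournament selection, one-point crossover, and mutation rate are generic GA choices, not the paper's actual encoding or fitness function.

```python
import random

# Hypothetical latency matrix: LATENCY[u][d] = estimated response time (ms)
# from userbase u to datacenter d. Numbers are illustrative only.
LATENCY = [
    [50, 200, 300],
    [220, 60, 180],
    [310, 190, 40],
    [120, 140, 260],
]
N_UB, N_DC = len(LATENCY), len(LATENCY[0])

def fitness(assign):
    # Lower total latency is better, so negate for maximization.
    return -sum(LATENCY[u][d] for u, d in enumerate(assign))

def evolve(pop_size=30, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(N_DC) for _ in range(N_UB)] for _ in range(pop_size)]
    for _ in range(gens):
        # Tournament selection: best of 3 random individuals.
        parents = [max(rng.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            cut = rng.randrange(1, N_UB)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:             # mutation
                child[rng.randrange(N_UB)] = rng.randrange(N_DC)
            children.extend([child, a])
        pop = children[:pop_size]
    return max(pop, key=fitness)

best = evolve()
```

Each chromosome maps every userbase to one datacenter; selection pressure drives the population toward low-latency assignments.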
-
This study developed a framework for predicting usability factors through an understanding of how cognitive traits relate to human interaction with a computer system. Specifically, this study examined the relationship of field independence, spatial visualization, logical reasoning, and integrative reasoning to the interaction process and outcome. The research hypothesis was tested through correlation to determine the relationships among variables. As a post hoc analysis, multiple regression analysis was used to examine the predictive power of the four cognitive variables on the interaction outcome. The results of the study emphasize the importance of considering cognitive variables as important predictors of the human interaction process and outcome. © 2024 IEEE.
-
Scheduling periodic real-time tasks on multiple periodic resources is an emerging research issue in the real-time scheduling community and has drawn increased attention over the last few years. This paper studies a sub-category of the scheduling problem that focuses on scheduling a periodic task on multiple periodic resources where none of the resources has sufficient capacity to support the task on its own. Instead of splitting the task into sub-tasks, which is not always practical in real systems, we integrate resources to jointly support the task. First, we develop a method to integrate two periodic resources with fixed but arbitrary patterns into an equivalent periodic resource. Second, for two periodic resources with unknown but fixed occurrence patterns, we give lower and upper bounds on the available time provided by the integrated periodic resource within a period. Third, we present theoretical and empirical analyses of the schedulability of a non-splittable periodic task on two periodic resources and their integrated periodic resource.
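For context on the kind of availability bound the abstract refers to, the classical linear supply bound function from the periodic resource model literature (a resource with period Pi and budget Theta) can be written down directly. This is the standard textbook bound, not necessarily the integrated-resource bound the paper derives.

```python
def lsbf(period, budget, t):
    """Linear lower supply bound for a periodic resource (Pi, Theta).

    In any window of length t, the resource supplies at least
    (Theta / Pi) * (t - 2 * (Pi - Theta)) time units, clamped at 0:
    the worst case places the budget at the edges of two periods.
    """
    return max(0.0, (budget / period) * (t - 2 * (period - budget)))
```

For a resource with period 5 and budget 2, any window of length 16 is guaranteed at least 4 time units of supply; windows of length 6 or less may receive none.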
-
In this paper, we address the resource and virtual machine instance hour minimization problem for directed-acyclic-graph based deadline constrained applications deployed on computer clouds. The allocated resources and instance hours on computer clouds must: (1) guarantee the satisfaction of a deadline constrained application's end-to-end deadline; (2) ensure that the number of virtual machine (VM) instances allocated to the application is minimized; (3) under the allocated number of VM instances, determine an application execution schedule that minimizes the application's makespan; and (4) under the decided application execution schedule, determine a VM operation schedule, i.e., when a VM should be turned on or off, that minimizes the total VM instance hours needed to execute the application. We first give lower and upper bounds for the number of VM instances needed to guarantee the satisfaction of a deadline constrained application's end-to-end deadline. Based on the bounds, we develop a heuristic algorithm called the minimal slack time and minimal distance (MSMD) algorithm that finds the minimum number of VM instances needed to guarantee the application's deadline and schedules tasks on the allocated VM instances so that the application's makespan is minimized. Once the application execution schedule and the number of VM instances needed are determined, the proposed VM instance hour minimization (IHM) algorithm is applied to further reduce the instance hours needed by VMs to complete the application's execution. Our experimental results show that the MSMD algorithm can guarantee applications' end-to-end deadlines with fewer resources than the HEFT [32], MOHEFT [16], DBUS [9], QoS-based [40], and Auto-Scaling [25] heuristic scheduling algorithms in the literature. Furthermore, under the allocated resources, the MSMD algorithm can, on average, reduce an application's makespan by 3.4 percent of its deadline.
In addition, with the IHM algorithm we can effectively reduce the application's execution instance hours compared with when IHM is not applied.
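The abstract's lower bound on VM instances is not stated explicitly; a standard work-based bound of the kind such analyses start from can be sketched as follows (the task numbers are hypothetical, and the paper's actual bounds may be tighter).

```python
from math import ceil

def vm_lower_bound(exec_times, deadline, critical_path):
    """A simple lower bound on VM instances for a DAG with deadline D.

    Total work W must fit into n * D, so n >= ceil(W / D); the deadline
    is only feasible at all if the critical path length is <= D.
    """
    if critical_path > deadline:
        raise ValueError("deadline infeasible: critical path exceeds it")
    return ceil(sum(exec_times) / deadline)
```

For four tasks with execution times 4, 6, 5, and 5, a deadline of 10, and a critical path of 9, at least ceil(20 / 10) = 2 VM instances are required.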
-
The periodic task set assignment problem in the context of multiple processors has been studied for decades. Different heuristic approaches have been proposed, such as the Best-Fit (BF), the First-Fit (FF), and the Worst-Fit (WF) task assignment algorithms. However, when processors are not dedicated but only periodically available to the task set, whether existing approaches still provide good performance, or whether there is a better task assignment approach in the new context, are research problems that, to the best of our knowledge, have not been studied by the real-time research community. In this paper, we present the Best-Harmonically-Fit (BHF) task assignment algorithm to assign periodic tasks to multiple periodic resources. By periodic resource we mean that for every fixed time interval, i.e., the period, the resource always provides the same amount of processing capacity to a given task set. Our formal analysis indicates that if a harmonic task set is also harmonic with a resource's period, the resource capacity can be fully utilized by the task set. Based on this analysis, we present the Best-Harmonically-Fit task assignment algorithm. The experimental results show that, on average, the BHF algorithm results in 53.26, 42.54, and 27.79 percent higher resource utilization rates than the Best-Fit Decreasing (BFD), the First-Fit Decreasing (FFD), and the Worst-Fit Decreasing (WFD) task assignment algorithms, respectively; but compared to the optimal resource utilization rate found by exhaustive search, it is about 11.63 percent lower.
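The harmonicity condition at the heart of BHF is easy to check directly: a set of periods is harmonic when, after sorting, each period divides the next. The helper below is an illustrative sketch of that check, not the BHF algorithm itself.

```python
def is_harmonic(periods):
    """True if the sorted periods pairwise divide each other."""
    ps = sorted(periods)
    return all(q % p == 0 for p, q in zip(ps, ps[1:]))

def harmonic_with_resource(task_periods, resource_period):
    """True if the task set and the resource period form one harmonic chain,
    the condition under which the abstract says capacity is fully utilized."""
    return is_harmonic(list(task_periods) + [resource_period])
```

For example, tasks with periods 2 and 4 are harmonic with a resource of period 8, while tasks with periods 3 and 4 are not harmonic with any common resource period of 12.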
-
Cloud bursting is one of the key research topics in the cloud computing community. A well-designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) on public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is deciding when and where to launch a VM so that all resources are utilized most effectively and efficiently and system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not constant. It varies with physical resource utilization, such as CPU and I/O device utilization, at the time a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare VM launching overhead values calculated from the model with overhead values measured on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe that, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for the cloud bursting process to minimize operational cost and resource waste.
-
In this age of technology, building quality software is essential to competing in the business market. One of the major principles required of any quality business software product for value fulfillment is reliability. Estimating software reliability early during the software development life cycle saves time and money, as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of failures that may be encountered during the software testing process. In this paper, we explore the advantages of the Grey Wolf Optimization (GWO) algorithm in estimating the SRGM's parameters, with the objective of minimizing the difference between the estimated and actual number of failures of the software system. We evaluated three different software reliability growth models: the Exponential Model (EXPM), the Power Model (POWM), and the Delayed S-Shaped Model (DSSM). In addition, we used three different datasets to conduct an experimental study in order to show the effectiveness of our approach.
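The parameter-estimation task can be sketched with the exponential SRGM, whose mean-value function is m(t) = a(1 - e^(-b t)). As a stand-in for GWO (which is beyond a short sketch), a plain random search minimizes the sum of squared errors; the failure data below is synthetic, generated from known parameters so the search target is known.

```python
import math
import random

def expm(a, b, t):
    """Exponential SRGM mean-value function m(t) = a * (1 - e^(-b * t))."""
    return a * (1.0 - math.exp(-b * t))

def sse(a, b, data):
    """Sum of squared errors between observed and predicted failure counts."""
    return sum((m - expm(a, b, t)) ** 2 for t, m in data)

def random_search(data, iters=20_000, seed=0):
    """Stand-in for GWO: sample (a, b) uniformly and keep the best by SSE."""
    rng = random.Random(seed)
    best = (None, None, float("inf"))
    for _ in range(iters):
        a = rng.uniform(50, 200)
        b = rng.uniform(0.01, 1.0)
        s = sse(a, b, data)
        if s < best[2]:
            best = (a, b, s)
    return best

# Synthetic cumulative-failure data (week, failures), generated from
# a = 100, b = 0.2, so the recovered parameters should land nearby.
data = [(t, expm(100, 0.2, t)) for t in range(1, 11)]
a, b, s = random_search(data)
```

GWO replaces the blind sampling with wolves that converge on the best solutions found so far, which is why it reaches low SSE in far fewer evaluations.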
-
This research introduces the application of an innovative bio-inspired metaheuristic technique, termed the Crow Search Algorithm (CSA), to model a crucial industrial process: hot rolling manufacturing. Inspired by the foraging patterns of crows, the CSA algorithm has demonstrated its prowess in solving diverse optimization challenges. In this study, the CSA algorithm is harnessed to fine-tune the parameters of a simulation model for predicting the force exerted during a hot rolling procedure. The proposed model takes into consideration a range of influential factors, including the initial temperature (Ti), width (Ws), carbon equivalent (Ce), gauge (hi), draft (i), and roll diameter (R). The findings underscore the CSA's capability to deliver exceptional modeling performance characterized by swift convergence and high solution quality. By coupling the proposed model with the CSA algorithm, a robust and efficient avenue for optimizing the hot rolling process emerges, with the potential for expansion into other manufacturing domains. The computational and simulation results demonstrated that the proposed CSA-based approach outperformed different meta-heuristic search algorithms, such as the Salp Swarm Algorithm (SSA), Dandelion Optimizer (DO), Particle Swarm Optimization (PSO), Gray Wolf Optimizer (GWO), and Moth-Flame Optimization (MFO), in all test cases. The CSA achieved the highest coefficient of determination (R2), equal to 0.97244, and the lowest mean squared error (MSE), equal to 1904.97, compared to its opponent algorithms. © 2024 IEEE.
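The two figures of merit the abstract reports, R2 and MSE, have simple definitions worth keeping at hand. The force values below are hypothetical placeholders, not the paper's data.

```python
from statistics import mean

def mse(y_true, y_pred):
    """Mean squared error between measured and predicted values."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean(y_true)) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical rolling-force measurements and model predictions.
measured = [3.0, 5.0, 7.0, 9.0]
predicted = [2.8, 5.1, 7.2, 8.9]
```

An R2 near 1 and an MSE near 0 together indicate that the fitted model explains almost all of the variance in the measured force.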
-
Urban air pollution, a combination of industry, traffic, forest burning, and agriculture pollutants, significantly impacts human health, plants, and economic growth. Ozone exposure can lead to mortality, heart attacks, and lung damage, necessitating the creation of complex environmental safety regulations by forecasting ozone concentrations and associated pollutants. This study proposes a hybrid method, RFNN-GOA, combining a recurrent fuzzy neural network (RFNN) and the grasshopper optimization algorithm (GOA) to estimate and forecast daily ozone (O3) in specific urban areas, namely Kopački Rit and Osijek city in Croatia, with the aim of improving air quality, human health, and ecosystems. Due to the intricate structure of atmospheric particles, modeling O3 likely poses the biggest challenge in air pollution today. The dataset used by the proposed RFNN-GOA model for predicting O3 concentrations in each explored area consists of the following air pollutants: NO, NO2, CO, SO2, O3, PM10, and PM2.5, and five meteorological elements: temperature, relative humidity, wind direction, wind speed, and pressure. The RFNN-GOA method optimizes the membership functions' parameters and the rule premise, demonstrating robustness and reliability compared to other identifiers and indicating its superiority over competing methods. The RFNN-GOA method demonstrated superior accuracy in Osijek city and the Kopački Rit area, with variance-accounted-for (VAF) values of 91.135%, 83.676%, 87.807%, and 79.673%, compared to the RFNN method's corresponding values of 85.682%, 80.687%, 80.808%, and 74.202%, in the training and testing phases, respectively. This reveals that RFNN-GOA increased the average VAF in Osijek city and the Kopački Rit area by over 5% and 8%, respectively. © The Author(s), under exclusive licence to Springer Nature Switzerland AG 2024.
-
Barcode-less fruit recognition technology has revolutionized the checkout process by eliminating manual barcode scanning. This technology automatically identifies and adds fruit items to the purchase list, significantly reducing waiting times at the cash register. Faster checkouts enhance customer convenience and optimize operational efficiency for retailers. Adding barcodes to fruit requires adhesives on the fruit surface that may pose health hazards. Leveraging deep learning techniques for barcode-less fruit recognition brings valuable advantages to industries, including advanced automation, enhanced accuracy, and increased efficiency. These benefits translate into improved productivity, cost reduction, and superior quality control. This study introduces a Convolutional Neural Network (CNN) designed explicitly for automatic fruit recognition, even in challenging real-world scenarios. The proposed method assists fruit sellers in accurately identifying and distinguishing between different types of fruit that may exhibit similarities. A dataset of 44,406 images of different fruit types is used to train and test our technique. The developed CNN model achieves an impressive classification accuracy of 97.4% during training and 88.6% during testing, showcasing its effectiveness in precise fruit recognition.
-
In this paper, we develop an indoor positioning system using smartphones. An indoor positioning system plays a vital role in indoor spaces such as home, office, university, airport, and hospital buildings by locating and tracking persons, devices, and assets. Our indoor positioning system is applicable in any indoor space that has smart devices, such as smartphones, tablets, smartwatches, and robots, with a Wi-Fi connection. We used the Wi-Fi-based fingerprinting technique to build our indoor positioning system because a Wi-Fi-based system can leverage existing Wi-Fi infrastructure and is hence cost-effective. A major challenge in implementing a Wi-Fi-based fingerprinting technique is the missed access points (APs) problem. In this paper, we address this critical challenge by proposing a localization procedure called 'cosine similarity + k-means clustering'. In this localization procedure, we leverage the k-means clustering algorithm to identify the wrong location estimates produced by the cosine similarity measure because of the missed APs problem. To evaluate the effectiveness of our proposed localization procedure, we collected data from three different scenarios, specifically home, office, and university, for creating the signal map and performing localization tests. Additionally, we tested both stationary and walk data. Our experimental results demonstrate that our 'cosine similarity + k-means clustering' localization procedure is effective in mitigating the detrimental impact of missed APs, and consequently, it significantly improves localization accuracy.
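The cosine-similarity stage of such a fingerprinting system can be sketched as follows. The signal map, the RSSI-to-positive-strength shift, and the zero encoding for a missed AP are all illustrative assumptions; the paper's k-means correction stage is omitted.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two signal-strength vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical signal map: location -> strength vector over 4 APs.
# RSSI is shifted to positive strengths (rssi + 100); 0 marks a missed AP,
# the failure mode the paper targets.
SIGNAL_MAP = {
    "room_a": [60, 40, 25, 20],
    "room_b": [30, 55, 45, 15],
    "hall":   [20, 25, 50, 55],
}

def locate(sample):
    """Return the reference location whose fingerprint best matches the scan."""
    return max(SIGNAL_MAP, key=lambda loc: cosine_similarity(sample, SIGNAL_MAP[loc]))

est = locate([58, 38, 22, 0])  # AP 4 missed in this scan
```

Even with one AP missing, the scan still matches the correct fingerprint here; the k-means stage in the paper exists for the cases where missed APs push the best match to the wrong location.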
-
The advancement in treating medical data grows significantly daily. An accurate data classification model can help determine patient disease and diagnose disease severity in the medical domain, thus easing doctors' treatment burdens. Nonetheless, medical data analysis presents challenges due to uncertainty, the correlations between various measurements, and the high dimensionality of the data. These challenges burden statistical classification models. Machine Learning (ML) and data mining approaches have proven effective in recent years in gaining a deeper understanding of the importance of these aspects. This research adopts a well-known supervised learning classification model, the Decision Tree (DT). A DT is a typical tree structure consisting of a root node, connected branches, and internal and terminal nodes. Each node represents a decision to be made, as in a rule-based system. This type of model helps researchers and physicians better diagnose a disease. To reduce the complexity of the proposed DT, we explored using a Feature Selection (FS) method to design a simpler diagnosis model with fewer factors. This concept helps reduce the effort of the data collection stage. A comparative analysis was conducted between the developed DT and various other ML models, such as Logistic Regression (LR), Support Vector Machine (SVM), and Gaussian Naive Bayes (GNB), to demonstrate the effectiveness of the developed model. The DT model achieves a notable accuracy of 93.78% and an ROC value of 0.94, outperforming the other compared algorithms. The developed DT model provided promising results and can help diagnose heart disease. © 2024, Zarka Private University. All rights reserved.
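The split decisions inside a decision tree are typically driven by an impurity measure; the sketch below shows Gini impurity and the gain from a candidate split. This is the generic mechanism, with toy labels, not the paper's heart-disease model.

```python
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def split_gain(labels, left_idx):
    """Impurity reduction from splitting `labels` into left/right subsets."""
    left_set = set(left_idx)
    left = [labels[i] for i in left_set]
    right = [labels[i] for i in range(len(labels)) if i not in left_set]
    weighted = (len(left) * gini(left) + len(right) * gini(right)) / len(labels)
    return gini(labels) - weighted
```

A split that perfectly separates the classes reduces the impurity from 0.5 to 0, while an uninformative split yields zero gain; feature selection shrinks the pool of candidate splits the tree has to consider.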
-
This chapter presents the Hybrid Whale Optimization Algorithm (HWOA) to tackle the stubborn problems of local optima traps and initialization sensitivity in the K-means clustering technique. This work was inspired by the popularity and robustness of meta-heuristic algorithms in providing compelling solutions, which has sparked several effective approaches and computational tools for addressing challenging real-world problems. The Chameleon Swarm Algorithm (CSA) is embedded into the bubble-net mechanism of WOA to help the search agents of HWOA effectively explore and exploit each potential area of the search space, enhancing both the exploitation and exploration capabilities of the classic WOA. Additionally, the search agents of HWOA use a rotation mechanism to relocate to new spots outside of nearby areas to conduct global exploration. This process increases the search efficiency of WOA while also enhancing the diversity and intensity behavior of the search agents. These improvements increase HWOA's capacity for exploitation and broaden the range of search scopes and directions in performing clustering tasks. To assess the effectiveness of the proposed HWOA on clustering tasks, a total of ten distinct datasets from the UCI repository are used, each with a different level of complexity. According to the experimental findings, the proposed HWOA outperforms eight meta-heuristic-based clustering algorithms and the conventional K-means clustering technique by a statistically significant margin in terms of the distance performance metric.
-
Diabetes mellitus is a chronic disease affecting over 38.4 million adults worldwide, of whom 8.7 million are undiagnosed. Early detection and diagnosis of diabetes can save millions of people's lives. Significant benefits can be achieved if we have the means and tools for the early diagnosis and treatment of diabetes, since it can reduce the rates of cardiovascular disease and mortality. It is urgently necessary to explore computational methods and machine learning for possible assistance in the diagnosis of diabetes to support physician decisions. This research utilizes machine learning to diagnose diabetes based on several selected features collected from patients. This research provides a complete process for data handling and pre-processing, feature selection, model development, and evaluation. Among the models tested, our results reveal that Random Forest performs best in accuracy (0.945). This emphasizes Random Forest's efficiency in precisely helping diagnose and reduce the risk of diabetes.
-
This paper presents the first curricular landscape analysis of transfer pathways for computer science (CS) transfer students in the public higher education system in California, the largest and most complex higher education system in the United States. Drawing on data from 115 community colleges and 31 public universities in California, this study examines and compares computer science Bachelor's degree requirements, curriculum complexities, and both ideal and existing course articulation coverage between schools. We find considerable variation in the CS degree requirements across the system, particularly in the number of math courses required and the overall flexibility of the course requirements. Articulation agreements between community colleges and four-year schools have the potential to (and sometimes do) reduce the complexity of the degree for transfer students significantly, but articulation agreements are not consistently in place across the system. This research both suggests concrete action items and surfaces important areas of further exploration to create a more seamless process for transfer students to complete their CS Bachelor's degrees.
-
The structure of blood vessels in the retina is a crucial factor in identifying and forecasting various diseases, such as cardiovascular diseases and diabetes. Therefore, detecting the structure of blood vessels from retinal fundus images is a critical field of research in healthcare. This study employed a novel deep learning model to segment vessels for different diseases, including Glaucoma, Diabetic Retinopathy (DR), and Age-related Macular Degeneration (AMD). We considered multiple transfer learning-based models and discovered that the ResNet-based U-Net architecture was the most effective for vessel segmentation, achieving the highest Dice score: above 84% in the disease-agnostic setting and 82%-84% for disease-specific conditions. We believe the proposed methodology will help advance the retinal vessel segmentation process and enhance the screening of diseases based on retinal fundus images in clinical settings at Qatar Biobank as well as other biobanks across the globe. © 2023 IEEE.
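The Dice score used to evaluate the segmentation has a direct definition on binary masks; the sketch below uses tiny hypothetical masks rather than real fundus segmentations.

```python
def dice_score(pred, truth):
    """Dice coefficient between two binary masks (flat 0/1 lists):
    2 * |A intersect B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    inter = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * inter / total if total else 1.0

# Hypothetical flattened vessel masks: model prediction vs. ground truth.
pred = [1, 1, 0, 1, 0, 0, 1, 0]
truth = [1, 1, 1, 1, 0, 0, 0, 0]
```

Here 3 of the 4 predicted vessel pixels overlap the 4 true vessel pixels, giving a Dice score of 2*3 / (4+4) = 0.75; the paper's 84% corresponds to substantially tighter overlap on full images.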