  • Stock market forecasting is an essential factor in the daily operations of many companies and individuals. However, the complex and nonlinear nature of the stock market and the unpredictable variations in factors affecting stock prices present significant challenges to accurate forecasting. To address this, we employ four model-based metaheuristic search algorithms (MHs), namely the Crow Search Algorithm (CSA), Particle Swarm Optimizer (PSO), Gray Wolf Optimizer (GWO), and Dandelion Optimizer (DO), to estimate the parameters of stock market price models. The data used in our experiments are extracted from the widely recognized Standard & Poor's 500 (S&P 500) stock index, which serves as a representative benchmark for the United States stock market. Our findings demonstrate that the CSA outperforms the other MHs by providing the best combination of parameters for modeling stock market prices. The optimized parameters for the CSA model yielded Variance-Accounted-For (VAF) values of 97.846% on the training set and 93.483% on the testing set. This suggests that CSA offers promising capabilities for enhancing the accuracy and effectiveness of stock market forecasting models. © (2024), (Research Institute of Intelligent Computer Systems). All rights reserved.
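
As a reference for the VAF figures quoted above, one standard definition is VAF = 100 · (1 − Var(y − ŷ)/Var(y)). A minimal sketch in Python; the price series and model output below are synthetic placeholders, not the study's S&P 500 data:

```python
import numpy as np

def vaf(y_true, y_pred):
    """Variance-Accounted-For (%): 100 * (1 - var(residuals) / var(signal)).

    A perfect model scores 100; values near the reported 97.8% (training)
    mean the residual variance is a small fraction of the signal variance.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * (1.0 - np.var(y_true - y_pred) / np.var(y_true))

# Toy example with synthetic prices (illustrative only, not S&P 500 data):
rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 500)) + 100
fitted = prices + rng.normal(0, 0.5, 500)  # a hypothetical model's output
print(f"VAF = {vaf(prices, fitted):.3f}%")
```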

  • Epidermolysis bullosa acquisita (EBA) is a rare skin disorder that poses a significant diagnostic challenge, with no established markers for early detection. Moreover, as a rare disease, it is extremely difficult to acquire a sufficient number of patient samples to diagnose it accurately with high confidence. EBA shares many biomarkers with other bullous diseases and requires specific clinical expertise to detect using immunofluorescence microscopy. In this study, we introduce a deep learning-based method, EBAnet, which leverages a Convolutional Neural Network (CNN)-based model to detect EBA from Direct Immunofluorescence (DIF) microscopy images. The proposed EfficientNet-based model achieved 97.3% sensitivity, 96.1% precision, and 96.7% accuracy in distinguishing EBA from the other classes and outperformed the existing model for the same purpose. Grad-CAM-based class activation maps also highlighted the regions of the DIF images on which the proposed model focused, improving the model's explainability. We believe EBAnet will add value to the early and accurate detection of EBA, addressing a critical need in clinical practice.
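
For readers unfamiliar with Grad-CAM, the class activation maps mentioned above are typically computed by weighting the last convolutional feature maps with the gradients of the class score. A minimal sketch using an untrained Keras EfficientNetB0 as a stand-in for the trained EBAnet; the layer name "top_conv" and the random input are assumptions for illustration:

```python
import numpy as np
import tensorflow as tf

# Untrained EfficientNetB0 standing in for the trained EBAnet; 2 classes
# (EBA vs. other). "top_conv" is the final convolutional layer in Keras'
# EfficientNetB0; a trained model and a real DIF image would replace these.
model = tf.keras.applications.EfficientNetB0(weights=None, classes=2)
grad_model = tf.keras.Model(model.inputs,
                            [model.get_layer("top_conv").output, model.output])

img = np.random.rand(1, 224, 224, 3).astype("float32")  # placeholder DIF image
with tf.GradientTape() as tape:
    conv_maps, preds = grad_model(img)
    score = preds[:, int(tf.argmax(preds[0]))]           # predicted-class score

grads = tape.gradient(score, conv_maps)                  # d(score)/d(feature map)
weights = tf.reduce_mean(grads, axis=(1, 2))             # global-average-pool grads
cam = tf.nn.relu(tf.reduce_sum(conv_maps * weights[:, None, None, :], axis=-1))
cam /= tf.reduce_max(cam) + 1e-8                         # normalize to [0, 1]
print(cam.shape)  # coarse map to upsample and overlay on the DIF image
```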

  • Obstructive Sleep Apnea (OSA) is a prevalent health issue affecting 10-25% of adults in the United States (US) and is associated with significant economic consequences. Machine learning methods have shown promise in improving the efficiency and accessibility of OSA diagnosis, thus reducing the need for expensive and challenging tests. A comparative analysis of Logistic Regression (LR), Support Vector Machine (SVM), Gradient Boosting Classifier (GBC), Gaussian Naive Bayes (GNB), Random Forest Classifier (RFC), and K-Nearest Neighbors (KNN) algorithms was conducted to predict OSA. To improve the predictive accuracy of these models, Random Oversampling was applied to address the imbalance in the dataset, ensuring a more equitable representation of the minority class. Patient demographics, including age, sex, height, weight, BMI, and neck circumference, were employed as predictive features in the models. The RFC provided training and testing accuracies of 87% and 65%, respectively, and a Receiver Operating Characteristic (ROC) score of 87%. The GBC and SVM classifiers also demonstrated good performance on the test dataset. The results of this study show that machine learning techniques may be effectively used to diagnose OSA, with the RFC demonstrating the best results.
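
A hedged sketch of the kind of pipeline described, using scikit-learn together with imbalanced-learn's RandomOverSampler; the synthetic features below merely stand in for the demographic variables named in the abstract:

```python
import numpy as np
from imblearn.over_sampling import RandomOverSampler  # pip install imbalanced-learn
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder data standing in for the demographic features named in the
# abstract (age, height, weight, BMI, neck circumference, ...).
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 6))
y = (rng.random(1000) < 0.15).astype(int)  # imbalanced positive class

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority class in the *training* split only, so the test
# set keeps the natural class distribution.
X_bal, y_bal = RandomOverSampler(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_bal, y_bal)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print("test ROC AUC :", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```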

  • Background: In the United States, chronic obstructive pulmonary disease (COPD) is a significant cause of mortality. It is a chronic, inflammatory lung condition that restricts airflow to the lungs. Reported symptoms include breathing difficulties, coughing, wheezing, and mucus production. Patients with COPD are also at elevated risk, since they are more susceptible to heart disease and lung cancer. Methods: This study reviews COPD diagnosis utilizing various machine learning (ML) classifiers, such as Logistic Regression (LR), Gradient Boosting Classifier (GBC), Support Vector Machine (SVM), Gaussian Naïve Bayes (GNB), Random Forest Classifier (RFC), K-Nearest Neighbors Classifier (KNC), Decision Tree (DT), and Artificial Neural Network (ANN). These models were applied to a dataset comprising 1603 patients referred for a pulmonary function test. Results: The RFC achieved the highest accuracy, reaching up to 82.06% in training and 70.47% in testing. Furthermore, it achieved the maximum F-score in training and testing, with an ROC value of 0.82. Conclusions: The results obtained with the utilized ML models align with previous work in the field, with accuracies ranging from 67.81% to 82.06% in training and from 66.73% to 71.46% in testing.

  • Meta-heuristic optimization algorithms have become widely used due to their outstanding features, such as gradient-free mechanisms, high flexibility, and great potential for avoiding local optima. This research explored the grey wolf optimizer (GWO) to find the ideal configuration for a six-element Yagi–Uda antenna. The GWO algorithm adjusted the lengths of the antenna wires and the spacings between them, with the goal of maximizing the antenna's ability to transmit signals (throughput gain). Optimal antenna selection relies on various parameters, including gain, bandwidth, impedance matching, frequency, and side-lobe levels. The optimization of a six-element Yagi–Uda antenna presents a challenging engineering design problem due to its multimodal and nonlinear nature: achieving optimal performance hinges on the intricate interplay between the lengths of the constituent elements and the spacing configurations. To this end, a multiobjective function was adopted to design this antenna. The performance of several meta-heuristic algorithms, including genetic algorithms, biogeography-based optimization, simulated annealing, and the grey wolf optimizer, was compared, and the GWO-based approach performed better than its competitors. The optimized antenna design based on GWO reported a gain of 14.21 dB. The GWO-based method therefore yields optimized antenna designs and can be further investigated for other antenna design problems.
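
The canonical GWO position update that such a design search relies on can be sketched compactly; the objective below is a placeholder, since evaluating real antenna gain requires an electromagnetic simulator:

```python
import numpy as np

def gwo(objective, bounds, n_wolves=20, n_iter=200, seed=0):
    """Minimal Grey Wolf Optimizer (minimization), per the canonical update:
    D = |C * X_leader - X|, X' = X_leader - A * D, averaged over the
    alpha, beta, and delta wolves; 'a' decays linearly from 2 to 0."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(n_iter):
        fit = np.apply_along_axis(objective, 1, X)
        leaders = X[np.argsort(fit)[:3]]            # alpha, beta, delta
        a = 2.0 * (1 - t / n_iter)
        new = np.zeros_like(X)
        for L in leaders:
            A = a * (2 * rng.random(X.shape) - 1)
            C = 2 * rng.random(X.shape)
            new += L - A * np.abs(C * L - X)
        X = np.clip(new / 3.0, lo, hi)
    fit = np.apply_along_axis(objective, 1, X)
    return X[np.argmin(fit)], fit.min()

# Placeholder objective: the real study would evaluate antenna performance
# via an EM simulator over element lengths and spacings.
sphere = lambda x: float(np.sum(x**2))
best, val = gwo(sphere, (np.full(11, -5.0), np.full(11, 5.0)))
print(val)
```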

  • Across three online studies, we examined the relationship between the Fear of Missing Out (FoMO) and moral cognition and behavior. Study 1 (N = 283) examined whether FoMO influenced moral awareness, judgments, and recalled and predicted behavior for first-person moral violations in either higher or lower social settings. Study 2 (N = 821) examined these relationships in third-person judgments with varying agent identities in relation to the participant (agent = stranger, friend, or someone disliked). Study 3 (N = 604) examined how recalling activities either engaged in or missed out on influenced these relationships. Using the Rubin Causal Model, we created hypothetical randomized experiments from our real-world experimental data, with treatment conditions for lower or higher FoMO (median split), matched for relevant covariates, and compared the FoMO groups on moral awareness, judgments, and several other behavioral outcomes. Using a randomization-based approach, we examined these relationships with Fisher tests and computed 95% Fisherian intervals for constant treatment effects consistent with the matched data and the hypothetical FoMO intervention. All three studies provide evidence that FoMO is robustly related to giving less severe judgments of moral violations. Moreover, those with higher FoMO reported a greater likelihood of having committed moral violations in the past, knowing people who have committed moral violations, being more likely to commit them in the future, and knowing people who are likely to commit them in the future.
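
The Fisher tests described above rest on re-randomizing treatment labels under a sharp null of no effect. A minimal sketch with synthetic data and a median-split grouping; the variable names are illustrative, not the studies' actual measures:

```python
import numpy as np

def fisher_randomization_test(y, z, n_perm=10_000, seed=0):
    """Fisher randomization test for a sharp null of no treatment effect.
    y: outcomes (e.g., severity of moral judgments); z: 0/1 group labels
    (e.g., lower/higher FoMO from a median split). Returns the observed
    difference in group means and a two-sided p-value under re-labelling."""
    rng = np.random.default_rng(seed)
    y, z = np.asarray(y, float), np.asarray(z, int)
    observed = y[z == 1].mean() - y[z == 0].mean()
    perm_stats = np.empty(n_perm)
    for i in range(n_perm):
        zp = rng.permutation(z)  # re-randomize treatment assignment
        perm_stats[i] = y[zp == 1].mean() - y[zp == 0].mean()
    return observed, np.mean(np.abs(perm_stats) >= abs(observed))

# Synthetic illustration: higher-FoMO group gives slightly less severe ratings.
rng = np.random.default_rng(1)
fomo = rng.normal(size=300)
z = (fomo > np.median(fomo)).astype(int)       # median split
judgment = 5.0 - 0.3 * z + rng.normal(0, 1, 300)
print(fisher_randomization_test(judgment, z))
```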

  • Establishing an optimal datacenter selection policy within the cloud environment is paramount to maximizing the performance of cloud services. The service broker policy governs the selection of datacenters for user requests. In our research, we introduce an innovative approach that incorporates a genetic algorithm into the service broker policy to help cloud services identify the most suitable datacenters for specific user bases. The effectiveness of our proposed genetic algorithm was rigorously evaluated through experiments conducted on the CloudAnalyst platform. The results clearly indicate that our proposed algorithm surpasses existing service broker policies and previous research in this field in terms of reducing response time and data processing time. The results analysis validates its efficacy and its potential for enhancing cloud service performance and reducing the overall cost of the cloud infrastructure.
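
A minimal sketch of how a genetic algorithm can search datacenter assignments for user bases; the latency matrix and fitness weighting are invented for illustration and do not reflect the paper's CloudAnalyst setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N_UB, N_DC = 12, 4                             # user bases, datacenters (hypothetical)
latency = rng.uniform(20, 300, (N_UB, N_DC))   # ms, stand-in for real measurements

def fitness(assign):
    """Lower is better: mean latency plus a penalty for overloading one DC."""
    load = np.bincount(assign, minlength=N_DC)
    return latency[np.arange(N_UB), assign].mean() + 5.0 * load.std()

def ga(pop_size=60, n_gen=200, mut_rate=0.1):
    pop = rng.integers(0, N_DC, (pop_size, N_UB))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[: pop_size // 2]]   # truncation selection
        # One-point crossover between random elite parents.
        kids = elite[rng.integers(0, len(elite), (pop_size - len(elite), 2))]
        cut = rng.integers(1, N_UB)
        children = np.concatenate([kids[:, 0, :cut], kids[:, 1, cut:]], axis=1)
        # Random-reset mutation.
        mask = rng.random(children.shape) < mut_rate
        children[mask] = rng.integers(0, N_DC, mask.sum())
        pop = np.vstack([elite, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[scores.argmin()], scores.min()

best, cost = ga()
print("assignment:", best, "cost:", round(cost, 1))
```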

  • This study developed a framework for predicting usability factors through an understanding of how cognitive traits relate to human interaction with a computer system. Specifically, this study examined the relationships of field independence, spatial visualization, logical reasoning, and integrative reasoning to interaction process and outcome. The research hypothesis was tested through correlation analysis to determine the relationships among variables. As a post hoc analysis, multiple regression was used to examine the predictive power of the four cognitive variables on interaction outcome. The results of the study emphasize the importance of considering cognitive variables as important predictors of the human interaction process and outcome. © 2024 IEEE.

  • This research introduces the application of an innovative bio-inspired metaheuristic technique, termed the Crow Search Algorithm (CSA), to model a crucial industrial process: hot rolling manufacturing. Inspired by the foraging patterns of crows, the CSA has demonstrated its prowess in solving diverse optimization challenges. In this study, the CSA is harnessed to fine-tune the parameters of a simulation model that predicts the force exerted during a hot rolling procedure. The proposed model takes into consideration a range of influential factors, including the initial temperature (Ti), width (Ws), carbon equivalent (Ce), gauge (hi), draft (i), and roll diameter (R). The findings underscore the CSA's capability to deliver exceptional modeling performance characterized by swift convergence and high solution quality. By coupling the proposed model with the CSA, a robust and efficient avenue for optimizing the hot rolling process emerges, with the potential for expansion into other manufacturing domains. The computational and simulation results demonstrated that the proposed CSA-based approach outperformed other meta-heuristic search algorithms, such as the Salp Swarm Algorithm (SSA), Dandelion Optimizer (DO), Particle Swarm Optimization (PSO), Gray Wolf Optimizer (GWO), and Moth-Flame Optimization (MFO), in all test cases. The CSA achieved the highest coefficient of determination (R2), equal to 0.97244, and the lowest mean squared error (MSE), equal to 1904.97, compared to the competing algorithms. © 2024 IEEE.
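
The canonical CSA update used in such parameter estimation can be sketched briefly; the quadratic objective below is a placeholder for the study's rolling-force error function:

```python
import numpy as np

def crow_search(objective, bounds, n_crows=25, n_iter=300, AP=0.1, fl=2.0, seed=0):
    """Canonical Crow Search Algorithm (minimization). Each crow follows a
    randomly chosen crow's memorized food position; with probability AP the
    followed crow 'notices' and the follower jumps to a random position."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_crows, len(lo)))
    M = X.copy()                                    # memories (best positions)
    Mfit = np.apply_along_axis(objective, 1, M)
    for _ in range(n_iter):
        j = rng.integers(0, n_crows, n_crows)       # crow each follower trails
        follow = rng.random(n_crows) >= AP
        r = rng.random((n_crows, 1))
        X_new = np.where(follow[:, None],
                         X + r * fl * (M[j] - X),   # chase crow j's cache
                         rng.uniform(lo, hi, X.shape))
        X = np.clip(X_new, lo, hi)
        fit = np.apply_along_axis(objective, 1, X)
        better = fit < Mfit
        M[better], Mfit[better] = X[better], fit[better]
    return M[Mfit.argmin()], Mfit.min()

# Placeholder objective; the study's objective would compare predicted and
# measured rolling force over the (Ti, Ws, Ce, hi, draft, R) parameters.
best, val = crow_search(lambda x: float(np.sum(x**2)),
                        (np.full(6, -10.0), np.full(6, 10.0)))
print(val)
```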

  • Urban air pollution, a combination of industry, traffic, forest burning, and agriculture pollutants, significantly impacts human health, plants, and economic growth. Ozone exposure can lead to mortality, heart attacks, and lung damage, necessitating complex environmental safety regulations informed by forecasts of ozone concentrations and associated pollutants. This study proposes a hybrid method, RFNN-GOA, combining a recurrent fuzzy neural network (RFNN) and the grasshopper optimization algorithm (GOA) to estimate and forecast daily ozone (O3) in specific urban areas, namely Kopački Rit and Osijek city in Croatia, aiming to improve air quality, human health, and ecosystems. Due to the intricate structure of atmospheric particles, modeling O3 is likely the biggest challenge in air pollution today. The dataset used by the proposed RFNN-GOA model for predicting O3 concentrations in each explored area consists of the following air pollutants: NO, NO2, CO, SO2, O3, PM10, and PM2.5; and five meteorological elements: temperature, relative humidity, wind direction, wind speed, and pressure. The RFNN-GOA method optimizes the parameters of the membership functions and the rule premises, demonstrating robustness and reliability compared to other identifiers and indicating its superiority over competing methods. The RFNN-GOA method demonstrated superior accuracy, with variance-accounted-for (VAF) values of 91.135% (training) and 83.676% (testing) in Osijek city and 87.807% (training) and 79.673% (testing) in the Kopački Rit area, compared to the RFNN method's corresponding values of 85.682%, 80.687%, 80.808%, and 74.202%. This reveals that RFNN-GOA increased the average VAF in Osijek city and the Kopački Rit area by over 5% and 8%, respectively. © The Author(s), under exclusive licence to Springer Nature Switzerland AG 2024.

  • Barcode-less fruit recognition technology has revolutionized the checkout process by eliminating manual barcode scanning. This technology automatically identifies and adds fruit items to the purchase list, significantly reducing waiting times at the cash register. Faster checkouts enhance customer convenience and optimize operational efficiency for retailers. Moreover, attaching barcodes to fruits requires adhesives on the fruit surface, which may pose health hazards. Leveraging deep learning techniques for barcode-less fruit recognition brings valuable advantages to industry, including advanced automation, enhanced accuracy, and increased efficiency. These benefits translate into improved productivity, cost reduction, and superior quality control. This study introduces a Convolutional Neural Network (CNN) designed explicitly for automatic fruit recognition, even in challenging real-world scenarios. The proposed method assists fruit sellers in accurately identifying and distinguishing between different types of fruit that may appear similar. A dataset of 44,406 images of different fruit types is used to train and test our technique. The developed CNN model achieves a classification accuracy of 97.4% during the training phase and 88.6% during the testing phase, showcasing its effectiveness in precise fruit recognition.
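
A minimal Keras sketch of a small CNN classifier of the general kind described; the architecture, image size, and class count are illustrative assumptions rather than the paper's exact design:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 30  # placeholder; the paper's dataset covers many fruit types

# An illustrative small CNN for RGB fruit images; the paper's exact
# architecture and hyperparameters are not specified here.
model = models.Sequential([
    layers.Input(shape=(100, 100, 3)),
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would use an image directory, e.g.:
# ds = tf.keras.utils.image_dataset_from_directory("fruits/", image_size=(100, 100))
# model.fit(ds, epochs=20)
```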

  • In this paper, we develop an indoor positioning system using smartphones. An indoor positioning system plays a vital role in indoor spaces such as home, office, university, airport, and hospital buildings by locating and tracking persons, devices, and assets. Our indoor positioning system is applicable in any indoor space that has smart devices such as smartphones, tablets, smartwatches, and robots with a Wi-Fi connection. We used a Wi-Fi-based fingerprinting technique to build our indoor positioning system because a Wi-Fi-based system can leverage existing Wi-Fi infrastructure and is hence cost-effective. A major challenge in implementing a Wi-Fi-based fingerprinting technique is the missed access points (APs) problem. In this paper, we address this critical challenge by proposing a localization procedure called 'cosine similarity + k-means clustering'. In this localization procedure, we leverage the k-means clustering algorithm to identify the wrong location estimates produced by the cosine similarity measure because of the missed APs problem. To evaluate the effectiveness of our proposed localization procedure, we collected data from three different scenarios, specifically home, office, and university, for creating the signal map and performing localization tests. Additionally, we tested both stationary and walking data. Our experimental results show that our 'cosine similarity + k-means clustering' localization procedure is effective in mitigating the detrimental impact of missed APs and, consequently, significantly improves localization accuracy.
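
One plausible reading of the 'cosine similarity + k-means clustering' procedure, sketched with a synthetic fingerprint map; the floor value for missed APs, the top-k size, and the two-cluster consensus rule are assumptions for illustration, not the paper's exact design:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

# Fingerprint map: one mean RSSI vector (dBm) per reference point (RP).
# Missed APs are encoded as a floor value, a common convention.
MISSED = -100.0
rng = np.random.default_rng(0)
rp_coords = rng.uniform(0, 20, (50, 2))              # hypothetical RP layout (m)
fingerprints = rng.uniform(-90, -40, (50, 8))        # 50 RPs x 8 APs

def localize(rssi, top_k=5):
    """Cosine-similarity matching with a k-means sanity check: the top-k
    most similar RPs are clustered (k=2) and the larger cluster's centroid
    is returned, discarding stray matches caused by missed APs."""
    scan = np.where(np.isnan(rssi), MISSED, rssi).reshape(1, -1)
    sims = cosine_similarity(scan, fingerprints)[0]
    cand = rp_coords[np.argsort(sims)[-top_k:]]       # top-k candidate RPs
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(cand)
    major = np.argmax(np.bincount(km.labels_))        # keep the consensus cluster
    return cand[km.labels_ == major].mean(axis=0)

scan = fingerprints[7] + rng.normal(0, 3, 8)          # noisy scan near RP 7
scan[[2, 5]] = np.nan                                 # two APs missed
print("estimate:", localize(scan), "truth:", rp_coords[7])
```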

  • Advances in the processing of medical data grow significantly daily. An accurate data classification model can help determine a patient's disease and diagnose its severity in the medical domain, thus easing doctors' treatment burdens. Nonetheless, medical data analysis presents challenges due to uncertainty, correlations between various measurements, and the high dimensionality of the data. These challenges burden statistical classification models. Machine Learning (ML) and data mining approaches have proven effective in recent years in gaining a deeper understanding of the importance of these aspects. This research adopts a well-known supervised learning classification model, the Decision Tree (DT). A DT is a typical tree structure consisting of a root node, connected branches, and internal and terminal nodes. At each node a decision is made, as in a rule-based system. This type of model helps researchers and physicians better diagnose a disease. To reduce the complexity of the proposed DT, we explored a Feature Selection (FS) method to design a simpler diagnosis model with fewer factors, which also reduces the effort of the data collection stage. A comparative analysis was conducted between the developed DT and various other ML models, such as Logistic Regression (LR), Support Vector Machine (SVM), and Gaussian Naive Bayes (GNB), to demonstrate the effectiveness of the developed model. The DT model achieves a notable accuracy of 93.78% and an ROC value of 0.94, beating the other compared algorithms. The developed DT model provided promising results and can help diagnose heart disease. © 2024, Zarka Private University. All rights reserved.
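
A hedged scikit-learn sketch of the FS-plus-DT idea: a filter-style selector followed by a shallow tree. The synthetic data and the choice of SelectKBest stand in for the paper's dataset and FS method:

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a heart-disease table; the paper's dataset and
# chosen FS method are not reproduced here.
X, y = make_classification(n_samples=1000, n_features=13, n_informative=6,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Filter-style FS (keep the 6 highest-scoring features), then a shallow
# tree so the resulting rules stay readable for clinicians.
model = Pipeline([
    ("select", SelectKBest(f_classif, k=6)),
    ("tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
]).fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
print("ROC AUC :", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```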

  • This chapter presents a Hybrid Whale Optimization Algorithm (HWOA) to tackle the stubborn problems of local optima traps and initialization sensitivity in the K-means clustering technique. This work was inspired by the popularity and robustness of meta-heuristic algorithms in providing compelling solutions, which has sparked several effective approaches and computational tools for addressing challenging real-world problems. The Chameleon Swarm Algorithm (CSA) is embedded with the bubble-net mechanism of the WOA to help the search agents of HWOA effectively explore and exploit each potential area of the search space, enhancing both the exploitation and exploration capabilities of the classic WOA. Additionally, the search agents of HWOA use a rotation mechanism to relocate to new spots outside of nearby areas to conduct global exploration. This process increases the search efficiency of the WOA while also enhancing the diversity and intensity of the search agents' behavior. These improvements increase HWOA's capacity for exploitation and broaden the range of search scopes and directions in performing clustering tasks. To assess the effectiveness of the proposed HWOA on clustering tasks, ten distinct datasets from the UCI repository are used, each with a different level of complexity. According to the experimental findings, the proposed HWOA outperforms eight meta-heuristic-based clustering algorithms and the conventional K-means clustering technique by a statistically significant margin in terms of the distance performance metric.
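
The WOA bubble-net component that HWOA builds on can be sketched for a clustering objective as follows; this is the plain WOA spiral/encircling update, not the chapter's full CSA hybrid:

```python
import numpy as np

def sse(centroids, data, k):
    """Clustering fitness: sum of squared distances to the nearest centroid."""
    C = centroids.reshape(k, -1)
    d = ((data[:, None, :] - C[None, :, :]) ** 2).sum(-1)
    return float(d.min(axis=1).sum())

def woa_cluster(data, k, n_whales=30, n_iter=200, b=1.0, seed=0):
    """Minimal WOA for clustering, showing the bubble-net spiral update
    X' = |X* - X| * exp(b*l) * cos(2*pi*l) + X* alongside encircling."""
    rng = np.random.default_rng(seed)
    dim = k * data.shape[1]
    lo, hi = data.min(0).min(), data.max(0).max()
    X = rng.uniform(lo, hi, (n_whales, dim))
    fit = np.array([sse(x, data, k) for x in X])
    best = X[fit.argmin()].copy()
    for t in range(n_iter):
        a = 2.0 * (1 - t / n_iter)
        for i in range(n_whales):
            if rng.random() < 0.5:                    # encircling prey
                A = a * (2 * rng.random(dim) - 1)
                C = 2 * rng.random(dim)
                X[i] = best - A * np.abs(C * best - X[i])
            else:                                     # bubble-net spiral
                l = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(b * l) * np.cos(2 * np.pi * l) + best
        fit = np.array([sse(x, data, k) for x in X])
        if fit.min() < sse(best, data, k):
            best = X[fit.argmin()].copy()
    return best.reshape(k, -1)

data = np.vstack([np.random.default_rng(1).normal(m, 0.3, (50, 2)) for m in (0, 3, 6)])
print(woa_cluster(data, 3).round(2))
```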

  • Diabetes mellitus is a chronic disease affecting over 38.4 million adults worldwide, of whom 8.7 million are undiagnosed. Early detection and diagnosis of diabetes can save millions of lives. Significant benefits can be achieved if we have the means and tools for the early diagnosis and treatment of diabetes, since this can reduce the rates of cardiovascular disease and mortality. It is urgently necessary to explore computational methods and machine learning for possible assistance in the diagnosis of diabetes to support physician decisions. This research utilizes machine learning to diagnose diabetes based on several selected features collected from patients. It provides a complete process for data handling and pre-processing, feature selection, model development, and evaluation. Among the models tested, our results reveal that Random Forest performs best, with an accuracy of 0.945. This emphasizes Random Forest's efficiency in helping to precisely diagnose and reduce the risk of diabetes.

  • This paper presents the first curricular landscape analysis of transfer pathways for computer science (CS) transfer students in the public higher education system in California, the largest and most complex higher education system in the United States. Drawing on data from 115 community colleges and 31 public universities in California, this study examines and compares computer science Bachelor's degree requirements, curriculum complexities, and both ideal and existing course articulation coverage between schools. We find considerable variation in the CS degree requirements across the system, particularly in the number of math courses required and the overall flexibility of the course requirements. Articulation agreements between community colleges and four-year schools have the potential to (and sometimes do) reduce the complexity of the degree for transfer students significantly, but articulation agreements are not consistently in place across the system. This research both suggests concrete action items and surfaces important areas of further exploration to create a more seamless process for transfer students to complete their CS Bachelor's degrees.

  • This hands-on textbook covers both the theory and applications of data communications, the Internet, and network security technology, following the ACM guidelines for courses in networking. The content is geared towards upper undergraduate and graduate students in information technology, communications engineering, and computer science. The book is divided into three sections: Data Communications, Internet Architecture, and Network Security. Topics covered include flow control and reliable transmission; modulation, DSL, cable modem, and FTTH; Ethernet and Fast Ethernet; Gigabit and 10 Gigabit Ethernet; and LAN interconnection devices, among others. The book also covers emerging topics such as IPv6 and software-defined networks. It is accompanied by a lab manual that uses Wireshark, Cisco Packet Tracer, and virtual machines to lead students through simulated labs.

  • The Crow Search Algorithm (CSA) is a swarm-based metaheuristic algorithm that simulates the intelligent foraging behaviors of crows. While CSA effectively handles global optimization problems, it suffers from certain limitations, such as low search accuracy and a tendency to converge to local optima. To address these shortcomings, researchers have proposed modifications and enhancements to CSA's search mechanism. One widely explored approach is the structured population mechanism, which maintains diversity during the search process to mitigate premature convergence. The island model, a common structured population method, divides the population into smaller independent sub-populations called islands, each running in parallel. Migration, the primary technique for promoting population diversity, facilitates the exchange of relevant and useful information between islands during iterations. This paper introduces an enhanced variant of CSA, called Enhanced CSA (ECSA), and incorporates the cooperative island model into it, yielding iECSA, to improve its search capabilities and avoid premature convergence. The proposed iECSA incorporates two further enhancements to CSA. First, an adaptive tournament-based selection mechanism is employed to choose the guiding solution. Second, the basic random movement in CSA is replaced with a modified operator to enhance exploration. The performance of iECSA is evaluated on 53 real-valued mathematical problems, including 23 classical benchmark functions and 30 IEEE CEC2014 benchmark functions. A sensitivity analysis of key iECSA parameters is conducted to understand their impact on convergence and diversity. The efficacy of iECSA is validated through an extensive evaluation against a comprehensive set of seventeen well-established and recently introduced meta-heuristic algorithms. Significant differences among these comparative algorithms are established using statistical tests such as Wilcoxon's rank-sum and Friedman's tests. Experimental results demonstrate that iECSA outperforms the base ECSA algorithm on 82.6% of the standard test functions, providing more accurate and reliable outcomes than other CSA variants. Furthermore, extensive experimentation consistently shows that iECSA outperforms comparable algorithms across a diverse set of benchmark functions.
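
A generic island-model skeleton illustrating the ring migration described above; the local search step is a deliberately simple stand-in, not ECSA's actual operators:

```python
import numpy as np

def island_search(objective, dim, n_islands=4, island_size=10,
                  n_iter=200, mig_every=25, seed=0):
    """Generic island-model skeleton (not the paper's exact iECSA): each
    island evolves independently; every `mig_every` iterations the best
    individual migrates to the next island in a ring, replacing its worst."""
    rng = np.random.default_rng(seed)
    islands = [rng.uniform(-5, 5, (island_size, dim)) for _ in range(n_islands)]
    for t in range(n_iter):
        for isl in islands:
            # Stand-in local step: jitter each solution, keep improvements.
            cand = isl + rng.normal(0, 0.1, isl.shape)
            old = np.apply_along_axis(objective, 1, isl)
            new = np.apply_along_axis(objective, 1, cand)
            isl[new < old] = cand[new < old]
        if (t + 1) % mig_every == 0:                  # ring migration
            bests = [isl[np.apply_along_axis(objective, 1, isl).argmin()].copy()
                     for isl in islands]
            for i, isl in enumerate(islands):
                worst = np.apply_along_axis(objective, 1, isl).argmax()
                isl[worst] = bests[(i - 1) % n_islands]
    allX = np.vstack(islands)
    fit = np.apply_along_axis(objective, 1, allX)
    return allX[fit.argmin()], fit.min()

print(island_search(lambda x: float(np.sum(x**2)), dim=10)[1])
```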

  • Identifying the optimal subset of features in Feature Selection (FS) problems is a demanding task in machine learning and data mining, and a trustworthy optimization approach is required to cope with the concerns it involves. Here, a binary version of the Capuchin Search Algorithm (CSA), referred to as BCSA, was developed to select the optimal feature combination. Owing to the imbalance of its parameters and its random nature, BCSA may sometimes become trapped in local optima. To mitigate this problem, BCSA was further improved by resettling its individuals, adopting several methods of repopulating individuals during foraging. Lévy flight was applied to augment the exploitation and exploration abilities of BCSA, a method referred to as LBCSA. A chaotic strategy was used to reinforce the search behavior of BCSA for both exploration and exploitation, referred to as CBCSA. Finally, Lévy flight and chaotic sequences were integrated with BCSA, referred to as LCBCSA, to increase solution diversity and improve the chances of finding globally optimal solutions. The proposed methods were assessed on twenty-six datasets collected from the UCI repository, and their results were compared with those of other FS methods. Overall, the results show that the proposed methods deliver more precise solutions, in terms of accuracy rates and fitness scores, than the other methods.
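
A minimal sketch of two ingredients named above: Lévy flight steps (via Mantegna's algorithm) and an S-shaped transfer function for binarizing a continuous position into a feature mask. The dimensions and constants are illustrative:

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    """Mantegna's algorithm for Levy-stable step lengths (exponent beta)."""
    rng = rng or np.random.default_rng()
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def to_binary(x_cont, rng=None):
    """S-shaped transfer function: map a continuous position to a 0/1
    feature mask, the usual way binary metaheuristic variants select features."""
    rng = rng or np.random.default_rng()
    prob = 1.0 / (1.0 + np.exp(-x_cont))      # sigmoid
    return (rng.random(x_cont.shape) < prob).astype(int)

rng = np.random.default_rng(0)
position = rng.normal(0, 1, 26)               # one agent over 26 features
position += 0.1 * levy_step(26, rng=rng)      # occasional long exploratory jumps
print("selected features:", np.flatnonzero(to_binary(position, rng=rng)))
```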
