  • Modeling lipase activity aids researchers in optimizing features such as temperature, pH, and substrate concentration to boost enzyme performance. This is essential in biotechnology for improving the productivity and yield of processes such as fermentation, biodiesel production, and bioremediation. Fermentation is a highly complex, multivariable, and non-linear biotechnological process that produces bioactive materials. This study leverages artificial neural networks (ANN) to predict lipase activity in batch fermentation processes, addressing the weight-learning optimization challenges often encountered with traditional algorithms such as Backpropagation (BP). Several metaheuristic algorithms were employed to optimize the Multilayer Perceptron (MLP) structure and weights, including Moth-Flame Optimization (MFO), Particle Swarm Optimization (PSO), the Dandelion Optimizer (DO), the Crow Search Algorithm (CSA), and the Salp Swarm Algorithm (SSA), to overcome these limitations. Among the tested algorithms, MFO emerged as the most effective approach, achieving superior performance in weight learning with the best fitness value (i.e., mean square error (MSE)) of 0.6006. MFO-optimized ANN models deliver the most accurate predictions for lipase activity, highlighting their potential as a powerful tool for advancing industrial fermentation process optimization. © 2025 IEEE.
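The weight-learning setup described above can be sketched in a few lines: a small MLP's weights are flattened into a vector and evolved by a population search against an MSE fitness. This is a simplified stand-in for MFO (the spiral flame update is omitted), and the sine-curve data merely stands in for the lipase-activity measurements, which are not reproduced here.

```python
import math
import random

random.seed(42)
N_HIDDEN = 4  # 1-input, 1-output MLP: 4 + 4 + 4 + 1 = 13 weights

def mlp_predict(w, x):
    # Forward pass with a tanh hidden layer and a linear output unit.
    hidden = [math.tanh(w[j] * x + w[N_HIDDEN + j]) for j in range(N_HIDDEN)]
    return sum(w[2 * N_HIDDEN + j] * hidden[j] for j in range(N_HIDDEN)) + w[3 * N_HIDDEN]

def fitness(w, data):
    # Fitness = mean square error, as in the study.
    return sum((mlp_predict(w, x) - y) ** 2 for x, y in data) / len(data)

# Synthetic stand-in for the fermentation data: a smooth 1-D response curve.
data = [(x / 10.0, math.sin(x / 10.0)) for x in range(31)]

DIM, POP, ITERS = 3 * N_HIDDEN + 1, 20, 300
pop = [[random.uniform(-1, 1) for _ in range(DIM)] for _ in range(POP)]
best = min(pop, key=lambda w: fitness(w, data))
init_err = fitness(best, data)

for t in range(ITERS):
    shrink = 1.0 - t / ITERS          # exploration noise decays over time
    for i, w in enumerate(pop):
        # Move toward the best weight vector, plus decaying random noise.
        cand = [wj + random.random() * (bj - wj) + shrink * random.gauss(0, 0.2)
                for wj, bj in zip(w, best)]
        if fitness(cand, data) < fitness(w, data):   # greedy acceptance
            pop[i] = cand
            if fitness(cand, data) < fitness(best, data):
                best = cand

print(round(init_err, 4), "->", round(fitness(best, data), 4))
```

Because candidates are accepted only when they lower the MSE, the best fitness decreases monotonically, mirroring how the study compares optimizers by their best achieved MSE.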

  • In this age of technology, building quality software is essential to competing in the business market. One of the major principles required of any quality business software product for value fulfillment is reliability. Estimating software reliability early in the software development life cycle saves time and money, as it prevents spending larger sums fixing a defective software product after deployment. A Software Reliability Growth Model (SRGM) can be used to predict the number of failures that may be encountered during the software testing process. In this paper, we explore the advantages of the Grey Wolf Optimization (GWO) algorithm in estimating an SRGM’s parameters, with the objective of minimizing the difference between the estimated and the actual number of failures of the software system. We evaluated three different software reliability growth models: the Exponential Model (EXPM), the Power Model (POWM), and the Delayed S-Shaped Model (DSSM). In addition, we used three different datasets to conduct an experimental study showing the effectiveness of our approach.
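As a hedged sketch of this approach, the snippet below fits the Exponential Model's mean value function μ(t) = a(1 − e^(−bt)) with a minimal Grey Wolf Optimizer, minimizing the squared difference between estimated and observed cumulative failures. The weekly failure counts, parameter bounds, and population settings are illustrative assumptions, not the paper's datasets.

```python
import math
import random

random.seed(0)

def mu(t, a, b):
    # Exponential SRGM: expected cumulative failures by time t.
    return a * (1.0 - math.exp(-b * t))

# Hypothetical cumulative failure counts per testing week (illustrative only).
obs = [(1, 16), (2, 29), (3, 40), (4, 49), (5, 56), (6, 62), (7, 67), (8, 71)]

def sse(pos):
    # Objective: sum of squared errors between model and observations.
    a, b = pos
    return sum((mu(t, a, b) - y) ** 2 for t, y in obs)

# Minimal GWO over (a, b) within simple box bounds.
lb, ub = [1.0, 0.01], [200.0, 2.0]
wolves = [[random.uniform(lb[d], ub[d]) for d in range(2)] for _ in range(30)]
best = min(wolves, key=sse)[:]
init_sse = sse(best)

N_ITER = 200
for it in range(N_ITER):
    wolves.sort(key=sse)
    alpha, beta, delta = wolves[0], wolves[1], wolves[2]
    a_coef = 2.0 * (1 - it / N_ITER)       # decreases linearly from 2 to 0
    for i in range(len(wolves)):
        new = []
        for d in range(2):
            x = 0.0
            # Each wolf averages moves guided by the three leaders.
            for leader in (alpha, beta, delta):
                A = a_coef * (2 * random.random() - 1)
                C = 2 * random.random()
                x += leader[d] - A * abs(C * leader[d] - wolves[i][d])
            new.append(min(max(x / 3.0, lb[d]), ub[d]))
        wolves[i] = new
    cur = min(wolves, key=sse)
    if sse(cur) < sse(best):
        best = cur[:]

print("a=%.1f b=%.3f SSE=%.2f" % (best[0], best[1], sse(best)))
```

The same loop extends to the Power and Delayed S-Shaped models by swapping `mu` for their mean value functions.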

  • Roads should always be in a reliable condition and maintained regularly. One problem that must be handled well is pavement cracking, a challenging problem facing road engineers, since keeping roads in a stable condition is necessary for both drivers and pedestrians. Many methods have been proposed to handle this problem to save time and cost. In this paper, we propose a two-stage method to detect pavement cracks based on Principal Component Analysis (PCA) and a Convolutional Neural Network (CNN) to solve this classification problem. We employed PCA to extract the most significant features with different numbers of PCA components. The proposed approach was trained using the Mendeley Asphalt Crack dataset, which contains 400 images of road cracks at a 480×480 resolution. The obtained results show how PCA helped in speeding up the learning process of the CNN.
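To illustrate the PCA stage alone (the CNN and the crack images are not reproduced), the sketch below recovers the leading principal axis of mean-centred data by power iteration on the covariance matrix; image feature vectors would be projected onto a few such axes before entering the network. The 3-D toy data is an assumption for demonstration.

```python
import random

random.seed(2)

def top_component(rows, iters=200):
    # Power iteration: repeatedly apply the covariance matrix C = X^T X / n
    # to a random vector; it converges to the first principal axis.
    n, dim = len(rows), len(rows[0])
    means = [sum(r[d] for r in rows) / n for d in range(dim)]
    centred = [[r[d] - means[d] for d in range(dim)] for r in rows]
    v = [random.random() for _ in range(dim)]
    for _ in range(iters):
        proj = [sum(c[d] * v[d] for d in range(dim)) for c in centred]   # X v
        w = [sum(proj[i] * centred[i][d] for i in range(n)) / n          # X^T (X v) / n
             for d in range(dim)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]                                        # renormalize
    return v

# Toy 3-D points lying mostly along the (1, 1, 1) direction.
rows = [[t + random.gauss(0, 0.05) for _ in range(3)] for t in (0, 1, 2, 3, 4)]
axis = top_component(rows)
print([round(a, 3) for a in axis])
```

For real use, further components are obtained by deflating the data against each recovered axis; keeping only the top few components is what shrinks the CNN's input and speeds up training.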

  • Image reconstruction for industrial applications based on Electrical Capacitance Tomography (ECT) has been broadly applied. The goal of ECT-based image reconstruction is to locate the distribution of permittivity for the dielectric substances along the cross-section based on the collected capacitance data. In the ECT-based image reconstruction process: (1) the relationship between capacitance measurements and permittivity distribution is nonlinear, (2) the capacitance measurements collected during image reconstruction are inadequate due to the limited number of electrodes, and (3) the reconstruction process is subject to noise, leading to an ill-posed problem. Hence, constructing an accurate algorithm for real images is critical to overcoming such restrictions. This paper presents novel image reconstruction methods using Deep Learning for solving the forward and inverse problems of the ECT system for generating high-quality images of conductive materials in the Lost Foam Casting (LFC) process. Here, Long Short-Term Memory Recurrent Neural Network (LSTM-RNN) models were implemented to predict the distribution of metal filling for the LFC process based on ECT. The recurrent connections and the gating mechanism of the LSTM are capable of extracting the contextual information that repeatedly passes through the neural network while filtering out the noise caused by adverse factors. Experimental results showed that the presented ECT-LSTM-RNN model is highly reliable for industrial applications and can be utilized for other manufacturing processes. © 2013 IEEE.

  • SESSION TITLE: Clinical Prediction and Diagnosis of OSA

  • Meta-heuristic search algorithms have been successfully used to solve a variety of problems in engineering, science, business, and finance. Meta-heuristic algorithms share common features since they are population-based approaches that use a set of tuning parameters to evolve new solutions based on the natural behavior of creatures. In this paper, we present a novel nature-inspired search optimization algorithm called the capuchin search algorithm (CapSA) for solving constrained and global optimization problems. The key inspiration of CapSA is the dynamic behavior of capuchin monkeys. The basic optimization characteristics of this new algorithm are designed by modeling the social actions of capuchins during wandering and foraging over trees and riverbanks in forests while searching for food sources. Some of the common behaviors of capuchins during foraging that are implemented in this algorithm are leaping, swinging, and climbing. Leaping is an effective mechanism used by capuchins to jump from tree to tree. The other foraging mechanisms exercised by capuchins, known as swinging and climbing, allow the capuchins to move small distances over trees, tree branches, and the extremities of the tree branches. These locomotion mechanisms eventually lead to feasible solutions of global optimization problems. The proposed algorithm is benchmarked on 23 well-known benchmark functions, as well as on several challenging and computationally costly engineering problems. A broad comparative study is conducted to demonstrate the efficacy of CapSA over several prominent meta-heuristic algorithms in terms of optimization precision and statistical test analysis. Overall results show that CapSA renders more precise solutions with a high convergence rate compared to competitive meta-heuristic methods. © 2020, Springer-Verlag London Ltd., part of Springer Nature.

  • Ozone is a toxic gas with chemical properties markedly distinct from those of oxygen. Breathing ozone in the air can cause severe effects on human health, especially in people who have asthma. It can cause long-lasting damage to the lungs, contribute to heart attacks, and might lead to death. Forecasting ozone concentration levels and related pollutant attributes is critical for developing sophisticated environmental safety policies. In this paper, we present three artificial neural network (ANN) models to forecast the daily ozone (O3), coarse particulate matter (PM10), and fine particulate matter (PM2.5) concentrations in a highly polluted city in the Republic of China. The proposed models are (1) a recurrent multilayer perceptron (RMLP), (2) a recurrent fuzzy neural network (RFNN), and (3) a hybridization of the RFNN and the grey wolf optimizer (GWO), referred to as the RMLP-ANN, RFNN, and RFNN-GWO models, respectively. The performance of the proposed models is compared with other conventional models previously reported in the literature. The comparative results showed that the proposed models presented outstanding performance. The RFNN-GWO model revealed superior results in the modeling of O3, PM10, and PM2.5 compared with the RMLP-ANN and RFNN models. © 2020, Springer Nature B.V.

  • In the last decade, a wide range of machine learning approaches have been proposed and experimented with to model highly nonlinear manufacturing processes. However, improving the performance of such models is challenging due to the complexity and high dimensionality of manufacturing processes in general. In this paper, we propose bidirectional echo state reservoir networks (Bi-ESNs) trained using the support vector machine privileged information method (SVM+) to model a winding machine process. The proposed model is applied, tested, and compared to models reported in the literature, such as a classical ESN with linear regression, an ESN with a linear SVM readout, genetic programming, a feedforward neural network with backpropagation, a radial basis function network, an adaptive neural fuzzy inference system, and a local linear wavelet neural network. The results show that Bi-ESNs trained with SVM+ are promising, providing better generalization performance than the other models.

  • The rapid growth of technology has brought about many advantages, but has also made networks more susceptible to security threats. Intrusion Detection Systems (IDS) play a vital role in protecting computer networks against malicious activities. Given the dynamic and constantly evolving nature of cyber threats, these systems must continuously adapt to maintain their effectiveness. Machine Learning (ML) methods have gained prominence as effective tools for constructing IDS that offer both high accuracy and efficiency. This study conducts a performance assessment of several machine learning classifiers, including Random Forests (RF), Decision Trees (DT), and Support Vector Machines (SVM), in addressing multiclass intrusion detection as a means to counter cybersecurity threats. The NSL-KDD dataset, which includes various network attacks, served as the basis for our experimental evaluation. The research explores two classification scenarios: a five-class and a three-class model, analyzing their impact on detection performance. The results demonstrate that RF consistently achieves the highest accuracy (85.42%) on the three-class scenario testing set, highlighting its effectiveness in handling patterns and non-linear relationships within the intrusion data. Furthermore, reducing the classification complexity (three classes vs. five classes) significantly improves model generalization, as evidenced by the reduced performance gap between training and testing data. Friedman’s rank test and Holm’s post-hoc analysis were applied to ensure statistical rigor, confirming that RF outperforms DT and SVM in all evaluation metrics. These findings establish RF as the most robust classifier for intrusion detection and underscore the importance of simplifying classification tasks for improved IDS performance. © (2025), (Science Publications). All rights reserved.

  • Obstructive sleep apnea syndrome (OSAS) is a pervasive disorder with an incidence estimated at 5–14 percent among adults aged 30–70 years. It carries significant morbidity and mortality risk from cardiovascular disease, including ischemic heart disease, atrial fibrillation, and cerebrovascular disease, and risks related to excessive daytime sleepiness. The gold standard for diagnosis of OSAS is the polysomnography (PSG) test, which requires overnight evaluation in a sleep laboratory and expensive infrastructure, rendering it unsuitable for mass screening and diagnosis. Alternatives such as home sleep testing require patients to wear diagnostic instruments overnight, but accuracy continues to be suboptimal while access continues to be a barrier for many. Hence, there is continued significant underdiagnosis and under-recognition of sleep apnea in the community, with at least one study suggesting that 80–90% of middle-aged adults with moderate to severe sleep apnea remain undiagnosed. Recently, we have seen a surge in applications of artificial intelligence and neural networks in healthcare diagnostics. Several studies have attempted to examine their application in the diagnosis of OSAS. Signals used in such data analytics include the electrocardiogram (ECG), photoplethysmography (PPG), peripheral oxygen saturation (SpO2), and audio signals. A different approach is to apply machine learning to demographic and standard clinical variables and physical findings to synthesize predictive models with high accuracy for assisting in the triage of high-risk patients for sleep testing. The current paper reviews this latter approach and identifies knowledge gaps that may serve as potential avenues for future research.

  • The performance of any meta-heuristic algorithm depends highly on the setting of dependent parameters of the algorithm. Different parameter settings for an algorithm may lead to different outcomes. An optimal parameter setting should support the algorithm to achieve a convincing level of performance or optimality in solving a range of optimization problems. This paper presents a novel enhancement method for the salp swarm algorithm (SSA), referred to as enhanced SSA (ESSA). In this ESSA, the following enhancements are proposed: First, a new position updating process was proposed. Second, a new dominant parameter different from that used in SSA was presented in ESSA. Third, a novel lifetime convergence method for tuning the dominant parameter of ESSA using ESSA itself was presented to enhance the convergence performance of ESSA. These enhancements to SSA were proposed in ESSA to augment its exploration and exploitation capabilities to achieve optimal global solutions, in which the dominant parameter of ESSA is updated iteratively through the evolutionary process of ESSA so that the positions of the search agents of ESSA are updated accordingly. These improvements on SSA through ESSA support it to avoid premature convergence and efficiently find the global optimum solution for many real-world optimization problems. The efficiency of ESSA was verified by testing it on several basic benchmark test functions. A comparative performance analysis between ESSA and other meta-heuristic algorithms was performed. Statistical test methods have evidenced the significance of the results obtained by ESSA. The efficacy of ESSA in solving real-world problems and applications is also demonstrated with five well-known engineering design problems and two real industrial problems. The comparative results show that ESSA imparts better performance and convergence than SSA and other meta-heuristic algorithms.
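For context, the baseline SSA that ESSA enhances can be sketched as follows: leader salps sample around the best-so-far food source using SSA's dominant parameter c1 = 2e^(−(4l/L)²), and followers average with their predecessors. ESSA's replacement dominant parameter and its self-tuning scheme are not reproduced here; the shifted-sphere objective and all settings below are illustrative assumptions.

```python
import math
import random

random.seed(5)

def f(x):
    # Illustrative objective: shifted sphere, minimum 0 at (1, 1, 1, 1).
    return sum((v - 1.0) ** 2 for v in x)

DIM, N, L = 4, 20, 250
lb, ub = -10.0, 10.0
salps = [[random.uniform(lb, ub) for _ in range(DIM)] for _ in range(N)]
food = min(salps, key=f)[:]                    # best solution found so far
init_f = f(food)

for l in range(1, L + 1):
    c1 = 2.0 * math.exp(-(4.0 * l / L) ** 2)   # SSA's dominant parameter
    for i in range(N):
        if i < N // 2:                         # leaders sample around the food
            for d in range(DIM):
                c2, c3 = random.random(), random.random()
                step = c1 * ((ub - lb) * c2 + lb)
                salps[i][d] = food[d] + step if c3 >= 0.5 else food[d] - step
        else:                                  # followers average with predecessor
            salps[i] = [(salps[i][d] + salps[i - 1][d]) / 2 for d in range(DIM)]
        salps[i] = [min(max(v, lb), ub) for v in salps[i]]
        if f(salps[i]) < f(food):
            food = salps[i][:]

print(round(init_f, 3), "->", round(f(food), 6))
```

Because c1 decays with the iteration counter l, the leaders' sampling radius shrinks from exploration-scale to fine exploitation; ESSA's contribution is precisely to replace and adaptively tune this single dominant parameter.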

  • Over recent decades, research in Artificial Intelligence (AI) has developed a broad range of approaches and methods that can be utilized or adapted to address complex optimization problems. As real-world problems become increasingly complicated, effective optimization methods are required. Various meta-heuristic algorithms have been developed and applied in the optimization domain. This paper adopted and improved a promising meta-heuristic approach named the Crow Search Algorithm (CSA) to address numerical optimization problems. Although CSA can efficiently optimize many problems, it suffers from limited search capability and early convergence. Its position-updating process was improved by making two parameters adaptive, flight length (fl) and awareness probability (AP), to tackle these limitations and to manage the exploration and exploitation behaviors of CSA in the search space. This process takes advantage of the randomization of crows in CSA and the adoption of well-known growth functions. These functions, namely exponential, power, and S-shaped functions, were used to develop three different improved versions of CSA, referred to as Exponential CSA (ECSA), Power CSA (PCSA), and S-shaped CSA (SCSA). In each of these variants, two different functions were used to amend the values of fl and AP. A new dominant parameter was added to the position-updating process of these algorithms to further enhance exploration and exploitation behaviors. The reliability of the proposed algorithms was evaluated on 67 benchmark functions, and their performance was quantified using relevant assessment criteria. The functionality of these algorithms was illustrated by tackling four engineering design problems. A comparative study was made to explore the efficacy of the proposed algorithms over the standard one and other methods. Overall results showed that ECSA, PCSA, and SCSA have convincing merits with superior performance compared to the others.
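A minimal sketch of the CSA position update with exponentially decaying fl and AP, in the spirit of the ECSA variant described above: the exact growth-function schedules and constants here are assumptions, and the sphere objective is only illustrative.

```python
import math
import random

random.seed(1)

def sphere(x):
    # Illustrative benchmark objective to minimize.
    return sum(v * v for v in x)

DIM, N, ITERS = 5, 25, 300
lb, ub = -5.0, 5.0
crows = [[random.uniform(lb, ub) for _ in range(DIM)] for _ in range(N)]
memory = [c[:] for c in crows]           # best position each crow recalls
init_best = min(sphere(m) for m in memory)

for t in range(ITERS):
    # Assumed exponential schedules (ECSA-style): a shrinking flight length
    # favours exploitation, and a shrinking awareness probability makes
    # crows follow others' food caches more often late in the run.
    fl = 2.0 * math.exp(-3.0 * t / ITERS)
    ap = 0.3 * math.exp(-2.0 * t / ITERS)
    for i in range(N):
        j = random.randrange(N)
        if random.random() >= ap:        # crow i tails crow j's cache
            new = [crows[i][d] + random.random() * fl * (memory[j][d] - crows[i][d])
                   for d in range(DIM)]
        else:                            # crow j noticed: move randomly
            new = [random.uniform(lb, ub) for _ in range(DIM)]
        new = [min(max(v, lb), ub) for v in new]
        if sphere(new) < sphere(memory[i]):
            memory[i] = new[:]           # update crow i's memory
        crows[i] = new

best = min(memory, key=sphere)
print(round(init_best, 3), "->", round(sphere(best), 6))
```

Swapping the two `math.exp` schedules for power or S-shaped (logistic) curves yields the PCSA- and SCSA-style variants of the same loop.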

  • Barcode-less fruit recognition technology has revolutionized the checkout process by eliminating manual barcode scanning. This technology automatically identifies and adds fruit items to the purchase list, significantly reducing waiting times at the cash register. Faster checkouts enhance customer convenience and optimize operational efficiency for retailers. Moreover, adding barcodes to fruit requires adhesives on the fruit surface that may pose health hazards. Leveraging deep learning techniques for barcode-less fruit recognition brings valuable advantages to industries, including advanced automation, enhanced accuracy, and increased efficiency. These benefits translate into improved productivity, cost reduction, and superior quality control. This study introduces a Convolutional Neural Network (CNN) designed explicitly for automatic fruit recognition, even in challenging real-world scenarios. The proposed method assists fruit sellers in accurately identifying and distinguishing between different types of fruit that may exhibit similarities. A dataset that includes 44,406 images of different fruit types is used to train and test our technique. The developed CNN model achieves classification accuracies of 97.4% during the training phase and 88.6% during the testing phase, showcasing its effectiveness in precise fruit recognition.

  • Data classification is a challenging problem that is highly sensitive to noise and to the dimensionality of the data. Reducing model complexity can help improve the accuracy of the classification model. Therefore, in this research, we propose a novel feature selection technique based on the Binary Harris Hawks Optimizer with a Time-Varying Scheme (BHHO-TVS). The proposed BHHO-TVS adopts a time-varying transfer function that is applied to leverage the influence of the location vector to balance the exploration and exploitation power of the HHO. Eighteen well-known datasets provided by the UCI repository were utilized to show the significance of the proposed approach. The reported results show that BHHO-TVS outperforms BHHO with traditional binarization schemes as well as other binary feature selection methods such as the binary gravitational search algorithm (BGSA), binary particle swarm optimization (BPSO), the binary bat algorithm (BBA), the binary whale optimization algorithm (BWOA), and the binary salp swarm algorithm (BSSA). Compared with similar feature selection approaches introduced in previous studies, the proposed method achieves the best accuracy rates on 67% of the datasets.
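The time-varying binarization idea can be sketched as follows: a V-shaped tanh transfer function maps each continuous coordinate to a bit-flip probability, and a steepness parameter τ shrinks over iterations so that late-stage flips become near-deterministic for significant steps. Both the tanh form and the linear τ schedule are common choices from the binary-metaheuristics literature, assumed here rather than taken verbatim from BHHO-TVS.

```python
import math
import random

random.seed(7)

def tv_transfer(v, t, t_max, tau_max=4.0, tau_min=0.01):
    # V-shaped transfer function with a time-varying steepness parameter:
    # tau shrinks linearly, sharpening the mapping as iterations proceed.
    tau = tau_max - (tau_max - tau_min) * t / t_max
    return abs(math.tanh(v / tau))

def binarize(position, bits, t, t_max):
    # V-shaped rule: flip the current bit with probability T(v, t).
    # Each 0/1 bit marks whether a feature is selected.
    return [1 - b if random.random() < tv_transfer(v, t, t_max) else b
            for v, b in zip(position, bits)]

# The flip probability for the same step size sharpens over time.
for t in (0, 50, 99):
    print(t, round(tv_transfer(1.0, t, 100), 3))

# Late in the run, large steps flip deterministically; zero steps never do.
mask = binarize([5.0, -5.0, 0.0], [0, 0, 0], 99, 100)
print(mask)
```

In a full feature-selection loop, the continuous HHO positions would be passed through `binarize` each iteration, and the resulting mask scored by a classifier's accuracy on the selected features.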

  • The autonomous navigation of robots in unknown environments is a challenge, since it requires the integration of several subsystems implementing different functionality: drawing a map of the environment, localizing the robot on the map, motion planning or path following, executing the path in the real world, and many others, all of which must run simultaneously. Thus, addressing the autonomous robot navigation (ARN) problem is essential for the growth of the robotics research field. In this paper, we present a simulation of a swarm intelligence method known as Particle Swarm Optimization (PSO) to develop an ARN system that can navigate an unknown environment, reach a pre-defined goal, and remain collision-free. The proposed system is built such that each subsystem handles a specific task, and the subsystems are integrated to achieve the robot's mission. PSO is used to optimize the robot path by providing several waypoints that minimize the robot's traveling distance, with each particle's position vector representing a solution to the optimization problem. The Gazebo simulator was used to test the response of the system under various environmental conditions. The proposed ARN system maintained robust navigation and avoided the obstacles in different unknown environments.
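The waypoint-optimization step can be sketched with a plain PSO that minimizes the total path length through a fixed number of waypoints; obstacle costs and the Gazebo integration are omitted, and the start, goal, and swarm parameters are illustrative assumptions.

```python
import math
import random

random.seed(3)

START, GOAL = (0.0, 0.0), (10.0, 10.0)
N_WAY = 3                                 # waypoints -> 6-D search space

def path_length(flat):
    # Total length of START -> waypoints -> GOAL for a flat (x0, y0, x1, ...) vector.
    pts = [START] + [(flat[2 * k], flat[2 * k + 1]) for k in range(N_WAY)] + [GOAL]
    return sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))

N_PAR, ITERS, DIM = 20, 200, 2 * N_WAY
pos = [[random.uniform(0.0, 10.0) for _ in range(DIM)] for _ in range(N_PAR)]
vel = [[0.0] * DIM for _ in range(N_PAR)]
pbest = [p[:] for p in pos]               # each particle's best-known position
gbest = min(pbest, key=path_length)[:]    # swarm's best-known position

W, C1, C2 = 0.7, 1.5, 1.5                 # inertia and acceleration coefficients
for _ in range(ITERS):
    for i in range(N_PAR):
        for d in range(DIM):
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - pos[i][d])
                         + C2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if path_length(pos[i]) < path_length(pbest[i]):
            pbest[i] = pos[i][:]
            if path_length(pbest[i]) < path_length(gbest):
                gbest = pbest[i][:]

# With no obstacles, the optimum is the straight line of length sqrt(200).
print(round(path_length(gbest), 3))
```

In the full system, the objective would also penalize waypoints that intersect mapped obstacles, so the shortest collision-free route wins instead of the straight line.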

  • Diabetes mellitus is a chronic disease affecting over 38.4 million adults worldwide, of whom 8.7 million are undiagnosed. Early detection and diagnosis of diabetes can save millions of lives. Significant benefits can be achieved if we have the means and tools for the early diagnosis and treatment of diabetes, since it can reduce the rates of cardiovascular disease and mortality. It is urgently necessary to explore computational methods and machine learning for possible assistance in the diagnosis of diabetes to support physician decisions. This research utilizes machine learning to diagnose diabetes based on several selected features collected from patients. It provides a complete process for data handling and pre-processing, feature selection, model development, and evaluation. Among the models tested, our results reveal that Random Forest performs best in accuracy (0.945). This emphasizes Random Forest’s efficiency in precisely helping diagnose diabetes and reduce its risk.

  • In this paper, we provide a consistent, inexpensive, and easy-to-use graphical user interface (GUI) smartphone application named Sleep Apnea Screener (SAS) that can diagnose Obstructive Sleep Apnea (OSA) based on demographic data such as gender, age, height, BMI, neck circumference, waist, etc., allowing a tentative diagnosis of OSA without the need for overnight tests. The developed smartphone application can diagnose sleep apnea using a model trained with 620 samples collected from a sleep center in Corpus Christi, TX. Two machine learning classifiers (i.e., Logistic Regression (LR) and Support Vector Machine (SVM)) were used to diagnose OSA. Our preliminary results show that at-home OSA screening is indeed possible, and that our application is an effective method for covering large numbers of undiagnosed cases.

  • Background: In the United States, chronic obstructive pulmonary disease (COPD) is a significant cause of mortality. It is a chronic inflammatory lung condition that obstructs airflow in the lungs. Many symptoms have been reported for this disease: breathing problems, coughing, wheezing, and mucus production. Patients with COPD may be at additional risk, since they are more susceptible to heart disease and lung cancer. Methods: This study reviews COPD diagnosis utilizing various machine learning (ML) classifiers, such as Logistic Regression (LR), Gradient Boosting Classifier (GBC), Support Vector Machine (SVM), Gaussian Naïve Bayes (GNB), Random Forest Classifier (RFC), K-Nearest Neighbors Classifier (KNC), Decision Tree (DT), and Artificial Neural Network (ANN). These models were applied to a dataset comprising 1603 patients referred for a pulmonary function test. Results: The RFC achieved superior accuracy, reaching up to 82.06% in training and 70.47% in testing. Furthermore, it achieved the maximum F score in training and testing, with an ROC value of 0.82. Conclusions: The results obtained with the utilized ML models align with previous work in the field, with accuracies ranging from 67.81% to 82.06% in training and from 66.73% to 71.46% in testing.

Last update from database: 3/13/26, 4:15 PM (UTC)
