Results: 11 resources
-
In this paper, we address the resource and virtual machine instance hour minimization problem for directed-acyclic-graph based deadline constrained applications deployed on computer clouds. The allocated resources and instance hours on computer clouds must: (1) guarantee the satisfaction of a deadline constrained application's end-to-end deadline; (2) ensure that the number of virtual machine (VM) instances allocated to the application is minimized; (3) under the allocated number of VM instances, determine an application execution schedule that minimizes the application's makespan; and (4) under the decided application execution schedule, determine a VM operation schedule, i.e., when a VM should be turned on or off, that minimizes the total VM instance hours needed to execute the application. We first give lower and upper bounds for the number of VM instances needed to guarantee the satisfaction of a deadline constrained application's end-to-end deadline. Based on the bounds, we develop a heuristic algorithm called the minimal slack time and minimal distance (MSMD) algorithm that finds the minimum number of VM instances needed to guarantee the application's deadline and schedules tasks on the allocated VM instances so that the application's makespan is minimized. Once the application execution schedule and the number of VM instances needed are determined, the proposed VM instance hour minimization (IHM) algorithm is applied to further reduce the instance hours needed by VMs to complete the application's execution. Our experimental results show that the MSMD algorithm can guarantee applications' end-to-end deadlines with fewer resources than the HEFT [32], MOHEFT [16], DBUS [9], QoS-base [40] and Auto-Scaling [25] heuristic scheduling algorithms in the literature. Furthermore, under the allocated resources, the MSMD algorithm can, on average, reduce an application's makespan by 3.4 percent of its deadline.
In addition, the IHM algorithm effectively reduces the application's execution instance hours compared with schedules to which IHM is not applied.
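The instance-hour accounting that IHM targets can be sketched as follows (a minimal illustration only: the function name, the per-VM on/off schedule representation, and the whole-hour billing granularity are our assumptions, not code from the paper):

```python
import math

def instance_hours(schedule):
    """Total billed instance hours for a set of VM on/off intervals.

    `schedule` maps a VM id to a list of (on_time, off_time) pairs in
    seconds. Each on/off interval is billed in whole-hour increments,
    which is why shifting when VMs are turned on or off can reduce the
    total even when the execution schedule itself is unchanged.
    """
    return sum(math.ceil((off - on) / 3600.0)
               for intervals in schedule.values()
               for on, off in intervals)
```

Under this accounting, a VM that runs for 3601 seconds costs two instance hours, which is the kind of rounding waste an operation schedule can avoid.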
-
The periodic task set assignment problem in the context of multiple processors has been studied for decades. Different heuristic approaches have been proposed, such as the Best-Fit (BF), the First-Fit (FF), and the Worst-Fit (WF) task assignment algorithms. However, when processors are not dedicated but only periodically available to the task set, whether existing approaches still provide good performance, or whether there is a better task assignment approach in the new context, are research problems which, to the best of our knowledge, have not been studied by the real-time research community. In this paper, we present the Best-Harmonically-Fit (BHF) task assignment algorithm to assign periodic tasks on multiple periodic resources. By periodic resource we mean that for every fixed time interval, i.e., the period, the resource always provides the same amount of processing capacity to a given task set. Our formal analysis indicates that if a harmonic task set is also harmonic with a resource's period, the resource capacity can be fully utilized by the task set. Based on this analysis, we present the Best-Harmonically-Fit task assignment algorithm. The experimental results show that, on average, the BHF algorithm results in 53.26, 42.54, and 27.79 percent higher resource utilization rates than the Best-Fit Decreasing (BFD), the First-Fit Decreasing (FFD), and the Worst-Fit Decreasing (WFD) task assignment algorithms, respectively; but compared to the optimal resource utilization rate found by exhaustive search, it is about 11.63 percent lower.
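The harmonicity condition at the core of the analysis can be sketched as follows (a minimal illustration; the function names and the exact divisibility test are our assumptions, not the paper's code — harmonic here means each period evenly divides every larger one):

```python
def is_harmonic(periods):
    """Check whether a set of task periods is harmonic:
    when sorted, each period evenly divides the next larger one."""
    ps = sorted(periods)
    return all(b % a == 0 for a, b in zip(ps, ps[1:]))

def harmonic_with_resource(task_periods, resource_period):
    """Check whether a harmonic task set is also harmonic with a
    resource's period, i.e., the resource period fits into the same
    divisibility chain as the task periods."""
    return is_harmonic(list(task_periods) + [resource_period])
```

For example, tasks with periods {2, 4} are harmonic with a resource of period 8, but not with one of period 5; BHF prefers assignments of the first kind, since the analysis shows they let the task set fully utilize the resource capacity.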
-
Cloud bursting is one of the key research topics in the cloud computing communities. A well designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) to public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not constant. It varies with physical resource utilization, such as CPU and I/O device utilization, at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare VM launching overhead values calculated from the model with overhead values measured on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe that, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for the cloud bursting process to minimize operational cost and resource waste.
-
In this age of technology, building quality software is essential to competing in the business market. One of the major principles required for any quality and business software product for value fulfillment is reliability. Estimating software reliability early during the software development life cycle saves time and money, as it prevents spending larger sums fixing a defective software product after deployment. The Software Reliability Growth Model (SRGM) can be used to predict the number of failures that may be encountered during the software testing process. In this paper we explore the advantages of the Grey Wolf Optimization (GWO) algorithm in estimating the SRGM's parameters, with the objective of minimizing the difference between the estimated and the actual number of failures of the software system. We evaluated three different software reliability growth models: the Exponential Model (EXPM), the Power Model (POWM), and the Delayed S-Shaped Model (DSSM). In addition, we used three different datasets to conduct an experimental study in order to show the effectiveness of our approach.
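The estimation problem that GWO solves can be illustrated with the Exponential Model, whose standard mean value function is m(t) = a(1 − e^(−bt)), where a is the total expected number of failures and b the failure detection rate. A sketch of the objective an optimizer would minimize (the function names and the synthetic data in the test are ours, not the paper's):

```python
import math

def expm_failures(t, a, b):
    """Exponential Model (EXPM) mean value function: expected
    cumulative number of failures observed by time t."""
    return a * (1.0 - math.exp(-b * t))

def sse_objective(params, observed):
    """Fitness an optimizer such as GWO would minimize: the sum of
    squared differences between estimated and actual cumulative
    failure counts. `observed` is a list of (time, failures) pairs."""
    a, b = params
    return sum((expm_failures(t, a, b) - y) ** 2 for t, y in observed)
```

GWO would search the (a, b) space for the pair minimizing `sse_objective` over the test dataset; POWM and DSSM swap in different mean value functions under the same objective.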
-
The objective of my sabbatical leave project was to propose a new scheduling algorithm that extends the current MapReduce model to improve system performance. MapReduce, which has been popularized by Google, is a scalable tool that enables the processing of massive volumes of data.
-
We defined a set of quantifiable features for authorship categorization. We performed our experiments on public domain literature: all books analyzed were obtained in plain text format through Project Gutenberg's online repository of classic books. We tested three machine learning algorithms with our features: Artificial Neural Network, Naïve Bayes Classifier, and Support Vector Machine. We found that certain features, such as punctuation and various suffixes, result in higher accuracy. In addition, the Support Vector Machine classifier repeatedly produces higher accuracies than the other classifiers and appears to be a far superior method of classification for authorship categorization. © 2016 IEEE.
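The feature-plus-classifier pipeline described above can be sketched with scikit-learn (a toy illustration: the two-sentence "corpus", the exact punctuation/suffix feature set, and all names are our assumptions, not the paper's setup):

```python
from sklearn.svm import SVC

PUNCT = ",.;:!?'\""
SUFFIXES = ("ing", "ed", "ly", "tion")

def features(text):
    """Toy stylometric features: overall punctuation rate plus the
    rate of words ending in each tracked suffix."""
    words = text.split()
    n = max(len(words), 1)
    punct_rate = sum(text.count(c) for c in PUNCT) / max(len(text), 1)
    suffix_rates = [sum(w.strip(PUNCT).endswith(s) for w in words) / n
                    for s in SUFFIXES]
    return [punct_rate] + suffix_rates

# Tiny made-up corpus: two "authors" with different stylistic habits.
texts_a = ["He was running, jumping; clearly, happily going.",
           "She was singing, dancing; surely, quietly moving."]
texts_b = ["The nation formation convention mention",
           "A station creation relation question"]
X = [features(t) for t in texts_a + texts_b]
y = [0, 0, 1, 1]

clf = SVC(kernel="linear").fit(X, y)
```

A real study would use book-length samples and a much richer feature vector, but the shape of the pipeline — hand-crafted stylometric features fed to an SVM — is the same.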
-
We give the theoretical foundation for finding a reject region which gives the minimum equal error rate in serial fusion based biometric verification. Given a user-specified tolerance of x percent genuine score reject rate, we prove that there exists a unique reject region inside which the false alarm rate and impostor pass rate curves overlap, and this reject region gives the minimum equal error rate. Our theory leads to new algorithms for finding reject regions, which have two key advantages over the state-of-the-art: (1) the algorithms allow the system administrator to control the proportion of genuine scores that a reject region can erroneously reject and (2) the algorithms determine reject regions directly from the scores, without the need to estimate score distributions. Our proofs do not rely on data belonging to any particular distribution, which makes them applicable to a wide range of biometric modalities including face, finger, iris, speech, gait, and keystrokes. © 2016 IEEE.
-
Smartphones, while providing users ease of access to sensitive information on the go, also present severe security risks if an attacker is able to gain access to them. To strengthen user authentication and identification on a smartphone, we develop a biometric authentication and identification system which uses the capacitive touchscreen that is featured in all current smartphones. Our methodology focuses on using the touchscreen as a sensor to capture the image of a user's ear, thumb or four fingers. We extract the capacitive raw data from the touched body part to obtain a capacitive image, and then use it to capture geometric features (e.g., length and width of a finger) and principal components. After that, we experiment with Support Vector Machine (SVM) and Random Forest (RF) classifiers to verify and also identify each user. We achieved a maximum authentication accuracy of 98.84% by four fingers with SVM, and a maximum identification accuracy of 97.61% by four fingers with RF. © 2016 IEEE.