On the Design of Adaptive and Decentralized Load Balancing Algorithms with Load Estimation for Computational Grid Environments

Bibliographic Details
Published in: IEEE Transactions on Parallel and Distributed Systems, Vol. 18, No. 12, pp. 1675-1686
Main Authors: Shah, R., Veeravalli, B., Misra, M.
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2007
ISSN: 1045-9219, 1558-2183, 2161-9883
DOI: 10.1109/TPDS.2007.1115

More Information
Summary: In this paper, we address several issues that are imperative to grid environments, such as handling resource heterogeneity and sharing, communication latency, job migration from one site to another, and load balancing. We address these issues by proposing two job migration algorithms, MELISA (modified ELISA) and LBA (load balancing on arrival). The algorithms differ in the way load balancing is carried out and are shown to be efficient in minimizing the response time on large-scale and small-scale heterogeneous grid environments, respectively. MELISA, which is applicable to large-scale systems (that is, interGrid), is a modified version of ELISA in which we consider the job migration cost, resource heterogeneity, and network heterogeneity when load balancing is considered. The LBA algorithm, which is applicable to small-scale systems (that is, intraGrid), performs load balancing by estimating the expected finish time of a job on buddy processors on each job arrival. Both algorithms estimate system parameters such as the job arrival rate, CPU processing rate, and load on the processor, and balance the load by migrating jobs to buddy processors, taking into account the job transfer cost, resource heterogeneity, and network heterogeneity. We quantify the performance of our algorithms using several influencing parameters such as the job size, data transfer rate, status exchange period, and migration limit, and we discuss the implications of the performance and the choice of our approaches.
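To make the "balance on arrival" idea in the summary concrete, the sketch below shows a minimal, hypothetical version of an LBA-style decision: when a job arrives, the receiving node estimates the expected finish time on itself and on each buddy processor (adding a transfer delay for remote buddies) and sends the job wherever the estimate is smallest. The Node fields, the linear cost model, and all names are illustrative assumptions for this sketch, not the authors' implementation or the paper's equations.

from dataclasses import dataclass

# Hypothetical sketch of an LBA-style "load balancing on arrival" decision.
# The cost model and names are illustrative assumptions, not the paper's code.

@dataclass
class Node:
    name: str
    service_rate: float   # work units the CPU processes per second
    queued_work: float    # outstanding work already queued (work units)
    link_rate: float      # data transfer rate to this node (MB/s); unused for the local node

def expected_finish_time(node: Node, job_size_mb: float, job_work: float, remote: bool) -> float:
    """Estimate when the arriving job would finish on `node`:
    optional transfer delay + time to drain the existing queue + the job's own service time."""
    transfer = job_size_mb / node.link_rate if remote else 0.0
    queue_delay = node.queued_work / node.service_rate
    service = job_work / node.service_rate
    return transfer + queue_delay + service

def dispatch(local: Node, buddies: list[Node], job_size_mb: float, job_work: float) -> Node:
    """Send the arriving job to the node with the smallest estimated finish time."""
    best, best_t = local, expected_finish_time(local, job_size_mb, job_work, remote=False)
    for b in buddies:
        t = expected_finish_time(b, job_size_mb, job_work, remote=True)
        if t < best_t:
            best, best_t = b, t
    return best

# Example: a heavily loaded local node versus two buddies with different link speeds.
local = Node("local", service_rate=1.0, queued_work=8.0, link_rate=0.0)
buddies = [Node("buddy-1", service_rate=2.0, queued_work=3.0, link_rate=10.0),
           Node("buddy-2", service_rate=1.5, queued_work=1.0, link_rate=2.0)]
target = dispatch(local, buddies, job_size_mb=50.0, job_work=2.0)
print(f"Job dispatched to {target.name}")

Per the summary, LBA performs this comparison on every job arrival; the same finish-time comparison could instead be refreshed only at status-exchange intervals, which reflects the different ways the two algorithms carry out load balancing.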