Computer cluster

From Free net encyclopedia

[Image: a Linux computer cluster at Purdue University]

A computer cluster is a group of loosely coupled computers that work together closely, so that in many respects they can be viewed as a single computer. Clusters are commonly, but not always, connected through fast local area networks. Clusters are usually deployed to improve speed and/or reliability over that provided by a single computer, while typically being much more cost-effective than single computers of comparable speed or reliability.

Cluster categorizations

High-availability (HA) clusters

High-availability clusters are implemented primarily to improve the availability of the services the cluster provides. They operate by having redundant nodes, which are used to provide service when system components fail. The most common size for an HA cluster is two nodes, the minimum required to provide redundancy. HA cluster implementations attempt to manage the redundancy inherent in a cluster so as to eliminate single points of failure. There are many commercial implementations of high-availability clusters for many operating systems. The Linux-HA project is one commonly used free software HA package for the Linux OS.
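
The failover idea above can be sketched in a few lines. This is a hypothetical illustration (the node names, timeout value, and single-primary model are all assumptions), not how any particular HA package such as Linux-HA is implemented:

```python
import time

# Hypothetical sketch of heartbeat-based failover; the timeout and
# node names are illustrative, not from any real HA package.
HEARTBEAT_TIMEOUT = 3.0  # seconds without a heartbeat before failover

class Node:
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        """Called periodically by the node to signal it is healthy."""
        self.last_heartbeat = time.monotonic()

    def is_alive(self, now):
        return now - self.last_heartbeat < HEARTBEAT_TIMEOUT

def active_node(primary, standby, now):
    """The primary serves while healthy; otherwise the standby takes over."""
    return primary if primary.is_alive(now) else standby
```

Real HA software must also handle the harder cases this sketch ignores, such as "split brain", where both nodes believe the other has failed.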

Load balancing clusters

Load balancing clusters operate by having all workload come through one or more load-balancing front ends, which then distribute it to a collection of back end servers. Although they are implemented primarily for improved performance, they commonly include high-availability features as well. Such a cluster of computers is sometimes referred to as a server farm. There are many commercial load balancers available including Platform LSF HPC, Moab Cluster Suite and Maui Cluster Scheduler. The Linux Virtual Server project provides one commonly used free software package for the Linux OS.
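
The simplest distribution policy a front end can use is round-robin. The sketch below uses hypothetical back-end names and omits the health checks and session affinity that real load balancers add:

```python
import itertools

# Minimal round-robin front-end sketch; the back-end server names
# are hypothetical.
class LoadBalancer:
    def __init__(self, backends):
        self._cycle = itertools.cycle(backends)

    def dispatch(self, request):
        """Pick the next back end in rotation to handle the request."""
        return next(self._cycle)

lb = LoadBalancer(["web1", "web2", "web3"])
```

Round-robin works well when requests cost roughly the same; production balancers typically offer weighted and least-connections policies as well.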

High-performance (HPC) clusters

High-performance clusters are implemented primarily to provide increased performance by splitting a computational task across many different nodes in the cluster, and are most commonly used in scientific computing. One of the more popular HPC implementations is a cluster with nodes running Linux as the OS and free software to implement the parallelism. This configuration is often referred to as a Beowulf cluster. Such clusters commonly run custom programs which have been designed to exploit the parallelism available on HPC clusters. Many such programs use libraries such as MPI which are specially designed for writing scientific applications for HPC computers.

HPC clusters are optimized for workloads which require jobs or processes happening on the separate cluster computer nodes to communicate actively during the computation. These include computations where intermediate results from one node's calculations will affect future calculations on other nodes.
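
The idea of splitting one computation into chunks can be illustrated with local worker processes standing in for cluster nodes; a real HPC cluster would distribute the chunks over the network, e.g. via MPI:

```python
from multiprocessing import Pool

# Sketch of an HPC-style split: local processes stand in for cluster
# nodes. The problem (a large sum) and worker count are illustrative.
def partial_sum(bounds):
    lo, hi = bounds
    return sum(range(lo, hi))

def parallel_sum(n, workers=4):
    """Split [0, n) into chunks, sum each in parallel, combine results."""
    step = n // workers
    chunks = [(i * step, n if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

The final `sum` over partial results is the kind of inter-node combining step that distinguishes coupled HPC workloads from the independent jobs of grid computing.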

Grid computing

Grid computing or grid clusters are a technology closely related to cluster computing. The key difference between grids and traditional clusters is that grids connect collections of computers which do not fully trust each other, and hence operate more like a computing utility than like a single computer. In addition, grids typically support more heterogeneous collections of machines than clusters commonly do.

Grid computing is optimized for workloads which consist of many independent jobs or packets of work, which do not have to share data between the jobs during the computation process. Grids serve to manage the allocation of jobs to computers which will perform the work independently of the rest of the grid cluster. Resources such as storage may be shared by all the nodes, but intermediate results of one job do not affect other jobs in progress on other nodes of the grid.
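
By contrast with the coupled HPC case, a grid-style workload can be sketched as a set of jobs that never exchange intermediate results. Here local threads stand in for grid nodes, and `square()` is a hypothetical independent job:

```python
from concurrent.futures import ThreadPoolExecutor

# Grid-style workload sketch: each job depends only on its own input,
# so jobs can run on any node, in any order, with no communication.
# square() is a hypothetical stand-in for a real independent job.
def square(job_input):
    return job_input * job_input

def run_jobs(inputs, workers=4):
    """Farm independent jobs out to a pool and collect their results."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(square, inputs))
```

Because no job reads another job's output, a grid scheduler is free to assign work to whichever machines happen to be idle.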

High-performance cluster implementations

The TOP500 organization publishes a list of the 500 fastest computers twice a year, which usually includes many clusters. TOP500 is a collaboration between the University of Mannheim, the University of Tennessee, and the National Energy Research Scientific Computing Center at Lawrence Berkeley National Laboratory. As of November 2005, the top supercomputer is the Department of Energy's BlueGene/L system with a performance of 280.6 TFlops. Second place is held by another BlueGene/L system with a performance of 91.29 TFlops.

Clustering can provide significant performance benefits versus price. The System X supercomputer at Virginia Tech, the twentieth most powerful supercomputer on Earth as of November 2005, is a 12.25 TFlops computer cluster of 1100 Apple Xserve G5 2.3 GHz dual-processor machines (4 GB RAM, 80 GB SATA HD) running Mac OS X. The cluster initially consisted of Power Mac G5s; the Xserves are smaller, reducing the size of the cluster. The total cost of the previous Power Mac system was $5.2 million, a tenth of the cost of slower mainframe supercomputers. The Power Mac G5s were sold off.

The central concept of a Beowulf cluster is using COTS computers to produce a cost-effective alternative to a traditional supercomputer. One project that took this to an extreme was the Stone Soupercomputer.

John Koza has the largest computer cluster owned by an individual.

Cluster history

The first commodity clustering product was ARCnet, developed by Datapoint in 1977. ARCnet was not a commercial success, and clustering did not really take off until DEC released its VAXcluster product in the 1980s for the VAX/VMS operating system. The ARCnet and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. They were intended to provide the advantages of parallel processing while maintaining data reliability and uniqueness. VAXcluster, now VMScluster, is still available on OpenVMS systems from HP running on Alpha and Itanium systems.

The history of cluster computing is intimately tied to the evolution of networking technology. As networking technology has become cheaper and faster, cluster computers have become significantly more attractive.

Cluster technologies

MPI is a widely available message-passing interface that enables parallel programs to be written in C and Fortran; it is used, for example, in the weather-research model MM5.

The GNU/Linux world supports various cluster software: for application clustering there are packages such as distcc and MPICH; Linux Virtual Server and Linux-HA provide director-based clusters that distribute incoming requests for services across multiple cluster nodes; and MOSIX, openMosix, Kerrighed, and OpenSSI are full-blown clusters integrated into the kernel that provide automatic process migration among homogeneous nodes.

DragonFly BSD, a recent fork of FreeBSD 4.8, is being redesigned at its core to enable native clustering capabilities. It also aims to achieve single-system-image capabilities.

MSCS is Microsoft's high-availability cluster service for Windows. Based on technology developed by Digital Equipment Corporation, the current version supports up to eight nodes in a single cluster, typically connected to a SAN. A set of APIs supports cluster-aware applications, while generic templates provide support for applications that are not cluster-aware.

