FLOPS

In computing, FLOPS (or flops) is an abbreviation of FLoating point Operations Per Second. It is used as a measure of a computer's performance, especially in scientific computing, which makes heavy use of floating-point calculations. (Compare MIPS -- millions of instructions per second.) One should speak in the singular of a FLOPS and not of a FLOP, although the latter is frequently encountered; the final S stands for "second" and does not indicate a plural.

Alternatively, the singular FLOP (or flop) is used as an abbreviation for "floating-point operation", and a flop count is a count of these operations (e.g. required by a given algorithm or computer program). In this context, "flops" is simply the plural rather than a rate.
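As an illustration of a flop count, the number of operations in a dense matrix-vector product can be written down by inspection (a minimal sketch; the function name is illustrative, not standard):

```python
# Flop count of a dense n x n matrix-vector product y = A*x:
# each of the n output entries needs n multiplications and
# n - 1 additions, so the total is n*(2n - 1), i.e. about 2n^2.
def matvec_flops(n):
    return n * (2 * n - 1)

# At a sustained 1 GFLOPS, a 1000 x 1000 product takes about 2 ms:
seconds = matvec_flops(1000) / 1e9
```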

Computing devices exhibit an enormous range of performance levels in floating-point applications, so it makes sense to introduce larger units than the FLOPS. The standard SI prefixes can be used for this purpose, resulting in such units as the megaFLOPS (MFLOPS, 10⁶ FLOPS), the gigaFLOPS (GFLOPS, 10⁹ FLOPS), the teraFLOPS (TFLOPS, 10¹² FLOPS), the petaFLOPS (PFLOPS, 10¹⁵ FLOPS) and the exaFLOPS (EFLOPS, 10¹⁸ FLOPS).
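The prefixed units can be applied mechanically; the following sketch (with an illustrative function name) picks the largest applicable prefix for a raw FLOPS figure:

```python
# SI prefixes for FLOPS, largest first (a minimal sketch).
PREFIXES = [(1e18, "EFLOPS"), (1e15, "PFLOPS"), (1e12, "TFLOPS"),
            (1e9, "GFLOPS"), (1e6, "MFLOPS"), (1e3, "kFLOPS")]

def format_flops(value):
    for factor, unit in PREFIXES:
        if value >= factor:
            return "%.1f %s" % (value / factor, unit)
    return "%.1f FLOPS" % value

result = format_flops(280.6e12)  # "280.6 TFLOPS"
```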

The performance spectrum

A relatively cheap but modern desktop computer using, for example, a Pentium 4 or Athlon 64 CPU, typically runs at a clock frequency in excess of 2 GHz and provides computational performance in the range of a few GFLOPS. Even some video game consoles of the late 1990s and early 2000s, such as the Nintendo GameCube and Sega Dreamcast, had performance in excess of one GFLOPS (but see below).

An early supercomputer, the Cray-1, was installed at Los Alamos National Laboratory in 1976. The Cray-1 was capable of 80 MFLOPS (or, according to another source, 138–250 MFLOPS). In the fewer than 30 years since then, the computational speed of supercomputers has jumped a millionfold.

According to Top500.org, the fastest computer in the world as of October 2005 was the IBM Blue Gene/L supercomputer, with a measured peak of 280.6 TFLOPS. That figure is more than double the previous Blue Gene/L record of 136.8 TFLOPS, set when only half the machine was installed. Blue Gene/L (unveiled October 27, 2005) contains 131,072 processor cores, yet each of these cores is quite similar to those found in many mid-performance computers (the PowerPC 440).

Listed first on the Top500.org website, Blue Gene/L is a joint project of the Lawrence Livermore National Laboratory and IBM.

Distributed computing uses the Internet to link personal computers to achieve a similar effect: Folding@home, the most powerful distributed computing project, has been able to sustain over 200 TFLOPS. SETI@home computes data at more than 100 TFLOPS. As of June 2005, GIMPS was sustaining 17 TFLOPS, while Einstein@home was sustaining more than 50 TFLOPS of its 167 TFLOPS theoretical peak.

Pocket calculators are at the other end of the performance spectrum. Each calculation request to a typical calculator requires only a single operation, so there is rarely any need for its response time to exceed that needed by the operator. Any response time below 0.1 second is experienced as instantaneous by a human operator, so a simple calculator could be said to operate at about 10 FLOPS.

Humans are even worse floating-point processors. If it takes a person a quarter of an hour to carry out a pencil-and-paper long division with 10 significant digits, that person would be calculating in the milliFLOPS range. Bear in mind, however, that a purely mathematical test may not truly measure a human's processing rate, as a human is also processing smells, sounds, touch, sight and motor coordination; by one estimate, this takes an average human's total up to roughly 10 quadrillion operations per second (about 10 PFLOPS). [1]
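The milliFLOPS figure follows from simple arithmetic: one operation per quarter hour is on the order of a thousandth of a FLOPS.

```python
# One long division in a quarter of an hour is one floating-point
# operation per 900 seconds, i.e. roughly 1.1 milliFLOPS.
human_rate = 1 / (15 * 60)
```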

FLOPS as a measure of performance

In order for FLOPS to be useful as a measure of floating-point performance, a standard benchmark must be available on all computers of interest. One example is the LINPACK benchmark.
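The idea behind such benchmarks is simply to time a known number of floating-point operations. The sketch below is not LINPACK (which solves a dense linear system); it is a deliberately crude illustration, and in interpreted Python the loop overhead dominates, understating the hardware by orders of magnitude:

```python
import time

def crude_flops_estimate(n=1_000_000):
    # n multiply-accumulate iterations = 2*n floating-point operations.
    x = 1.0000001
    acc = 0.0
    start = time.perf_counter()
    for _ in range(n):
        acc += x * x
    elapsed = time.perf_counter() - start
    return 2 * n / elapsed
```

The gap between this number and the CPU's theoretical peak is itself part of the point: a FLOPS figure is only meaningful relative to the benchmark that produced it.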

FLOPS in isolation are arguably not very useful as a benchmark for modern computers. There are many factors in computer performance other than raw floating-point computation speed, such as I/O performance, interprocessor communication, cache coherence, and the memory hierarchy. This means that supercomputers are in general only capable of a small fraction of their "theoretical peak" FLOPS throughput (obtained by adding together the theoretical peak FLOPS performance of every element of the system). Even when operating on large, highly parallel problems, their performance will be bursty, mostly due to the residual effects of Amdahl's law. Real benchmarks therefore measure both peak and sustained FLOPS performance.
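Amdahl's law quantifies that residual serial fraction; the sketch below uses Blue Gene/L's core count as an illustration.

```python
def amdahl_speedup(parallel_fraction, n_processors):
    # Amdahl's law: the serial fraction (1 - p) caps the achievable
    # speedup no matter how many processors are added.
    p = parallel_fraction
    return 1.0 / ((1.0 - p) + p / n_processors)

# Even a workload that is 99.9% parallel gains a speedup of only
# about 1,000 on 131,072 processors -- under 1% of linear scaling.
speedup = amdahl_speedup(0.999, 131072)
```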

For ordinary (non-scientific) applications, integer operations (measured in MIPS) are far more common. Measuring floating point operation speed, therefore, does not predict accurately how the processor will perform on just any problem. However, for many scientific jobs such as analysis of data, a FLOPS rating is effective.

Historically, the earliest reliably documented serious use of the floating-point operation as a metric appears to be the AEC's justification to Congress for purchasing a Control Data CDC 6600 in the mid-1960s; the computing world has been saddled with this marginal metric ever since. If an earlier reliably documented use can be found, it should be verified with a second independent source (something more than a casual academic citation).

The terminology is currently so confusing that export control is now governed by a different measure entirely: millions of "Theoretical Operations Per Second", or MTOPS.

FLOPS, GPUs, and game consoles

Very high FLOPS figures are often quoted for inexpensive computer video cards and game consoles.

For example, the Xbox 360 has been announced as having a system floating-point performance of around one hundred GFLOPS, while the PS3 has been announced as having 218 GFLOPS. By comparison, a high-end general-purpose PC would have a rating of around ten GFLOPS if the performance of its CPU alone were considered. The 1 or 2 TFLOPS ratings that were sometimes mentioned regarding the consoles would even appear to class them as supercomputers.
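Quoted peak figures of this kind are generally derived by multiplying unit count, clock rate, and floating-point operations issued per cycle. A sketch (the numbers below are illustrative, not vendor specifications):

```python
# Theoretical peak FLOPS = units * clock (Hz) * flops per cycle.
def peak_flops(units, clock_hz, flops_per_cycle):
    return units * clock_hz * flops_per_cycle

# A hypothetical 3.2 GHz core issuing a 4-wide multiply-add
# (8 flops) every cycle would peak at 25.6 GFLOPS:
peak = peak_flops(1, 3.2e9, 8)
```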

However, these FLOPS figures should be treated with caution, as they are often the product of marketing. The game console figures are often based on total system performance (CPU + GPU). In the extreme case, the TFLOPS figure is primarily derived from the function of the single-purpose texture filtering unit of the GPU. This piece of logic is tasked with doing a weighted average of sometimes hundreds of pixels in a texture during a look-up (particularly when performing a quadrilinear anisotropically filtered fetch from a 3D texture). However, single-purpose hardware can never be included in an honest FLOPS figure.

Still, the programmable pixel pipelines of modern GPUs are capable of a theoretical peak performance an order of magnitude higher than that of a CPU. An NVIDIA 7800 GTX 512 is capable of around 200 GFLOPS, and ATI's latest X1900 architecture (February 2006) has a claimed performance of 554 GFLOPS. This is possible because 3D graphics is a classic example of a highly parallelizable problem that can easily be split between different execution units and pipelines, allowing a large speed gain from scaling the number of logic gates while exploiting the fact that the cost-efficiency sweet spot of (number of transistors) × frequency lies at around 500 MHz; the defect rate in the manufacturing process rises exponentially with frequency.

While CPUs dedicate a few transistors to running at very high frequency in order to process a single thread of execution quickly, GPUs pack in many more transistors running at a lower speed, because they are designed to process a large number of pixels simultaneously with no requirement that any individual pixel be completed quickly. Moreover, GPUs are not designed to perform branch operations (IF statements that determine what will be executed based on the value of a piece of data) well; the circuits for this, in particular the circuits for predicting how a program will branch so that data can be readied for it, consume an inordinate number of transistors on a CPU that could otherwise be used for floating-point units. Lastly, GPUs are designed to be fed a continuous stream of predetermined data, whereas CPUs access data more unpredictably and therefore include a large amount of on-chip memory, called a cache, for quick random access; this cache takes up the majority of a CPU's transistors.

The range of tasks that can be performed well on the GPU is thus somewhat limited, but some problems can take advantage of the specialized processing very well; given the order-of-magnitude advantage in raw throughput, even an implementation that uses the hardware inefficiently must be roughly ten times less efficient before it loses out overall. General-purpose computing on GPUs is an emerging field that hopes to exploit the vast advantage in raw FLOPS, as well as memory bandwidth, of modern video cards. A few applications can even take advantage of the texture fetch unit to compute averages over (1-, 2-, or 3-dimensional) sorted data for a further boost in performance.

In January 2006, ATI Technologies launched a graphics subsystem that put in excess of 1 TFLOPS within the reach of most home users. To put this achievement in perspective, less than nine years earlier the US Department of Energy had commissioned the world's first TFLOPS supercomputer, ASCI Red, consisting of more than 9,200 processor chips. The original incarnation of this machine used Intel Pentium Pro processors, each clocked at 200 MHz; these were later upgraded to Pentium II OverDrive processors.

Cost of computing

  • 1997: about US$30,000 per GFLOPS, with the two 16-processor Pentium Pro Beowulf-class computers Loki and Hyglac
  • 2000, May: $640 per GFLOPS, KLAT2, University of Kentucky
  • 2003, August: $82 per GFLOPS, KASY0, University of Kentucky
  • 2006: about $5 per GFLOPS in the Xbox 360, assuming Linux can be implemented on it as intended [2]
  • 2006, February: about $1 per GFLOPS in an ATI PC add-in graphics card (X1900 architecture)

This trend toward falling cost per GFLOPS roughly tracks Moore's law.
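Taking the first and last entries in the list above at face value (about US$30,000 per GFLOPS in 1997 and about $1 per GFLOPS in February 2006), the implied halving time for cost can be estimated:

```python
import math

# ~9.2 years to go from $30,000 to $1 per GFLOPS is about
# log2(30000) ~ 14.9 halvings, i.e. one halving every ~0.6 years.
years = (2006 + 2 / 12) - 1997
halving_time = years / math.log2(30000 / 1)
```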

Trivia

In the Star Trek fictional universe, circa 2364, the android Data was constructed with an initial linear computational speed rated at 60 trillion operations per second, or 60 TFLOPS (a figure that potentially 'dates' the series Star Trek: The Next Generation, in which he appears); however, he was later able to far exceed this limit by modifying his hardware and software. In the film Terminator 3, Skynet likewise reaches 60 TFLOPS once it spreads across the Internet.
