GeForce 256

The GeForce 256 (codenamed NV10), often known simply as the GeForce, was the first of NVIDIA's "GeForce" product line. Released in August 1999, the GeForce 256 improved on its predecessor, the RIVA TNT2, by increasing the number of fixed pixel pipelines, offloading host geometry calculations to a hardware transform and lighting (T&L) engine, and adding hardware motion compensation for MPEG-2 video.

Industry leader

With its industry-leading 3D feature set and rendering speed, the GeForce 256 delivered a mighty blow to the competition and cemented NVIDIA's position as a leader in the PC graphics industry. NVIDIA's success came at the expense of 3dfx, Matrox, and S3 Graphics. A few months after the GeForce 256 launched, S3 rolled out the Savage 2000 (sold on boards such as the Diamond Viper II), which also had hardware T&L and was supposedly cheaper to produce than the GeForce 256. However, poor drivers prevented the Savage 2000's T&L unit from functioning, and S3 made no attempt to correct them. One year after the GeForce 256's introduction, only ATI, with its comparable Radeon series, remained in direct competition with NVIDIA in the discrete graphics chipset market.

World's first 'GPU'

Upon introduction, the GeForce 256 was marketed as "the world's first 'GPU', or Graphics Processing Unit," a term NVIDIA had just coined and defined as "a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second." The term was intended to set the chip apart from professional graphics cards with on-board (but separate) geometry processors, and from earlier, far less powerful products with on-chip T&L support such as the 3Dlabs Permedia 2 (which featured an on-chip GLint MX geometry processor). It was also intended to highlight the newly added support for hardware T&L in DirectX 7 and to downplay the long-standing support for T&L hardware acceleration in OpenGL.

Hardware T&L

NVIDIA was one of the first graphics chip manufacturers to integrate hardware transform and lighting into its chip, a stage of the rendering pipeline previously performed either on the host system's CPU or by a separate processor on the board (as on older workstation cards). Geometry (triangle) setup had been integrated into the more powerful graphics chips since the 3dfx Voodoo2 and Rendition Verite, but hardware T&L was a significant step forward because it took over a tremendous amount of work that would otherwise have to be done by the main CPU.
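To make concrete what the T&L stage offloads, the following is a minimal, illustrative sketch of the per-vertex work: a 4x4 matrix transform followed by simple directional (Lambertian) diffuse lighting. This is not NVIDIA's actual hardware pipeline; all function names and values here are hypothetical.

```python
# Sketch of the per-vertex work a hardware T&L unit takes over from the CPU:
# a 4x4 model-view-projection transform plus one directional diffuse light.

def transform(mvp, v):
    """Multiply a 4x4 matrix (list of rows) by a 4-component vertex."""
    return [sum(mvp[r][c] * v[c] for c in range(4)) for r in range(4)]

def light(normal, light_dir, diffuse):
    """Lambertian diffuse: scale the color by max(N . L, 0)."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return [max(n_dot_l, 0.0) * c for c in diffuse]

# Example: identity transform leaves the vertex unchanged; a surface facing
# the light receives the full diffuse color.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
pos = transform(identity, [1.0, 2.0, 3.0, 1.0])
col = light([0.0, 0.0, 1.0], [0.0, 0.0, 1.0], [1.0, 0.5, 0.25])
```

Doing this for every vertex of every frame in software is exactly the CPU load that the GeForce 256's T&L engine removed.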

At the time, several hardware reviews promoted the new feature. The GeForce 256's raw rendering performance far surpassed existing high-end graphics cards, including the RIVA TNT2, the ATI Rage 128, the 3dfx Voodoo 3, and the Matrox Millennium G400 MAX. However, with little application support in 1999, critics contended that the T&L technology had little real-world value; it was only somewhat beneficial in a few 3D first-person shooters of the time, most notably Quake III Arena and Unreal Tournament. 3dfx and others countered that a fast CPU would make up for the lack of a T&L unit. The GeForce 256 was also extremely expensive, and its average-to-poor performance in non-gaming consumer applications confined it to a niche as a "gamer's card." Only after the GeForce 256 was replaced by the GeForce 2 line did hardware T&L become a recommended feature for current games; the GeForce 2 MX offered much of the GeForce 256's performance for half the price.

Quadro

Based on the GeForce 256, NVIDIA also produced the Quadro for professional workstations; the Quadro had special features not found in its consumer-oriented GeForce counterpart. However, the first Quadro was undercut by its GeForce sibling: many professionals found that the GeForce 256 could handle workstation applications reasonably well at a fraction of the Quadro's cost. Although the GeForce 256 was expensive as a gamer's card, it was a bargain for professionals, earning it a reputation as the "poor man's workstation card".

Longevity

The GeForce 256 and its GeForce 2 derivatives enjoyed long-lived popularity in the gaming market, due in part to their hardware T&L. Competitors from 1999 and 2000, such as the ATI Rage 128, the Voodoo 3 and 5 series, the Matrox G400, and the STMicroelectronics-manufactured PowerVR Kyro, became obsolete relatively early in their lifespans, partly because they lacked built-in T&L.

The GeForce 256 and its close offshoots, the GeForce 2 and GeForce4 MX, support only the fixed-function DirectX 7.0 pipeline, not the programmable shaders of DirectX 8.0 and 9.0. However, the widespread popularity of the other NV1x family members, the GeForce 2 and GeForce4 MX, ensured that the GeForce 256 remained supported in games released as late as 2004.

Specifications

  • Core clock: 120 MHz
  • Memory clock: 166 MHz (150 MHz for the GeForce 256 DDR version)
  • Pipeline configuration: 4 pixel pipelines with 1 texture unit each; NVIDIA Shading Rasterizer
  • Graphics core: 256-bit
  • Memory interface: 128-bit
  • Triangles per second: 15 million
  • Pixels per second: 480 million
  • Memory: up to 64 MB

The common configuration was 32 MB of SDRAM (mid-to-high end) or DDR SDRAM (high end), with DDR giving substantially better performance, especially at higher resolutions, thanks to nearly double the memory bandwidth. 64 MB versions were made by several third-party manufacturers but were extremely rare.
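The "nearly double the bandwidth" figure can be sanity-checked against the specifications above. The following back-of-the-envelope calculation gives rounded theoretical peaks, not measured figures:

```python
# Theoretical peak memory bandwidth from the listed specifications:
# a 128-bit bus moves 16 bytes per transfer; SDR performs one transfer
# per clock, DDR two.

def bandwidth_gb_s(clock_mhz, bus_bits, transfers_per_clock):
    return clock_mhz * 1e6 * (bus_bits / 8) * transfers_per_clock / 1e9

sdr = bandwidth_gb_s(166, 128, 1)  # SDR model at 166 MHz
ddr = bandwidth_gb_s(150, 128, 2)  # DDR model at 150 MHz

print(f"SDR: {sdr:.2f} GB/s")  # ~2.66 GB/s
print(f"DDR: {ddr:.2f} GB/s")  # 4.80 GB/s

# The listed pixel fill rate is likewise consistent with the pipeline count:
pixels_per_s = 4 * 120  # 4 pixel pipelines x 120 MHz = 480 Mpixels/s
```

At roughly 1.8x the SDR model's bandwidth, the DDR card's advantage matches the "nearly double" claim, and illustrates why the SDR model in particular struggled to feed its four pipelines.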

As studies of the NV1x architecture progressed, it became clear that the GeForce 256 was severely memory-bandwidth constrained, especially the SDR SDRAM model, since it had few bandwidth-saving mechanisms (compare ATI's HyperZ). The NV1x architecture was perhaps the most memory-bandwidth-limited GPU line ever produced and, as a result, never approached its theoretical pixel/texel throughput. The GeForce 256 DDR, with its 4x1 architecture and DDR memory, was perhaps the least constrained of the GeForce 256 and GeForce 2 cards. The later GeForce4 MX (NV17) line was far more efficient while still belonging to the NV1x family; as a result, the less "brute force" GeForce4 MX 440/460 cards could outperform even the GeForce 2 Ultra.
