Introducing the GeForce4: nVidia strikes back
nFiniteFX Engine II
In principle, this is the same programmable pixel and vertex shader unit that debuted in the GeForce3 almost 12 months ago and brought programmable lighting effects to the consumer market. A brief recap of the original nFiniteFX engine: it essentially consists of the so-called pixel and vertex shaders, which manipulate the individual pixels and their associated texture operations on the one hand, and the underlying geometry data on the other, much like the hardware T&L unit of the old GeForce2 series did.
The big and decisive difference between DirectX 7 T&L (GeForce2) and DirectX 8 T&L (GeForce3) is that the old hardware offered only a handful of effects and operations, fixed in advance by nVidia (and DirectX). With the new shaders, every game developer could for the first time freely program, within certain limits, the operations that mattered to him, and not only for the geometry, i.e. the triangle structure of a scene, but also for each individual pixel. In the end this means that, as the name subtly suggests, almost any mathematical effect is possible with the textures and polygons involved. Six months later, ATi's Radeon8500 proved that it could be done a little better still, handling the number of possible (dependent) texture operations and color values even more flexibly.
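The difference can be sketched in pseudocode. The following is only an illustration of the principle, not real GPU or DirectX code: with fixed-function T&L the transform and lighting formula are baked into the hardware, while a DirectX 8 vertex shader lets the developer supply the per-vertex math himself (the "wobble" displacement here is a purely hypothetical example effect).

```python
import math

def transform(mvp, v):
    # Multiply a 4x4 matrix (list of rows) by a homogeneous vertex.
    return [sum(mvp[r][c] * v[c] for c in range(4)) for r in range(4)]

def fixed_function_tnl(v, mvp, normal, light_dir):
    """DirectX-7-style T&L (GeForce2): the transform and the diffuse
    lighting formula are hard-wired; the application can only feed in
    parameters (matrix, light direction), never change the math."""
    position = transform(mvp, v)
    diffuse = max(sum(n * l for n, l in zip(normal, light_dir)), 0.0)
    return position, diffuse

def programmable_vertex_shader(v, mvp, normal, time):
    """DirectX-8-style vertex shader (GeForce3/4): the per-vertex math
    itself is written by the developer. Here a hypothetical 'wobble'
    effect displaces each vertex along its normal before transforming."""
    offset = 0.1 * math.sin(time + v[0])
    displaced = [v[i] + normal[i] * offset for i in range(3)] + [v[3]]
    return transform(mvp, displaced)
```

The point is not the specific effect but that the second function's body is freely replaceable by the developer, which is exactly what the fixed-function pipeline forbade.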
In contrast to the GeForce3, the GeForce4 now has a second vertex shader, which should account for a large part of the additional 6 million transistors and provide a good boost in geometry processing. 'Should', because based on the figures known from the GeForce3 (approx. 50M vertices/s per vertex shader, or 25M triangles/s in DirectX 7 terms), we would have expected a slightly higher geometry performance than the officially stated 136M vertices/s. Purely mathematically, about 150M vertices/s should be possible, considering that the vertex shader is now present twice and clocked 50% higher. Officially, nVidia also claims up to three times the vertex shader performance; the data needed to verify this, i.e. corresponding figures for the GeForce3, are unfortunately not available.
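The back-of-the-envelope estimate can be reproduced. The baseline figure (one GeForce3 vertex shader at 200MHz delivering roughly 50M vertices/s) is the assumption stated above; the 50% clock increase corresponds to the Ti4600's 300MHz core versus the GeForce3's 200MHz.

```python
# Rough estimate of GeForce4 vertex throughput from GeForce3 figures.
gf3_vertex_rate = 50e6   # vertices/s for one GeForce3 vertex shader (assumed)
gf3_clock = 200          # MHz, GeForce3 core clock
gf4_clock = 300          # MHz, GeForce4 Ti4600 core clock (50% higher)
num_shaders = 2          # the GeForce4 doubles the vertex shader unit

estimate = gf3_vertex_rate * num_shaders * (gf4_clock / gf3_clock)
print(f"{estimate / 1e6:.0f}M vertices/s")  # about 150M, vs. 136M official
```

That the official figure of 136M falls short of this linear scaling suggests some per-unit overhead, or simply conservative marketing numbers.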
The next point concerns the pixel shader, which has also become more powerful. nVidia speaks of 'Advanced Pixel Shader Pipelines', which are said to work up to 50% faster than the GeForce3's. This corresponds exactly to the 100MHz clock increase and suggests a largely unchanged design. An implementation of pixel shader version 1.4, which ATi's Radeon8500 has offered for more than half a year and which many had expected and hoped for, does not seem to be desired by the Californians, as it would amount to admitting that the GeForce3 is technologically behind the Radeon8500. Instead, the attempt is apparently being made to stifle this generally welcome fruit of competition by means of market dominance. Either that, or incorporating the changes relative to versions 1.0-1.3, which the GeForce3 already supports, would have required such an extensive redesign of the entire pixel shader that it could not be managed in the short time available.
This feature is omitted entirely from the GeForce4 MX series, which has to make do with a hardware transform and lighting unit modeled on that of the GeForce2. Thanks to the consistently higher clock rates it will be more powerful than before, but it remains essentially unchanged.
On the next page: Light-Speed Memory Architecture II