New details about the GeForce4

At xBit-Labs there is once again fresh speculation about the GeForce4, which nVidia intends to present on February 5th of this year. What sets these new rumors apart from previous ones is that they do not speculate about timing and names, but instead give details of the partly new features.

As every one of our readers knows by now, the GeForce4 will be presented publicly on February 5th, so definitive details will have to wait until then. However, the fact that xBit-Labs had to remove their story, allegedly at nVidia's insistence, suggests both very clever marketing, since there was enough time for these rumors to spread across the net, and at least partial correctness of the information given there.

In detail, the following emerged from the report:

In the highest expansion stage, the memory is said to reach a size of 128MB and to be clocked at up to 325MHz DDR, which corresponds to an effective data rate of 650MHz SDR-SDRAM and yields a real memory bandwidth of 10.4GB per second.
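
As a side note, the 10.4GB per second figure can be reproduced with a quick back-of-the-envelope calculation, assuming the 128 bit memory interface already used by the GeForce3 (the bus width is not mentioned in the report itself); a minimal sketch in Python:

    # Quick check of the claimed bandwidth.
    # The 128 bit bus width is an assumption, not stated in the xBit-Labs report.
    memory_clock_mhz = 325                        # physical DDR clock
    effective_rate_mhz = memory_clock_mhz * 2     # DDR transfers twice per clock -> 650MHz effective
    bus_width_bits = 128                          # assumed, as on the GeForce3
    bytes_per_transfer = bus_width_bits // 8      # 16 bytes per transfer
    bandwidth_gb_s = effective_rate_mhz * 1e6 * bytes_per_transfer / 1e9
    print(bandwidth_gb_s)                         # 10.4 GB/s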

Otherwise the architecture of the rendering pipelines remains the same, with 2 texture units for each of the four pipelines; only the Lightspeed Memory Architecture is said to have been made more efficient through the QuadCache architecture and second-generation occlusion culling, so that both absolutely and relatively more usable bandwidth is available than was the case with the GeForce3.
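
For illustration, the theoretical fill rates follow directly from this pipeline layout. The report does not name a core clock, so the 300MHz used in the sketch below is purely a placeholder assumption:

    # Theoretical fill rates from the pipeline layout described above.
    # The core clock is NOT given in the report; 300MHz is a placeholder assumption.
    pipelines = 4
    texture_units_per_pipeline = 2
    core_clock_mhz = 300                          # hypothetical value for illustration only
    pixel_fill = pipelines * core_clock_mhz                                # MPixel/s
    texel_fill = pipelines * texture_units_per_pipeline * core_clock_mhz  # MTexel/s
    print(pixel_fill, texel_fill)                 # e.g. 1200 MPixel/s, 2400 MTexel/s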

Thanks to the higher available bandwidth, a new anti-aliasing method called AccuView AA has been implemented to eliminate staircase effects, of which only vague details, such as a new subpixel grid, are mentioned.
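
Since the report gives no specifics, the following is only a generic illustration of how grid-based anti-aliasing smooths staircase edges by averaging several subpixel samples per pixel; the actual AccuView sample positions and method remain unknown:

    # Generic illustration of grid-based anti-aliasing (NOT the actual AccuView
    # method; its subpixel grid is not disclosed in the report).
    # The pixel color is the average of several subpixel samples, which softens
    # the staircase effect along polygon edges.
    def shade_pixel(x, y, inside_triangle, fg, bg, grid=2):
        samples = []
        for i in range(grid):
            for j in range(grid):
                sx = x + (i + 0.5) / grid         # assumed ordered-grid sample positions
                sy = y + (j + 0.5) / grid
                samples.append(fg if inside_triangle(sx, sy) else bg)
        return sum(samples) / len(samples)

    # Example: an edge at x = 0.6 crossing the pixel -> partially covered pixel
    print(shade_pixel(0.0, 0.0, lambda sx, sy: sx < 0.6, fg=1.0, bg=0.0))  # 0.5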

Furthermore, older rumors attributing a second vertex shader to the GeForce4, alongside the first, appear to be confirmed; in conjunction with the higher clock rate this is supposed to deliver triple the vertex shader performance (not to be confused with overall performance!) of an original GeForce3, with a 50% share of that gain coming purely from the higher clock rate. Not a word is said about a possible adaptation of the shaders to the 1.4 standard supported by ATi. For marketing reasons that would certainly be inopportune, as it would amount to admitting that in the (then) mid-range segment the GeForce3 is technologically inferior to the Radeon8500. It is possible, however, that the GeForce3 line will be completely replaced by the GeForce4 and its MX derivatives, in which case this argument would be moot again ...
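
The claimed tripling of vertex shader throughput is easy to reconstruct under these assumptions: doubling the number of vertex shader units and raising the clock by 50% multiply to a factor of three. A minimal sketch:

    # Reproducing the claimed tripling of vertex shader throughput.
    # The 50% higher clock is an assumption derived from the report's wording.
    geforce3_units = 1
    geforce4_units = 2                            # second vertex shader per the rumor
    clock_factor = 1.5                            # assumed 50% higher clock rate
    speedup = (geforce4_units / geforce3_units) * clock_factor
    print(speedup)                                # 3.0 -> "triple vertex shader performance"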

One last detail of the xBit-Labs story is the mention of depth-correct bump mapping, which unfortunately is not explained in more detail.

As with the last outbreak of rumors, we will know more by then!