Asus V8440 TD and V8460 Ultra TD in review: Two 'Ti'tans head to head
- 1 Introduction
- 2 The cards
- 3 Scope of delivery
- Drivers and tools
- 4 Technical details
- 6 Test system
- 7 Synthetic benchmarks
- 10 Game benchmarks
- 15 FSAA Performance
- 18 Anisotropic Filter
- 20 FSAA and AF combined
- Image quality
- 21 Conclusion
A lot has been speculated in advance, and much has already been covered in one (p)review or another, including our own short preview. We want to repeat a few important details here.
Features of the GeForce4 Ti series
- nFiniteFX Engine II - Programmable pixel and vertex shaders
- Accuview Anti-Aliasing - Multisampling Anti-Aliasing Hardware
- Lightspeed Memory Architecture II - Bandwidth-saving measures
- nView Display Technology - Independent control of multiple displays
- Shadow buffer and real-time shadows in hardware
- Bump and environment mapping in hardware
- DirectX and S3TC texture compression
- High Performance 2D Rendering Engine
- High Quality HDTV/DVD playback
- High Definition Video Processor
- DirectX 8.1 Support
- OpenGL 1.3 Support
First, a table overview of the different models:
As can already be guessed here, the differences between the Ti 4200 and the Ti 4400 should remain fairly small. Only cards based on the GeForce4 Ti 4600, such as the Asus V8460 Ultra, can set themselves apart from the lower models more clearly on paper.
In principle, these are exactly the pixel and vertex shaders that brought programmable transform and lighting effects to the consumer market with the GeForce3 almost twelve months ago. Here is a brief explanation of the original nFiniteFX engine: it essentially consists of the so-called pixel and vertex shaders, which manipulate the individual pixels and the associated texture operations on the one hand, and the underlying geometry data on the other, similar in role to the hardware T&L unit of the old GeForce2 series.
The big and decisive difference between DirectX 7 T&L (GeForce2) and DirectX 8 T&L (GeForce3) is that the old hardware only allowed a handful of effects and operations that were fixed by nVidia (and DirectX). With the new shaders, every game developer could for the first time freely program, within certain limits, the operations important to him, and not only for the geometry, i.e. the triangle structure of a scene, but also for each individual pixel. In the end this means that, as the name subtly suggests, almost any mathematical effect is possible with the textures and polygons used. Six months later, ATi went one better with its Radeon 8500, which handles the number of possible (dependent) texture operations and color values somewhat more flexibly.
In contrast to the GeForce3, the GeForce4 now has a second vertex shader available, which should certainly account for a large part of the additional six million transistors and provide a good boost in geometry processing. 'Should' because, based on the figures known from the GeForce3 (approx. 50M vertices/s per vertex shader, or 25M triangles/s under DirectX 7), we would have expected a slightly higher geometry performance than the officially stated 136M vertices/s. From a purely mathematical point of view, about 150M vertices/s should be possible, considering that the vertex shader is now present twice and clocked 50% higher. nVidia also officially claims up to three times the vertex shader performance; unfortunately, the data needed to verify this, i.e. the corresponding figures for the GeForce3, are not available.
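The back-of-the-envelope calculation above can be written out explicitly. This is a minimal sketch based only on the figures cited in the text (50M vertices/s per GeForce3 vertex shader, two shader units, a 50% higher clock); the clock values are the commonly quoted 200 MHz (GeForce3) and 300 MHz (Ti 4600) core clocks, not official throughput data:

```python
# Rough estimate of GeForce4 Ti 4600 vertex throughput, using the
# assumptions from the text (not official nVidia specifications).

GF3_VERTEX_RATE = 50e6   # ~50M vertices/s per vertex shader on the GeForce3
GF3_CLOCK = 200          # MHz, GeForce3 core clock (assumed)
GF4_CLOCK = 300          # MHz, GeForce4 Ti 4600 core clock, i.e. 50% higher
GF4_SHADERS = 2          # the GeForce4 carries two vertex shader units

# Scale the per-shader rate by the unit count and the clock ratio.
estimate = GF3_VERTEX_RATE * GF4_SHADERS * (GF4_CLOCK / GF3_CLOCK)

print(f"Estimated: {estimate / 1e6:.0f}M vertices/s")  # ~150M vertices/s
print("Officially stated: 136M vertices/s")
```

The gap between the 150M estimate and the official 136M figure is exactly the discrepancy the text remarks on.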
The next point concerns the now more powerful pixel shader. nVidia speaks of 'Advanced Pixel Shader Pipelines', which are said to work up to 50% faster than a GeForce3. This corresponds exactly to the clock increase of 100 MHz, i.e. 50%, and suggests the unit was carried over largely unchanged. It should be emphasized, however, that the speed of multiple texture operations, more precisely the so-called texture lookups and dependent texture reads, is said to have increased significantly. Thanks to these improvements, among other things, nVidia can now advertise support for pixel shaders in version 1.3, whereas the GeForce3 (Ti) had to make do with version 1.1.
Pixel Shader 1.4 remains a domain of ATi's Radeon 8500.
On the next page: LMA-II