3Dc & 16x Anisotropic Filtering
3Dc - Compression

Almost any... well, any graphics card nowadays makes use of texture compression technology. It's been discussed here on more than one occasion; I'm sure you recognize terms like S3TC and DXTC. Basically, you reduce the byte size of a texture while maintaining the best quality possible. However, compression introduces artifacts and thus image degradation at some point. 3Dc is a compression technology designed to bring out fine details in games while minimizing memory usage. It's the first compression technique optimized to work with normal maps, which allow fine per-pixel control over how light reflects off a textured surface. With up to 4:1 compression possible, game designers can now include up to four times the detail without changing the amount of graphics memory required and without impacting performance.
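To give an idea of why a two-channel format suits normal maps so well: because normals are unit-length vectors, only the X and Y components need to be stored and the Z component can be recomputed per pixel in the shader. Below is a minimal C sketch of that reconstruction; the function name and the unsigned 8-bit texel inputs are my own illustrative assumptions, not ATI's code.

/* Sketch: reconstructing a surface normal from a two-channel (X, Y)
 * normal map, as a 3Dc-aware pixel shader would do per pixel. */
#include <math.h>

typedef struct { float x, y, z; } vec3;

vec3 reconstruct_normal(unsigned char tx, unsigned char ty)
{
    vec3 n;
    n.x = tx / 255.0f * 2.0f - 1.0f;   /* remap [0,255] to [-1,1] */
    n.y = ty / 255.0f * 2.0f - 1.0f;
    /* Unit-length normals satisfy x^2 + y^2 + z^2 = 1, so the dropped
     * Z component can be recomputed instead of stored. */
    float zz = 1.0f - n.x * n.x - n.y * n.y;
    n.z = zz > 0.0f ? sqrtf(zz) : 0.0f;
    return n;
}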
Let's look at what ATI has to say and analyse this a bit.
3Dc is a new compression technology developed by ATI, and introduced in the new RADEON X800 series of Visual Processing Units. This technology is designed to allow game developers to pack more detail into real time 3D images than ever before, making 3Dc a key enabler of the HD Gaming vision.
A close-up of the Optico character from ATI's Double Cross demo showing the increase in fine detail made possible by 3Dc compression.
Today's graphics processors rely heavily on data and bandwidth compression techniques to reach ever increasing levels of performance and image quality. Rendering a scene in a modern 3D application requires many different kinds of data, all of which must compete for bandwidth and space in the local memory of the graphics card. For games, texture data tends to be the largest consumer of these precious resources, making it one of the most obvious targets for compression. A set of algorithms known as DXTC (DirectX Texture Compression) has been widely accepted as the industry standard for texture compression techniques.
Introduced in 1999, along with S3TC (its counterpart for the OpenGL API), DXTC has been supported by all new graphics hardware for the past few years, and has seen widespread adoption by game developers. It has proven particularly effective for compressing two-dimensional arrays of color data stored in RGB and RGBA formats. With the appearance of graphics hardware and APIs that support programmable pixel shaders in recent years, researchers and developers have come up with a variety of new uses for textures. A variety of material properties such as surface normals, shininess, roughness and transparency can now be stored in textures along with the traditional color information.
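To put a number on how much DXTC actually saves, here is a minimal C sketch of the DXT1 block layout. The struct and names are my own illustration, but the 4x4 texel grouping and 8-byte block size are part of the format, which is where the commonly quoted 6:1 ratio for opaque RGB data comes from.

/* Sketch: the DXT1/S3TC block layout. Each 4x4 texel block stores two
 * 16-bit endpoint colors and a 2-bit index per texel that selects one
 * of four colors interpolated between the endpoints. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint16_t color0;      /* endpoint 0, RGB 5:6:5  */
    uint16_t color1;      /* endpoint 1, RGB 5:6:5  */
    uint32_t indices;     /* 16 texels x 2 bits     */
} dxt1_block;             /* 8 bytes total          */

int main(void)
{
    size_t uncompressed = 4 * 4 * 3;           /* 48 bytes of 24-bit RGB */
    size_t compressed   = sizeof(dxt1_block);  /* 8 bytes                */
    printf("DXT1 ratio: %zu:1\n", uncompressed / compressed);  /* 6:1   */
    return 0;
}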
Well then, there are some negatives to using normal maps. One that is very easy to explain is the load on the graphics processor: it will increase. Another is that a larger amount of data is required. The more detail the developer wishes to include, the higher the resolution of the normal map has to be, and the more memory and bandwidth are needed. ATI therefore developed 3Dc, which compresses normal maps up to 4:1 without any significant loss of quality. The new X800 range and upwards incorporates this technology; whether it will be included in DirectX remains a mystery. Developers can work around that by shipping some sort of add-on or patch, just like we saw with Unreal when it started to support S3TC.
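For the curious, here is a rough idea of how a 3Dc-style block could be encoded: each 4x4 block stores, for each of its two channels, the block's minimum and maximum plus a 3-bit index per texel selecting one of eight values interpolated between them. This is a simplified sketch under my own assumptions; the names and the naive rounding are illustrative, and real hardware packs the 3-bit indices tightly.

/* Sketch of a naive 3Dc-style encoder for one channel of a 4x4 block.
 * 3Dc stores two such channels (X and Y) per block. */
#include <stdint.h>

typedef struct {
    uint8_t lo, hi;       /* block minimum and maximum                     */
    uint8_t index[16];    /* 3-bit indices (packed into 6 bytes in hardware) */
} channel_block;

channel_block encode_channel(const uint8_t texel[16])
{
    channel_block b = {255, 0, {0}};
    for (int i = 0; i < 16; i++) {            /* find the block's range */
        if (texel[i] < b.lo) b.lo = texel[i];
        if (texel[i] > b.hi) b.hi = texel[i];
    }
    int range = b.hi - b.lo ? b.hi - b.lo : 1;
    for (int i = 0; i < 16; i++)              /* snap each texel to the
                                                 nearest of 8 steps      */
        b.index[i] = (uint8_t)(((texel[i] - b.lo) * 7 + range / 2) / range);
    return b;
}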
Anisotropic Filtering
So ATI has now revised anisotropic filtering a bit. The X800 supports up to 16x AF, with selectable levels of 2, 4, 8, or 16 texture samples per pixel. The user can select bilinear or trilinear filtering by choosing either performance or quality mode in the driver properties. You already read the trilinear filtering bit on the first page, right? So basically that is a mixed mode.
"SMOOTHVISION HD anisotropic filtering supports 2, 4, 8, or 16 texture samples per pixel. Each setting can be used in a performance mode that uses bilinear samples, or a quality mode that uses trilinear samples. There is also a new capability to support intermediate modes, to help strike the ideal balance between performance and quality."