Commonly used graphics terminology
Some commonly used graphics terminology explained here at Guru3D.com
2D Graphics |
Displayed representation of a scene or an object along two axes of reference: height and width (x and y). |
3D Graphics |
Displayed representation of a scene or an object that appears to have three axes of reference: height, width, and depth (x, y, and z). |
3D Pipeline |
The process of 3D graphics can be divided into three stages: tessellation, geometry, and rendering. In the tessellation stage, a described model of an object is created, and the object is then converted to a set of polygons. The geometry stage includes transformation, lighting, and setup. The rendering stage, which is critical for 3D image quality, creates a two-dimensional display from the polygons created in the geometry stage. |
Alpha Blending |
A technique in graphics processing that simulates transparency or translucency for objects or layers. The real world is composed of transparent, translucent, and opaque objects, and alpha blending mimics this by adding transparency information for translucent objects. One way to implement it is to render polygons through a stipple mask with an on-off density proportional to the transparency of the object. The resultant color of a pixel is a combination of the foreground and background colors. Typically, alpha has a normalised value of 0 to 1 for each pixel: new pixel = (alpha)(pixel A color) + (1 - alpha)(pixel B color) |
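For illustration, here is a minimal C sketch of the blending formula above, assuming 8-bit colour channels and a normalised alpha value; the function and variable names are purely illustrative.

```c
#include <stdint.h>
#include <stdio.h>

/* Illustrative sketch: new pixel = alpha * (pixel A) + (1 - alpha) * (pixel B),
 * applied to one 8-bit colour channel. */
static uint8_t blend_channel(uint8_t a, uint8_t b, float alpha)
{
    return (uint8_t)(alpha * a + (1.0f - alpha) * b + 0.5f); /* round to nearest */
}

int main(void)
{
    uint8_t fg[3] = { 255, 0, 0 };  /* pixel A: red  */
    uint8_t bg[3] = { 0, 0, 255 };  /* pixel B: blue */
    float alpha = 0.5f;             /* 50% translucent */

    for (int c = 0; c < 3; c++)
        printf("channel %d -> %u\n", c, blend_channel(fg[c], bg[c], alpha));
    return 0;
}
```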
Alpha Buffer / Alpha Channel / Alpha Plane |
An extra colour channel to hold transparency information; pixels become quad values (RGBA). In a 32-bit frame buffer there are 24 bits of color, 8 each for red, green, and blue, along with an 8-bit alpha channel. |
Ambient Light |
A global level of artificial illumination that ensures all surfaces are visibly lit, particularly those without direct illumination. It functions by representing infinite diffuse reflections from all surfaces within a scene. |
Animation |
A technique providing the illusion of movement using a sequence of (rendered) still images. |
Anti-aliasing |
Anti-aliasing is sub-pixel interpolation, a technique that reduces the jagged effect of edges and makes them appear to have better resolution. |
Application Programming Interface (API) |
A standardised programming interface allowing developers to write their applications to a standard and without specific knowledge of hardware implementations. The software driver for the hardware intercepts the API instructions and translates them into specific instructions tailored to specific hardware. The Fujitsu graphics controllers' API includes all functions supported in hardware and makes it easy to port software from the Cremson up to the Carmine family. |
Atmospheric Effect |
Effects, such as fog and depth cueing, that improve the rendering of real-world environments. |
Bilinear Filtering |
Bilinear filtering is a method of anti-aliasing texture maps. A texture-aliasing artifact occurs due to sampling on a finite pixel grid: point-sampled texels jump from one pixel to another at random times. This aliasing is very noticeable on slowly rotating or moving polygons; the texture image jumps and shears along pixel boundaries. To eliminate this problem, bilinear filtering takes a weighted average of four adjacent texels to produce the value used for each screen pixel. |
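A minimal C sketch of that weighted average, assuming a greyscale texture stored as a row-major float array and an in-range sample position; the layout and names are assumptions for illustration only.

```c
#include <math.h>

/* Illustrative sketch: sample a w x h greyscale texture at a non-integer
 * position (u, v) by weighting the four surrounding texels. */
float bilinear_sample(const float *tex, int w, int h, float u, float v)
{
    int x0 = (int)floorf(u), y0 = (int)floorf(v);
    int x1 = (x0 + 1 < w) ? x0 + 1 : x0;
    int y1 = (y0 + 1 < h) ? y0 + 1 : y0;
    float fx = u - (float)x0;   /* horizontal weight */
    float fy = v - (float)y0;   /* vertical weight   */

    float top    = (1.0f - fx) * tex[y0 * w + x0] + fx * tex[y0 * w + x1];
    float bottom = (1.0f - fx) * tex[y1 * w + x0] + fx * tex[y1 * w + x1];
    return (1.0f - fy) * top + fy * bottom;
}
```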
BitBLTs |
A graphics operation in which two bitmap patterns are combined into one - and the single most important acceleration function for windowed GUI environments. A BitBLT is simply the movement of a block of data from one place to another, taking into account the special requirements and arrangements of the graphics memory. Operations handle patterns - usually square - and produce them at different locations on the screen. For example, this function is utilised every time a window is moved, in which case the BitBLT is a simple Pixel Block Transfer. More complicated cases may occur where some transformation of the source data is to occur, such as in a Color Expanded Block Transfer, where each monochromatic bit in the source is expanded to the color in the foreground or background register before being written to the display. |
Bitmap |
A Bitmap is a pixel by pixel image. |
Blending |
Blending is the combining of two or more objects by adding them on a pixel-by-pixel basis. |
Bus Mastering |
A feature of PCI buses that allows a card with this feature to retrieve data directly from system memory without any interaction with the host CPU. |
BLTengine / Blitter |
A Blitter (acronym for BLock Image TransferrER) is a part of a display controller that specialises in bitmap data transfer using BitBLT methods. |
Buffer |
Memory dedicated to a specific function or set of functions. For example: the graphics memory functions as a frame buffer, but can also be used as a Z buffer or a video buffer. Smaller buffers exist in many different places inside the display controller's memory as well and serve as temporary storage areas for data (e.g. bitmaps). |
Chroma Keying |
This is the removal of a color from one image to reveal another image "behind" it. The removed color becomes transparent. This technique is also referred to as "colour-separation overlay" ("CSO"), "greenscreen" and "bluescreen". Since not all objects are easily modeled with polygons, chroma keying is used to include complex objects in a scene as texture maps. |
Clipping |
This usually means avoiding the drawing of items outside a defined field of view (e.g. in 2D a rectangular area). |
Depth Cueing |
Depth cueing is the lowering of intensity as objects move away from the viewpoint. |
Display list |
A display list is a group of graphic commands and arguments that has been stored for subsequent execution. The list can be stored in the CPU RAM or in local graphic memory. |
Dithering |
Dithering is a technique for achieving 24-bit quality in 8 or 16-bit frame buffers. Dithering uses two colors to create the appearance of a third, giving a smooth appearance to an otherwise abrupt transition. |
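One common approach is ordered (Bayer-matrix) dithering. The C sketch below is an assumed example rather than any specific hardware's method; it reduces an 8-bit channel to 4 bits while keeping the average intensity close to the original.

```c
#include <stdint.h>

/* Illustrative sketch: 4x4 Bayer threshold matrix, values 0..15. */
static const uint8_t bayer4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 },
};

/* Quantise an 8-bit value to 4 bits at screen position (x, y). Adding the
 * position-dependent threshold before truncation makes neighbouring pixels
 * average out to roughly the original intensity. */
uint8_t dither_8to4(uint8_t value, int x, int y)
{
    int v = value + bayer4[y & 3][x & 3];
    if (v > 255) v = 255;
    return (uint8_t)(v >> 4);   /* keep the top 4 bits */
}
```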
Double Buffering |
A method of using two buffers, one for display and the other for rendering. While one of the buffers is being displayed, the other buffer is operated on by a rendering engine. When the new frame is rendered, the two buffers are switched. The viewer sees a perfect image all the time. |
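A minimal C sketch of the idea, assuming two software frame buffers and a swap performed once the new frame is complete; a real display controller would instead be reprogrammed to scan out the other buffer.

```c
#define WIDTH  640
#define HEIGHT 480

/* Illustrative sketch: one buffer is displayed while the other is drawn into. */
static unsigned int buffers[2][WIDTH * HEIGHT];
static int front = 0;   /* index of the buffer currently displayed */

unsigned int *back_buffer(void)
{
    return buffers[front ^ 1];   /* render into the hidden buffer */
}

void swap_buffers(void)
{
    /* Ideally done during vertical blank so the viewer never sees a
     * partially drawn frame. */
    front ^= 1;
}
```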
Flat Shading |
The flat shading method is also called constant shading. For rendering, it assigns a uniform color throughout an entire polygon. This shading results in the lowest quality, an object surface with a faceted appearance and a visible underlying geometry that looks 'blocky'. |
Fill rate |
The speed at which the display controller can render pixels. Usually measured in millions of pixels per second (Megapixels/sec). |
Fog / Fogging |
Fog is the blending of an object with a fixed color as its pixels become farther away from the viewpoint. It is a technique used in 3D computer graphics to enhance the perception of distance. Objects in the distance that have been "fogged out" can be computed more quickly. Fogging is primarily used in games and entertainment systems. |
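As an illustration, a simple linear fog model in C; the start/end distances and the per-channel blend are assumptions, not a description of any particular hardware.

```c
/* Illustrative sketch: how strongly a pixel at the given depth is fogged. */
float fog_factor(float depth, float fog_start, float fog_end)
{
    if (depth <= fog_start) return 0.0f;   /* no fog       */
    if (depth >= fog_end)   return 1.0f;   /* fully fogged */
    return (depth - fog_start) / (fog_end - fog_start);
}

/* Blend one colour channel of the pixel toward the fixed fog colour. */
float apply_fog(float pixel, float fog_colour, float depth,
                float fog_start, float fog_end)
{
    float f = fog_factor(depth, fog_start, fog_end);
    return (1.0f - f) * pixel + f * fog_colour;
}
```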
Frames per Second (FPS) |
The rate at which the graphics processor renders new frames, or full screens of pixels. Benchmarks and games use this metric as a measurement of a display controller's performance. A faster display controller will render more frames per second, making the application more fluid and responsive to user input. |
Gamma |
The characteristics of displays using phosphors (as well as some cameras) are nonlinear. A small change in voltage when the voltage level is low produces a change in the output display brightness level, but the same small change in voltage at a high voltage level will not produce the same magnitude of change in the brightness output. This effect, the difference between the brightness you should get and the brightness you actually measure, is known as gamma. |
Gamma Correction |
Before being displayed, linear RGB data must be processed (gamma corrected) to compensate for the gamma (nonlinear characteristics) of the display. |
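A minimal sketch of the correction in C, assuming a display gamma of roughly 2.2 and intensities normalised to the range [0, 1].

```c
#include <math.h>

/* Illustrative sketch: map a linear intensity to the non-linear value
 * actually sent to the display. */
float gamma_correct(float linear, float display_gamma)
{
    return powf(linear, 1.0f / display_gamma);   /* e.g. display_gamma = 2.2 */
}
```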
Gouraud Shading |
One of the most popular smooth shading algorithms, and named after its French originator, Henri Gouraud. Gouraud shading, or color interpolation, is a process by which color information is interpolated across the face of the polygon to determine the colors at each pixel. It assigns color to every pixel within each polygon based on linear interpolation from the polygon's vertices. This method improves the 'blocky' (see Flat Shading) look and provides an appearance of plastic or metallic surfaces. In practice, it is used to achieve smooth lighting on low-polygon surfaces without the heavy computational requirements of calculating lighting for each pixel. |
Graphics Controller / Graphics Processor / Graphics Processing Unit (GPU) |
A high-performance 2D or 3D processor that integrates the entire graphics pipeline (transformation, lighting, setup, and rendering). A GPU offloads these calculations from the CPU, freeing it for other functions such as physics and artificial intelligence. |
Graphic Display Controller (GDC) |
A GPU that integrates a flexible display controller for the connection of multiple standard displays. |
Graphics Pipeline |
The series of functions, in logical order, that must be performed to compute and display computer graphics. |
Hidden Surface Removal |
Also called visible surface determination. Because objects are collections of surfaces or solids, only those surfaces actually visible to the viewer are displayed. |
Interpolation |
Interpolation is a mathematical way of regenerating missing or needed information. For example, an image needs to be scaled up by a factor of two, from 100 pixels to 200 pixels. The missing pixels are generated by interpolating between the two pixels that are on either side of the pixel that needs to be generated. After all of the 'missing' pixels have been interpolated, 200 pixels exist where only 100 existed before, and the image is twice as big as it used to be. |
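The scaling example above can be sketched in C as follows, assuming a one-dimensional signal and simple linear interpolation between neighbouring samples.

```c
/* Illustrative sketch: double the length of a 1-D signal by inserting,
 * between every pair of source samples, a new sample interpolated
 * halfway between them. */
void upscale_2x(const float *src, int src_len, float *dst /* 2 * src_len */)
{
    for (int i = 0; i < src_len; i++) {
        float next = (i + 1 < src_len) ? src[i + 1] : src[i];
        dst[2 * i]     = src[i];                  /* original sample     */
        dst[2 * i + 1] = 0.5f * (src[i] + next);  /* interpolated sample */
    }
}
```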
Jaggies |
A slang term used to describe the stair-step effect you see along curves and edges in text or bit-mapped graphics. Anti-aliasing can smooth out jaggies. |
Layer |
A level of an image that can be edited independently from the rest of the image. Our graphics controllers support up to 8 different layers in hardware simultaneously. |
Lighting |
There are many techniques for creating realistic graphical effects to simulate a real-life 3-D object on a 2-D display. One technique is lighting. Lighting is used to create realistic-looking scenes with greater depth instead of flat-looking or cartoonish environments. |
Line Buffer |
A line buffer is a memory buffer used to hold one line of video. If the horizontal resolution of the screen is 640 pixels and RGB is used as the color space, the line buffer would have to be 640 locations long by 3 bytes wide. This amounts to one location for each pixel and each color plane. Line buffers are typically used in filtering algorithms. |
MIP Mapping |
Multum in parvo (Latin) means 'much in a small space'. MIP mapping is a technique to improve graphics performance by generating and storing multiple versions of the original texture image, each with a different level of detail. The graphics processor chooses a different mipmap based on how large the object is on the screen, so that low-detail textures can be used on objects that contain only a few pixels and high-detail textures can be used on larger objects where the user will actually see the difference. This technique saves memory bandwidth and enhances performance. |
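A hedged sketch of how a mipmap level might be chosen, assuming the renderer already knows roughly how many texels one screen pixel covers; real hardware derives this from screen-space texture-coordinate derivatives.

```c
#include <math.h>

/* Illustrative sketch: if one screen pixel covers about 2^n texels along
 * each axis, use mip level n (level 0 is the full-resolution texture). */
int choose_mip_level(float texels_per_pixel, int num_levels)
{
    if (texels_per_pixel < 1.0f) texels_per_pixel = 1.0f;
    int level = (int)floorf(log2f(texels_per_pixel));
    if (level >= num_levels) level = num_levels - 1;
    return level;
}
```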
Occlusion |
The effect of one object in 3-D space blocking another object from view. |
OpenGL |
A graphics API that was originally developed by Silicon Graphics, Inc. (SGI) for use on professional graphics workstations. OpenGL subsequently grew to be the standard API for CAD and scientific applications and today is popular for consumer applications such as PC games as well. OpenGL ES is the version for embedded systems. |
Palettised Texture |
Palettised texture means compressed texture formats, such as 1-, 2-, 4-, and 8-bit instead of 24-bit; this allows more textures to be stored in less memory. |
PCI Bus |
The Peripheral Component Interconnect standard (in practice almost always shortened to PCI) specifies a computer bus for attaching peripheral devices to a main CPU. The PCI bus is common in modern PCs, where it has displaced ISA and VESA Local Bus as the standard expansion bus, but it also appears in many other computer types. The peak transfer rate is 133 MB/second for the 32-bit bus width standard at 33 MHz. |
PCI Express |
A new PC bus with serial architecture delivering over 4 GB per second in both upstream and downstream data transfers. Very likely, PCI Express will replace PCI in the long term. |
Perspective Correction |
A particular way to do texture mapping; it is extremely important for creating a realistic image. It takes into account the effect of the Z value in a scene while mapping texels onto the surface of polygons. As a 3D object moves away from the viewer, the length and height of the object become compressed, making it appear shorter. Without perspective correction, objects will appear to shift and 'tear' in an unrealistic way. With true perspective correction, the texture's rate of change per pixel is proportional to depth. Since it requires a division per pixel, perspective correction is very compute-intensive. |
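A minimal C sketch of perspective-correct interpolation across one span, interpolating u/z and 1/z linearly in screen space and performing the per-pixel division mentioned above; the names and the span parameterisation are assumptions.

```c
/* Illustrative sketch: recover the perspective-correct texture coordinate u
 * at every pixel of a span whose endpoints have (u0, z0) and (u1, z1). */
void perspective_span(float u0, float z0, float u1, float z1,
                      int num_pixels, float *u_out)
{
    float uoz0 = u0 / z0, uoz1 = u1 / z1;       /* u/z at the endpoints */
    float ooz0 = 1.0f / z0, ooz1 = 1.0f / z1;   /* 1/z at the endpoints */

    for (int i = 0; i < num_pixels; i++) {
        float t   = (num_pixels > 1) ? (float)i / (float)(num_pixels - 1) : 0.0f;
        float uoz = uoz0 + t * (uoz1 - uoz0);   /* linear in screen space */
        float ooz = ooz0 + t * (ooz1 - ooz0);
        u_out[i]  = uoz / ooz;                  /* the per-pixel division */
    }
}
```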
Phong Shading |
Phong shading is a sophisticated smooth shading method, originated by Phong Bui-tuong. The Phong shading algorithm is best known for its ability to render precise, realistic specular highlights. During rendering, Phong shading achieves excellent realism by calculating the amount of light on the object at tiny points across the entire surface instead of at the vertices of the polygons. Each pixel representing the image is given its own color based on the lighting model applied at that point. Phong shading requires much more computation for the hardware than Gouraud shading. |
Pixel |
Short for 'picture element'. A pixel is the smallest element of a graphics display or the smallest element of a rendered image. |
Pixels per second |
The units used to describe the fill rate of a display controller. It is usually measured in millions of pixels per second (Megapixels/sec). |
Polygon |
The building blocks of all 2D or 3D objects (usually triangles) used to form the surfaces and skeletons of rendered objects. |
Projection |
The process of reducing three dimensions to two dimensions for display is called projection. It is the mapping of the visible part of a three-dimensional object onto a two-dimensional screen. |
Refresh rate |
The frequency at which an analogue monitor or TFT redraws the image, measured in Hertz (Hz) or cycles per second. As an example, a refresh rate of 60 Hz means the screen is redrawn 60 times per second. Higher refresh rates reduce or eliminate image "flicker" that can cause eye strain. |
Rendering |
The process of creating life-like images on a screen using mathematical models and formulas to add shading, color, and illumination to a 2D or 3D wireframe. Hence: Rendering Engine - the part of the graphics engine that draws 3D primitives, usually triangles or other simple polygons. In most implementations, the rendering engine is responsible for interpolation of edges and 'filling in' the triangle. |
RGB Colour Resolution |
The resolution of each RGB (red, green, blue) colour channel is represented by n bits. An RGB888 colour system has 8 bits per channel = 24 bits per pixel colour resolution. This gives a choice of over 16 million colours per pixel. Such a system is generally known as a true colour or full colour system. Other common standards are RGB666 or RGB555. |
ROP |
A parameter of the BitBlt function specifying a raster operation (ROP) that defines exactly how to combine the bits of the source and the destination. Because a bitmap is nothing more than a collection of bit values, the ROP is simply a Boolean equation that operates on the bits. An example is to add transparency to a BLT operation. |
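A few raster operations can be sketched in C as plain Boolean combinations of source and destination pixels; the functions below are illustrative examples, not the actual ROP codes of any particular API.

```c
#include <stdint.h>

/* Illustrative sketch: classic Boolean raster operations. */
uint32_t rop_copy(uint32_t src, uint32_t dst) { (void)dst; return src; }
uint32_t rop_and (uint32_t src, uint32_t dst) { return src & dst;      }
uint32_t rop_xor (uint32_t src, uint32_t dst) { return src ^ dst;      }

/* Blit one scanline, combining every source pixel with the destination
 * pixel through the chosen raster operation. */
void blt_line(const uint32_t *src, uint32_t *dst, int count,
              uint32_t (*rop)(uint32_t, uint32_t))
{
    for (int i = 0; i < count; i++)
        dst[i] = rop(src[i], dst[i]);
}
```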
Set-up Engine |
A set-up engine allows drivers to pass polygons to the rendering engine in the form of raw vertex information (sub-pixel polygon addresses), whereas most common designs force the host CPU to pre-process polygons for the rendering engine in terms of delta values for edges, color, and texture. Thus, a set-up engine moves processing from the host CPU to the graphics chip, reducing bus bandwidth requirements by 30% for small, randomly placed triangles and by proportionately more for larger polygons. |
SGRAM |
Synchronous Graphics Random Access memory (SGRAM) is a type of memory that is optimized for graphics use. SGRAM is capable of running at much higher speeds than fast page or EDO DRAM. SGRAM is able to execute a small number of frequently executed operations, such as buffer clears, specific to graphics applications independently of the controller. |
Shading |
Colouring a surface according to its incident light. The colour depends on the position, orientation and attributes of both the surface and the sources of the illumination. |
Span |
In raster graphics architecture a primitive is formed by scan conversion where each scan line intersects the primitive at two ends, P left and P right. A contiguous sequence of pixels on the scan line between P left and P right is called a Span. Each pixel within the span contains the z, R, G, and B data values. |
Stencil buffer |
The section of the graphics memory that stores the stencil data. Stencil data can be used to mask pixels for a variety of reasons, such as stippling patterns for lines, simple shadows and more. |
Tessellation |
Processing 3D graphics can be pipelined into three stages: tessellation, geometry, and rendering. Tessellation is the process of subdividing a surface into smaller shapes. To describe object surface patterns, tessellation breaks down the surface of an object into manageable polygons. Triangles and quadrilaterals are the two most commonly used polygons for drawing graphical objects, because computer hardware can easily manipulate and calculate these two simple shapes. An object is often divided into quads, which are then subdivided into triangles for convenient calculation. |
Texture Anti-aliasing |
An interpolation technique used to remove texture distortion, staircasing or jagged edges, at the edges of an object. |
Texture Filtering |
Removing the undesirable distortion of a raster image, also called aliasing artifacts, such as sparkles and blockiness, through interpolation of stored texture images. |
Texture Mapping |
Texture mapping is based on a stored bitmap consisting of texture pixels, or texels. It consists of wrapping a texture image onto an object to create a realistic representation of the object in 3D space. The object is represented by a set of polygons, usually triangles. The advantage is complexity reduction and rendering speed, because only one texel read is required for each pixel being written to the frame buffer. The disadvantage is the blocky image that results when the object moves. |
Transform and Lighting |
Two separate engines on the display controller that provide calculations for the rendering process. Transform performance determines how complex objects can be and how many can appear in a scene without sacrificing frame rate. Lighting techniques add to a scene's realism by changing the appearance of objects based on light sources. |
Tri-linear MIP Mapping |
A method of reducing aliasing artifacts within texture maps by applying a bilinear filter to four texels from the two nearest MIP maps and then interpolating between the two. |
Triangles per second |
The rate at which a graphics controller processes triangles. It is a common industry metric for describing performance. The higher the number of triangles per second, the faster the graphics controller. |
Vertex |
A vertex is a point in 3D space with a particular location, usually given in terms of its x, y, and z coordinates. It is one of the fundamental structures in polygonal modeling: two vertices, taken together, can be used to define the endpoints of a line; three vertices can be used to define a triangle. |
Z-buffer |
The area of the graphics memory used to store the Z or depth information about rendered objects. The Z-buffer value of a pixel is used to determine if it is behind or in front of another pixel. Z calculations prevent background objects from overwriting foreground objects in the frame buffer. |
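A minimal C sketch of the depth test, assuming smaller Z values are closer to the viewer and the buffer is cleared to the far value before each frame.

```c
#include <float.h>

/* Illustrative sketch: clear the Z-buffer to the farthest possible value. */
void clear_zbuffer(float *zbuf, int count)
{
    for (int i = 0; i < count; i++)
        zbuf[i] = FLT_MAX;
}

/* Only write the pixel if it is nearer than what is already stored at
 * that location, so background objects cannot overwrite foreground ones. */
void write_pixel(float *zbuf, unsigned int *framebuf, int index,
                 float z, unsigned int colour)
{
    if (z < zbuf[index]) {
        zbuf[index]     = z;
        framebuf[index] = colour;
    }
}
```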
Z-sorting |
A process of removing hidden surfaces by sorting polygons in back-to-front order prior to rendering. Thus, when the polygons are rendered, the forward-most surfaces are rendered last. The rendering results are correct unless objects are close to or intersect each other. The advantage is not requiring memory for storing depth values. The disadvantage is the cost in more CPU cycles and limitations when objects penetrate each other. |
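A hedged sketch of the sort itself, assuming each polygon carries a single representative depth value (for example, the average Z of its vertices).

```c
#include <stdlib.h>

/* Illustrative sketch: a polygon reduced to the data needed for sorting. */
typedef struct {
    float depth;   /* representative depth, larger = farther from the viewer */
    /* vertex data would follow here */
} Polygon;

/* Back-to-front: farther polygons first, so the forward-most surfaces are
 * rendered last and end up on top. */
static int farther_first(const void *a, const void *b)
{
    float da = ((const Polygon *)a)->depth;
    float db = ((const Polygon *)b)->depth;
    return (da < db) - (da > db);
}

void z_sort(Polygon *polys, size_t count)
{
    qsort(polys, count, sizeof(Polygon), farther_first);
}
```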