A 3D rendering with raytracing and ambient occlusion using Blender and Yafray
3D computer graphics are works of graphic art created with the aid of digital computers and 3D software. The term may also refer to the process of creating such graphics, or the field of study of 3D computer graphic techniques and related technology.
3D computer graphics are different from 2D computer graphics in that a three-dimensional representation of geometric data is stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be for later display or for real-time viewing.
3D modeling is the process of preparing geometric data for 3D computer graphics, and is akin to sculpting or photography, whereas the art of 2D graphics is analogous to painting. Despite these differences, 3D computer graphics rely on many of the same algorithms as 2D computer graphics.
In computer graphics software, the distinction between 2D and 3D is occasionally blurred; 2D applications may use 3D techniques to achieve effects such as lighting, and primarily 3D may use 2D techniques.
OpenGL and Direct3D are two popular APIs for the generation of real-time imagery. Real-time means that image generation occurs in ‘real time’, or ‘on the fly’, and may be highly user-interactive. Many modern graphics cards provide some degree of hardware acceleration based on these APIs, frequently enabling display of complex 3D graphics in real time.
 Creation of 3D computer graphics
3D model of a suspension bridge spanning an unusually placid body of water
Architectural rendering compositing of modeling and lighting finalized by rendering process
The process of creating 3D computer graphics can be sequentially divided into three basic phases:
Content creation (3D modeling, texturing, animation)
Scene layout setup
Rendering
The modeling stage could be described as shaping individual objects that are later used in the scene. A number of modeling techniques exist, including, but not limited to, the following:
polygonal modeling
NURBS modeling
constructive solid geometry
subdivision surfaces
implicit surfaces
Modeling processes may also include editing object surface or material properties (e.g., color, luminosity, diffuse and specular shading components (more commonly called roughness and shininess), reflection characteristics, transparency or opacity, or index of refraction), and adding textures, bump maps and other features.
Modeling may also include various activities related to preparing a 3D model for animation (although in a complex character model this will become a stage of its own, known as rigging). Objects may be fitted with a skeleton, a central framework of an object with the capability of affecting the shape or movements of that object. This aids in the process of animation, in that the movement of the skeleton will automatically affect the corresponding portions of the model. See also Forward kinematic animation and Inverse kinematic animation. At the rigging stage, the model can also be given specific controls to make animation easier and more intuitive, such as facial expression controls and mouth shapes (phonemes) for lipsyncing.
Modeling can be performed by means of a dedicated program (e.g., Lightwave Modeler, Rhinoceros 3D, Moray), an application component (Shaper, Lofter in 3D Studio) or some scene description language (as in POV-Ray). In some cases, there is no strict distinction between these phases; in such cases modeling is just part of the scene creation process (this is the case, for example, with Caligari trueSpace and Realsoft 3D).
Particle systems are masses of 3D coordinates which have points, polygons, splats or sprites assigned to them. They act as a volume to represent a shape.
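The idea above can be sketched in a few lines: a set of 3D coordinates, each carrying a velocity, stepped forward every frame. This is a minimal illustration, not a production particle system; the emitter position and random velocity range are assumptions for the example.

```python
import random

class ParticleSystem:
    """Minimal sketch: a mass of 3D points, each with a velocity.

    A renderer would draw a point, polygon, splat, or sprite at each
    position; together they act as a volume representing a shape.
    """
    def __init__(self, count, emitter=(0.0, 0.0, 0.0)):
        self.positions = [list(emitter) for _ in range(count)]
        self.velocities = [[random.uniform(-1.0, 1.0) for _ in range(3)]
                           for _ in range(count)]

    def step(self, dt):
        # Advance every particle by its velocity over the timestep.
        for p, v in zip(self.positions, self.velocities):
            for axis in range(3):
                p[axis] += v[axis] * dt

ps = ParticleSystem(100)
ps.step(0.016)  # advance by roughly one 60 fps frame
```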
A 3D scene of 8 red glass balls
 Scene layout setup
Scene setup involves arranging virtual objects, lights, cameras and other entities on a scene which will later be used to produce a still image or an animation. If used for animation, this phase usually makes use of a technique called “keyframing”, which facilitates creation of complicated movement in the scene. With the aid of keyframing, instead of having to fix an object’s position, rotation, or scaling for each frame in an animation, one needs only to set up some key frames between which states in every frame are interpolated.
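The keyframing idea described above can be sketched as simple interpolation: the animator authors only a few (time, value) pairs, and every in-between frame is computed. Linear interpolation is used here for brevity; real packages typically offer spline interpolation as well.

```python
def interpolate_keyframes(keys, t):
    """Linearly interpolate an animated value between keyframes.

    keys: sorted list of (time, value) pairs set by the animator.
    Frames between keys are derived automatically rather than
    being fixed by hand for every frame.
    """
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            u = (t - t0) / (t1 - t0)
            return v0 + u * (v1 - v0)

# Two keyframes for an object's x-position: frame 0 at x=0, frame 10 at x=5.
keys = [(0, 0.0), (10, 5.0)]
print(interpolate_keyframes(keys, 5))  # midway between the keys -> 2.5
```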
Lighting is an important aspect of scene setup. As is the case in real-world scene arrangement, lighting is a significant contributing factor to the resulting aesthetic and visual quality of the finished work. As such, it can be a difficult art to master. Lighting effects can contribute greatly to the mood and emotional response evoked by a scene, a fact which is well known to photographers and theatrical lighting technicians.
 Tessellation and meshes
The process of transforming representations of objects, such as the center coordinate of a sphere and a point on its circumference, into a polygon representation of a sphere is called tessellation. This step is used in polygon-based rendering, where objects are broken down from abstract representations (“primitives”) such as spheres, cones, etc., to so-called meshes, which are nets of interconnected triangles.
Meshes of triangles (instead of e.g. squares) are popular as they have proven to be easy to render using scanline rendering.
Polygon representations are not used in all rendering techniques, and in these cases the tessellation step is not included in the transition from abstract representation to rendered scene.
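As an illustration of the tessellation step above, the following sketch converts an abstract sphere (center plus radius) into a triangle mesh using a latitude/longitude subdivision. The stack and slice counts are arbitrary example values; finer subdivision trades triangle count for smoothness.

```python
import math

def tessellate_sphere(center, radius, stacks=8, slices=16):
    """Turn an abstract sphere primitive into a mesh of triangles."""
    cx, cy, cz = center
    verts = []
    for i in range(stacks + 1):
        phi = math.pi * i / stacks            # latitude angle, 0..pi
        for j in range(slices):
            theta = 2 * math.pi * j / slices  # longitude angle, 0..2pi
            verts.append((cx + radius * math.sin(phi) * math.cos(theta),
                          cy + radius * math.sin(phi) * math.sin(theta),
                          cz + radius * math.cos(phi)))
    tris = []
    for i in range(stacks):
        for j in range(slices):
            # Indices of the four vertices of one quad on the sphere,
            # split into two triangles.
            a = i * slices + j
            b = i * slices + (j + 1) % slices
            c = (i + 1) * slices + j
            d = (i + 1) * slices + (j + 1) % slices
            tris.append((a, b, c))
            tris.append((b, d, c))
    return verts, tris

verts, tris = tessellate_sphere((0.0, 0.0, 0.0), 1.0)
```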
 Rendering
Rendering is the final process of creating the actual 2D image or animation from the prepared scene. This can be compared to taking a photo or filming the scene after the setup is finished in real life.
Rendering for interactive media, such as games and simulations, is calculated and displayed in real time, at rates of approximately 20 to 120 frames per second. Animations for non-interactive media, such as feature films and video, are rendered much more slowly. Non-real-time rendering enables the leveraging of limited processing power in order to obtain higher image quality. Rendering times for individual frames may vary from a few seconds to several days for complex scenes. Rendered frames are stored on a hard disk and can then be transferred to other media such as motion picture film or optical disk. These frames are then displayed sequentially at high frame rates, typically 24, 25, or 30 frames per second, to achieve the illusion of movement.
Several different, and often specialized, rendering methods have been developed. These range from the distinctly non-realistic wireframe rendering through polygon-based rendering, to more advanced techniques such as scanline rendering, ray tracing, or radiosity. In general, different methods are better suited for either photo-realistic rendering or real-time rendering.
In real-time rendering, the goal is to show as much information as the eye can process in a 30th of a second (one frame, in the case of 30 frame-per-second animation). The goal here is primarily speed, not photo-realism. In fact, exploitations are made here in the way the eye ‘perceives’ the world, so the final image presented is not necessarily that of the real world, but one which the eye can closely associate with it. This is the basic method employed in games, interactive worlds, and VRML. The rapid increase in computer processing power has allowed a progressively higher degree of realism even for real-time rendering, including techniques such as HDR rendering. Real-time rendering is often polygonal and aided by the computer’s GPU.
An example of a ray-traced image that typically takes seconds or minutes to render. The photo-realism is apparent.
When the goal is photo-realism, techniques such as ray tracing or radiosity are employed. Rendering often takes on the order of seconds, or sometimes even days, for a single image or frame. This is the basic method employed in digital media and artistic works.
Rendering software may simulate such visual effects as lens flares, depth of field or motion blur. These are attempts to simulate visual phenomena resulting from the optical characteristics of cameras and of the human eye. These effects can lend an element of realism to a scene, even if the effect is merely a simulated artifact of a camera.
Techniques have been developed for the purpose of simulating other naturally-occurring effects, such as the interaction of light with various forms of matter. Examples of such techniques include particle systems (which can simulate rain, smoke, or fire), volumetric sampling (to simulate fog, dust and other spatial atmospheric effects), caustics (to simulate light focusing by uneven light-refracting surfaces, such as the light ripples seen on the bottom of a swimming pool), and subsurface scattering (to simulate light reflecting inside the volumes of solid objects such as human skin).
The rendering process is computationally expensive, given the complex variety of physical processes being simulated. Computer processing power has increased rapidly over the years, allowing for a progressively higher degree of realistic rendering. Film studios that produce computer-generated animations typically make use of a render farm to generate images in a timely manner. However, falling hardware costs mean that it is entirely possible to create small amounts of 3D animation on a home computer system.
The output of the renderer is often used as only one small part of a completed motion-picture scene. Many layers of material may be rendered separately and integrated into the final shot using compositing software.
 Renderers
Often renderers are included in 3D software packages, but there are some rendering systems that are used as plugins to popular 3D applications. These rendering systems include:
AccuRender for SketchUp
 Projection
Since the human eye sees three dimensions, the mathematical model represented inside the computer must be transformed so that the human eye can correlate the image to a realistic one. But the fact that the display device, namely a monitor, can display only two dimensions means that this mathematical model must be transformed into a two-dimensional image. Often this is done using projection, mostly perspective projection. The basic idea behind perspective projection, which unsurprisingly is the way the human eye works, is that objects further away are smaller in relation to those closer to the eye. Thus, to collapse the third dimension onto a screen, a corresponding operation is carried out to remove it: in this case, a division operation.
Orthographic projection is used mainly in CAD or CAM applications where scientific modelling requires precise measurements and preservation of the third dimension.
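The two projections above can be contrasted in a few lines. Perspective projection performs the division by depth described in the text; orthographic projection simply drops the third dimension, preserving sizes as CAD/CAM work requires. The focal length parameter is an illustrative assumption, not from the article.

```python
def project_perspective(point, focal_length=1.0):
    """Collapse a 3D camera-space point onto a 2D image plane.

    The divide by z makes distant objects smaller, mimicking the eye;
    z is assumed positive (in front of the camera).
    """
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

def project_orthographic(point):
    """Drop z entirely: size is preserved regardless of distance."""
    x, y, z = point
    return (x, y)

# The same 1-unit offset shrinks with distance under perspective...
print(project_perspective((1.0, 0.0, 2.0)))    # (0.5, 0.0)
print(project_perspective((1.0, 0.0, 10.0)))   # (0.1, 0.0)
# ...but stays constant under orthographic projection.
print(project_orthographic((1.0, 0.0, 10.0)))  # (1.0, 0.0)
```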
 Reflection and shading models
Modern 3D computer graphics rely heavily on a simplified reflection model called the Phong reflection model (not to be confused with Phong shading).
In refraction of light, an important concept is the refractive index. In most 3D programming implementations, the term for this value is “index of refraction,” usually abbreviated “IOR.”
Popular reflection rendering techniques in 3D computer graphics include:
Flat shading: A technique that shades each polygon of an object based on the polygon’s “normal” and the position and intensity of a light source.
Gouraud shading: Invented by H. Gouraud in 1971, a fast and resource-conscious vertex shading technique used to simulate smoothly shaded surfaces.
Texture mapping: A technique for simulating a large amount of surface detail by mapping images (textures) onto polygons.
Phong shading: Invented by Bui Tuong Phong, used to simulate specular highlights and smooth shaded surfaces.
Bump mapping: Invented by Jim Blinn, a normal-perturbation technique used to simulate wrinkled surfaces.
Cel shading: A technique used to imitate the look of hand-drawn animation.
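The Phong reflection model mentioned above can be sketched as the sum of ambient, diffuse, and specular terms evaluated from the surface normal, light direction, and viewer direction. The coefficient values below are illustrative assumptions, not values from the article.

```python
def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong(normal, to_light, to_viewer,
          ka=0.1, kd=0.7, ks=0.5, shininess=32):
    """Phong reflection model: ambient + diffuse + specular.

    All direction vectors point away from the surface; ka, kd, ks
    and shininess are example material coefficients.
    """
    n, l, v = normalize(normal), normalize(to_light), normalize(to_viewer)
    diffuse = max(dot(n, l), 0.0)
    # Mirror the light direction about the normal for the specular term.
    r = tuple(2 * dot(n, l) * nc - lc for nc, lc in zip(n, l))
    specular = max(dot(r, v), 0.0) ** shininess if diffuse > 0 else 0.0
    return ka + kd * diffuse + ks * specular

# Light and viewer directly above a horizontal surface:
# full diffuse plus full specular highlight (0.1 + 0.7 + 0.5).
print(phong((0, 0, 1), (0, 0, 1), (0, 0, 1)))
```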
 3D graphics APIs
3D graphics have become so popular, particularly in computer games, that specialized APIs (application programming interfaces) have been created to ease the processes in all stages of computer graphics generation. These APIs have also proved vital to computer graphics hardware manufacturers, as they provide a way for programmers to access the hardware in an abstract way, while still taking advantage of the specific hardware of a given graphics card.
These APIs for 3D computer graphics are particularly popular:
OpenGL and the OpenGL Shading Language
OpenGL ES 3D API for embedded devices
Direct3D (a subset of DirectX)
TruDimension LC Glasses and 3D monitor API
There are also higher-level 3D scene-graph APIs which provide additional functionality on top of the lower-level rendering API. Such libraries under active development include:
JSR 184 (M3G)
Vega Prime by MultiGen-Paradigm
NVidia Scene Graph
UGS DirectModel (aka JT)