3D rendering is the process of creating two-dimensional images (e.g. for a computer screen) from a 3D model. The images are generated based on sets of data dictating what color, texture, and material each object in the scene has.
Rendering first came about in 1960, when William Fetter created a depiction of a pilot in order to simulate the space needed in a cockpit. Then, in 1963, Ivan Sutherland created Sketchpad, a pioneering interactive computer graphics program, while at MIT. For his pioneering work, he is known as the “Father of Computer Graphics”.
In 1975, researcher Martin Newell created the “Utah Teapot”, a 3D test model which became a standard test render. This teapot, also called the Newell Teapot, has become so iconic that it’s considered the equivalent of “Hello World” in the realm of computer graphics.
How It Works
In concept, 3D rendering is similar to photography: a rendering program effectively points a virtual camera at an object to compose a shot. As such, digital lighting is essential for creating a detailed and realistic render.
Over time, a number of different rendering techniques have been developed. Nevertheless, the goal of every render is to capture an image based on how light hits objects, just like in real life.
Rendering Technique #1: Rasterization
One of the earliest methods for rendering, rasterization works by treating the model as a mesh of polygons. Each polygon’s vertices carry information such as position, texture coordinates, and color. These vertices are then projected onto the image plane (i.e. the plane perpendicular to the camera’s view direction).
With the vertices acting like borders, the remaining pixels are filled with the right colors. Imagine painting by first having an outline for every color you paint – that’s rendering via rasterization.
Rasterization is a fast form of rendering. It’s still used widely today, especially for real-time rendering (e.g. computer games, simulation, and interactive GUIs). More recently, the process has been further improved by higher resolutions and by anti-aliasing, a process used to smooth the edges of objects and blend them into the surrounding pixels.
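The fill step described above can be sketched in a few lines. This is a minimal illustration (not production code): it tests each pixel center against the triangle’s three edges using signed-area (“edge”) functions, the standard way rasterizers decide coverage.

```python
# Minimal rasterization sketch: fill one triangle on a small pixel grid
# by testing each pixel center against the triangle's edge functions.

def edge(ax, ay, bx, by, px, py):
    # Twice the signed area of triangle (a, b, p); the sign tells
    # which side of edge a->b the point p lies on.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the set of (x, y) pixels whose centers the triangle covers."""
    area = edge(*v0, *v1, *v2)
    covered = set()
    for y in range(height):
        for x in range(width):
            px, py = x + 0.5, y + 0.5   # sample at the pixel center
            w0 = edge(*v1, *v2, px, py)
            w1 = edge(*v2, *v0, px, py)
            w2 = edge(*v0, *v1, px, py)
            # The pixel is inside if all edge tests agree with the
            # triangle's winding (same sign as the total area).
            if area != 0 and all(w * area >= 0 for w in (w0, w1, w2)):
                covered.add((x, y))
    return covered

pixels = rasterize_triangle((1, 1), (8, 2), (4, 7), width=10, height=10)
```

In a full rasterizer, the same edge-function weights are also used to interpolate color and texture coordinates from the vertices across the interior pixels.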
Rendering Technique #2: Ray Casting
Though useful, rasterization encounters issues when overlapping objects are present: if surfaces overlap, the last one drawn wins, causing the wrong object to appear in the render. To solve this, the concept of a Z-buffer was developed for rasterization. This stores a depth value for each pixel, indicating which surface is nearest at that point from the camera’s point of view.
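The Z-buffer idea fits in a few lines. This is a minimal sketch with made-up buffer names: before a fragment is written, its depth is compared against the closest depth recorded so far for that pixel, so draw order no longer matters.

```python
# Minimal Z-buffer sketch: per-pixel depth values decide which
# surface survives, regardless of the order surfaces are drawn in.

WIDTH, HEIGHT = 4, 4
depth = [[float("inf")] * WIDTH for _ in range(HEIGHT)]   # depth buffer
color = [[None] * WIDTH for _ in range(HEIGHT)]           # color buffer

def write_pixel(x, y, z, c):
    """Keep the fragment only if it is nearer than what is stored."""
    if z < depth[y][x]:
        depth[y][x] = z
        color[y][x] = c

write_pixel(1, 1, 5.0, "red")    # far surface drawn first
write_pixel(1, 1, 2.0, "blue")   # nearer surface overwrites it
write_pixel(1, 1, 9.0, "green")  # farther surface is rejected
# color[1][1] ends up "blue" no matter the draw order
```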
This bookkeeping became unnecessary, however, when ray casting was developed: unlike rasterization, ray casting never suffers from the overlapping-surface problem in the first place.
Ray casting, as its name implies, casts rays onto the model from the camera’s point of view, one ray through every pixel on the image plane. The surface a ray hits first is the one shown in the render; any intersection beyond that first surface is simply ignored.
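The nearest-hit rule can be shown with a single ray and two spheres. This is an illustrative sketch (the scene format and names are invented): the ray is intersected with every sphere, and only the closest hit determines the pixel’s color.

```python
import math

# Minimal ray-casting sketch: for one ray, find the nearest sphere it
# hits; only that first intersection is rendered.

def hit_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t, or None on a miss."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c            # quadratic discriminant
    if disc < 0:
        return None                     # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

def cast_ray(origin, direction, spheres):
    """Return the color of the first surface hit, else the background."""
    nearest_t, nearest_color = math.inf, "background"
    for center, radius, col in spheres:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and t < nearest_t:
            nearest_t, nearest_color = t, col
    return nearest_color

scene = [((0, 0, 5), 1.0, "red"),    # nearer sphere
         ((0, 0, 9), 1.0, "blue")]   # farther sphere, hidden behind it
print(cast_ray((0, 0, 0), (0, 0, 1), scene))   # prints "red"
```

The blue sphere is intersected too, but its hit distance is larger, so it never reaches the render: exactly the behavior described above.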
Rendering Technique #3: Ray Tracing
Despite the advantages brought by ray casting, the technique still lacked the ability to properly simulate shadows, reflections, and refractions. Thus, ray tracing was developed.
Ray tracing works similarly to ray casting, except it’s far better at depicting light. Essentially, primary rays are cast from the camera’s point of view onto the models, and each hit produces secondary rays: shadow rays, reflection rays, or refraction rays, depending on the surface’s properties.
A shadow falls on a surface when the shadow ray’s path to the light source is blocked by another object. If the surface is reflective, a reflection ray is emitted at the mirror angle and illuminates whatever surface it hits, which in turn emits another set of rays; for this reason, the technique is also known as recursive ray tracing. For a transparent surface, a refraction ray is emitted once the surface is hit.
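The recursion described above can be sketched as follows. This is a deliberately stripped-down, grayscale tracer (invented scene format, spheres only, no refraction): each hit fires a shadow ray toward the light, and reflective surfaces recurse along the mirror direction up to a depth limit.

```python
import math

# Minimal recursive ray-tracing sketch (grayscale): shadow rays decide
# lighting, and reflective surfaces recursively trace a mirror ray.

LIGHT = (5.0, 5.0, 0.0)   # single point light (illustrative)
MAX_DEPTH = 3             # recursion limit for reflection rays

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def scale(a, s): return tuple(x * s for x in a)
def norm(a): return scale(a, 1.0 / math.sqrt(dot(a, a)))

def hit_sphere(origin, direction, center, radius):
    """Nearest ray parameter t past a small epsilon, or None."""
    oc = sub(origin, center)
    a = dot(direction, direction)
    b = 2 * dot(oc, direction)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 1e-6 else None

def nearest_hit(origin, direction, spheres):
    best = None
    for s in spheres:
        t = hit_sphere(origin, direction, s["center"], s["radius"])
        if t is not None and (best is None or t < best[0]):
            best = (t, s)
    return best

def trace(origin, direction, spheres, depth=0):
    """Return a grayscale brightness in [0, 1] for this ray."""
    hit = nearest_hit(origin, direction, spheres)
    if hit is None or depth >= MAX_DEPTH:
        return 0.0                       # background is black
    t, sphere = hit
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = norm(sub(point, sphere["center"]))
    # Shadow ray: the surface is lit only if nothing blocks the light.
    to_light = norm(sub(LIGHT, point))
    shaded = nearest_hit(point, to_light, spheres) is not None
    brightness = 0.0 if shaded else max(0.0, dot(normal, to_light))
    # Reflection ray: recurse along the mirror direction and blend.
    r = sphere["reflectivity"]
    if r > 0:
        refl = sub(direction, scale(normal, 2 * dot(direction, normal)))
        brightness = (1 - r) * brightness + r * trace(point, refl, spheres, depth + 1)
    return brightness

scene = [{"center": (0.0, 0.0, 5.0), "radius": 1.0, "reflectivity": 0.0}]
brightness = trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene)
```

The `MAX_DEPTH` cutoff is what keeps the recursion finite: without it, two facing mirrors would bounce a reflection ray back and forth forever.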
Rendering Technique #4: Rendering Equation
Further development in rendering eventually led to the rendering equation, which attempts to simulate how light behaves in reality more accurately. The key idea is that every surface can act as a source of light: light bounces off everything, not just the primary light sources. The equation therefore accounts for all light arriving at a point, whereas classic ray tracing handles only direct illumination (plus perfect reflections and refractions). Algorithms built on this equation are known as global illumination or indirect illumination techniques.
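In symbols, the rendering equation (formulated by James Kajiya in 1986) expresses the light leaving a point as what the point emits plus what it reflects from every incoming direction:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\,(\omega_i \cdot \mathbf{n})\,\mathrm{d}\omega_i
```

Here \(L_o\) is the outgoing radiance at point \(\mathbf{x}\) in direction \(\omega_o\), \(L_e\) is the radiance the surface itself emits, \(f_r\) is the surface’s reflectance function (BRDF), \(L_i\) is the radiance arriving from direction \(\omega_i\), and the cosine term \((\omega_i \cdot \mathbf{n})\) accounts for the angle of incidence; the integral runs over the hemisphere \(\Omega\) above the surface. Because \(L_i\) is itself the \(L_o\) of some other surface, the equation is recursive, which is exactly why solving it yields global illumination.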