Visualising the OpenGL 3D Transform Pipeline Using Unity

Download

Download Unity package

Transcript

“Hi, I’m Allen Pestaluky. I’m going to go over a simple tool that I’ve made in Unity that visualizes the 3D transform pipeline in OpenGL. You can download it and find the transcript of this video on my blog at allenwp.com or via the link in the video description.

This tool was designed to provide a quick visual refresher on the coordinate systems used during the 3D transform pipeline in OpenGL and Unity. If you’re planning on doing any advanced, non-standard vertex transformations, especially in the vertex shader, this tool might help you refresh your memory and will let you simulate your transforms in a visualized, debuggable environment rather than coding blindly in the vertex shader. Or, if you’re new to 3D graphics, this tool might help you gain a better visual understanding of 3D graphics theory.

The tool that I’ve made is nothing more than a few scripts and a prefab in a Unity scene. These white cube game objects represent vertices that are being passed into the vertex shader. The script assumes that these vertices don’t need any world transformation, but it would be easy to modify it to insert other transforms at any point in the pipeline. The transform manager hosts the script, which creates game objects that represent these vertices as they are transformed by the view, projection, and viewport transformations.

The Scale property adjusts the scale of the generated game objects. The Transform Camera provides the view and projection matrices. You can add any number of game objects to the vertices list to see how they will be transformed by each step of the pipeline. Finally, the View, Projection, and Viewport Transform boolean properties are used to toggle visibility of each transformation step.
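
To make this concrete, here’s a minimal sketch of what such a transform manager script could look like in Unity C#. The class and member names are my own illustrative assumptions based on the description above, not the actual code from the package:

    using System.Collections.Generic;
    using UnityEngine;

    // Hypothetical sketch of the transform manager described above.
    public class TransformManager : MonoBehaviour
    {
        public float Scale = 1f;            // Scale of the generated marker objects
        public Camera TransformCamera;      // Provides the view and projection matrices
        public List<Transform> Vertices;    // Game objects treated as input vertices
        public bool View;                   // Toggles for each transformation step
        public bool Projection;
        public bool ViewportTransform;

        void Update()
        {
            Matrix4x4 view = TransformCamera.worldToCameraMatrix;
            Matrix4x4 proj = TransformCamera.projectionMatrix;

            foreach (Transform vertex in Vertices)
            {
                // Treat the object's world position as the input vertex (w = 1).
                Vector4 v = vertex.position;
                v.w = 1f;

                if (View)
                    Visualize(view * v);
                if (Projection)
                    Visualize(proj * view * v);
                // The viewport step and marker creation are omitted for brevity.
            }
        }

        void Visualize(Vector4 homogeneousVertex)
        {
            // In the real tool, this would spawn or update a marker game object
            // (scaled by Scale) at the position given by the homogeneous divide.
        }
    }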

You can see that, when run, new orange spheres are added to the scene view. These represent the vertices after they have been transformed by the view matrix of the camera. In Unity, the view matrix is exposed as the “worldToCameraMatrix” property of a camera, and it does just that: the view matrix transforms vertices from world coordinates into camera coordinates, also known as eye or view coordinates. As you can see, these coordinates are relative to the eye of the camera. Vertices that are closer to the camera have a z component of smaller magnitude and vice versa (in OpenGL conventions, the camera looks down the negative z axis, so eye-space z is negative in front of it). But it’s important to note that what you are seeing is not the exact result of the 4-dimensional view transform; first, a homogeneous divide must be performed on the resulting homogeneous 4-dimensional vector to bring it into the 3-dimensional space that we can see in the scene view. Or, in this case, we could simply discard the fourth “w” component because it is 1. For each of the transformed vertices, you can see the original 4D coordinates in the “Homogeneous Vertex” property and the 3D coordinates in the position property, which is visualized in the scene view.
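
This step is easy to reproduce in a script. A small sketch, assuming “cam” is the camera in question (the helper class and method names here are mine):

    using UnityEngine;

    public static class ViewStepExample
    {
        // Transforms a world-space position into eye (camera) coordinates.
        public static Vector3 ToEyeSpace(Camera cam, Vector3 worldPos)
        {
            // Positions use w = 1 so that translation applies.
            Vector4 v = new Vector4(worldPos.x, worldPos.y, worldPos.z, 1f);
            Vector4 eye = cam.worldToCameraMatrix * v;

            // The view matrix is affine, so eye.w is still 1: the homogeneous
            // divide is a no-op and we can simply drop the w component.
            return new Vector3(eye.x, eye.y, eye.z);
        }
    }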

Next, I’m going to configure the transform manager to show the result of the view and projection transforms. This results in a 4-dimensional coordinate system referred to as “clip space”. Clip space is what most graphics pipelines expect you to return from your vertex shader. No surprise, clipping is performed in this coordinate system by comparing the x, y, and z coordinates against w: any x, y, or z component greater than w or less than -w is clipped. Note that DirectX is slightly different: its clip-space z ranges from 0 to w, so z is clipped when it is less than 0 rather than less than -w. The flashing vertices that you see represent those that are being clipped.
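
Here is a sketch of that clip test in Unity C# (again, the helper names are illustrative):

    using UnityEngine;

    public static class ClipStepExample
    {
        // World space -> clip space: what a vertex shader typically outputs.
        public static Vector4 ToClipSpace(Camera cam, Vector3 worldPos)
        {
            Vector4 v = new Vector4(worldPos.x, worldPos.y, worldPos.z, 1f);
            return cam.projectionMatrix * (cam.worldToCameraMatrix * v);
        }

        // OpenGL-style clip test: clipped when any of x, y, z falls outside [-w, w].
        public static bool IsClipped(Vector4 clip)
        {
            return clip.x < -clip.w || clip.x > clip.w
                || clip.y < -clip.w || clip.y > clip.w
                || clip.z < -clip.w || clip.z > clip.w;
            // A Direct3D-style test would instead clip when clip.z < 0
            // at the near plane.
        }
    }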

Next, we transform into a 3-dimensional space known as the normalized device coordinate system by performing the homogeneous divide on the clip-space vector. You can see that this coordinate system hosts your camera’s view frustum in the form of a canonical view volume between negative 1 and positive 1 on each axis. This volume is represented by the cube outline. All clipped vertices lie outside of this volume. We are now very close to what we see rendered.
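
The homogeneous divide itself is just a component-wise division by w:

    using UnityEngine;

    public static class NdcStepExample
    {
        // Clip space -> normalized device coordinates. After the divide,
        // visible points lie between -1 and 1 on each axis.
        public static Vector3 ToNdc(Vector4 clip)
        {
            return new Vector3(clip.x / clip.w, clip.y / clip.w, clip.z / clip.w);
        }
    }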

Lastly, we perform the final viewport transform. This tool doesn’t take any x and y offset into account, but it does scale the normalized device coordinates to match the viewport width and height. The transformation into 2-dimensional space is as simple as throwing away the z component of the normalized device coordinates. You can see that this puts us in a coordinate space that matches that of our final viewport.
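
Here is that simplified viewport step as code. As described, this version ignores the viewport’s x and y offset and discards z; a full viewport transform would also apply the offset and map z into the depth range:

    using UnityEngine;

    public static class ViewportStepExample
    {
        // NDC spans 2 units per axis, so scaling by half the viewport size
        // maps it to viewport-sized coordinates (still centred on the origin).
        public static Vector2 ToViewport(Vector3 ndc, float width, float height)
        {
            return new Vector2(ndc.x * width * 0.5f, ndc.y * height * 0.5f);
        }
    }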

Hopefully this tool is useful. Please let me know if you have any questions by posting a comment on my blog — you can find the link in the description of this video if you need it. Thanks!”

