Posts Tagged ‘3D Graphics Programming’

Disabling Frustum Culling on a Game Object in Unity

Thursday, December 19th, 2013

You should never disable frustum culling in a release build.

But sometimes it can be useful to do so for debugging or when dealing with a really wacky vertex shader where mesh bounds don’t make sense anymore. Here’s an easy way to disable frustum culling on a game object by moving its bounds into the center of the camera’s frustum:

// boundsTarget is the center of the camera's frustum, in world coordinates:
Vector3 camPosition = camera.transform.position;
Vector3 normCamForward = Vector3.Normalize(camera.transform.forward);
float boundsDistance = (camera.farClipPlane - camera.nearClipPlane) / 2 + camera.nearClipPlane;
Vector3 boundsTarget = camPosition + (normCamForward * boundsDistance);

// The game object's transform will be applied to the mesh's bounds for frustum culling checking.
// We need to "undo" this transform by making the boundsTarget relative to the game object's transform:
Vector3 relativeBoundsTarget = this.transform.InverseTransformPoint(boundsTarget);

// Set the bounds of the mesh to be a 1x1x1 cube (the actual size doesn't matter)
Mesh mesh = GetComponent<MeshFilter>().mesh;
mesh.bounds = new Bounds(relativeBoundsTarget, Vector3.one);
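As a sanity check on the distance arithmetic above, the same midpoint formula can be run outside Unity. This is an illustrative plain-Python sketch with hypothetical clip plane values, not part of the script:

```python
# Hypothetical near/far clip plane values; the formula mirrors the C# above.
near_clip = 0.3
far_clip = 1000.0

# Midpoint of the frustum along the camera's forward axis:
bounds_distance = (far_clip - near_clip) / 2 + near_clip

# The resulting point is equidistant from the near and far planes:
assert abs((bounds_distance - near_clip) - (far_clip - bounds_distance)) < 1e-9
```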

[Download C# Unity Script Component]

Visualising the OpenGL 3D Transform Pipeline Using Unity

Saturday, October 12th, 2013

Download

Download Unity package

Transcript

“Hi, I’m Allen Pestaluky. I’m going to go over a simple tool that I’ve made in Unity that visualizes the 3D transform pipeline in OpenGL. You can download it and find the transcript of this video on my blog at allenwp.com or via the link in the video description.

This tool was designed to provide a quick visual refresher on the coordinate systems used during the 3D transform pipeline in OpenGL and Unity. If you’re planning on doing any advanced non-standard vertex transformations, especially in the vertex shader, this tool might help you refresh your memory and will enable you to simulate your transforms in a visualized, debuggable environment rather than blindly coding in the vertex shader. Or, if you’re new to 3D graphics, this tool might help you gain a better visual understanding of 3D graphics theory.

The tool that I’ve made is nothing more than a few scripts and a prefab in a Unity scene. These white cube game objects represent vertices that are being passed into the vertex shader. The script assumes that these vertices don’t need any world transformation, but it would be easy to modify it to add other transforms at any point in the pipeline. The transform manager hosts the script, which creates game objects that represent these vertices as they are transformed by the view, projection, and viewport transformations.

The Scale property adjusts the scale of the generated game objects. The Transform Camera provides the view and projection matrices. You can add any number of game objects to the vertices list to see how they will be transformed by each step of the pipeline. Finally, the View, Projection, and Viewport Transform boolean properties are used to toggle visibility of each transformation step.

You can see that, when run, new orange spheres are added to the scene view. These represent the vertices after they have been transformed by the view matrix of the camera. In Unity, the view matrix is exposed as the “worldToCameraMatrix” property of a camera, and it does just that: the view matrix transforms vertices from world coordinates into camera coordinates, also known as eye or view coordinates. As you can see, these coordinates are relative to the eye of the camera. The vertices that are closer to the camera have a smaller z component and vice versa. But it’s important to note that what you are seeing is not the exact result of the 4-dimensional view transform; first, a homogeneous divide must be performed on the resulting homogeneous 4-dimensional vector to transform it to the 3-dimensional space that we can see in the scene view. Or, in this case, we could simply discard the fourth “w” component because it is 1. For each of the transformed vertices, you can see the original 4D coordinates in the “Homogeneous Vertex” property and the 3D coordinates in the position property, which is visualized in the scene view.
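The homogeneous divide described here is easy to sketch outside of Unity. This is illustrative plain Python operating on a hand-made 4D vector, not the tool's actual code:

```python
def homogeneous_divide(v):
    """Convert a homogeneous 4D vector (x, y, z, w) to 3D by dividing by w."""
    x, y, z, w = v
    return (x / w, y / w, z / w)

# After the view transform alone, w is still 1, so the divide changes nothing:
eye_space = (2.0, -1.0, 5.0, 1.0)
assert homogeneous_divide(eye_space) == (2.0, -1.0, 5.0)
```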

Next, I’m going to configure the transform manager to show the result of the view and projection transforms. This results in a 4-dimensional coordinate system referred to as “clip space”. Clip space is what most graphics pipelines expect you to return from your vertex shader. No surprise, clipping is performed in this coordinate system by comparing the x, y, and z coordinates against w. Any x, y, or z component greater than w or less than -w is clipped. Note that DirectX is slightly different: it clips the z axis when it is less than 0 rather than -w. The flashing vertices that you see represent those that are being clipped.
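The clipping rule just described (including the DirectX variation on z) can be written as a small predicate. A plain-Python sketch for illustration, not the tool's code:

```python
def is_clipped(v, directx=False):
    """Return True when a clip-space vertex (x, y, z, w) lies outside the view volume."""
    x, y, z, w = v
    if x > w or x < -w or y > w or y < -w:
        return True
    z_min = 0.0 if directx else -w  # DirectX clips z below 0 rather than below -w
    return z > w or z < z_min

assert not is_clipped((0.5, 0.5, 0.5, 1.0))
assert is_clipped((1.5, 0.0, 0.0, 1.0))                 # x > w
assert is_clipped((0.0, 0.0, -0.5, 1.0), directx=True)  # z < 0 is clipped in DirectX
```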

Next, we transform into the 3D space known as the normalized device coordinate system by performing the homogeneous divide on the clip space vector. You can see that this coordinate system hosts your camera’s view frustum in the form of a canonical view volume between negative 1 and positive 1 on each axis. This volume is represented by the cube outline. All clipped vertices lie outside of this volume. We are now very close to what we see rendered.
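After the divide, the canonical-view-volume test becomes a simple range check on each axis. Again an illustrative plain-Python sketch:

```python
def in_canonical_view_volume(ndc):
    """True when a 3D normalized-device-coordinate point lies inside the -1..1 cube."""
    return all(-1.0 <= c <= 1.0 for c in ndc)

# Dividing a clip-space vertex by w and testing the cube agrees with
# the clip-space comparison against w:
clip = (0.5, -2.0, 0.0, 1.0)
ndc = tuple(c / clip[3] for c in clip[:3])
assert not in_canonical_view_volume(ndc)  # y = -2 lies outside the volume
```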

Lastly, we perform the final viewport transform. This tool doesn’t take any x and y offset into account, but it does scale the normalized device coordinates to match the viewport width and height. The transformation into 2-dimensional space is as simple as throwing away the z component of the normalized device coordinates. You can see that this puts us in a coordinate space that matches that of our final viewport.
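The viewport step can be sketched the same way: scale NDC x and y by half the viewport size (no offset, as in the tool) and drop z. Plain Python with a made-up 800x600 viewport, for illustration only:

```python
def viewport_transform(ndc, width, height):
    """Scale normalized device coordinates to the viewport (no x/y offset), drop z."""
    x, y, z = ndc
    return (x * width / 2.0, y * height / 2.0)

# A vertex at the right edge of the canonical view volume lands at half the width:
assert viewport_transform((1.0, 0.0, 0.2), 800, 600) == (400.0, 0.0)
```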

Hopefully this tool is useful. Please let me know if you have any questions by posting a comment on my blog — you can find the link in the description of this video if you need it. Thanks!”

Simple, fast, GPU-driven multi-textured terrain

Thursday, May 6th, 2010

In this article, I will be outlining the method of multi-textured terrain that I used in my recently completed XNA Real Time Strategy (RTS) project, Toy Factory. I have also expanded this tutorial to cover organic (blended) and deformed terrain. This method enables you to produce a similar result to Riemer’s XNA Tutorial on Multitexturing, but allows for more flexible optimization of your terrain by separating the multi-texturing from the surface geometry.

Left: Toy Factory multi-texturing | Right: Deformed, organic multi-texturing

If you are OK with having a flat terrain, this method will achieve extremely high framerates no matter how large your surface is. If you want deformed terrain, this method will still provide a strong starting point that allows for easy optimization of the geometry.

Part 1: Drawing some simple geometry

We’ll start with a large, square, two-triangle plane and draw a “ground” texture onto it using our own custom HLSL effect file. This ground texture will describe the type of terrain that we want to draw. In this case, hardwood is red, carpet is blue, and tile is green. This is the ground texture that I am going to use:

ToyFactory Multi-texturing Ground Texture

So our current goal is to simply draw that texture onto two very large triangles.

The C# Side

These variables need to be accessible to your initialization and drawing methods:

Effect terrainEffect;
VertexPositionTexture[] vertices;
VertexDeclaration vertexDeclaration;

Inside of your initialize method, let’s load the effect (we’ll put the HLSL code in “Content\Effects\Terrain.fx”) and set up the 6 vertices that we will be drawing:

            // Load the effect:
            terrainEffect = game.Content.Load<Effect>(@"Effects\Terrain");
            // Load the ground texture:
            Texture2D ground = game.Content.Load<Texture2D>(@"Ground");
            // Set the texture parameter of our effect to the ground:
            terrainEffect.Parameters["Ground"].SetValue(ground);

            // Initialize our vertices:
            vertices = new VertexPositionTexture[6];
            for (int i = 0; i < vertices.Length; i++)
                vertices[i] = new VertexPositionTexture();

            // Initialize our vertex declaration:
            vertexDeclaration = new VertexDeclaration(device, VertexPositionTexture.VertexElements);

            Vector3 topLeft = new Vector3(0f, groundHeight, 0f);
            Vector3 topRight = new Vector3(width, groundHeight, 0f);
            Vector3 bottomLeft = new Vector3(0f, groundHeight, height);
            Vector3 bottomRight = new Vector3(width, groundHeight, height);

            Vector2 topLeftTex = new Vector2(0f, 0f);
            Vector2 topRightTex = new Vector2(1f, 0f);
            Vector2 bottomLeftTex = new Vector2(0f, 1f);
            Vector2 bottomRightTex = new Vector2(1f, 1f);

            vertices[0].Position = topLeft;
            vertices[0].TextureCoordinate = topLeftTex;
            vertices[1].Position = bottomRight;
            vertices[1].TextureCoordinate = bottomRightTex;
            vertices[2].Position = bottomLeft;
            vertices[2].TextureCoordinate = bottomLeftTex;
            vertices[3].Position = topLeft;
            vertices[3].TextureCoordinate = topLeftTex;
            vertices[4].Position = topRight;
            vertices[4].TextureCoordinate = topRightTex;
            vertices[5].Position = bottomRight;
            vertices[5].TextureCoordinate = bottomRightTex;

The following code will go in your draw method to draw the vertices to the screen using the effect that we will create:

                terrainEffect.CurrentTechnique = terrainEffect.Techniques["Terrain"];
                terrainEffect.Parameters["View"].SetValue(camera.ViewMatrix);
                terrainEffect.Parameters["Projection"].SetValue(camera.ProjectionMatrix);

                terrainEffect.Begin();
                terrainEffect.CurrentTechnique.Passes[0].Begin();

                device.VertexDeclaration = vertexDeclaration;
                device.DrawUserPrimitives(PrimitiveType.TriangleList, vertices, 0, numTriangles);

                terrainEffect.CurrentTechnique.Passes[0].End();
                terrainEffect.End();

The HLSL Side

Let’s call this file “Terrain.fx”. We’ve already referenced the parameters and technique of this effect file in the C# code above. Note the “GroundSampler” sampler state, where I ensure that no smoothing of the sampled texture happens by setting the min, mag, and mip filters to “None”. If you want to smooth between terrain types, you could try setting those values to “Linear” instead (but this will require modifying some code that we will look at later in this tutorial).

// HLSL to simply sample from a texture

// Input parameters.
float4x4 View;
float4x4 Projection;

texture Ground;
sampler GroundSampler = sampler_state
{
    Texture = (Ground);

    MinFilter = None;
    MagFilter = None;
    MipFilter = None;
    AddressU = clamp;
    AddressV = clamp;
};

// Vertex shader input structure.
struct VS_INPUT
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

// Vertex shader output structure.
struct VS_OUTPUT
{
    float4 Position : POSITION0;
    float2 TexCoord : TEXCOORD0;
};

// Vertex shader program.
VS_OUTPUT VertexShader(VS_INPUT input)
{
    VS_OUTPUT output;

    //generate the view-projection matrix
    float4x4 vp = mul(View, Projection);
    output.Position = mul(input.Position, vp);

    output.TexCoord = input.TexCoord;

    return output;
}

float4 PixelShader(VS_OUTPUT input) : COLOR
{
	float4 colour = tex2D(GroundSampler, input.TexCoord);
	return colour;
}

technique Terrain
{
    pass Main
    {
        VertexShader = compile vs_2_0 VertexShader();
        PixelShader = compile ps_2_0 PixelShader();
    }
}

This should result in the following, which is already recognizable as a ground plane that follows the pattern we described in our “ground” texture (zoomed in on the top right corner of the texture):

ToyFactory Multi-texturing Ground - Drawn

ToyFactory Multi-texturing Ground Texture - Drawn in 3D

Part 2: Using the graphics card to decide which texture to draw

Now, time for some magic! Let’s pass in three more textures to our effect:

We can include these new textures the same way we did with the ground texture, but we’ll use them slightly differently:

The C# Side

In your initialize method, extend the texture-loading code as follows (the first few lines are the existing ground-texture code; the rest load the new terrain textures and set their scale parameters):

            // Load the ground texture:
            Texture2D ground = game.Content.Load<Texture2D>(@"Ground");
            // Set the texture parameter of our effect to the ground:
            terrainEffect.Parameters["Ground"].SetValue(ground);

            // Load the terrain textures:
            Texture2D hardwood = game.Content.Load<Texture2D>(@"Hardwood");
            terrainEffect.Parameters["GroundText0"].SetValue(hardwood);
            Texture2D tile = game.Content.Load<Texture2D>(@"Tile");
            terrainEffect.Parameters["GroundText1"].SetValue(tile);
            Texture2D carpet = game.Content.Load<Texture2D>(@"Carpet");
            terrainEffect.Parameters["GroundText2"].SetValue(carpet);

            // Now let's set some new parameters that we will be using later in our HLSL code:
            terrainEffect.Parameters["GroundText0Scale"].SetValue(terrainHardwoodDensity);
            terrainEffect.Parameters["GroundText1Scale"].SetValue(terrainTileDensity);
            terrainEffect.Parameters["GroundText2Scale"].SetValue(terrainCarpetDensity);

The HLSL Side

Now we have the different terrain types loaded into the effect as parameters. With these new terrain textures and their respective “scale” parameters, we can draw them repeating across the entire ground plane. To do this, we use “AddressU = wrap” and “AddressV = wrap” in the sampler code for each of these new textures. Here’s what our new samplers in the HLSL should look like:

float GroundText0Scale;
float GroundText1Scale;
float GroundText2Scale;

texture GroundText0;
sampler GroundText0Sampler = sampler_state
{
    Texture = (GroundText0);

    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;
    AddressU = wrap;
    AddressV = wrap;
};

texture GroundText1;
sampler GroundText1Sampler = sampler_state
{
    Texture = (GroundText1);

    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;
    AddressU = wrap;
    AddressV = wrap;
};

texture GroundText2;
sampler GroundText2Sampler = sampler_state
{
    Texture = (GroundText2);

    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;
    AddressU = wrap;
    AddressV = wrap;
};

Next, we will put some logic in our pixel shader that will sample from only one of the three terrain textures, depending on what colour we receive from the “ground” texture:

float4 PixelShader(VS_OUTPUT input) : COLOR
{
	float4 colour = tex2D(GroundSampler, input.TexCoord);

	if(colour.r == 1)
	{
		colour = tex2D(GroundText0Sampler, input.TexCoord * GroundText0Scale);
	}
	else if(colour.g == 1)
	{
		colour = tex2D(GroundText1Sampler, input.TexCoord * GroundText1Scale);
	}
	else
	{
		colour = tex2D(GroundText2Sampler, input.TexCoord * GroundText2Scale);
	}

	return colour;
}

This will result in Toy Factory-style multitexturing that is super-fast, no matter what the size of your ground is. You can adjust the scaling of each type of terrain by using the “Scale” parameters that are passed into the effect.
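The interaction between the “wrap” address mode and the “Scale” parameters can be simulated in plain Python. This is an illustrative sketch with a made-up 2x2 checker texture, not shader code:

```python
def wrap_sample(tex, u, v):
    """Emulate AddressU/AddressV = wrap: texture coordinates repeat past 1.0."""
    h, w = len(tex), len(tex[0])
    return tex[int(v % 1.0 * h)][int(u % 1.0 * w)]

checker = [[0, 1],
           [1, 0]]

# With a scale of 4, coordinate 0.3 tiles to 1.2, which wraps back to 0.2:
assert wrap_sample(checker, 0.3 * 4, 0.0) == wrap_sample(checker, 0.2, 0.0)
```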

Toy Factory Multi-texturing

Part 3: Organic terrain blending

Though these hard cut edges worked well for our indoor environment, many games will require a smoother blending between terrain types to create the illusion of an organic and natural terrain. To achieve this effect, simply change the ground sampler’s min, mag, and mip filters and change the logic of the pixel shader:

texture Ground;
sampler GroundSampler = sampler_state
{
    Texture = (Ground);

    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;
    // use "clamp" to avoid unwanted wrapping problems at the edges due to smoothing:
    AddressU = clamp;
    AddressV = clamp;
};

float4 PixelShader(VS_OUTPUT input) : COLOR
{
	float4 groundSample = tex2D(GroundSampler, input.TexCoord);

	float4 colour = float4(0,0,0,1);
	colour += tex2D(GroundText0Sampler, input.TexCoord * GroundText0Scale) * groundSample.r;
	colour += tex2D(GroundText1Sampler, input.TexCoord * GroundText1Scale) * groundSample.g;
	colour += tex2D(GroundText2Sampler, input.TexCoord * GroundText2Scale) * groundSample.b;

	return colour;
}

Let’s use these three textures instead:

This will result in the following, more organic looking terrain:

Toy Factory Multi-texturing - Organic

You can also modify the “ground” texture to include softer edges between values to have a smoother blending of terrain types.
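The blended pixel shader above is just a weighted sum, so softening the ground texture simply shifts the weights. A plain-Python sketch of the same arithmetic, with made-up colour values:

```python
def blend(ground_rgb, hardwood, tile, carpet):
    """Weight each terrain colour by the matching ground-texture channel (r, g, b)."""
    r, g, b = ground_rgb
    return tuple(hw * r + t * g + c * b for hw, t, c in zip(hardwood, tile, carpet))

# A 50/50 red/green ground texel blends the first two terrain colours evenly:
result = blend((0.5, 0.5, 0.0),
               (1.0, 0.0, 0.0),   # hardwood
               (0.0, 1.0, 0.0),   # tile
               (0.0, 0.0, 1.0))   # carpet
assert result == (0.5, 0.5, 0.0)
```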

Part 4: Mapping the terrain to complex geometry (deformable terrain)

When developing the fog of war for Toy Factory, I discovered a great trick that Catalin Zima used in his fog of war sample: he used world coordinates to determine the colour/alpha value for a given pixel in the pixel shader. The same concept applies here. We simply use the X and Z world coordinates to determine the texture coordinate that is passed to the sampler.
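The trick boils down to dividing each fragment's world X and Z by the map dimensions to recover a 0-1 ground-texture coordinate. A quick plain-Python sketch with hypothetical map dimensions:

```python
def world_to_ground_uv(world_x, world_z, map_width, map_height):
    """Map a world-space position onto the 0..1 ground-texture coordinate space."""
    return (world_x / map_width, world_z / map_height)

# A point at the centre of a 100x200 map samples the centre of the ground texture:
assert world_to_ground_uv(50.0, 100.0, 100.0, 200.0) == (0.5, 0.5)
```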

The C# Side

Our C# code can change a bit now because we are no longer using texture coordinates. Instead, we will purely be using world coordinates. At this point you can change the vertex declaration to use VertexPositionColor, which is a little faster than the old VertexPositionTexture due to less data being sent to the graphics card. Or, you can scrap the old vertex buffer altogether and draw your own imported model, a custom mesh that has been optimized using a quadtree, or anything else you want!

Add the following to your initialize method:

            terrainEffect.Parameters["MapWidth"].SetValue(mapWidth);
            terrainEffect.Parameters["MapHeight"].SetValue(mapHeight);

The following code will draw an existing model with the effect, though you could also use a heightmap-generated mesh.

                foreach (ModelMesh mesh in terrainModel.Meshes)
                {
                    foreach (Effect effect in mesh.Effects)
                    {
                        effect.CurrentTechnique = terrainEffect.Techniques["Terrain"];
                        effect.Parameters["View"].SetValue(camera.ViewMatrix);
                        effect.Parameters["Projection"].SetValue(camera.ProjectionMatrix);
                        mesh.Draw();
                    }
                }

The HLSL Side:

Add the following two parameters:

float MapWidth;
float MapHeight;

And change your vertex and pixel shaders and input/output structures:

// Vertex shader input structure.
struct VS_INPUT
{
    float4 Position : POSITION0;
};

// Vertex shader output structure.
struct VS_OUTPUT
{
    float4 Position : POSITION0;
    float4 WorldPos : TEXCOORD0;
};

// Vertex shader program.
VS_OUTPUT VertexShader(VS_INPUT input)
{
    VS_OUTPUT output;

    //generate the view-projection matrix
    float4x4 vp = mul(View, Projection);
    output.Position = mul(input.Position, vp);
    output.WorldPos = input.Position;

    return output;
}

float4 PixelShader(VS_OUTPUT input) : COLOR
{
	float2 mapPosition = float2(input.WorldPos.x / MapWidth, input.WorldPos.z / MapHeight);
	float4 groundSample = tex2D(GroundSampler, mapPosition);

	float4 colour = float4(0,0,0,1);
	colour += tex2D(GroundText0Sampler, mapPosition * GroundText0Scale) * groundSample.r;
	colour += tex2D(GroundText1Sampler, mapPosition * GroundText1Scale) * groundSample.g;
	colour += tex2D(GroundText2Sampler, mapPosition * GroundText2Scale) * groundSample.b;

	return colour;
}

And here’s the result. It needs a bit of artistic work, but all the functionality that you need should be at your fingertips!

Toy Factory Multi-texturing - Organic & Deformed
