Disabling Frustum Culling on a Game Object in Unity

December 19th, 2013

You should never disable frustum culling in a release build.

But sometimes it can be useful to do so for debugging or when dealing with a really wacky vertex shader where mesh bounds don’t make sense anymore. Here’s an easy way to disable frustum culling on a game object by moving its bounds into the center of the camera’s frustum:

// boundsTarget is the center of the camera's frustum, in world coordinates:
Vector3 camPosition = camera.transform.position;
Vector3 normCamForward = Vector3.Normalize(camera.transform.forward);
float boundsDistance = (camera.farClipPlane - camera.nearClipPlane) / 2 + camera.nearClipPlane;
Vector3 boundsTarget = camPosition + (normCamForward * boundsDistance);

// The game object's transform will be applied to the mesh's bounds for frustum culling checking.
// We need to "undo" this transform by making the boundsTarget relative to the game object's transform:
Vector3 relativeBoundsTarget = this.transform.InverseTransformPoint(boundsTarget);

// Set the bounds of the mesh to be a 1x1x1 cube (actually doesn't matter what the size is)
Mesh mesh = GetComponent<MeshFilter>().mesh;
mesh.bounds = new Bounds(relativeBoundsTarget, Vector3.one);

[Download C# Unity Script Component]

Visualising the OpenGL 3D Transform Pipeline Using Unity

October 12th, 2013

Download

Download Unity package

Transcript

“Hi, I’m Allen Pestaluky. I’m going to go over a simple tool that I’ve made in Unity that visualizes the 3D transform pipeline in OpenGL. You can download it and find the transcript of this video on my blog at allenwp.com or via the link in the video description.

This tool was designed to provide a quick visual refresher on the coordinate systems used during the 3D transform pipeline in OpenGL and Unity. If you’re planning on doing any advanced non-standard vertex transformations, especially in the vertex shader, this tool might help you refresh your memory and will enable you to simulate your transforms in a visualized, debuggable environment rather than blindly coding in the vertex shader. Or, if you’re new to 3D graphics, this tool might help you gain a better visual understanding of 3D graphics theory.

The tool that I’ve made is nothing more than a few scripts and a prefab in a Unity scene. These white cube game objects represent vertices that are being passed into the vertex shader. This script assumes that these vertices don’t need any world transformation, but it would be easy to modify the script to add in other transforms at any point in the pipeline. The transform manager hosts the script which creates game objects that represent these vertices as they are transformed by the view, projection, and viewport transformations.

The Scale property adjusts the scale of the generated game objects. The Transform Camera provides the view and projection matrices. You can add any number of game objects to the vertices list to see how they will be transformed by each step of the pipeline. Finally, the View, Projection, and Viewport Transform boolean properties are used to toggle visibility of each transformation step.

You can see that, when run, new orange spheres are added to the scene view. These represent the vertices after they have been transformed by the view matrix of the camera. In Unity, the view matrix is exposed as the “worldToCameraMatrix” property of a camera and it does just that: the view matrix transforms vertices from world coordinates into camera coordinates, also known as eye or view coordinates. As you can see, these coordinates are relative to the eye of the camera. The vertices that are closer to the camera have a smaller z component and vice versa. But it’s important to note that what you are seeing is not the exact result of the 4 dimensional view transform; first, a homogeneous divide must be performed on the resulting homogeneous 4 dimensional vector to transform it to 3 dimensional space that we can see in the scene view. Or, in this case, we could simply discard the fourth “w” component because it is 1. For each of the transformed vertices, you can see the original 4D coordinates in the “Homogeneous Vertex” property and the 3D coordinates in the position property which is visualized in the scene view.

Next, I’m going to configure the transform manager to show the result of the view and projection transforms. This results in a 4 dimensional coordinate system referred to as “clip space”. Clip space is what most graphics pipelines expect you to return from your vertex shader. No surprise, clipping is performed in this coordinate system by comparing x, y, and z coordinates against w. Any x, y, or z component greater than w or less than -w is clipped. Note that DirectX is slightly different and clips the z axis when it is less than 0. The flashing vertices that you see represent those that are being clipped.

Next, we transform into 3D space known as the normalized device coordinate system by performing the homogeneous divide on the clip space vector. You can see that this coordinate system hosts your camera’s view frustum in the form of a canonical view volume between negative 1 and positive 1 on each axis. This volume is represented by the cube outline. All clipped vertices lie outside of this volume. We are now very close to what we see rendered.

Lastly, we perform the final viewport transform. This tool doesn’t take any x or y offset into account, but it does scale the normalized device coordinates to match the viewport width and height. The transformation into 2 dimensional space is as simple as throwing away the z component from the normalized device coordinates. You can see that this puts us in a coordinate space that matches that of our final viewport.

Hopefully this tool is useful. Please let me know if you have any questions by posting a comment on my blog — you can find the link in the description of this video if you need it. Thanks!”
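For reference, the clip test, homogeneous divide, and viewport scale that the video walks through can be sketched outside of Unity in a few lines of C++. This is my own standalone illustration, not part of the Unity package, and the viewport convention here (NDC −1…1 mapped to 0…width with no offset) is one common choice:

```cpp
#include <cassert>

struct Vec4 { float x, y, z, w; };
struct Vec3 { float x, y, z; };

// OpenGL clip test: a vertex is kept when -w <= x, y, z <= w.
// (Direct3D differs on the z axis, keeping 0 <= z <= w instead.)
bool InsideFrustum(const Vec4& v)
{
    return -v.w <= v.x && v.x <= v.w
        && -v.w <= v.y && v.y <= v.w
        && -v.w <= v.z && v.z <= v.w;
}

// Homogeneous divide: clip space -> normalized device coordinates.
Vec3 ToNDC(const Vec4& v)
{
    return { v.x / v.w, v.y / v.w, v.z / v.w };
}

// Viewport transform with no x/y offset: scale NDC to pixel coordinates.
Vec3 ToViewport(const Vec3& ndc, float width, float height)
{
    return { (ndc.x * 0.5f + 0.5f) * width,
             (ndc.y * 0.5f + 0.5f) * height,
             ndc.z }; // z is kept; a rasterizer would use it for depth.
}
```

For example, a clip-space vertex at the origin with w = 1 passes the clip test and lands in the center of an 800×600 viewport at (400, 300).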

Human Skills: Elements of Game & Play

May 28th, 2013

The Problem with Genres & Game Design

As someone who contributes to creating games, I have a difficult time understanding what other designers mean when they use a genre to describe a game system and its mechanics. For example, if someone says a game is a “First Person Shooter” I know that it is common for that genre to incorporate exploration, fast reflexes, precise hand-eye coordination, puzzle solving, tactics, etc. — But when talking game design, this is usually not the clear picture they intended: does their specific game involve any focus on puzzle solving? Does the player actually need to do any exploring?

This problem can cause an explosion of miscommunication among team members and result in an unfocused gameplay experience that ends up feeling like a mishmash of everyone’s ideas on what a game of this genre is supposed to be. Arguments can arise as people from different perspectives use a different foundation for discussion.

By looking at this problem, it’s obvious that vocabulary is at fault. Genres are designed to help the consumer by describing not only the raw game mechanics but also the theme, styling, and content of the media. This makes genres inappropriate and confusing when used to communicate the fine details of game design. This article aims to provide a more precise way of communicating the elements of play to avoid the issues that arise when describing a game based on genre.

A More Precise Communication Style

There is a field of thought that says that learning and exercising skills is instinctively fun for humans and animals. This is what drives us to play and what brings us enjoyment in play. With this in mind, I find it helpful to understand “fun” as the act of learning and exercising skills. Games can consist of combinations of different skills: just as chemical compounds are composed of chemical elements, a game system is composed of the skills which it develops and exercises.


Some skills that are commonly developed and exercised in games include the following:

  • Problem solving skills
  • Math skills
  • Statistics skills
  • Pattern recognition skills
  • Organization/categorization skills
  • Memory skills
  • Dexterity skills
  • Rhythm skills
  • Strategy & tactics skills
  • Navigation skills
  • Exploration skills
  • Visual comprehension skills
  • Social skills
  • Role playing skills

The vocabulary of skills doesn’t only provide a solid base of communication, but also forces you to think in terms of the most fine-grained elements that make up your gameplay experience. Knowing that a component of my game system is designed to exercise a player’s memory skills will help me design that experience to be more focused, coherent, and accessible to the player.

I may still use a genre, such as a First Person Shooter, to describe the presentation, pre-play expectations, high-level features, etc., but when communicating a game system design and fleshing out the details of game mechanics, human skills can be a much more effective way of communicating and thinking about your game.

Two Game Messaging Systems using Observer and Visitor Patterns

September 22nd, 2012

Maximum output for minimum input.

…That’s the idea behind Juicifying Your Game. But this can lead to some pretty messy code if you start injecting extra code at every event that happens in your game. This is where messaging/event systems come in handy to ensure that every component of the game is given an opportunity to provide their own juice.

This article is a quick overview of a couple of different ways to implement a messaging/event system in your game engine. Typically something similar to the straight observer pattern is used, but there might be situations where using a visitor pattern could lead to more readable and maintainable code.

Straight Observer Pattern

Typically, the observer pattern allows an observer to be registered for notifications when a state change occurs in the system. In this case, we will use it to send game messages to a list of all registered components by calling their handleMessage(GameMessage*) function. This will require all observers to implement a common interface which includes this function and register themselves with the Game Message Manager to receive messages.

[Diagram: observer message system]

void Component1::handleMessage(GameMessage* message)
{
    switch(message->getMessageType())
    {
    case MessageTypeTargetDestroyed:
        // Do stuff to this
        break;
    // etc.
    }
}

Observer-Visitor Hybrid

A strong benefit of the visitor pattern is full encapsulation of logic within the visitor. In this case the GameMessage will be the visitor which will visit all observers rather than relying on the observers implementing their own handling logic.

[Diagram: visitor message system]

void GameMessageManager::sendMessage(GameMessage* message)
{
    for (std::vector<Component*>::const_iterator it = components->begin(); it != components->end(); ++it)
    {
        message->visit(*it);
    }
}

class GameMessage
{
public:
    virtual void visit(Component* component) {}
    virtual void visit(Component1* component) {}
    virtual void visit(Component2* component) {}
    virtual void visit(Component3* component) {}
    virtual void visit(Component4* component) {}
};

class TargetDestroyedMessage : public GameMessage
{
public:
    virtual void visit(Component1* component)
    {
        // do stuff to the component
    }

    // etc.
};

This second pattern may promote flexible and reusable public interfaces for certain components because the visitor must perform all actions through the public interfaces of the components being visited. If you are concerned about your components becoming overloaded with message-specific functionality, this pattern may help reduce that coupling.
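One caveat: overload resolution happens at compile time, so calling message->visit(*it) on a plain Component* will always pick the visit(Component*) overload, never the subclass ones. The usual fix is double dispatch: each component subclass implements a virtual accept that bounces the call back to the message with its concrete type. A reduced sketch of that fix (the component and message names are mine, for illustration):

```cpp
class GameMessage; // forward declaration

class Component
{
public:
    virtual ~Component() {}
    // Each subclass overrides accept so the message sees its concrete type.
    virtual void accept(GameMessage* message) = 0;
};

class Component1;
class Component2;

class GameMessage
{
public:
    virtual ~GameMessage() {}
    virtual void visit(Component1* component) {}
    virtual void visit(Component2* component) {}
};

class Component1 : public Component
{
public:
    virtual void accept(GameMessage* message) { message->visit(this); }
    bool targetDestroyed = false;
};

class Component2 : public Component
{
public:
    virtual void accept(GameMessage* message) { message->visit(this); }
};

class TargetDestroyedMessage : public GameMessage
{
public:
    // Only Component1 reacts to this message type.
    virtual void visit(Component1* component) { component->targetDestroyed = true; }
};
```

With this in place, the manager’s send loop calls (*it)->accept(message) rather than message->visit(*it), and each message lands on the overload matching the component’s real type.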

Post-Grad Talk at Carleton University

October 17th, 2011

Last week I was given an opportunity to join my manager at Magmic in a panel for a fourth year class in IMD to discuss my experience after graduating and joining the game development industry. This was the same class that I developed Hideout! for and I was happy to pass on some knowledge that I’ve gained from my experience.

During this time, I highlighted a few important things that new grads should keep in mind when searching for a job (specifically in the multimedia industry) and entering the creative workforce. I also touched on the importance of knowing existing design patterns and conventions of the trade to succeed in your new position. The slides are very lean as I needed to keep my talk short, but you can download them here:

Download Slides (PPTX)