In the first part of this blog series, I set a goal of creating assets for a dungeon level in UE4. Since then, the direction has changed towards a more procedural approach. I ended up writing my own custom render pipeline for Unity, which is the topic this time.
While my humble implementation is not comparable to the Unreal rendering engine, I am switching over to Unity as the main development platform for my indie game.
AMBIENT OCCLUSION
The Unity version only had a very basic custom render pipeline, adequate for blockmesh levels but not much else. The first step was to add ambient occlusion, as that makes the shapes readable even without lights. Let's look at the final result first.
So, where to begin. I had never made my own SSAO (Screen Space Ambient Occlusion) before, so I started prototyping in Houdini. For 2D image processing, Houdini has the COP (Composite Operator) network, which is a bit similar to Nuke, but not as sophisticated. When working on geometry in a SOP (Surface Operator) network, the go-to node is the Attribute Wrangle, which allows you to write code in Houdini's VEX (Vector Expression) language. I am already quite familiar with VEX, so I wanted to use that for this prototype as well. There is no Wrangle node in the COP network, but there is the VOP (Vector Operator) COP2 Filter node, which allows node based coding of a COP filter. (The VEX Filter node just above only allows a single line of VEX code, so that is not usable.)
Inside the VOP COP2 Filter, you can add a Snippet node, and inside that node, you can write VEX code. Rather than using, for example, @P to access the position, you just write the input connection name in upper case, without the @ prefix. The example below multiplies the R, G, and B channels by 2.
The cinput VEX command (https://www.sidefx.com/docs/houdini/vex/functions/cinput.html) is used to sample a specific pixel in a specific plane. Using cinput, I could read pixels from the depth pass of my test render of the Houdini shader ball. By comparing neighboring depth values, I could calculate the normal, and from the normal, shoot out rays at random angles to see if the pixel I was testing was occluded by nearby pixels. After some experimentation, I ended up with this result.
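To make the idea more concrete, here is a rough sketch of a depth-only occlusion estimate, expressed in C# rather than VEX, and simplified: it treats the depth buffer as a heightfield and skips the normal reconstruction described above. The sample counts, radius, and threshold are made-up illustration values, not the ones from my prototype.

```csharp
using System;
using UnityEngine;

public static class DepthOnlyOcclusion
{
    // Crude occlusion estimate for one pixel: look in a few random directions
    // and count how many nearby pixels are clearly closer to the camera than
    // the centre pixel. 'depth' holds linear depth values per pixel.
    public static float Occlusion(float[,] depth, int x, int y,
                                  int samples = 16, int radius = 8, float threshold = 0.05f)
    {
        int w = depth.GetLength(0), h = depth.GetLength(1);
        float center = depth[x, y];

        // Deterministic per-pixel seed, purely illustrative.
        var rng = new System.Random(x * 73856093 ^ y * 19349663);
        int occluded = 0;

        for (int i = 0; i < samples; i++)
        {
            double angle = rng.NextDouble() * 2.0 * Math.PI;
            int r = 1 + rng.Next(radius);
            int sx = Mathf.Clamp(x + (int)(Math.Cos(angle) * r), 0, w - 1);
            int sy = Mathf.Clamp(y + (int)(Math.Sin(angle) * r), 0, h - 1);

            // A neighbour that is clearly closer to the camera blocks this pixel.
            if (center - depth[sx, sy] > threshold)
                occluded++;
        }

        // 1 = fully open, 0 = fully occluded.
        return 1f - (float)occluded / samples;
    }
}
```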
Based on the Houdini prototype, I implemented this as a post-processing effect in my render pipeline. This greatly improved the readability of my game levels. However, as this was based only on the depth channel, the strength of the occlusion varied with the viewing angle due to perspective distortion. For a correct result, I would need to use the actual world positions and world normals for the ray casting.
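For reference, wiring such a pass into a CommandBuffer-based custom pipeline can look roughly like the sketch below. The shader property name and the two materials are placeholders, not the actual ones from my pipeline; the real occlusion work happens inside the SSAO shader that the first material wraps.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class AmbientOcclusionPass
{
    // Hypothetical shader property for the temporary AO target.
    static readonly int aoTargetId = Shader.PropertyToID("_AmbientOcclusionRT");

    // Records an SSAO pass into the given command buffer: render the occlusion
    // into a temporary single-channel target, then composite it over the
    // camera target. 'aoMaterial' wraps the SSAO shader (which samples the
    // depth or G-buffer textures), 'compositeMaterial' darkens the lit image.
    public static void Record(CommandBuffer cmd, Camera camera,
                              Material aoMaterial, Material compositeMaterial)
    {
        cmd.GetTemporaryRT(aoTargetId, camera.pixelWidth, camera.pixelHeight, 0,
                           FilterMode.Bilinear, RenderTextureFormat.R8);

        // Full-screen pass that evaluates the occlusion per pixel.
        cmd.Blit(BuiltinRenderTextureType.CameraTarget, aoTargetId, aoMaterial);

        // Multiply the occlusion term over the camera target.
        cmd.Blit(aoTargetId, BuiltinRenderTextureType.CameraTarget, compositeMaterial);

        cmd.ReleaseTemporaryRT(aoTargetId);
    }
}
```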
You may say: why not just render out the world position and world normal passes? So, that is what I did next. Nuke compositors might be familiar with relighting, which uses those passes to relight an image after it is rendered. In game engines, this is known as deferred rendering, as opposed to forward rendering. In other words, you render out all the passes you need (called the G-buffer), and perform the lighting later. This was a good time for me to change my render pipeline from a forward renderer to a deferred renderer.
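Setting up the G-buffer in a CommandBuffer-based pipeline can look roughly like this. The layout, formats, and property names below are placeholders for illustration; my actual G-buffer layout may differ.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public static class GBufferSetup
{
    // Hypothetical G-buffer layout: base color, world normal, world position.
    static readonly int albedoId   = Shader.PropertyToID("_GBufferAlbedo");
    static readonly int normalId   = Shader.PropertyToID("_GBufferNormal");
    static readonly int positionId = Shader.PropertyToID("_GBufferPosition");
    static readonly int depthId    = Shader.PropertyToID("_GBufferDepth");

    // Allocates the G-buffer targets and binds them as multiple render targets,
    // so the geometry pass can write all passes in one go. The lighting (and AO)
    // passes then read these textures later.
    public static void Record(CommandBuffer cmd, Camera camera)
    {
        int w = camera.pixelWidth, h = camera.pixelHeight;

        cmd.GetTemporaryRT(albedoId,   w, h,  0, FilterMode.Point, RenderTextureFormat.ARGB32);
        cmd.GetTemporaryRT(normalId,   w, h,  0, FilterMode.Point, RenderTextureFormat.ARGBHalf);
        cmd.GetTemporaryRT(positionId, w, h,  0, FilterMode.Point, RenderTextureFormat.ARGBFloat);
        cmd.GetTemporaryRT(depthId,    w, h, 24, FilterMode.Point, RenderTextureFormat.Depth);

        var colorTargets = new RenderTargetIdentifier[] { albedoId, normalId, positionId };

        // Bind all color targets at once (MRT), sharing one depth buffer.
        cmd.SetRenderTarget(colorTargets, depthId);
        cmd.ClearRenderTarget(true, true, Color.clear);
    }
}
```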
The image of the dungeon AO pass further above is the latest version, which uses these world positions and normals. As an additional benefit, it also takes the bump mapping into account, since the bump mapping affects the normals, which in turn affect the AO.
ANTI ALIASING
At this stage, I did not use any kind of texturing, except for the blockmesh grid pattern. The white brick wall, with sharp beveled edges, was getting quite hard on the eyes, and the aliasing artifacts were quite severe when playing at low resolution in the Unity editor. To resolve this, I implemented the FXAA (Fast Approximate Anti Aliasing) algorithm as a post-processing effect. For those interested, this is a quite common algorithm, and there are many resources available explaining how it works, so I will not go further into that here.
LIGHTS AND SHADOWS
The dungeon is an entirely enclosed indoor space, with no sun or moonlight leaking in. The only sources of light are the torches on the walls. These are placed when designing the level in the Tiled editor, so a lighting artist would not be able to manually adjust these in the final level.
My previous post discussed the issues when rendering a pyro fire simulation in Houdini's Mantra. The light from the volumetric emissive was not enough to fill up the space, so I solved that by spawning supporting light particles in the fire volume. Of course, that would not be possible in a realtime game engine. The standard approach is to use baked lighting for the ambient contribution of the torches, and realtime point lights for the shadow casting and dynamic flickering direct component. As the dungeon is dynamically built from an instancing point cloud (as explained in an earlier post), and I want to be able to iterate quickly when designing the levels, I am not using baked lighting at this stage of game development.
I began implementing point lights. It seemed easy enough, as each is just a single point in space. The problem is the shadows. Point lights cast light in all directions. What prevents the light from a torch on a wall from entering the room on the other side of that wall is the shadowing of the wall itself. Without shadowing, nothing stops the light from passing through walls and other objects.
Before proceeding, it is necessary to explain how realtime shadows work. The principle is simple: if the light can see me, I can see the light. If I cannot see the light, I am in shadow. So, each light source is treated like a camera, and we render out what it can see into a so-called shadow map. This is just a depth render, as we are not interested in the color of the objects, only in whether something is visible or not. Here is an example of a torch on the wall "seeing" a doorway and portcullis.
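Expressed as code, the test looks something like the sketch below: a conceptual C# version of what the shader does per pixel, ignoring reversed-Z and other platform depth conventions. The shadow map sampler is passed in as a delegate purely to keep the sketch self-contained.

```csharp
using System;
using UnityEngine;

public static class ShadowTest
{
    // Conceptual CPU-side version of the per-pixel shadow test.
    // 'lightViewProjection' is the matrix the shadow map was rendered with, and
    // 'sampleShadowMap' returns the depth stored in the map at a given [0,1] UV.
    public static bool InShadow(Vector3 worldPosition,
                                Matrix4x4 lightViewProjection,
                                Func<Vector2, float> sampleShadowMap,
                                float bias = 0.001f)
    {
        // Project the point into the light's clip space.
        Vector4 clip = lightViewProjection *
                       new Vector4(worldPosition.x, worldPosition.y, worldPosition.z, 1f);
        if (clip.w <= 0f)
            return true; // Behind the light "camera": it cannot see this point.

        // Perspective divide, then remap from [-1, 1] to [0, 1] texture coordinates.
        Vector3 ndc = new Vector3(clip.x, clip.y, clip.z) / clip.w;
        Vector2 uv = new Vector2(ndc.x * 0.5f + 0.5f, ndc.y * 0.5f + 0.5f);
        float depthFromLight = ndc.z * 0.5f + 0.5f;

        // If something closer to the light was rendered into the shadow map,
        // the light cannot see this point, so the point is in shadow.
        return depthFromLight > sampleShadowMap(uv) + bias;
    }
}
```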
Wait, should the torch not also see the floor, the wall, and the torch holder? In this case, there is nothing below the floor to see. At this angle, there is nothing behind the walls to see either. And the torch holder would just be in the way, casting a big distracting shadow. Only the doorway with the portcullis is important, so it is defined as a shadow caster, and the others as non-shadow casters. If they don't matter, there is no reason to spend GPU time on them.
But I just said that a point light casts light in all directions, and the shadow map above only shows one "camera angle". That is true: the example is not from a point light, it is from a spot light. A point light would need to render all angles around it, like an HDRI image. Normally, one renders six angles to cover all sides of a cube, like a cube map. That is six shadow maps for each point light, compared to the single shadow map for the spot light example. Point lights are expensive. Some game studios limit point lights so that they can only cast shadows in one single direction. In my case, I decided to only implement spot lights, and see if I can get away with not using point lights at all.
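To illustrate the cost difference, here is a rough sketch of building the six light-space view matrices a point light would need; a spot light needs just one. The exact cube-face orientation convention (up vectors, Z flip) depends on the pipeline, so treat those details as placeholders.

```csharp
using UnityEngine;

public static class PointLightShadows
{
    // A point light needs one shadow render per cube face: six view directions.
    static readonly Vector3[] faceDirections =
    {
        Vector3.right, Vector3.left,
        Vector3.up,    Vector3.down,
        Vector3.forward, Vector3.back,
    };

    // Returns the six world-to-light view matrices, one per cube face.
    public static Matrix4x4[] FaceViewMatrices(Vector3 lightPosition)
    {
        var views = new Matrix4x4[6];
        for (int i = 0; i < 6; i++)
        {
            Vector3 dir = faceDirections[i];
            // Pick an up vector that is not parallel to the view direction.
            Vector3 up = Mathf.Abs(dir.y) > 0.9f ? Vector3.forward : Vector3.up;
            Quaternion rotation = Quaternion.LookRotation(dir, up);

            // World-to-light "camera" matrix; Unity view space looks down -Z,
            // hence the Z flip at the end.
            Matrix4x4 worldToLight = Matrix4x4.TRS(lightPosition, rotation, Vector3.one).inverse;
            views[i] = Matrix4x4.Scale(new Vector3(1f, 1f, -1f)) * worldToLight;
        }
        return views;
    }
}
```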
REFLECTIONS
In PBR, metallic surfaces do not look good without reflections of the environment. If there is nothing to reflect, they turn out pitch black, apart from the reflections of the torch light sources. So, I needed reflection maps. As I explained in previous posts, the game level and assets are generated in Houdini, so why not render out my HDRI reflection maps directly in Houdini? I even had the torches from my previous pyro simulation to use as light sources. Houdini's Mantra supports rendering HDRI images out of the box. For the camera, the projection has to be set to Polar (panoramic). The resolution is normally twice as wide as it is high. There are some caveats. The Radiance HDR file format will not work when rendering in Mantra, so it has to be an EXR. Also, the result will be rotated 90 degrees. To compensate for this, set Screen Window X to -0.25.
Just using the torch light sources resulted in very monochromatic orange light, so I had to experiment with adding additional cool light sources. In the game, the torch lights have a color temperature of 1900K, and the ambient light a temperature of 8000K. The metal sphere in the first image is used to evaluate the HDRI reflections.
MATERIALS
I implemented PBR shading for my render pipeline. The first version of the pipeline was non-PBR, with a Lambert shader modified to wrap the light around the object, so the unlit side still had shape. That said, the Lambert cosine law is also a law of physics, so non-PBR is not really the same as non-physical; PBR is a bit of a buzzword.
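For illustration, here is one common wrap-lighting formulation, expressed as a small C# function; the exact wrap amount is a tuning parameter and not something from my actual shader.

```csharp
using UnityEngine;

public static class WrapLighting
{
    // One common "wrapped Lambert" formulation: instead of clamping
    // dot(N, L) at zero, the falloff is shifted so light wraps past the
    // terminator and the unlit side keeps some shape.
    // wrap = 0 gives standard Lambert; wrap = 1 lights the whole sphere.
    public static float WrappedLambert(Vector3 normal, Vector3 lightDir, float wrap)
    {
        float ndotl = Vector3.Dot(normal.normalized, lightDir.normalized);
        return Mathf.Clamp01((ndotl + wrap) / (1f + wrap));
    }
}
```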
The materials define the base color, metallic, and roughness of the surface, just as we are used to. In addition, they can also affect the normals. As users of Substance Designer know, it is easier to manipulate a height map than a normal map. So, rather than using normal mapping, I implemented bump mapping (using procedural height maps). I am using these with tri-planar projection. A bump map has u and v coordinates, which are projected along each of the three axes, from either the front or the back side. That totals 12 different combinations (2 * 3 * 2). To make sure the tri-planar bump mapping worked as intended, I tested each combination separately using a test pattern.
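Here is a rough sketch of the projection side of tri-planar mapping, expressed in C# for readability; the actual work happens in the shader, and the mirroring convention shown is just one possible choice, not necessarily the one I use.

```csharp
using UnityEngine;

public static class TriplanarMapping
{
    // Blend weights from the world normal: the more a surface faces an axis,
    // the more that axis' planar projection contributes.
    public static Vector3 BlendWeights(Vector3 worldNormal)
    {
        Vector3 w = new Vector3(Mathf.Abs(worldNormal.x),
                                Mathf.Abs(worldNormal.y),
                                Mathf.Abs(worldNormal.z));
        return w / (w.x + w.y + w.z);
    }

    // Planar UVs for the X, Y, and Z projections. Each projection has a u and
    // a v axis, and each can face front or back (the sign of the normal
    // component), which is where the 2 * 3 * 2 combinations come from.
    public static void PlanarUVs(Vector3 worldPosition, Vector3 worldNormal,
                                 out Vector2 uvX, out Vector2 uvY, out Vector2 uvZ)
    {
        // Flip u on back-facing projections so the pattern is not mirrored.
        uvX = new Vector2(worldNormal.x >= 0f ? worldPosition.z : -worldPosition.z, worldPosition.y);
        uvY = new Vector2(worldPosition.x, worldNormal.y >= 0f ? worldPosition.z : -worldPosition.z);
        uvZ = new Vector2(worldNormal.z >= 0f ? -worldPosition.x : worldPosition.x, worldPosition.y);
    }
}
```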
As the level as well as the modular assets are dynamically generated in Houdini, it would not be practical to use Substance Painter to texture the assets. I would have to repaint them each time they are modified and regenerated. Substance Designer and Painter do have procedural features, and automating the process using Python might have been one solution.
If I can procedurally generate materials in Substance Designer, why not generate them at runtime in the game instead? That would allow for the quickest iteration possible. I'll refer to this approach as procedural shading. For this, we need a little extra information in addition to the positions and normals of the geometry. Fortunately, the module geometry is generated in Houdini, so we can inject any additional information we want there. The image below is an example of varying the color of the floor tiles based on ids and uvws assigned to the different parts of the floor module.
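As a sketch of the idea: an id can drive deterministic per-tile variation. In the game this runs in the shader, and the hash and variation ranges below are made up for illustration.

```csharp
using UnityEngine;

public static class TileVariation
{
    // Deterministic pseudo-random value in [0, 1) from an integer id.
    // The constants are the usual "GLSL one-liner" hash, purely illustrative.
    static float Hash(int id)
    {
        float v = Mathf.Sin(id * 12.9898f) * 43758.5453f;
        return v - Mathf.Floor(v);
    }

    // Varies hue and brightness of a base color per tile id, so neighbouring
    // floor tiles do not look identical.
    public static Color TileColor(Color baseColor, int tileId)
    {
        Color.RGBToHSV(baseColor, out float h, out float s, out float v);
        h = Mathf.Repeat(h + (Hash(tileId) - 0.5f) * 0.02f, 1f);      // slight hue shift
        v = Mathf.Clamp01(v * (0.9f + 0.2f * Hash(tileId * 7919)));   // slight brightness shift
        return Color.HSVToRGB(h, s, v);
    }
}
```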
Each module is exported as an FBX file from Houdini. Where can we store these extra attributes, so that they can be used by the game engine? There is not much information available in the documentation. However, SideFX has open sourced the FBX exporter, so looking at that source code is the way to find out.
https://github.com/sideeffects/HoudiniFBX/tree/Houdini17.0/src/ROP
For those of you who don't like reading source code, here is a brief summary. Houdini exports uv sets named uv, uv2, etc., but only the u and v components; the w component is not exported to FBX. Normals, tangents, and bitangents are exported as four-component xyzw, even though Houdini only has xyz in its own attributes. Custom attributes are also exported, but I could not find a way to import these into Unity, so I packed the information into the uv channels.
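On the Unity side, those extra UV channels can be read back with Mesh.GetUVs. Here is a small sketch that assumes uv2 carries an id in u and a mask in v; the actual packing convention is up to you.

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class PackedAttributeReader
{
    // Reads a secondary UV set that carries packed per-vertex data.
    // Which channel holds which attribute depends on the export convention;
    // here uv2 (channel index 1) is assumed to hold an id in u and a mask in v.
    public static void ReadPackedData(Mesh mesh, List<float> ids, List<float> masks)
    {
        var uv2 = new List<Vector2>();
        mesh.GetUVs(1, uv2); // channel 1 corresponds to Houdini's uv2 set

        ids.Clear();
        masks.Clear();
        foreach (Vector2 uv in uv2)
        {
            ids.Add(uv.x);
            masks.Add(uv.y);
        }
    }
}
```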
In addition, I needed per-module information, such as whether the module is a shadow caster, as well as gameplay-related information on how the module can be interacted with (openable, lockable, etc.).
These have to be stored as object-level parameters with the fbx_ prefix, and the Houdini parameter name, minus the prefix, becomes the parameter name in the FBX file. For example, the Is Openable parameter is fbx_is_openable in Houdini, and its name in the FBX file is is_openable. Avoid spaces in these names, as they are not handled well when opening the file in applications like Maya.
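In Unity, such FBX user properties can be picked up at import time with an AssetPostprocessor. Below is a sketch of that; the ModuleInfo component and the way the value type is handled are assumptions for illustration, not my actual import code.

```csharp
using UnityEditor;
using UnityEngine;

// Hypothetical component, in its own file, to hold the imported data.
public class ModuleInfo : MonoBehaviour
{
    public bool isOpenable;
}

// Editor-only import hook: picks up FBX user properties (exported from
// Houdini parameters prefixed with fbx_) and stores them on the imported
// game objects.
public class ModuleUserPropertyImporter : AssetPostprocessor
{
    void OnPostprocessGameObjectWithUserProperties(GameObject go,
                                                   string[] propNames,
                                                   object[] values)
    {
        for (int i = 0; i < propNames.Length; i++)
        {
            // fbx_is_openable in Houdini arrives here as "is_openable".
            if (propNames[i] == "is_openable")
            {
                // The exported value may come through as a bool or an int.
                bool isOpenable = values[i] is bool b ? b : values[i] is int n && n != 0;
                go.AddComponent<ModuleInfo>().isOpenable = isOpenable;
            }
        }
    }
}
```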