One Soul (Part 5)

Work In Progress / 08 February 2023

My last two posts were about the flame pyro simulation in Houdini and the custom render pipeline I made for Unity. The flame was intended for a cinematic, but it felt like a pity not to have it in the game. So, bringing the Houdini pyro simulation into Unity will be the topic this time.

UNITY PARTICLE SYSTEMS

Unity has an old legacy particle system, as well as the newer VFX Graph. I started by trying out the legacy system. It is similar to most traditional particle systems, like Cascade in Unreal Engine, and as such is battle-proven technology used in many games. Though, I didn't want to start from scratch on a flame particle system when I had already made one in Houdini. The VFX Graph is more modern, but is tightly integrated with the URP and HDRP render pipelines. So, I set out to create my own solution to integrate Houdini simulations with my custom render pipeline.

THE HOUDINI PYRO SIMULATION

While I was talking about particle systems above, the Houdini pyro simulation is not a particle system. It is a volumetric simulation, involving density, heat and temperature fields, represented as voxel data. The image below shows a cross section of the density, heat and temperature fields, from left to right.

The density field is used when rendering smoke, while the heat and temperature are used for the emissive flame. I will focus on the flame for now. It might seem a bit confusing to have both heat and temperature, and not just temperature, but the correct thing is to have both. I'll clarify the difference below.

IMPORTING PARTICLES IN UNITY

Ok, so what about the particles? As I discussed in earlier posts, I spawned particles in the pyro volume to use as a complementary light source. That point cloud will serve well for the real-time version in the game.

The first thing I did was to export one frame of this point cloud as both FBX and OBJ. Though, Unity's mesh importer cannot import pure point clouds in these formats. Unity expects a mesh of connected points. So, let's connect them. This can be achieved by squashing the points onto a flat plane, performing Delaunay triangulation, and pulling the points back into 3D space.

Now we have a properly connected mesh that Unity is happy to import. I stored the heat and temperature fields in the first uv set. My last post goes into more detail about exporting geometry from Houdini.

UNITY INSTANCING

To make a particle system, I need to instance a particle onto each point. This is done using GPU instancing. The particle itself can be any kind of geometry, but I am using simple quads, rendered as billboards. Each particle needs to know its position and other attributes, like heat and temperature. These attributes can be passed to the particle shader as compute buffers.
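
As a rough illustration, here is a minimal C# sketch of that setup, assuming a material whose shader reads a structured buffer named _Particles. The class, field and buffer names are illustrative, not the actual project code.

using UnityEngine;

// Minimal sketch of instancing one quad per particle with a compute buffer.
// The material's shader is assumed to read a StructuredBuffer named _Particles.
public class FlameParticlesExample : MonoBehaviour
{
    public Mesh quad;                 // the billboard geometry
    public Material particleMaterial; // emissive particle shader
    public Vector3[] positions;       // imported point positions
    public Vector2[] heatTemp;        // heat (x) and temperature (y) per point

    struct Particle { public Vector3 position; public float heat; public float temperature; }

    ComputeBuffer buffer;

    void Start()
    {
        var data = new Particle[positions.Length];
        for (int i = 0; i < positions.Length; i++)
            data[i] = new Particle { position = positions[i], heat = heatTemp[i].x, temperature = heatTemp[i].y };

        buffer = new ComputeBuffer(data.Length, sizeof(float) * 5); // 3 + 1 + 1 floats per particle
        buffer.SetData(data);
        particleMaterial.SetBuffer("_Particles", buffer);
    }

    void Update()
    {
        // One instance per particle; the shader looks up its attributes by instance id.
        // The bounds here are a rough placeholder.
        Graphics.DrawMeshInstancedProcedural(quad, 0, particleMaterial,
            new Bounds(transform.position, Vector3.one * 5f), buffer.count);
    }

    void OnDestroy()
    {
        buffer?.Release();
    }
}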

PARTICLE SHADER

This is the fun part. The particle shader defines the look of the particle system. As we are working on the flame, it is a fully emissive shader. Each particle will add a small amount of light to the scene, and is thus additive.

As a side note, additive lighting assumes that we are working in a linear color space. Adding 50% gray in sRGB gamma space would be adding 0.5 to 0.5, resulting in 1.0, that is, pure white. In linear color space, mid grey has a value of 18%, and adding 0.18 to 0.18 gives 0.36, which is physically correct. My render pipeline works in linear color space, and also has high dynamic range (HDR), allowing values to exceed 1.

We have the heat and temperature, and need to find the RGB value to add to the scene. What determines the color of a fire particle? We all know that fire has a color temperature of 1900K (or say 1700K for a deeper red). Ok, so the temperature determines the color... Then we can map the particle temperature using a color ramp, similar to the color temperature charts we have all seen.

Let's look at Houdini's flame preset for the Mantra fire shader.

It uses a ramp to map the temperature. This is only one part of the full ramp above it, but it follows the same idea of mapping the temperature to the color temperature. All artists are familiar with this. I started with a similar approach, but the result did not look convincing.

Back to the discussion about the heat field. Isn't heat the same thing as temperature? No, the heat is what is referred to as intensity in the above screenshot. The heat determines the intensity of the emissive, and the temperature the color. At least that is how the Houdini default pyro shader is set up. Then I could look up the temperature from a ramp, and multiply by the intensity. Still, this was not looking good.

So I did a bit of research on blackbody radiation. As it turns out, the Kelvin color temperature we have all been using is just a simplification. The color of a blackbody depends on both luminance (intensity/heat) and temperature. The real color temperature chart looks more like the following.

Using a two dimensional ramp to get a more accurate blackbody color greatly improved the look of the particle system.
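
In the game this lookup happens in the particle shader, but as a rough C# illustration, the sketch below samples a baked 2D ramp texture with temperature along one axis and heat along the other. The texture, axis ranges and orientation are assumptions.

using UnityEngine;

// Rough illustration of a two dimensional blackbody lookup: temperature picks the
// column, heat/intensity picks the row.
public static class BlackbodyLookupExample
{
    public static Color Emission(Texture2D ramp2D, float temperatureKelvin, float heat,
                                 float minKelvin = 1000f, float maxKelvin = 8000f)
    {
        float u = Mathf.InverseLerp(minKelvin, maxKelvin, temperatureKelvin); // temperature axis
        float v = Mathf.Clamp01(heat);                                        // intensity axis
        return ramp2D.GetPixelBilinear(u, v); // requires a CPU-readable texture
    }
}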

MANY TORCHES

Now I had one torch flame, frozen in time, that looked good enough for me. The next step was to use this for all torches in the game. As you would only be able to see a few torches at a time, I added an occlusion test to only draw visible particle systems. Each particle system has a bounding box used to determine visibility.
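
As a simple illustration of that bounding box test, here is a hedged sketch that culls a particle system against the camera frustum. The actual visibility test in the game may be more involved.

using UnityEngine;

// Simple visibility sketch: skip a particle system whose bounding box falls
// outside the camera frustum.
public static class TorchCullingExample
{
    public static bool IsVisible(Camera camera, Bounds bounds)
    {
        Plane[] planes = GeometryUtility.CalculateFrustumPlanes(camera);
        return GeometryUtility.TestPlanesAABB(planes, bounds);
    }
}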


ANIMATED FLAME

The flame is still frozen in time. We exported one frame from Houdini. How do we export the whole animation? That is tens of thousands of particles for every single frame. The mesh approach above was good enough to get this far. To pass a massive amount of data into a shader, one approach is to pack it into a texture. Each pixel stores the attributes of one particle. 100 frames with 20,000 particles would require two million pixels. That is the same as a 2,000 x 1,000 image.
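
The indexing is then straightforward: a particle's pixel follows from its frame and index. A small sketch, assuming a simple row-major layout (the real packing is done in Houdini and may differ):

using UnityEngine;

// Sketch of mapping (frame, particle index) to a pixel in the packed animation
// texture. For 100 frames of 20,000 particles and a 2,000 pixel wide texture,
// this fills exactly 1,000 rows.
public static class ParticleTextureExample
{
    public static Vector2Int PixelFor(int frame, int particleIndex,
                                      int particlesPerFrame, int textureWidth)
    {
        int pixel = frame * particlesPerFrame + particleIndex; // linear index over all slices
        return new Vector2Int(pixel % textureWidth, pixel / textureWidth);
    }
}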

In my previous post, I introduced how to write VEX in a COP network. With VEX, we have full access to all geometry data. That is perfect for packing the particles into textures. Each frame is first packed into a multi-channel EXR, one slice of the total animation. All slices are then stacked into a single image. Unity will not read multi-channel EXRs, so the channels are repacked and stored as separate images.

COP networks have the bad habit of not updating properly, which caused problems when trying to make all this work in Unity. I needed a way to check that the exported images contained the data they were supposed to. Luckily, Nuke not only supports multi-channel EXRs, but can also display the position pass as a point cloud. This is done using the PositionToPoints node (Gizmo), as shown below.


One Soul (Part 4)

Work In Progress / 03 February 2023

In the first part of this blog series, I set up a goal to create assets for a dungeon level in UE4. Since then, the direction has changed towards a more procedural approach. I ended up writing my own custom render pipeline for Unity, which will be the topic this time.

While my humble implementation is not comparable to the Unreal rendering engine, I am switching over to Unity as the main development platform for my indie game.

AMBIENT OCCLUSION

The Unity version only had a very basic custom render pipeline, adequate for blockmesh levels, but not much else. The first step was to add ambient occlusion, as that makes the shapes readable even without lights. Let's look at the final result first.

So, where to begin. I had never made my own SSAO (Screen Space Ambient Occlusion) before, so I started prototyping in Houdini. For 2D image processing, Houdini has the COP (Composite Operator) network, which is a bit similar to Nuke, but not as sophisticated. When working on geometry in a SOP (Surface Operator) network, the go-to node is the Attribute Wrangle, which allows you to write code in Houdini's VEX (Vector Expression) language. I am already quite familiar with VEX, so I wanted to use that for this prototype as well. There is no Wrangle node in the COP network, but there is the VOP (Vector Operator) COP2 Filter node, which allows node based coding of a COP filter. (The VEX Filter node just above only allows a single line of VEX code, so that is not usable.)

Inside the VOP COP2 Filter, you can add a Snippet node, and inside that node, you can write VEX code. Rather than using, for example, @P to access the position, you just write the input connection name in upper case, without the @ prefix. The example below multiplies the R, G, and B channels by 2.

The cinput VEX command (https://www.sidefx.com/docs/houdini/vex/functions/cinput.html) is used to sample a specific pixel in a specific plane. Using cinput, I could read pixels from the depth pass of my test render of the Houdini shader ball. By comparing neighboring depth values, I could calculate the normal, and from the normal shoot out rays at random angles to see if the pixel I was testing was occluded by nearby pixels. After some experimentation, I ended up with this result.
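
The normal reconstruction itself boils down to a cross product of the differences towards the right and up neighbours. A minimal C# sketch of the idea, with the depth-to-position reconstruction omitted and the winding/handedness left as an assumption:

using UnityEngine;

// Sketch of the normal reconstruction used in the SSAO prototype: take the
// positions reconstructed from a pixel's depth and its right/up neighbours and
// cross the two difference vectors.
public static class SsaoNormalExample
{
    public static Vector3 NormalFromNeighbours(Vector3 p, Vector3 pRight, Vector3 pUp)
    {
        return Vector3.Cross(pRight - p, pUp - p).normalized;
    }
}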

Based on the Houdini prototype, I implemented this as a post processing effect in my render pipeline. This greatly improved the readability of my game levels. Though, as this was based only on the depth channel, the strength of the occlusion varied depending on viewing angle due to perspective distortion. For a correct result, I would need to use the actual world position and world normals for ray casting.

You may say: why not just render out the world position and world normal passes? So, that is what I did next. Nuke compositors might be familiar with relighting, which uses those passes to relight an image after it is rendered. In game engines, this is known as deferred rendering, as opposed to forward rendering. In other words, you render out all the passes you need (called the G-buffer), and perform the lighting later. This was good timing for me to change my render pipeline from a forward renderer to a deferred renderer.
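
For reference, binding a G-buffer in a custom Unity pipeline mostly means setting multiple render targets before the geometry pass. A rough sketch, assuming the render textures are already allocated; the attachment layout is illustrative, not my pipeline's actual one.

using UnityEngine;
using UnityEngine.Rendering;

// Rough sketch of binding a G-buffer: the geometry pass writes albedo, world
// normal and world position in one go (MRT), and the lighting/AO passes read
// them back later.
public static class GBufferExample
{
    public static void Bind(CommandBuffer cmd,
                            RenderTargetIdentifier albedo,
                            RenderTargetIdentifier worldNormal,
                            RenderTargetIdentifier worldPosition,
                            RenderTargetIdentifier depth)
    {
        var colors = new RenderTargetIdentifier[] { albedo, worldNormal, worldPosition };
        cmd.SetRenderTarget(colors, depth);              // multiple render targets
        cmd.ClearRenderTarget(true, true, Color.clear);  // clear depth and color
    }
}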

The image of the dungeon AO pass further above is the latest version, which uses these world positions and normals. As an additional benefit, it is also taking the bump mapping into account, as the bump mapping affects the normals, which affect the AO.

ANTI ALIASING

At this stage, I did not use any kind of texturing, except for the blockmesh grid pattern. The white brick wall, with sharp beveled edges, was getting quite hard on the eyes, and the aliasing artifacts were quite severe when playing at low resolution in the Unity editor. To resolve this, I implemented the FXAA (Fast Approximate Anti-Aliasing) algorithm as a post processing effect. For those interested, this is a quite common algorithm, and there are many resources available explaining how it works, so I will not go further into that here.

LIGHTS AND SHADOWS

The dungeon is an entirely enclosed indoor space, with no sun or moonlight leaking in. The only sources of light are the torches on the walls. These are placed when designing the level in the Tiled editor, so a lighting artist would not be able to manually adjust these in the final level.

My previous post discussed the issues when rendering a pyro fire simulation in Houdini's Mantra. The light from the volumetric emissive was not enough to fill up the space, so I solved that by spawning supporting light particles in the fire volume. Of course, that would not be possible in a realtime game engine. The standard approach is to use baked lighting for the ambient contribution of the torches, and realtime point lights for the shadow casting and dynamic flickering direct component. As the dungeon is dynamically built from an instancing point cloud (as explained in an earlier post), and I want to be able to iterate quickly when designing the levels, I am not using baked lighting at this stage of game development.

I began implementing point lights. It seemed easy enough, as they are just a single point in space. The problem is with shadows. Point lights cast light in all directions. What prevents the light from a torch on a wall from entering the room on the other side of the wall is the shadowing of that wall. Without shadowing, nothing stops the light from passing through walls and other objects.

Before proceeding, it is necessary to explain how realtime shadows work. The principle is simple: if the light can see me, I can see the light. If I cannot see the light, I am in shadow. So, each light source is treated like a camera, and we render out what it can see into a so-called shadow map. This is just a depth render, as we are not interested in the color of the object, only whether we can see something or not. Here is an example of a torch on the wall "seeing" a doorway and portcullis.

Wait, should the torch not also see the floor, the wall, and the torch holder? In this case, there is nothing below the floor to see. At this angle, there is nothing behind the walls to see either. And the torch holder would just be in the way, casting a big distracting shadow. Only the doorway with the portcullis is important, so it is defined as a shadow caster, and the others as non-shadow casters. If they don't matter, there is no reason to spend GPU time on them.

But I just said that a point light casts light in all directions, and the shadow map above is only showing one "camera angle". That is true; the example is not from a point light, it is from a spot light. A point light would need to render all angles around it, like an HDRI image. Normally, one would render six angles to cover all sides of a cube, like a cube map. That is six shadow maps for each point light, compared to the single shadow map for the spot light example. Point lights are expensive. Some game studios put a limit on point lights, and say that they can only cast shadows in one single direction. In my case, I decided to only implement spot lights, and see if I can get away with not using point lights at all.
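
Treating a spot light as a camera mostly means building a view-projection matrix from its transform, cone angle and range. Here is a hedged sketch of what that might look like in C#; the near plane value and the z-flip convention are assumptions about the pipeline.

using UnityEngine;

// Hedged sketch of a spot light's "shadow camera" matrix.
public static class SpotShadowExample
{
    public static Matrix4x4 ViewProjection(Light spot, float nearPlane = 0.05f)
    {
        // World-to-light-space, i.e. the light is the camera.
        Matrix4x4 view = Matrix4x4.TRS(spot.transform.position,
                                       spot.transform.rotation, Vector3.one).inverse;
        // Unity's view space looks down the negative z axis.
        view = Matrix4x4.Scale(new Vector3(1f, 1f, -1f)) * view;

        // The cone angle becomes the field of view of the shadow camera.
        Matrix4x4 proj = Matrix4x4.Perspective(spot.spotAngle, 1f, nearPlane, spot.range);
        // For actual rendering, GL.GetGPUProjectionMatrix may be needed to match the graphics API.
        return proj * view;
    }
}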

REFLECTIONS

In PBR, metallic surfaces do not look good without reflections of the environment. If there is nothing to reflect, they will turn out pitch black, apart from the reflections of the torch light sources. So, I needed reflection maps. As I explained in previous posts, the game level and assets are generated in Houdini, so why not render out my HDRI reflection maps directly in Houdini? I even had the torches from my previous pyro simulation to use as light sources. Houdini's Mantra supports rendering HDRI images out of the box. For the camera, the projection has to be set to Polar (panoramic). The resolution is normally twice as wide as it is high. There are some caveats. The Radiance HDR file format will not work when rendering in Mantra, so it has to be an EXR. Also, the result will be rotated 90 degrees. To compensate for this, set Screen Window X to -0.25.

Just using the torch light sources resulted in very monochromatic orange light, so I had to experiment with adding additional cool light sources. In the game, the torch lights have a color temperature of 1900K, and the ambient light a temperature of 8000K. The metal sphere in the first image is used to evaluate the HDRI reflections.

MATERIALS

I implemented PBR for my render pipeline. The first version was non-PBR, with a Lambert shader modified to wrap the light around the whole object, so the unlit side still had shape. The Lambert cosine law is also a law of physics, so non-PBR is not really the same as non-physical. PBR is a bit of a buzzword.

The materials define the base color, metallic, and roughness of the surface, just as we are used to. In addition, they are also able to affect the normals. As users of Substance Designer are aware, it is easier to manipulate a height map than a normal map. So, rather than using normal mapping, I implemented bump mapping (using procedural height maps). I am using these with tri-planar projection. A bump map has u and v coordinates, which are projected along the three axes, on both the back and front sides. That totals 12 different combinations (2 * 3 * 2). To make sure the tri-planar bump mapping worked as intended, I tested each combination separately using a test pattern.
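
The projection and blending part of tri-planar mapping is easy to show outside the shader. A small C# sketch of the idea follows; the real version runs in the shader on procedural height maps, and the CPU texture sampling here is purely for illustration.

using UnityEngine;

// Illustration of the tri-planar idea: project the world position onto the three
// axis-aligned planes and blend the samples by the world normal.
public static class TriPlanarExample
{
    public static float SampleHeight(Texture2D height, Vector3 worldPos, Vector3 worldNormal)
    {
        Vector3 n = new Vector3(Mathf.Abs(worldNormal.x), Mathf.Abs(worldNormal.y), Mathf.Abs(worldNormal.z));
        n /= (n.x + n.y + n.z + 1e-5f); // blend weights

        float x = height.GetPixelBilinear(worldPos.y, worldPos.z).r; // projection along the x axis
        float y = height.GetPixelBilinear(worldPos.x, worldPos.z).r; // projection along the y axis
        float z = height.GetPixelBilinear(worldPos.x, worldPos.y).r; // projection along the z axis
        return x * n.x + y * n.y + z * n.z;
    }
}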

As the level as well as the modular assets are dynamically generated in Houdini, it would not be practical to use Substance Painter to texture the assets. I would have to repaint them each time they are modified and regenerated. Substance Designer and Painter do have procedural features, and automating the process using Python might have been one solution.

If I can procedurally generate materials in Substance Designer, why not generate them at runtime in the game instead? That would allow for the quickest iteration possible. I'll refer to this approach as procedural shading. For this we need a little bit of extra information in addition to the positions and normals of the geometry. Fortunately, the module geometry is generated in Houdini, so we can inject any additional information we want there. The image below is an example of varying the color of the floor tiles based on IDs and UVWs assigned to the different parts of the floor module.

Each module is exported as an FBX file from Houdini. Where can we store these extra attributes, so that they can be used by the game engine? There is not much information available in the documentation. Though, SideFX open sourced the FBX exporter, so looking at that source code is the only way to find out.

https://github.com/sideeffects/HoudiniFBX/tree/Houdini17.0/src/ROP

For those of you who don't like reading source code, here is a brief summary. Houdini exports uv sets named uv, uv2, etc., but only the u and v components. The w component is not exported to FBX by Houdini. Normals, tangents and bitangents are exported as four-component xyzw, though in this case Houdini only has xyz in its own attributes. Custom attributes are also exported, but I could not find a way to import these into Unity, so I packed the information into the uv channels.
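
On the Unity side, those packed attributes can then be read back from the extra uv channels of the imported mesh. A minimal sketch; which channel holds which attribute is project-specific and assumed here.

using System.Collections.Generic;
using UnityEngine;

// Sketch of reading attributes that were packed into extra uv sets in Houdini.
public static class ModuleAttributesExample
{
    public static List<Vector2> ReadPackedUv(Mesh mesh, int channel)
    {
        var data = new List<Vector2>();
        mesh.GetUVs(channel, data); // channel 0 = uv, 1 = uv2, and so on
        return data;
    }
}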

In addition, I also needed per-module information, such as whether it is a shadow caster, as well as gameplay related information on how the module can be interacted with (openable, lockable, etc.).

These have to be stored as object-level parameters with the fbx_ prefix, and the rest of the Houdini parameter name becomes the parameter name in the FBX file. For example, the Is Openable parameter is fbx_is_openable, and the exported name is is_openable. Avoid using spaces here, as these will not be handled well when opening the file in applications like Maya.




One Soul (Part 3)

Work In Progress / 19 December 2022

It has been a while since my last post in this blog series. It is difficult to find the time to write, so I'll try to keep the words few, and use images to explain where possible. The topic this time will be a breakdown of this burning flame shot.

The biggest dilemma when doing art for a game is that if I spend time on modeling and texturing something like a doorway, it may later turn out that the doorway is too narrow, and the player capsule easily gets stuck when passing through. It is difficult to leave the blockout stage. And even blocking out a simple gameplay scenario in Maya takes considerable time. I'll discuss how I approached these two problems.

LEVEL DESIGN

In a previous post, I briefly mentioned the procedural dungeon generator I made in Houdini. This does not give you much control for setting up a specific gameplay scenario. I have been torn between the two options of procedural generation and manual blockout. This time, I am taking a hybrid approach, making a simple grid-based design in the Tiled editor, and using this as a base for generating the level in Houdini. Here is an example map (please excuse the ugly tiles, but avoiding doodling on these is also important to save time during game development).

This map could be exported as, for example, a CSV file, but exporting is an extra step that takes time, so I am parsing the original Tiled file in Houdini. Based on this, I determine where to place doors, floors, walls, pillars, roofs, etc.

Next, I generate a point cloud with position, orientation and the ID of the entity (modular piece) to place.

Finally, the modular pieces are instantiated at each point.

A little technical digression. The game engine also needs the point cloud to build the level. Again, exporting these would be an extra step that would take time. There is a little library that comes with Houdini called libHAPIL.dylib. This contains the Houdini API (HAPI). In my Unity implementation, I call this directly to ask Houdini for this point cloud. After saving a map in the Tiled editor, the new level is immediately available when running the game. SideFX also provides the artist-friendly Houdini Engine for Unreal/Unity, but using HAPI directly allows for more flexibility, like reading this point cloud with entity IDs.

MODULAR PIECES

As mentioned, the problem with making modular pieces is that things like the size of a doorway might change due to gameplay reasons. The approach I took for these was to generate each module procedurally in Houdini. For example, the module below consists of two pieces, the stone doorway, and the metal portcullis. These are separated, as the portcullis is animated when opened.

The doorway part has 18 parameters that can be modified.

The portcullis part is a bit simpler, with only 10 parameters.

Also, the torch consists of two parts, the torch itself and the holder, in case the player should pick up the torch.

The torch parameters would not fit on one page, so in this case I had to divide them up into several tabs.

The generated geometry has clean watertight topology, so once I know it is working well with the gameplay, I can take it into ZBrush to add fine details. Alternatively, I might do procedural sculpting in Houdini, by booleaning out dented corners, like this example.

Or possibly converting to voxel volumes, and doing the same thing for a more ZBrush-like result.

The generators themselves are also built using a boolean cutter approach. A pillar is generated from a solid block that is cut into slabs, which are stacked on top of each other. Each slab is further shaped by cutting off the sides. These cutters are currently flat planes, giving a very clean and simple result. By adding noise to these planes, a more organic result would be achieved (exaggerated example to illustrate the method).

Likewise, the texturing can also be done manually in Painter/Mari, or procedurally in Substance Designer or in the shader itself. Any metadata that would make texturing easier, like IDs for different bricks in the stone wall, can easily be injected by the generator as needed.

CINEMATIC

As I plan to make a short promo cinematic for the game, it is convenient to reuse the models already generated by Houdini. I am personally a big fan of Clarisse. The point cloud can be imported into Clarisse as an alembic, and the entity IDs extracted and used to scatter the modules, rebuilding the level.

The orientation is stored as a forward vector, which requires a bit of a trick to use in Clarisse. In case someone would like to try this at home, here is the required setup. The forward vector (which I store as the point normal) is extracted as euler angles. Then (0.0, 0.75, -0.25) is added to the euler angles, and fed into the scatter rotation.

This time, I wanted to render out a burning torch in Houdini's built in renderer Mantra. The flame is a pyro simulation. One problem I encountered here was that the light emitted from the volumetric flame was not strong enough to light up the room. My solution was to spawn lit particles from the flame.

I wanted to be able to tweak the lighting in Nuke, so I needed AOVs for each light source. This is not well documented, and the only way I could get Mantra to output the AOVs was by exporting the VEX variable "all" for each light (H17.5).

The final composition is the render at the top of this blog. Everything was done procedurally in Houdini.



One Soul (Part 2)

Work In Progress / 07 November 2022

In the previous post in this blog series, I discussed the background, software used, and a bit about the Unity version. This time I will focus on the Unreal version. But first, some words about organizing your files.

VERSION CONTROL

As the project is being developed using multiple game engines, on multiple platforms, it becomes especially important to keep your files in order. The easiest way is to use something like Dropbox, which allows you to access files from multiple computers, as well as revert to previous versions in case some file gets messed up. Though, for game development this is not practical, as files might get updated in the background, clashing with how the game engines keep track of files. So, even for small projects, a proper version control system is needed. Git is very popular among programmers, while Subversion (SVN) is popular among artists.

As games involve both a lot of code and a lot of art assets, neither of these is perfectly suited for the task. Many game developers use Perforce (also branded as Helix Core), which is suitable for both code and assets, and is the solution I am using. It takes a little reading of the manual to learn how it works, but once you understand it, it is very easy to use, and it has a free cross-platform front-end application called P4V, so there is no need to purchase something like Cornerstone. You will need a Perforce host server. Assembla provides hosting, but the cost is rather high, so I recommend setting up your own server. For indie developers, Perforce itself is free to use, but larger corporations will have to pay a license fee.

PLUGINS

Also related to organization, the first thing I do when starting a new project is disabling all plugins. Later, when I need a specific feature, I re-enable that plugin.

For example, when blocking out a level in Maya, there is no need for Bifrost, MASH, Arnold, xGen etc., and disabling these will make starting up Maya quicker, and the probability of crashes lower. Of course, you might want to use the procedural features of MASH to generate stairs and the like, but in this case I am keeping things light.

In Unity, all built-in plugins (packages) are enabled by default. The image below shows the built in packages I am currently using. I don't target Android, don't use the built in terrain engine, web, VR, AR etc., so there is no reason to keep those around.

Likewise, in Unreal, there are hundreds of built in plugins. Disabling most of these will make the editor start up much quicker. As I am using Perforce, I am keeping that one enabled.

Though, when it comes to Unreal plugins, do be aware that some of them might be needed for the final build to work, even if they are not needed when working in the editor, as explained later in this post.

All this discussion regarding version control and plugins is getting a little long and boring, so let's briefly switch to another topic...

TARGET REACHING

The screenshot below shows the blockout rig and attack animation I made. The target of the attack is part of the rig. The distance between the target and the character is automatically calculated, and exported as a custom animation curve (shown in the graph editor below).

Now, during a battle in the game, the target is moving around and the distance is changing, so it will not match the distance that the animation was authored for. Therefore, the actual distance is compared to the authored distance, and the root motion is scaled so that the attack impact occurs at the correct distance. Without this compensation, the characters would swing their swords into thin air and look quite silly.
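
The compensation itself is little more than a ratio between the two distances. A minimal C# sketch of how the scale factor might be computed (the exact formula in the game may differ):

// Minimal sketch of the distance compensation: scale the root motion so the
// attack lands at the target's actual distance instead of the authored one.
public static class TargetReachExample
{
    public static float RootMotionScale(float actualDistance, float authoredDistance)
    {
        if (authoredDistance <= 0f)
            return 1f;                            // avoid division by zero
        return actualDistance / authoredDistance; // > 1 stretches, < 1 shortens the motion
    }
}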

In Unity, I have implemented my own animation system, so cases like this are easy to handle. Though, it is a bit more tricky in the Unreal engine, which leads back to the main topic of this blog post...

UNREAL ENGINE

While the Unity implementation is coded in C Sharp, I wanted to implement the Unreal version using blueprints only. One reason for this choice is that blueprint projects are faster to build, and less prone to cause problems. Let's look at the animation blueprint used to implement the target reaching discussed above. We can access the distance using the Get Curve Value node.

We also need to set the root motion scale. There is a function to do this using C++, but this is not exposed to blueprints. The node Set Anim Rootmotion Translation Scale below does not exist. 

One option is to convert the project to a C++ project. Though, I wanted to stick to the original plan of using a blueprint only project. So I had to make a plugin that exposes the required feature to blueprints, making the node above available.

Ok, so to go forward on this art challenge, I first need to make sure I can build the Unreal project on Windows, so that you can experience the final result interactively. While many artists build scenes in Unreal, and take screenshots and record fly-throughs to post on ArtStation, they could just as well have built their scenes and shaders in Maya, and rendered it out with Arnold. The real point of using a game engine is the interactive experience.

The Unreal project I will use as a base is a small subset of the game, which just implements the most basic (but most important) combat mechanics. I developed this on Mac, and to build it on Windows, I first had to check out the project from Perforce. The Mac project could be used as is, with some Windows-specific configuration added.

Click build, and all would be done, I thought... but not...

First, I got a popup telling me that Visual Studio was not installed, even though I had installed it previously. Finally, after installing CLion (an alternative IDE), I was able to build without errors. When running the built game, it crashed on launch, complaining about a missing super-struct. I thought that was caused by my plugin, and tried making it an engine plugin rather than a project plugin. When packaging the plugin, it complained that I needed to configure server settings for an iOS build, even though I had no intention of building for iOS. So, a whitelist had to be added to the plugin settings, specifying that the plugin can only be used on Mac and Windows. Then it complained that the BuildTool could not be built due to a missing .NET target. Ok, after installing .NET, I could finally use my plugin as an engine plugin. Good, now let's try again, and... crash. I finally figured out that I needed to enable the built-in AISupport plugin, which contains features used by the enemy AI.

While the Unreal engine is artist friendly in many regards, it can be a real pain sometimes. For indie developers, I would strongly recommend sticking to Unity. If using the Unreal engine, I would recommend blueprint only projects, and putting custom implementations into engine plugins.

One Soul (Part 1)

Work In Progress / 01 November 2022

I have been working on an RPG game, and thought it would be fun to challenge myself to create all the art for a simple level, and share my experience here.

GAME ENGINE

The game is prototyped in Unity for quick iteration. The art in the Unity prototype is minimal blockmesh, using a custom render pipeline with a simple tri-planar grid material. No lights, shadows or post processing. Though, the material itself is emulating a directional light and fog.

For this art challenge, I will be using the Unreal engine. It is artist friendly, having a single built-in render pipeline that everyone is familiar with. Unity, on the other hand, has a built-in standard render pipeline that is not supposed to be used, as well as the Universal (URP) and High Definition (HDRP) render pipelines.

PLATFORMS

I am using Mac as my primary platform, and Windows PC as the build target platform. As a long time Mac user, I prefer working on that platform, even though OS upgrades are prone to break existing software, causing quite a headache. Linux might have been an option, though it does not run Photoshop. As most desktop game players are using Windows, that is my target platform. Also, when using Unity on Mac, it is easy to make Windows builds. The game also supports game controllers, and could easily be built for consoles as well.

VERSIONS

When developing a product, updating the OS or applications to the latest version often causes problems, and might even break the product you are working on. The VFX Reference Platform (https://vfxplatform.com/) addresses this problem by defining which software versions to use together. In my case, I have locked my versions to roughly correspond to the VFX Reference Platform 2020:

  • macOS 10.13.6
  • Windows 10
  • Unity 2020 LTS
  • UE4.20
  • Maya 2019
  • Houdini 17.5

UNITY PROTOTYPE

The Unity prototype has four test levels. The first is a combat setup with three enemies. This is used to test the combat mechanics. The level was blocked out in Maya.

The second level is a city, used for testing traversal and NPC interaction. This is also blocked out in Maya.

The third level is a dungeon. The level is procedurally generated using Houdini, which also handles locking doors and placing the keys in such a way that the end of the level can be reached without locking yourself out. Houdini generates a point cloud with entity ids, specifying the location of modular pieces, doors, keys, chests and enemy encounters. Enemies are placed with easy encounters in the beginning, and more difficult ones toward the end of the level. They are also placed to face the direction from which the player is likely to enter a room. The cyan area on the floor is the navigation mesh, dynamically generated in Unity.

The fourth level is an open world, also generated by Houdini, which outputs height fields, masks, and a point cloud for all entities like trees, rocks, NPCs, and towers. The NPCs are placed on the roads and near towers, and the towers are oriented with the entry towards the path.

SCOPE

For the final game with the main campaign, I plan to make a one square km hand-crafted overland world, with an additional square km of underworld caves, dungeons, mines, sewers and secret tunnels. It would take maybe one year for the level blockout, placeholder animations and remaining game mechanics. There will also be quadrupeds in addition to bipeds, mainly for animals roaming the overland world. Then at least another year for the production of art, animation, effects, sound design and composing music.

That is a long time to wait, which is why I'd like to do this challenge, creating a minimal subset of the game:

  • one generated dungeon level
  • modular art pieces for the dungeon level (floors, walls, ceilings, columns, doors)
  • props (torch, chest, key, sword, shield, health potion)
  • one biped character with variations for the enemies (body, armor, helmet)
  • character rig
  • character animations (idle, walk, strafe, attack, combo attack, block, hit reaction, open chest, death)
  • environment/prop animations (open door, open chest)
  • effects (burning torch)
  • lighting (indoor)
  • user interface elements and font for the menu and HUDs
  • one cover art image
  • a 30s promo cinematic
  • sound/music (not part of this art challenge)

As I will also be working on the main campaign, the amount of time I'll allocate to this blog series depends on the interest from you readers.