A Date with Reality

It has been a long time since the last post, and there is good reason for that. I have put a lot of time and effort into creating the design documents for At the Crossroads. I have most of the main storylines fleshed out and many of the supporting characters created. As I have worked on this, my understanding of just what it will take to create this game has become more and more clear, and since I am the only person working on it, it has become equally clear that this just isn't possible.

I have put off this post for months now, hoping that I could figure out a way to still make this project happen. My original idea was to create the opening of the game in its entirety, with all of the gameplay systems fully functional, and to build the open world (which makes up the vast majority of the actual game) with all of its assets in place. There wouldn't be any NPCs in the open world and no story-related content; it would simply be an opportunity for potential supporters to see the world and decide if they wanted to back the game. This demo would then be offered through a Kickstarter campaign, where anyone could download and play it. The demo would give the community a chance to see the quality of the game and decide if it was something they would like to help make a reality. But as the game's story outline expanded into a reasonably accurate view of what would be necessary to tell this story, it became abundantly clear that no one person could make this happen.

I have no intention of leaving the world that I have created unused. That is where the title of this post comes from. I have to look at reality and be honest about what I am going to be able to do as a single developer. Story-driven games are great, but there is a reason why indie developers choose to make small, sandbox-based games with very little story to them. It isn't so much the writing of the stories; it is the massive amount of work that goes into building all of the gameplay-related aspects of the story into the game.

At the Crossroads, as a setting, is still going to see the light of day. I think a much smaller, more tightly focused game set in the Crossroads would be the best approach. If that goes over well and generates enough revenue to hire others to help on the full story-based game, then the game that I've been designing could still be made. I think the approach to full funding would still need to follow the strategy outlined above (with a demo that potential supporters could play before committing their support).

In closing I have to say that, as an indie developer, I want to make huge, epic games that people lose themselves in for a time. Epic Games has given us so many tools, like MetaHumans, PCG graphs, and full access to Quixel, that sometimes I convince myself I can do more than is really possible. I think the story that I've created is worth seeing the light of day, and hopefully, someday, it will.

Exploring the PCG System in UE 5.2

Many months have passed since the last update on the site. Work has continued on the development of At the Crossroads, but it has been slow going due to real life encroaching on my time. However, I won't let that stop me from moving forward, even if the progress is at a snail's pace. The writing for AtC has been difficult to make progress on, because you can't force inspiration. In programming, you can overcome an issue by putting in more work until you find a solution; the same can't be said for creative efforts like writing. I do not want to rely on ChatGPT to write my story and characters for me, so that is not an option. But sometimes a break is exactly what creativity needs to recharge the batteries. Luckily, Epic has provided just such a break with the Procedural Content Generation system (or PCG for short).

The PCG system is a great addition in UE 5.2 and is going to provide a considerable degree of control over how content can be created in our games. Previously, I used the Procedural Foliage Spawner (or PFS for short) within Unreal to create reasonably accurate biomes. While that system is experimental (just like the PCG system), it is a really nice tool that, when finely tuned, can generate fairly accurate results. It is the technique that was used in Capuchin Capers to create the foliage for all of the islands. While the PFS can be a great time-saver when set up correctly, and provides a good way of realizing the data gained from terrain studies in a concrete way, it definitely has its limitations.

The PFS system generates a spawn location for a type of foliage and calculates the spread of that foliage over generations. This results in clumps of foliage types, such as ferns, where there are "older" ferns in the middle of the clumps and "younger" ferns on the periphery, simulating the gradual spread of that plant type within its environment. While this is seen in nature, it doesn't always play out that way; some plants will choke out other plants in various ways. The PFS system tends to produce these clumps, or islands, of foliage types in a higher abundance than you would witness in real life. The effect can be lessened by adjusting the values for the individual foliage types being spawned, but to a certain extent the PFS system leans in this direction no matter what you do…it is the basic premise of the system, after all. To be sure, the PFS system is still a very good tool, and I loved the results I was getting compared to hand-placing foliage with the foliage paint mode in the editor. But I always felt that I just couldn't get the results I was after, no matter how much tweaking I did. I believe the PCG system can remedy that.

One aspect of the PFS system that I struggled with was getting shade-loving plants to spawn consistently within the "shade bounds" of a large tree. No matter what settings I used, I just couldn't get a consistent result that placed the foliage types where I wanted them. This is because of the rules baked into the PFS system (which, unless you're willing to change the C++ code, are immutable). Now, with the PCG system, we not only have easier access to the code responsible for spawning the foliage, we actually get to write that code! Some may see this as a regression, in that we are required to write the spawn logic for our foliage rather than having a built-in system to aid us. But this is much more an unshackling of our creativity than a ball-and-chain that increases the effort required to realize our vision. The extra work is far outweighed by the freedom that the PCG system grants us: we get almost all of the benefits of targeted foliage placement while retaining the great performance of instanced foliage. It is true that procedural placement will never be as accurate as hand-placing every single asset, but we can come much closer than we ever could before, and in far less time. We can do our terrain studies to learn everything we need to know about a biome, and then implement that data in our PCG graphs to produce a much more realistic result than we could ever achieve with the PFS system.
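To make the rule concrete, here is the kind of logic I am talking about, sketched in plain Python rather than in actual PCG nodes (the tree positions and canopy radii are made up for illustration). In a real graph, the same filter is built from sampler and filter nodes:

```python
import math
import random

# Hypothetical tree data: (position, canopy radius), the sort of thing
# a real PCG graph would query from tagged actors in the level.
trees = [((10.0, 20.0), 4.5), ((35.0, 12.0), 6.0)]

def in_shade(point, trees):
    """Return True if the point falls within any tree's canopy radius."""
    px, py = point
    return any(math.hypot(px - tx, py - ty) <= r for (tx, ty), r in trees)

# Scatter candidate points, then keep only the shaded ones for ferns.
candidates = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(1000)]
fern_points = [p for p in candidates if in_shade(p, trees)]
print(f"{len(fern_points)} of {len(candidates)} candidates landed in shade")
```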

Another limitation of the PFS system is that you can't specify a separate static mesh foliage type to be used for saplings or very young plants; it simply scales down the static mesh specified in the static mesh foliage asset. Not only is this unrealistic, it can lead to serious visual artifacts if a static mesh has a material that uses World Position Offset to create wind effects. The animated leaves on these scaled assets can show severe "smearing" or stretching that is instantly noticeable and very undesirable (see Image 1 below). This can occur with PCG too, of course, but only if you rely solely on scaling an asset to achieve a "younger" version of that foliage. With PCG we are not limited to simply scaling a mesh; we can provide a completely different static mesh for these saplings and sprouts within our graphs!

Image 1: An example of stretching (or smearing) in a material when the asset is merely scaled down to simulate a young specimen for the given type of foliage. While this looks bad in a still image like the one above, it actually looks even worse, and is very obvious, in real-time gameplay. This absolutely destroys immersion.

I am very excited by the direction that the PCG system is taking content generation within the Unreal Engine and, by extension, gaming in general. Make no mistake: with this type of open-ended procedural content generation, previously only available in packages like Houdini, now built into UE, other engine developers will be forced to implement something similar or risk being left behind. The PCG system benefits all developers, but not equally. Small studios and individual indie developers will benefit far more than AAA studios, because those huge studios could already realize their vision with precise accuracy in a (relatively) short period of time. For the indie studio, throwing more work-hours at a problem just wasn't an option.

Thus far, I have spoken of the PCG system only as a means of placing foliage on a landscape, but it is capable of SO much more than that; the examples above are just that: examples. There are very few limits to the PCG system, and I believe we will see its use throughout UE content generation and development. With further development, it will become more feature-rich and performant, opening doors that were previously closed in game development.

Thank you for stopping by and reading this article, and I hope that you have a great day.

A Long Overdue Update

Real life has a way of creeping into every project, and this project isn't immune by any means. A large-scale, time-sensitive issue came up and consumed several months of time that was meant for development of the game. I am happy to say that I can finally turn my attention back to working on At the Crossroads.

I have been outlining the individual plot lines, as there is one for each region of the game, and that is where most of my efforts are being directed. Because I haven't spent years studying writing, this has been a challenging task. However, I am beginning to make reasonable progress again. There just isn't much that can be commented on here, and there are no screenshots to share. This is one of the main reasons for the very long delay between this post and the last one.

The site isn’t dead, nor is the project. It’s just that life happened, and there was no way around that. Anyway, I hope that anyone reading this has a great day, and thanks for stopping by.

Writing Code is Easier than Writing Stories

This is a long overdue update, but a lot has been going on since the last post.

First, another modular kit that will be needed for the game has been, for the most part, completed. There are some accent pieces that still need to be made to help dress up the areas that use this kit. Several versions of the stelae need to be modeled for the kit, but before that could be done, I ran into a setback: to make the stelae show information that is relevant to the backstory, we needed a more detailed look into that backstory.

This need to dive into the backstory/lore of the game world forced me to review what we already had for the story, character arcs, and plots that take place in the game’s storyline. Not being a writer by profession, and only having a novice level of experience with the proper way to write a complex story, I’ve had to take a lot of time to dive deeper into this part of the game’s development.

We've always known that the story for this game needed to be strong, as it is really this story, and the world in which it takes place, that holds the biggest appeal of the game. However, developing a well-written story with believable characters that people will care about is very difficult, and that is understating it. We are so lucky when it comes to the amount of resources available to aspiring writers: from the numerous blog posts by successful authors to YouTube videos by some of the best-selling authors working today, there is hope, even for someone like me.

In my research on writing a proper story outline, I came across the YouTube recordings of Brandon Sanderson's BYU course on creative writing. These have been invaluable to me, because I am definitely not a discovery writer who can just sit down and start writing a decent story. I've tried. I failed. I have to outline everything so that I can picture how all of the key events will fall together to build the story.

Since discovering Mr. Sanderson's lectures, I have fleshed out a fair amount of the overall story and how it will progress. I have the major and minor plots figured out, and now I just need to detail the events that have to occur for each of them. This has been fairly difficult, though, because we are making a game rather than a novel. We can't control the exact order in which events will occur as the player progresses, so I've had to take even more time with this process.

In the end, we hope to have a story that is compelling and enjoyable. We know that we can't complete this game if we have to do everything ourselves. We can't model every asset, nor can we write every single line of code. We have to lean on the Unreal Engine Marketplace for a fair amount of our assets. But there isn't an engine plugin that will write and implement our story for us. And we anticipate that it is this story that will allow us to compete, on a very limited level, with larger studios that have more experienced teams. With resources like Mr. Sanderson's lectures, books on writing stories and screenplays, and other sources of guidance, we have a chance to make that hope a reality.

Thank you for taking the time to read this post, and I hope that you’ve gotten something out of it. Have a great day.

Tool Design with Python

Image 1: A view of the inside of a Daursynka long house, in-game. There are some light artifacts that can be seen in the upper-left of the image. Hopefully, these can be overcome by tweaking the cascaded shadow map settings. Overall, however, we are very happy with the end result.

The Daursynka long house is done, and we are pretty happy with it (see the featured image for this post, and Image 1). There are many variations of each piece in the kit, to reduce the 'tiling' effect that can be seen in some games. For example, there are twelve different variations of the main roof's tile sections, to reduce the repetitive use of each piece. The human eye spots patterns fairly well, and repeating patterns are easiest to pick up on when there are many examples of the same pattern right next to one another. So this approach was used for all of the pieces in the kit. However, it meant we ended up with over 460 pieces for this kit alone!

When looking at the prospect of exporting each piece, along with its collision geometry, it became clear very quickly that scripting the export process was going to be mandatory. And when we say scripting in Blender, that means Python. I must admit, I have never been (nor will I ever be) a huge fan of Python. I don't like the language as a whole, and so I didn't have much experience with it. That meant first learning Python beyond a quick overview of the language's features. Even if the only scripts you ever write in Blender are small, relatively simple 'helper' scripts, the effort is absolutely worth it, especially while creating something as large as an entire modular kit.

After going beyond the introductory lessons on Python at W3Schools, I turned my attention to writing a script to help with the export. It became obvious very quickly that this was going to be much more than just a simple little script. I wanted to keep my collection hierarchy in Blender and use it as the folder structure on the hard drive when exporting the assets. That meant a recursive function. I want to be completely honest here: in all of the time that I have done programming of any kind, I have never had to write a recursive function. So this was a first for me, and it was an…experience.

Recursive functions are usually deceptively short, and therefore deceptively simple-looking. At first, I found it very difficult to wrap my head around exactly how the function was going to work. I had to look at it in a completely new way (for me), so different from anything I had done before. The hardest part was thinking about how the function would 'walk' through the various collections, and the order it would run in. I tried to visualize how the function would be called on the very top-level collection, and how it would call itself when it discovered a collection within that top-level one. I did figure it out, but I can't say that I will ever be comfortable writing recursive functions.
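For anyone curious, the heart of that function ended up looking something like the sketch below. This is a stripped-down version that assumes every mesh object gets its own FBX and that collection names are safe to use as folder names; 'KitRoot' and the export path are placeholders:

```python
import os
import bpy

def export_collection(collection, base_dir):
    """Mirror the collection hierarchy as folders on disk and export
    each mesh object in the collection to its own FBX file."""
    out_dir = os.path.join(base_dir, collection.name)
    os.makedirs(out_dir, exist_ok=True)

    for obj in collection.objects:
        if obj.type != 'MESH':
            continue
        # Select only this object so use_selection exports it alone.
        bpy.ops.object.select_all(action='DESELECT')
        obj.select_set(True)
        bpy.ops.export_scene.fbx(
            filepath=os.path.join(out_dir, obj.name + ".fbx"),
            use_selection=True,
        )

    # The recursive step: descend into every child collection.
    for child in collection.children:
        export_collection(child, out_dir)

export_collection(bpy.data.collections["KitRoot"], r"D:\Exports")
```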

Once I had a script that would export my objects, I realized that I had only solved half of the problem. I would still have to import those assets into the Unreal Engine, so I was still staring down an enormous amount of work. Thankfully, the Unreal Engine editor can also be scripted with Python, so I knew that I could write a script to handle the imports. I needed the editor to not only import the 3D objects themselves, but also create the materials for those objects first. It would do no good to import 460+ objects and then have to apply materials to each and every one. This led down a rabbit hole that resulted in the project import and export managers pictured below.

Image 2: The import manager in the Unreal Engine editor, showing all of the data for the long house modular kit. This is a read-only UI and is only used to verify that all of the data is correct. There are mouse-over popups that show that asset’s JSON entry, allowing for a deeper dive into the data, if the user wishes.
Image 3: The export manager in Blender allows the user to set up the export data needed to successfully export the assets and generate the JSON file needed by the import manager. Even in a project with 460+ assets, setting up the export data only took around an hour and a half at most.
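Under the hood, the import side is built on Unreal's Python API for automated imports. A minimal sketch of the core call, with placeholder paths, looks like this:

```python
import unreal

def import_fbx(source_path, destination_path):
    """Build an automated import task for one FBX and run it."""
    task = unreal.AssetImportTask()
    task.filename = source_path               # the FBX on disk
    task.destination_path = destination_path  # e.g. "/Game/Kits/LongHouse/Roof"
    task.automated = True                     # suppress the import dialog
    task.save = True
    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])

import_fbx(r"D:\Exports\KitRoot\Roof\SM_RoofTile_A.fbx", "/Game/Kits/LongHouse/Roof")
```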

This tool (or tools if you want to count them separately) is why it took so long for a new post to be added to the website. These were a real trial to get to work, but I feel that the time spent will more than pay for itself when we are making other modular kits for ‘At the Crossroads’ as well as any other games that we create. All of the data entered into the export manager is saved in the .blend file that contains the modular kit assets. So, while it does take some time to enter that information, it only needs to be entered once.

When the data entry is complete, pressing the 'OK' button starts the export process. The tool uses the data entered for the materials and textures to generate part of a JSON file describing where these assets are on disk, as well as where they should be saved in the UE project when they are imported. It then 'walks' its way through the collections contained within the 'Kit Root Collection', discovering 3D assets as it goes. As it exports each 3D object, it finds that asset's collision geometry and exports it along with the asset, so when the import manager within the UE editor imports each 3D asset, it will have the correct collision. All of the 3D-asset-specific information is added to the JSON file as each asset is exported.
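The JSON schema is specific to these tools, so the snippet below is purely illustrative of the kind of record written for each asset; every field name here is hypothetical:

```python
import json

# A made-up example of what one asset entry might record.
asset_entry = {
    "name": "SM_RoofTile_A",
    "fbx_path": "D:/Exports/KitRoot/Roof/SM_RoofTile_A.fbx",
    "ue_path": "/Game/Kits/LongHouse/Roof",
    "material_instance": "MI_RoofTiles_A",
    "has_custom_collision": True,
}

with open("longhouse_kit.json", "w") as f:
    json.dump({"assets": [asset_entry]}, f, indent=2)
```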

The import manager reads the JSON file and imports the textures, materials, master materials, and instanced materials. I say that it imports the materials, but it is more accurate to say that these are created using the data generated by the export manager and saved in the JSON file. The master materials defined in the export manager are created first, and then all of the material instances are inherited from them. Once this is complete, the 3D assets can be imported with all of the correct materials and/or material instances applied. The whole import process took just under five hours to complete; imagine how long it would take to do all of these imports individually. This will be a huge time saver in the long run, as there is very little work that needs to be done from that point.
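Creating the materials "from data" leans on the editor scripting API rather than a true import. One way to create a material instance and parent it to a master material looks roughly like this (asset names and paths are placeholders):

```python
import unreal

asset_tools = unreal.AssetToolsHelpers.get_asset_tools()

# Create an empty material instance asset in the project.
instance = asset_tools.create_asset(
    asset_name="MI_RoofTiles_A",
    package_path="/Game/Kits/LongHouse/Materials",
    asset_class=unreal.MaterialInstanceConstant,
    factory=unreal.MaterialInstanceConstantFactoryNew(),
)

# Parent it to the master material named in the JSON data, then save.
master = unreal.EditorAssetLibrary.load_asset(
    "/Game/Kits/LongHouse/Materials/M_Wood_Master")
unreal.MaterialEditingLibrary.set_material_instance_parent(instance, master)
unreal.EditorAssetLibrary.save_loaded_asset(instance)
```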

After the import is complete, the only thing left to do is define the actual node networks for the base materials and the master materials. This has to be done by hand because, while it is possible to map some of the material nodes from Blender to the UE editor, it would be extremely difficult to do. Some nodes in Blender, like the math node, do have equivalents in UE, but others do not map across the two applications at all. And once you get into the more complex materials in either program, trying to make these basic node mappings work gets even more complicated. It was decided that it would be best to just define the node networks in the UE editor by hand. It doesn't take that much time to do, and we get to use our preferred workflow in Unreal without having to worry about how everything has to be built in Blender to make the translation process successful.

After all of this, were these tools worth it? In the short term, no. I could have exported all of the assets from Blender by hand, and then imported them into the Unreal Engine, in less time than was required to write these tools. In the long term, though, these tools will pay for themselves many times over. While the export manager does force a specific workflow on the artist, it is not that much of a constraint, and the time saved overall will make these tools valuable assets for us going forward.

Thank you for taking the time to read this, and I hope it has sparked some ideas for tools to improve your own workflow. Have a great day.

Procedural Modelling and More

Well, it's been quite some time since I last updated the site, and I've been hard at work during that time. I have completed the majority of the work on the modding code that will allow modders to create content of their own and package it all up into a mod that players can unzip into the "Mods" folder of the game. This is done via the UGC plugin that was developed by Epic as part of their VR game, Robo Recall. However, there were some pretty glaring omissions in its functionality, owing to the plugin's intentionally limited scope.

The UGC plugin was meant to be very bare-bones in functionality, allowing the developer to decide what, exactly, they want to allow their modders to do. The plugin can enable anything from mods as small as remeshing or reskinning some of the in-game weapons, up to fully side-loading entire levels with custom game modes. The largest omission was the ability to easily get players to a new level provided by the mod, and get them back to the main world. This needs to be done entirely within the mod, without the ability to place anything in the main world, and with as few other limitations as possible on the mod developer. I came up with a solution that I hope modders will find to be a good compromise…spawnable POIs.

A spawnable POI (or Point Of Interest) is a self-contained POI that can include a special trigger volume that transports players to a separate map the mod developer has created. The system can handle POIs that have multiple entry and exit points, without the mod developer having to jump through too many hoops to get it all working. This would allow, for example, a cave complex with numerous entrances. When players enter the cave, they will spawn in at the correct entrance and, when they leave the cave complex, they will spawn back into the main world in the correct location. If they enter through a jungle cave and exit right back out the way they came in, they should be in the jungle. However, if they enter the jungle cave entrance and exit via the tundra cave entrance, they should spawn into the world in the tundra. This seems simple, but keep in mind that mod developers will not have any way to directly place anything in the main world; everything has to be done via the spawnable POIs. We wouldn't have a problem including the main world, but we are using quite a lot of licensed assets, and we do not have the legal right to distribute those assets to mod developers. Copyright issues are a tricky subject, and are best avoided whenever possible.

The last bit of functionality for our version of the UGC plugin will be UI-related. This functionality isn't worked out yet, but it really does need to be in place when the game ships. I realized it was missing while watching some videos on YouTube: I noticed a content creator using a mod that reorganized the UI for that game. This is something that I will need to add to our version of the plugin, but I don't anticipate any major hurdles…famous last words, right?

Aside from all of the work on the UGC plugin, I have been working on some procedurally generated models that are part of a modular kit that will be used in the game. This modular kit is for a long house used by some of the peoples that inhabit the tundra region of the Crossroads. These are the Daursynka people, and they are loosely modelled after the Iroquois Confederacy of the north-eastern region of North America and the Viking peoples of Scandinavia. These houses were a challenge due to their size and detail. They need to be large enough to house an entire extended family, and detailed enough to match the visual fidelity of the game's other assets. But I also needed a fair amount of variety in the pieces, because there will be multiple longhouses at each settlement. I want to avoid obvious repetition as much as possible while maintaining a reasonable degree of performance. That last part is key; performance must always be considered in any real-time application.

To create the kit pieces for the longhouses, I chose to build all of them procedurally from "building block" pieces that I could easily obtain from Quixel. For example, the roof tiles seen in the featured image for this article are all positioned via geometry nodes in Blender. This allows me to randomize the individual tiles and get nice variation between the roof sections. Please note, however, that I was lazy in the creation of the image above and just used an "Array" modifier in Blender to duplicate the roof sections (I am sufficiently ashamed of my laziness here). The modular kit features numerous variations of the roof section, not just a single section with a single tile pattern. This approach lets me use one set of textures for the tiles, wall slats, beams, and other individual pieces, and still get a level of quality that would have required much more texture space in the player's video card RAM if I had gone with a more traditional approach. The traditional approach is to create all of the geometry in your software of choice (Maya, 3DS Max, Houdini, Blender, etc.), and then import that geometry into Substance Painter or Quixel Mixer to "paint" the textures onto it. With that approach, we would need much larger textures to get the same visual quality. We are still using a not-insignificant amount of RAM, but nowhere near the amount that would be needed to get both this level of quality and this degree of variation in the kit.

Image 1: A side view of a simple render of a longhouse using this modular kit. The six roof sections are a single mesh duplicated with an array modifier in Blender (I am ashamed of myself for this). The roof tiles for the peak of the roof are duplicates of the same object as well…I really did get very lazy here. The ground plane is a very simple texture on a flat plane. The sky was added in Gimp using the very nice image provided by calibra of Pixabay, which can be found here.

In Image 1, you can see the picture used as the featured image of this post. Each element that makes up the modular kit pieces is an individual object placed via geometry nodes, with its rotation randomly tweaked ever so slightly to break up the uniformity of just laying everything out with the modifiers available in Blender. This could also be done in an application like Houdini, which I have used before. Blender doesn't offer the same freedom in its procedural tools as Houdini, but geometry nodes are quite powerful and allow for an amazing amount to be done with them. Doing something like this by hand would take many, many more hours than I spent learning geometry nodes in Blender. The same basic approach was taken for the wall pieces, which are made up of slats with the gaps in between filled with tar-covered thatch. Geometry nodes can also be used to affect the vertex colors of geometry, and this was used to blend between a "tar" material and the material used for the wooden slats. You can see the effect of this in Image 2. The tarred thatch is represented by a simple plane textured to look like thatch that has been dipped into a vat of tar. At least, that is what I hope it looks like.
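The randomization itself is nothing exotic. Here is the same jitter idea expressed directly in bpy rather than as a node graph, just to show how little it takes (the collection name is a placeholder):

```python
import random
import bpy

# Nudge every tile in a collection by a few degrees so no two match.
for obj in bpy.data.collections["RoofTiles"].objects:
    obj.rotation_euler.z += random.uniform(-0.05, 0.05)  # radians, about ±3°
    obj.rotation_euler.x += random.uniform(-0.02, 0.02)
```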

Image 2: The entrance to one of the more extravagant longhouses that can be built with this modular kit.

The image above shows a simple rendering of the front of the example longhouse. If you look at the wall that is set further back, you can see that the slats have "smears" of tar where they come close to the plane representing the tarred thatch. However, to work with the vertex colors of the generated mesh, the geometry node modifier used to generate the wall piece has to be applied first. Only after that was I able to add the vertex color map and use the geometry node network that alters the vertex colors. If you look closely at the wall at the front of the foyer, you will notice that it lacks the darker smears of tar that the back wall features. This is because the foyer's front wall hasn't had the node network that generates it applied yet. Without this, any vertex color map added to it will not be accessible to the node network designed to change the vertex colors. At least, I couldn't get it to work, and I spent quite some time trying.
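If you would rather script that apply-then-paint order than click through it, it looks roughly like this in bpy (this uses the Blender 3.2+ attribute API; the object, modifier, and attribute names are placeholders):

```python
import bpy

obj = bpy.data.objects["Wall_Front"]
bpy.context.view_layer.objects.active = obj

# The generated mesh only becomes "real" once the modifier is applied...
bpy.ops.object.modifier_apply(modifier="GeometryNodes")

# ...and only then does a vertex color attribute on it behave as expected.
obj.data.color_attributes.new(name="TarBlend", type='BYTE_COLOR', domain='CORNER')
```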

What you don't see in the images above is the sheer volume of variety that can be obtained by creating the wall pieces (or any pieces, for that matter) via the geometry node network approach. Each slat type used is a separate mesh with its material applied to it. There are six different slats, all held in a single collection, that the geometry node network chooses from when placing each individual slat. All of the wall slats for any of the wall pieces can be randomized not only in the slat mesh chosen, but in positioning and rotation as well. Once the vertex color network is applied to the wall piece, it is hard to believe that it is made up of nothing more than six different slat meshes randomly chosen and placed.

Another feature of these longhouses that is not visible in the images above is the thatch cards placed on the plane that represents the tarred thatch shoved in between each slat. It is highly unlikely that anyone stuffing thatch into these gaps would get it in perfectly, which means there would be a bit of thatch sticking out here and there to flutter in the breeze. That is where those little thatch cards come into play. Using a traditional modelling approach, placing each thatch card would be done by hand, probably by an intern questioning their life choices as they positioned each little card. But through the power of procedural modelling via geometry nodes, we can easily place thatch cards in between each wall slat. The best part is that no matter how each slat is rotated and positioned, the geometry node network for the tarred thatch plane will recalculate where a thatch card can be placed, without one ever being positioned inside a slat.

Image 3: A closer look at the thatch cards and their placement via Raycast in Blender’s geometry nodes. Notice that none of the thatch cards are protruding from within a slat. The thatch cards dynamically position themselves as the placement of the slats changes.

In Image 3, you can see a small portion of the node network used to place the thatch cards on the plane representing the tarred thatch. The entire node network isn't shown, because the view would need to be zoomed so far out that you wouldn't be able to read or see anything of note. The key idea to take away from Image 3 is the Raycast node in the network (you should be able to right-click on the image above and view it in a separate tab, allowing you to zoom in and read the node names). First, I used a "Distribute Points on Faces" node to randomly place points on the tarred thatch plane. These points are where thatch cards could potentially be placed. With the Raycast node, we can do line traces and check for an intersection. In my case, I didn't want to place a card anywhere the ray intersected the wall slats; only where the raycast found no intersection should a thatch card be placed. The Raycast node is a bit strange to get used to, because it doesn't work exactly the way a line trace does in, say, the Unreal Engine. So if you're interested in using this node, read the documentation and experiment a bit with it. It is worth your time to learn it.
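The Raycast node has no one-to-one Python equivalent, but the same keep-only-unobstructed-points test can be sketched with Object.ray_cast, which I found helpful for reasoning about what the node is doing (all object names are placeholders):

```python
import random
import bpy
from mathutils import Vector

plane = bpy.data.objects["TarredThatchPlane"]
slats = bpy.data.objects["WallSlats"]
card = bpy.data.objects["ThatchCard"]

inv = slats.matrix_world.inverted()  # ray_cast works in the slats' local space

for _ in range(200):
    # A random candidate point on the plane, standing in for what
    # "Distribute Points on Faces" produces in the node network.
    point = plane.matrix_world @ Vector((random.uniform(-1, 1),
                                         random.uniform(-1, 1), 0.0))
    origin = inv @ point
    direction = (inv.to_3x3() @ Vector((0.0, -1.0, 0.0))).normalized()

    hit, *_ = slats.ray_cast(origin, direction, distance=0.05)
    if not hit:
        # No slat in the way: place a thatch card here.
        dup = card.copy()
        dup.location = point
        bpy.context.collection.objects.link(dup)
```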

Well, that was a lot to take in. I hope that I was clear in my descriptions, but the topics covered in this post are very complex. Without a large number of visual aids to help, it can be difficult to get my point across. Modding is a huge feature that I felt would be a great benefit to the game. Players will not be beholden solely to us for game content. If a mod developer wants to create a completely new dimension to the game (a new level of Hell perhaps), they will be able to do so. And, with the power of procedural modelling via Blender (or some other software like Houdini), they will be able to shorten the development time needed to make the custom assets they want. Thank you for taking the time to read this post, and I hope that you have a great day.

World Composition LOD system

I want to preface this entire article by stating that the information below was gathered by experimenting with the system and, as such, is incomplete. I will need to dive into the C++ code to get a really solid idea of what is going on, but hopefully what follows will be enough to help you along. Once I have dug into the code to see what, exactly, happens when we press the generate button to make our levels-of-detail, I will post another article. I don't know when that will be, so no promises.

Image 1: The level details dialog opened with sublevel E5-2 selected. We are currently viewing the basic settings for the sublevel’s first defined level-of-detail. There are a total of four levels-of-detail defined for each of this test map’s sublevels. That is the limit for the number of levels-of-detail that W.C. will allow.

To get started with setting up the levels-of-detail for the sublevels in World Composition (W.C. going forward), you'll need to have at least one sublevel selected in the Levels tab and press the small level details button; I've circled it in red in Image 1. This will bring up the level details dialog, allowing you to define all of the information that you want to use when generating your levels-of-detail. When you first open this dialog, you won't have any levels-of-detail defined, and the only control you will see under the "LODSettings" rollout is the one labeled "Num LOD"…not very descriptive, I know. This is the first step to defining your levels-of-detail and how they will be created.

By setting the value for "Num LOD" to 1, you will be able to define a single LOD level for each sublevel in your W.C. map. Each additional LOD level follows the exact same steps to create, but with different values for each level. If you want three levels-of-detail for your W.C. map, you would type 3 into the "Num LOD" field. For LOD1, you will want to use the best quality settings you feel you can get away with. Each game is different, and each will have its own performance requirements. Obviously, we don't want a very noticeable "pop" when the player crosses the point where one LOD level transitions to the next. Which leads me to the first setting to pay attention to: the "Relative Distance" field.

Relative distance is the distance at which this LOD level will transition to the next level-of-detail, and it is added to the base streaming distance. For example, the "Uncategorized" layer in W.C. has a default streaming distance of 50,000. Once the player is farther away from a sublevel than that, it will be removed from the player's viewport, even if they are looking directly at it. This is where our first level-of-detail, LOD1, would be streamed in to take the place of the actual sublevel. The "Relative Distance" setting serves the same purpose as the default streaming distance: it is the point at which we want our LOD1 to be removed from the player's viewport and the next level-of-detail streamed in. However, it is very important to note that the value for "Relative Distance" is cumulative; it is added to the values preceding it in the level-of-detail settings. An example is in order here to make this a bit more clear.

If you have not defined any other layers in W.C. and all of your sublevels are contained within "Uncategorized", their default streaming distance is 50,000. In Image 1, you can see that the relative distance is set to 418,353. This may seem like a strange number to choose, but I arrived at this value after some mathematical calculations and quite a bit of experimentation. What you can't see in the image above is that ALL of the LOD levels defined have the same relative distance value. When the player moves more than 50,000 units away from the sublevel, LOD1 will be streamed in, and once that streaming is complete, the sublevel will be removed from view and LOD1 displayed in its place. Then, once the player has moved more than 468,353 units away from the sublevel's location, the engine will start to stream in LOD2, and once LOD2 is fully loaded into memory, LOD1 will be removed and LOD2 displayed. This is the key part to remember: the relative distance value is added to the previous total. So, when the player has moved a total of 886,706 units away from the sublevel's location, Unreal will start to stream in the content for LOD3 and will remove LOD2 once that streaming process is complete. Why 886,706? Because the engine is doing the following math: 50,000 + 418,353 + 418,353 to arrive at the total distance the player needs to be for LOD2 to be too far away to see, requiring LOD3 to be streamed in.
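The arithmetic is easier to see laid out as a running total:

```python
base_streaming_distance = 50_000  # the "Uncategorized" layer default
relative_distance = 418_353       # the same value on every LOD in this test

# Each swap happens at the running total, not at a fixed offset.
threshold = base_streaming_distance
for lod in (1, 2, 3):
    print(f"LOD{lod} streams in beyond {threshold:,} units")
    threshold += relative_distance
# LOD1 at 50,000; LOD2 at 468,353; LOD3 at 886,706.
```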

Most of the fields under the "Simplification Details" rollout are similar to the fields used by the Actor Merging feature of the engine, so those won't be covered here. There are, however, a few key differences between the actor merging tool and the W.C. LOD generation tool.

One of the cool features of this system is that it combines all of the static mesh objects within your W.C. sublevel. This is very similar to the Actor Merging feature, with a few differences. The first is that you do not get to choose which LOD level will be used when merging the static meshes contained in the sublevel. The system chooses a single LOD level (which appears to be the lowest) from each static mesh asset and uses that when combining them. We don't have any control over this portion of the generation process; it is done behind the scenes. But we do get to specify a "Static Mesh Details Percentage", which is the second departure from the Actor Merging feature. I am guessing a bit, but I believe that after the static meshes are merged, the result is reduced further in an attempt to reach the percentage we specify. So, if your merged static mesh is 15,000 triangles and you enter a value of 68.5 for the percentage, the actual merged static mesh used in the W.C. LOD asset will be approximately 10,275 triangles. Considering that the system is already (apparently) using the lowest LOD level from each static mesh, the resulting combined mesh is already pretty lightweight. Further reduction via this "Static Mesh Details Percentage" field would probably result in an unusable merged mesh if pushed too far.

The next field that we really need to pay attention to is "Landscape Export LOD". At first glance, the "Static Mesh Details Percentage" field may appear to be an alternative to "Landscape Export LOD", but in fact they are completely different. When W.C. generates the static mesh for the landscape actor (not the static meshes within the sublevel, but the actual landscape itself), it will use the LOD level specified in this field. The field defaults to LOD7 for the landscape actor, which results in a static mesh with approximately 2,048 triangles. If you want or need higher resolution for the static mesh generated from your sublevel's landscape actor, you will need to enter a different value here. In Image 1 you can see that I chose 3 as the LOD level to use from the landscape actor when generating the static mesh for LOD1 in W.C. This resulted in a static mesh that retains most of the detail contained in the actual landscape actor, while having significantly fewer triangles. In Image 2, you can see a screen capture of the landscape with the actual landscape actors being displayed.

Image 2: Here we can see the landscape stretching off into the distance. This is showing the sublevels near the character with the actual landscape actors and assets; no levels-of-detail are being displayed in the viewport. When looking at this image, pay close attention to the very dark mountains in the background.

In Image 3, we can see that the character has moved far enough away from one of the sublevels to cause its first LOD to be displayed. The static mesh displayed instead of the landscape actor retains almost all of the detail of the actual landscape actor itself. The silhouette seen against the sky matches very closely, and even if there is a little 'pop' when the LOD is swapped for the original, it shouldn't be dramatic. Some experimentation will be needed to wring as much performance as possible out of each sublevel's LODs. I would advise finding a "Landscape Export LOD" value that works for your most distinctive landscape features, and sticking with that value for all sublevels. If you have a sublevel that is relatively flat and doesn't have incredibly distinct skyline silhouettes, you might be tempted to set its "Landscape Export LOD" to a more aggressive value than the others around it. But remember that the edges of the surrounding proxy meshes have to match up, and using different "Landscape Export LOD" settings may result in gaps that can be seen by the player.

Image 3: One of the sublevels has been removed and its LOD1 asset displayed in its place. We can see that the generated mesh is fairly close to the landscape actor it is derived from. However, the material created for this LOD is far from acceptable. This was due to the settings that I provided to the system. Garbage in, garbage out.

For the last two features that are unique to W.C.'s level-of-detail generation system, we will look at the "Bake Foliage to Landscape" and "Bake Grass to Landscape" options. These render the foliage or grass assets into the 2D texture used on the landscape proxy mesh. This gives the player the impression that the foliage or grass is still on the landscape, even though they are just seeing a 2D texture on the proxy mesh. However, the system doesn't appear to capture the color of tree leaves very well (or at all), which reveals the lack of trees very easily. The system does render the color of the tree trunks to the texture reasonably well. In Image 3, if there were trees on the slope of that mountain, it would be very obvious that the actual 3D assets were no longer there. I think there is a way around this (though I haven't tried it yet), and I will cover that briefly below. If you are using a stylized look, where your trees aren't using masked materials for the leaves and branches, you could end up with a much better result. I am not sure how much better, though, because I have done very little testing with stylized assets.

One detail in Image 3 that would be hard to miss is the large difference between the LOD asset's material and the material of the adjacent landscape actor. There is a hard edge that would be nearly impossible to hide. This was due to the default settings I chose when setting up the "Landscape Material Settings" for the generation of this LOD asset. I did not want to incur the cost of separate textures for specular and roughness, so I used constant values instead. When the landscape proxy mesh is generated, the material assigned to it uses these constants for specular and roughness. Because I set the values so poorly, the result was a very reflective surface, which is why it appears the way it does in Image 3. When I created the LOD assets for this open world map, I actually selected all of the sublevels and set their options together, resulting in all LOD1 assets sharing these constant values in their generated materials. You can see this in Image 4: the sublevel adjacent to the first one we were observing has been removed and its LOD1 asset is being shown in its place.

Image 4: The character has moved far enough away to cause the adjacent sublevel to use its LOD1 asset as well. As you can see from the image, there is something very wrong with the texture for the tree in the middle of the village.

In Image 5, you can see the landscape proxy meshes for both sublevels with much better constant values for specular and roughness. While this isn't a perfect match for the sublevels' landscape material, it is a huge step in the right direction. This is why I highly advise you to do some testing on a single sublevel's LOD settings and find material settings that work well with your map. Make sure to move your lighting around the same way it might move in-game; that way, you will see whether your lights are going to cause problems with specular and roughness values that are set too aggressively. Yet if you have no specularity and set the roughness all the way to 1, you will lose any definition in your landscape proxy mesh. It will just look like a flat, 2D card placed in the distance, with none of the landscape features highlighted. Once you have values that you are happy with, you can use them to generate all of the LOD assets for your sublevels in W.C.

Image 5: Both landscape proxy meshes' materials have much better specular and roughness constant values. The 'pop' from the sublevel to the LOD asset is noticeable if you're looking directly at it, but not extreme.

Like Image 3, Images 4 and 5 have their own detail that would be impossible to miss. The tree that I placed in the middle of the small test village has been included in the merged static mesh for that LOD asset. But no matter what settings I used, I could not get the tree to merge correctly with the other assets in that sublevel. I duplicated it in the project and deleted the lowest LOD for the asset, thinking the problem might be that LOD4 for that asset was effectively two quads turned at right angles with a texture of a few branches on them. That didn't work. I changed the material settings in the "Static Mesh Material Settings" rollout, changing the material type to masked instead of opaque. That resulted in the same broken-looking tree after regenerating the LOD asset. But don't get too frustrated, because I haven't yet mentioned the last aspect of W.C.'s LOD generation system that I am going to cover.

When I refer to a sublevel's LOD asset, I am actually referring to a completely separate sublevel that W.C. streams in and uses to replace the parent sublevel. For the sublevel shown in Image 1, we are looking at the options for LOD1 of sublevel E5-2. When W.C. generates an LOD asset for E5-2, it creates a completely separate level containing the landscape proxy mesh and merged static mesh actors. It is a complete level! It is stored in a folder named E5-2LOD, which holds all of the LOD assets for sublevel E5-2; the one for LOD1 is named E5-2_LOD1. There are no lights within E5-2_LOD1, because I keep my directional light in the persistent level. Honestly, there isn't much in this level at all, but it is a complete level. To fix the tree issue, I exported the merged static mesh for E5-2_LOD1 and removed all of the triangles for the tree. I then set this new FBX file as the source file for the editor to use for the asset. After that, I just pressed the reimport button in the editor for that merged static mesh, and the tree was gone. I know, you're probably saying that this isn't much of a solution; that you want your tree. But after I removed the tree from the merged static mesh, I opened E5-2_LOD1 and placed the tree back into the level as a separate asset. Once I saved E5-2_LOD1, the replacement tree was part of the LOD asset for E5-2, and whenever the player moved far enough away from E5-2, the LOD1 asset would be streamed in and displayed. Sure enough, there was my replacement tree; exactly what I wanted. And because the replacement tree still follows the rules set out in its own LOD settings, as it takes up less and less screen space, it will use lower and lower LODs of its mesh.

With the realization that each LOD asset generated by W.C. for each sublevel is nothing more than a separate level object swapped in, you may be having the same idea as me: we may be able to just open these LOD assets and use the editor's foliage mode to place simplified versions of our trees into them. They are, after all, just levels like any other we might work with. We would have to define different static mesh foliage assets, because we would want to use a much lower static mesh LOD for these. But I don't believe these static mesh foliage assets are very large, so the cost may be well worth the results. I haven't tried this (yet), but I see no reason why it wouldn't work.

Throughout this article, with the exception of a few places, I have been talking about LOD1 of the sublevel named E5-2. Nevertheless, everything that I have said applies to all of the sublevels in my open world test map, and not just for LOD1 either, but for all four of the levels-of-detail that W.C. allows us to create per sublevel. There are a total of 36 sublevels making up this open world test map, and for each of these I have the maximum of four LOD assets. That brings the grand total to 144 levels that W.C. created for me to use as LOD assets for the sublevels. Yes, there is still quite a bit of work that would need to be done if I were to use these, but it is a huge help that W.C. can generate them for us.

One last word of warning: do not edit any of the LOD assets until you are sure that you are happy with your LOD settings in W.C. When W.C. generates these LOD assets, it will gleefully overwrite any changes you have made. After I removed the tree in the middle of the village, I regenerated LOD1 for that sublevel, and the merged static mesh actor that was created had the tree right back where it was. My change was gone, but I knew that would be the case. The same is true for all of the meshes and/or materials of the LOD assets. Only alter them once you know you will not be regenerating them again.

Well, this has been one of the longest posts I have made on the site. I don’t claim to be an expert with this tool, and with UE5 moving to World Partition, we may never see any information coming from Epic about this tool again. I hope that it helps somebody. Thanks for sticking with me through this very long article. Have a great day and happy developing!

Landscape Optimization

There has been so much going on since the last post. The design document has received more work, detailing some changes to the backstory that are going to directly affect the overall structure of the game. We have also detailed some of the newer game features that we are adding. Each major region of the game is going to have its own environmental game feature. For example, in the jungle, players will be able to obtain a grappling hook and swing through the trees. In the forest region, the player will be able to gain access to a wingsuit and glide through parts of that region. This last feature led to some interesting observations and some decisions about the size of the game itself.

It was always our intention to make this a reasonably large open world, but I hadn't given much thought to just how big the main map would be. While testing the wingsuit in its prototype project, I was able to glide over 0.7km (0.43mi) in a single glide. This led to the obvious question: do we want to nerf this feature, or do we want to go big on the landscape? Having done Capuchin Capers as a series of reasonably large islands, and having dealt with the optimizations needed for performance, I knew we needed to test truly large landscapes before making this decision.

Landscapes are a big topic, no pun intended. I spent numerous days in Unreal Engine 5 Early Access 2, because I knew that Epic wanted to focus more on open worlds with UE5. After my testing, I have decided that we are going to stick with UE4 for this game. There are some great features in UE5 that directly impact how developers go about building open worlds, but I just can't count on all of the performance issues with landscape actors in UE5 being fixed at launch. A landscape in UE4 that runs at a very comfortable 120+ fps runs in the low 60s to high 50s in UE5. I know that Epic will get that sorted out, but we can't put years of work into this project all the while hoping that these issues will be resolved to a reasonable degree. World Partition, Lumen, Nanite, and MetaSounds are all great features, and I really wanted to take advantage of them. That is why I spent almost four days trying various approaches that would let us use UE5. But in terms of the workflow for building landscapes, UE5 has a long way to go.

My experience with UE5 led me back to UE4 and World Composition. I had watched the live-streams about W.C. and read the documentation for the system. Unfortunately, there isn't much information on this system compared to other parts of the engine…it just doesn't get used nearly as much as the 'main' areas of the engine. But after spending several days testing and experimenting, I feel that I am getting my feet underneath me enough to be confident that we will be able to achieve our goals with W.C. There is a lot to learn about it, and some serious quirks to the whole thing that I spent way too much time fiddling with.

The first is making a truly large area without any seams between the landscapes stored in the various levels in World Composition. I wanted a total map size of approximately 12km², which would require a heightmap with a total size of 12,099² (this was slightly off by a few pixels, but it didn't really make a difference). I tried to create each landscape separately, in its own sub-level in W.C., but I kept running into a common problem: I was getting very small cracks between the individual landscapes. At first I thought it was due to the heightmaps having ever-so-slight differences in their color values along these seams. That was not the case. You can do any color corrections that you want, but it won't get rid of those seams. Worse, you can't use the sculpting tools to fix them, because they are caused by the separation of two different landscape objects. You can't 'paint' across the boundaries because, in fact, the landscapes don't share any boundaries; they just happen to sit next to one another. After watching the live-stream several times to try to get a hint of how to fix this, I noticed a really nice feature that is mentioned around the 47:36 time mark: Add Adjacent Landscape Level. Those are currently my favorite four words in the English language (that is why I highlighted them the way I did…I love those four words).

There are two approaches to using heightmaps in Unreal. The first is during landscape creation, via the ‘Import from File’ option. This approach will create the landscape based on the file chosen, generating presets for the number of components, the section size, and the other settings for a landscape. This is nice if you have a single landscape in your level that isn’t larger than the limit for a landscape object. However, if you are attempting to create a map on the scale of an open world, this option won’t work. You inevitably get those seams between the landscapes.

Image 1: The first approach that you might take to creating a landscape using a heightmap.

The second approach to using a heightmap is to first create your landscape object, defining all of the various settings in advance and creating a totally flat landscape (see Image 2 below). Then, once the landscape has already been created, you can load a heightmap after the fact in the “Target Layers” rollout of the sculpting settings (see Image 3).

Image 2: Creating a landscape manually, by defining all of its parameters and pressing the create button. This results in a flat landscape with the dimensions that you defined in the rollout above.
Image 3: Using the orange ‘Heightmap’ button to load in a heightmap to be applied to your landscape.

Normally, when you are loading a heightmap for a single landscape, there is little difference between the two techniques. With either technique, you can set your landscape’s component count, section size, etc. But no matter what you do, using these techniques alone you can’t create a single landscape large enough to be used for an open world. That is where “Add Adjacent Landscape Level” finally comes into play, and all of this makes more sense.

If you first create one of your landscape levels in W.C. via the ‘Create New’ option in the W.C. tab, you can define your first landscape with all of the settings that you would need for a single tile in your open world map. In my case, those were the settings that you can see in Image 1 above. This is just one of nine tiles that make up my open world map, though, and I needed to add eight more to make the entire map. Keep in mind that I did not create that first tile via a heightmap. I predefined the landscape with the settings that I knew I wanted, and I created a flat landscape. Once that landscape was created, I used the “Add Adjacent Landscape Level” feature to create the other eight landscape tiles that I needed. You can see in the live-stream that when you use this feature, the landscape created does in fact go into its own level. But, it shares the vertices along its boundary with the other landscapes created in this way. So, while each landscape is in its own level and benefits from all of the applicable tools in W.C., the landscape objects themselves are seamless. You can sculpt or paint across the boundaries without issues.
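That shared boundary row of vertices is also why my heightmap math came out “slightly off by a few pixels.” Here is a quick back-of-the-envelope sketch of the tile arithmetic. The per-tile resolution of 4033 is an assumption for illustration (it is one of the commonly recommended landscape sizes); my actual tile settings are the ones shown in Image 1.

```cpp
// Sanity-checking the tiled heightmap math (per-tile resolution is assumed).
#include <cstdio>

int main()
{
    const int TileRes      = 4033; // vertices per tile side (assumed for illustration)
    const int TilesPerSide = 3;    // the 3x3 grid of nine tiles

    // Naive total: just multiply, which is roughly how the map was first sized.
    const int NaiveRes = TileRes * TilesPerSide; // 12099

    // Seamless adjacent tiles share their border row of vertices, so every
    // tile after the first only contributes (TileRes - 1) new vertices per side.
    const int SeamlessRes = TileRes + (TilesPerSide - 1) * (TileRes - 1); // 12097

    std::printf("naive: %d, seamless: %d\n", NaiveRes, SeamlessRes);
    return 0;
}
```

The two-pixel difference between the naive and seamless totals is exactly the kind of “off by a few pixels” discrepancy mentioned above, and it doesn’t matter in practice.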

That wraps up this rather long post. In the next post, I will discuss some of the issues that I ran into while working with W.C.’s LOD system. You can see the effects of that LOD system in the main image for this post (which I am also including below). Thanks for sticking with me on this post, and I hope that it helps you build the open world of your dreams!

Image 4: The open world test map. Some of the mountains in the background are over 3km away! The actual mountains are mesh proxies being used in World Composition’s LOD system. More on that in the next post…that is called a teaser, and I am a jerk for doing that. But, that didn’t stop me, did it? 😀

Continuous Training

Having wrapped up the bulk of the work on Capuchin Capers, we have begun preproduction planning on At the Crossroads, our next title. At the Crossroads will utilize quite a few aspects of the Unreal Engine that we have yet to work with in a game, which raises an interesting question: should we push on into production and learn on the fly, or should we train ourselves up before production starts?

This may, at first glance, appear to be nothing more than a question based on your world-view, but there is more to it. Time is a resource. How much time will be spent developing with a new technology if you are constantly required to stop development to research how to use that technology in a specific use-case? How many times does this have to happen before you have spent more time that way than you would have spent taking a course or two on the technology? This is the situation I have been facing since preproduction on At the Crossroads started. In particular, the Gameplay Ability System has made me rethink my approach to this key question.

The Gameplay Ability System, or GAS for short, is a system designed by Epic from the ground up for building entire sets of abilities for Action, RPG, MOBA, Battle Royale, and many other game genres. Because it was meant to be used in such a wide range of situations, the GAS is fairly complex, with many pieces that interact with one another. Watching a few YouTube videos and reading some forum posts would not be enough to implement such a complex system well. We would constantly have to stop development to scour the Internet for answers to problems, and the time spent doing that research would quickly outpace the time spent taking a course on Udemy and truly studying the GAS before production began.
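To give a sense of the moving pieces involved, here is a minimal sketch of what a single ability looks like in C++. This is illustrative only: the class name is hypothetical, and a real setup also needs an AbilitySystemComponent on the owning actor, granted ability specs, gameplay tags, and GameplayEffects for costs and cooldowns. Those interacting pieces are exactly why the GAS rewards studying up front.

```cpp
// DashAbility.h -- a minimal, illustrative GAS ability. The class name is
// hypothetical, and the GameplayAbilities plugin/module must be enabled.
#pragma once

#include "CoreMinimal.h"
#include "Abilities/GameplayAbility.h"
#include "DashAbility.generated.h"

UCLASS()
class UDashAbility : public UGameplayAbility
{
	GENERATED_BODY()

public:
	virtual void ActivateAbility(const FGameplayAbilitySpecHandle Handle,
	                             const FGameplayAbilityActorInfo* ActorInfo,
	                             const FGameplayAbilityActivationInfo ActivationInfo,
	                             const FGameplayEventData* TriggerEventData) override
	{
		// CommitAbility checks and applies this ability's cost and cooldown
		// GameplayEffects; if it fails, the ability must end immediately.
		if (!CommitAbility(Handle, ActorInfo, ActivationInfo))
		{
			EndAbility(Handle, ActorInfo, ActivationInfo, /*bReplicate*/ true, /*bWasCancelled*/ true);
			return;
		}

		// ...apply GameplayEffects, play montages, spawn ability tasks, etc...

		EndAbility(Handle, ActorInfo, ActivationInfo, /*bReplicate*/ true, /*bWasCancelled*/ false);
	}
};
```

Even this stripped-down example touches ability handles, actor info, activation info, and the commit/end lifecycle, and each of those threads out into other parts of the system.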

There are many aspects of game development that will require you to ask yourself the question above. The GAS is just one example. For another, let’s take a look at networking in Unreal. Networking, which is known as replication in Unreal-speak, is not a trivial task and requires a deeper knowledge of not only how replication itself works, but also Unreal’s Gameplay Framework. If your knowledge of the Gameplay Framework is lacking and you were making a single-player game, you could just put functions and variables wherever you wanted. When it comes time to implement replication, however, you are likely to be in for a world of pain and suffering. The time spent undoing a lot of work that violates the design principles of the Gameplay Framework could easily be longer than the time spent going through one or more of the courses on Udemy or Unreal’s Knowledge Portal.
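As a small taste of what “putting things in the right place” looks like, here is a sketch of the standard property-replication boilerplate. The class and property names are hypothetical; the point is that a replicated property has to live on a server-owned, replicating actor, which is exactly the kind of Gameplay Framework decision that is painful to retrofit later.

```cpp
// MyCharacter.cpp -- illustrative replication setup (class and property names
// are hypothetical). Health is declared in the header as:
//   UPROPERTY(Replicated) float Health;
// and GetLifetimeReplicatedProps is declared there as an override.
#include "MyCharacter.h"
#include "Net/UnrealNetwork.h"

AMyCharacter::AMyCharacter()
{
	// Opt this actor into replication; the server's copy becomes authoritative.
	bReplicates = true;
}

void AMyCharacter::GetLifetimeReplicatedProps(TArray<FLifetimeProperty>& OutLifetimeProps) const
{
	Super::GetLifetimeReplicatedProps(OutLifetimeProps);

	// Register Health so the server pushes its value to all connected clients.
	DOREPLIFETIME(AMyCharacter, Health);
}
```

If Health had been stashed on, say, a HUD widget or a client-only class, none of this would work, and moving it afterward means reworking everything that referenced it.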

Technology in general, and game development specifically, seems to change at breakneck speed. Thus, the requirement to learn these new technologies is never-ending. As game developers, we will always have to retrain ourselves to keep our skills sharp. Whether it is a new app that we need to use to create assets, or a new system in our game engine of choice, learning and working with new technologies is an endless part of our lives.

In the end, it all comes down to how we want to spend the most precious currency that we, as indie developers, have: time. It may be tempting to jump into the deep end, and sometimes that can work out for the best. But we always need to spend that most precious currency wisely. We only have a limited amount of it.

Proper Preproduction Planning Provides Positive Performance

With Capuchin Capers essentially wrapped up, planning for our next game can commence. On each of the last two projects, I tried to do some form of planning. In Odin’s Eye, I did some level planning, storyboarding, and concept art. But it wasn’t nearly enough, and the project was much harder as a result. For Capuchin Capers, I decided that more planning needed to be done before the majority of the work started. But, much like with Odin’s Eye, not enough preproduction planning was done, and there were points where the project suffered as a result. As I may have mentioned in a previous post, the cinematic storyboarding should have been done long before any character modeling was started. Because I didn’t do that, Suzy’s bone structure didn’t support some of the movement I wanted in the final cinematic shown to the player when they win Capuchin Capers. I made it work, but it was far more difficult than it should have been. Proper preproduction planning would have prevented that.

For our next game, At the Crossroads, we are working to do much more preproduction planning before the first line of code is written or the first polygon is created in Blender. We already have some concept art for the creatures in the game, with more to be created and added to the game design document. I can’t stress enough how much easier it will be to create this game once we have everything planned out ahead of time. We’ll know exactly what animations, assets, code, textures, and so on we will need before we start. Code development in particular is going to be considerably cleaner and tighter as a result.

An added bonus of a well-made game design document is the feeling of legitimacy it conveys when you are looking at it. If it is laid out properly, with a nice selection of fonts for the main body of text as well as fonts and backgrounds for the title and headers, it gives the project a feel that starts everything off on the right foot. The text styles created in the design document can also be used for the instruction manual, if you plan on including one. This is nice because, after reading the game design document for months, or even years, you will know whether the text styles you selected will cause problems with people’s ability to read the manual…aside from everyone’s natural reluctance to read a manual, of course.

But don’t feel as if the design document, or preproduction planning in general, will stifle your creativity. You don’t need to document every step the player can take in the game; documenting every small detail can lead to a game that feels formulaic. By capturing the significant details about the game, like how many maps there are and how they’re structured, what monsters are in the game and what their major attacks are, as well as other high-level details, you will have an excellent idea of the steps needed to build your game. Another example is dialog scripts. Writing scripts for dialog is very important if you are hiring voice actors/actresses to record the dialog. How much dialog will there be? How long will the recording sessions take? What inflections does the voice talent need to put on certain lines? A recording booth that you are renting by the hour is a bad place to realize that you didn’t think all of this through as well as you should have.

When we are done writing the design document, at a stage where we feel comfortable moving into production, we will have a document that is over twenty-six pages long (its current length at the time of this blog post). We will have several terrain studies written to document the various biomes where the game will take place. We will have a document outlining our marketing strategies and the steps needed to realize them. And we will have proper storyboards detailing the major shots in the game’s cinematics. Will this be a complete, step-by-step guide to making At the Crossroads? No, it won’t. But it will give us a very good idea of everything that is going to go into the game.

Surprises in life can be fun; surprises in development are almost always painful. Plan accordingly.