It has been a long time since the last post, and there is good reason for that. I have put a lot of time and effort into creating the design documents for At the Crossroads. I have most of the main storylines fleshed out and many of the supporting characters created. As I worked on this, it became more and more clear just what it will take to create this game, and, as the only person working on it, that it simply isn't possible for me alone.
I have put off this post for months now, hoping that I could figure out a way to still make this project happen. My original idea was to create the opening of the game in its entirety, with all of the gameplay systems fully functional, and to have the open world (which makes up the vast majority of the actual game) built with all of its assets in place. There wouldn't be any NPCs in the open world and no story-related content; it would just be an opportunity for potential supporters to see the world and decide if they wanted to back the game. This demo would then be made available through a Kickstarter campaign, with anyone able to download and play it. The demo would give the community a chance to see the quality of the game and decide if it was something they would like to help make a reality. But as the game's story outline expanded into a reasonably accurate view of what would be necessary to tell this story, it became abundantly clear that no one person could make this happen.
I don't have any intention of leaving the world that I have created unused. That is where the title of this post comes from. I have to look at reality and be honest about what I am going to be able to do as a single developer. Story-driven games are great, but there is a reason why indie developers choose to make small, sandbox-based games with very little story to them. It isn't so much the writing of the stories; it is the massive amount of work that goes into building all of the gameplay-related aspects of that story into the game.
At the Crossroads, as a setting, is still going to see the light of day. I think a much smaller, more tightly focused game set in the Crossroads would be the best approach. If that goes over well and generates enough revenue to hire others to help on the full story-based game, then the game that I've been designing could be made. I think the approach to full funding would still need to follow the strategy outlined above (with a demo that potential supporters could play before committing their support).
In closing, I have to say that, as an indie developer, I want to make huge, epic games that people lose themselves in for a time. Epic Games has given us so many tools (MetaHumans, PCG graphs, full access to Quixel) that sometimes I convince myself I can do more than is really possible. I think the story that I've created is worth seeing the light of day, and hopefully, someday, it will.
Many months have passed since the last update on the site. Work has continued on the development of At the Crossroads, but it has been slow going due to real life encroaching on my time. However, I won't let that stop me from moving forward, even if the progress is at a snail's pace. Progress on the writing for AtC has been difficult, because you can't force inspiration. In programming, you can overcome an issue by putting in more work to find a solution. The same can't be said for creative efforts like writing. I do not want to rely on ChatGPT to write my story or characters for me, so that is not an option. But sometimes a break is exactly what creativity needs to recharge the batteries. Luckily, Epic has provided just such a break with the Procedural Content Generation system (or PCG for short).
The PCG system is a great addition to UE 5.2 and is going to provide a considerable degree of control over how content can be created in our games. Previously, I used the Procedural Foliage Spawner system (or PFS for short) within Unreal to create an accurate biome. While this system is experimental (just like the PCG system), it is a really nice system that, when finely tuned, can generate fairly accurate results. This is the technique that was used in Capuchin Capers to create the foliage for all of the islands. While the PFS system can be a great time-saver when set up correctly, and provides a good way of realizing the data that we gain from our terrain studies in a concrete way, it definitely has its limitations.
The PFS system will generate a spawn location for a type of foliage and calculate the spread of that foliage over generations. This results in clumps of foliage types, such as ferns, where there are "older" ferns in the middle of the clumps and "younger" ferns on the periphery, simulating the gradual spread of that plant type within its environment. While this is seen in nature, it doesn't always play out that way. There are some types of plants that, in nature, will choke out other plants in various ways. The PFS system tends to produce these clumps, or islands, of foliage types in a higher abundance than you would witness in real life. This effect can be lessened by adjusting the values for the individual foliage types that are to be spawned. But to a certain extent, the PFS system leans in this direction no matter what you do…it is the basic premise of the system, after all. To be sure, the PFS system is still a very good tool, and I loved the results I was getting compared to hand-placing the foliage with the foliage paint mode in the editor. But I always felt that I just couldn't get the results I was after, no matter how much tweaking I did. I feel that the PCG system can remedy that.
One of the aspects of the PFS system that I struggled with was getting shade-loving plants to consistently spawn within the "shade bounds" of a large tree. No matter what settings I used, I just couldn't get a consistent result that placed the foliage types where I wanted them. This is because of the rules that are baked into the PFS system (which, unless you're willing to change the C++ code, are immutable). With the PCG system, we not only have easier access to the code responsible for spawning the foliage, we actually get to write that code! Some may see this as a regression, in that we are required to write the spawn logic for our foliage rather than having a built-in system to aid us. But this is much more of an unshackling of our creativity than a ball-and-chain that increases the effort required to realize our vision. The extra work is far outweighed by the freedom that the PCG system grants us. We get almost all of the benefits of targeted foliage placement while retaining the great performance that we receive from instanced foliage. It is true that procedural placement will never be as accurate as hand-placing every single asset, but we can come much closer than ever before, and in far less time. We can do our terrain studies to find out all the information that we need about a biome, and then implement that data in our PCG graphs to produce a much more realistic result than we could ever achieve with the PFS system.
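To make that a little more concrete, here is a rough sketch of the kind of placement rule I am describing, written as plain Python rather than as an actual PCG graph (the real thing would be built from PCG nodes, and every name and number below is made up purely for illustration): generate candidate points, then keep only those that fall within an assumed shade radius of a large tree.

```python
# Illustrative pseudocode in plain Python, NOT the PCG API: the idea is simply
# "keep shade-loving spawn points that fall inside a tree's shade bounds".
import math
import random

SHADE_RADIUS = 6.0  # assumed canopy/shade radius in metres

trees = [(10.0, 12.0), (40.0, 8.0)]  # positions of large trees (made up)
candidates = [(random.uniform(0.0, 50.0), random.uniform(0.0, 50.0))
              for _ in range(500)]   # candidate spawn points from a point grid

def in_shade(point, tree_points, radius):
    """True if the candidate point lies within the shade radius of any tree."""
    return any(math.dist(point, tree) <= radius for tree in tree_points)

shade_plants = [p for p in candidates if in_shade(p, trees, SHADE_RADIUS)]
sun_plants   = [p for p in candidates if not in_shade(p, trees, SHADE_RADIUS)]

print(len(shade_plants), "shade-loving spawn points,", len(sun_plants), "full-sun points")
```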
Another limitation of the PFS system is that you can't specify a separate static mesh to be used for saplings or very young plants; it simply scales the static mesh specified in the static mesh foliage asset. Not only is this unrealistic, it can lead to serious visual artifacts if a static mesh has a material that implements World Position Offset to create wind effects. The animated leaves of these scaled assets can show severe "smearing" or stretching that is instantly noticeable and very undesirable (see Image 1 below). This can occur with PCG too, of course, but only if you rely solely on scaling an asset to achieve a "younger" version of that foliage. With PCG we are not limited to simply scaling a mesh; we can provide a completely different static mesh for these saplings and sprouts within our graphs!
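Continuing the same purely illustrative, non-PCG-API sketch (the asset names and thresholds are hypothetical): instead of scaling one mesh across its entire age range, the selection logic can switch to a dedicated sapling mesh below an age threshold and only apply a modest scale to each mesh.

```python
# Illustrative only: swap meshes by "age" instead of scaling one mesh from tiny
# to full size, which is what smears World Position Offset wind animation.
def pick_fern_mesh(age):
    """age is a normalized value in [0, 1]; asset names are hypothetical."""
    if age < 0.3:
        # Dedicated sapling mesh, scaled only within a narrow, safe range.
        return "SM_Fern_Sapling", 0.8 + 0.5 * age
    # Mature mesh, again with only modest scaling.
    return "SM_Fern_Mature", 0.85 + 0.15 * age

print(pick_fern_mesh(0.1))   # ('SM_Fern_Sapling', 0.85)
print(pick_fern_mesh(0.9))   # ('SM_Fern_Mature', 0.985)
```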
I am very excited by the direction that the PCG system is taking content generation within the Unreal Engine, and by extension, gaming in general. And make no mistake about it: with this type of open-ended procedural content generation, previously only available in software packages like Houdini, now built into UE, other engine developers will be forced to implement something similar or risk being left behind. The PCG system benefits all developers, but not equally. Small studios or individual indie developers will benefit far more than AAA studios, because those huge studios could already realize their vision with precise accuracy in a (relatively) short period of time. For the indie studio, however, throwing more work-hours at a problem just wasn't an option.
Thus far, I have spoken only of the PCG system as a means of placing foliage on a landscape, but it is capable of SO much more than that; the examples used above are just that: examples. There are very few limits to the PCG system, and I believe we will see its use throughout UE content generation and development. With further development, it will become more feature-rich and performant, which will open doors that were previously closed in game development.
Thank you for stopping by and reading this article, and I hope that you have a great day.
Real life has a way of creeping into every project, and this project isn't immune by any means. A large-scale, time-sensitive issue came up and consumed several months of time that was meant for development of the game. I am happy to say that I can finally turn my attention back to working on At the Crossroads.
I have been outlining the individual plot lines, as there is one for each region of the game, and that is where most of my efforts are being directed. Because I haven't spent years studying writing, this has been a challenging task. However, I am beginning to make reasonable progress again. There just isn't much that can be commented on here, and there are no screenshots to share. This is one of the main reasons for the very long delay between this post and the last one.
The site isn’t dead, nor is the project. It’s just that life happened, and there was no way around that. Anyway, I hope that anyone reading this has a great day, and thanks for stopping by.
This is a long overdue update, but a lot has been going on since the last post.
First, another modular kit that will be needed for the game has been, for the most part, completed. There are some accent pieces that still need to be made to help dress up the areas that utilize this kit. Several versions of the stelas need to be modeled for the kit, but before that could be done, I ran into a setback: to make the stelas show information that is relevant to the backstory, we needed a more detailed look into that backstory.
This need to dive into the backstory/lore of the game world forced me to review what we already had for the story, character arcs, and plots that take place in the game’s storyline. Not being a writer by profession, and only having a novice level of experience with the proper way to write a complex story, I’ve had to take a lot of time to dive deeper into this part of the game’s development.
We've always known that the story for this game needed to be strong, as it is really this story, and the world in which it takes place, that hold the game's biggest appeal. However, developing a well-written story with believable characters that people will care about is very difficult, and that is understating it. We are so lucky when it comes to the amount of resources available to aspiring writers. From the numerous blog posts by successful authors, to the YouTube videos by some of the best-selling authors today, there is hope, even for someone like me.
In my research on writing a proper story outline, I came across the YouTube videos for the BYU course on creative writing, by Brandon Sanderson. This has been invaluable to me, because I am definitely not a discovery writer, who can just sit down and start writing a decent story. I’ve tried. I failed. I have to outline everything so that I can picture how all the key events will fall together to build the story.
Since discovering Mr. Sanderson's lectures, I have fleshed out a fair amount of the overall story and how it will progress. I have the major and minor plots figured out, and I just need to detail the events that have to occur for each of these. This has been fairly difficult, though, because we are making this into a game rather than a novel. We can't control the exact order in which events will occur as the player progresses, so I've had to take even more time in this process.
In the end, we hope to have a story that will be compelling and enjoyable. We know that we can’t complete this game if we have to do everything ourselves. We can’t model every asset, nor can we write every single line of code. We have to lean on the Unreal Engine Marketplace to get a fair amount of our assets. But there isn’t an engine plugin to write and implement our story for us. And, we anticipate that it is this story that will allow us to compete, on a very limited level, with larger studios that have more experienced teams. With resources like Mr. Sanderson’s lectures, books on writing stories and screenplays, and other sources of guidance, we have a chance to make our hope a reality.
Thank you for taking the time to read this post, and I hope that you’ve gotten something out of it. Have a great day.
The Daursynka long house is done, and we are pretty happy with it (see the featured image for this post, and Image 1). There are many variations for each piece in the kit, to reduce the 'tiling' effect that can be seen in some games. For example, there are twelve different variations of the main roof's tile sections, to reduce the repetitive use of each piece. The human eye spots patterns fairly well, and repeating patterns are easiest to pick up on when there are many examples of the same pattern right next to one another. So, this approach was used for all of the pieces in the kit. However, this meant we ended up with over 460 pieces for this kit alone!
When looking at the prospect of exporting each piece, along with its collision geometry, it became clear very quickly that scripting the export process was going to be mandatory. When we say scripting in Blender, that means Python. I must admit, I have never been (nor will I ever be) a huge fan of Python. I don't like the language as a whole, and so I didn't have much experience with it. That meant first learning Python beyond a quick overview of the language's features. Even if the only scripts you were ever going to write in Blender were small, relatively simple 'helper' scripts, the effort would absolutely be worth it, especially while creating something as large as an entire modular kit.
After going beyond the introductory lessons on Python at W3Schools, I turned my attention to writing a script to help with the export. It became obvious very quickly that this was going to be much more than just a simple little script. I wanted to keep my collection hierarchy in Blender and use it as the folder structure on the hard drive when exporting the assets. This meant a recursive function. I want to be completely honest here: in all of the time that I have done programming of any kind, I have never had to write a recursive function. So this was a first for me, and it was an…experience.
Recursive functions are usually deceptively short, and therefore look deceptively simple. But, at first, I found it very difficult to wrap my head around exactly how the function was going to work. I had to look at it in a completely new way (for me), which was so different from anything I had done before. The hardest part was thinking about how the function was going to 'walk' through the various collections, and the order in which it would run. I tried to visualize how the function would be called on the very top-level collection, and how it would call itself when it discovered a collection within that top-level one. I did figure it out, but I can't say that I will ever be comfortable writing recursive functions.
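For anyone curious, a minimal sketch of that kind of recursive walk is shown below. This is not the actual export manager, just the core idea: mirror the collection hierarchy as folders on disk and export each mesh object (along with any 'UCX_'-prefixed collision meshes, the naming convention Unreal's FBX importer recognizes) to its own FBX file. The root collection name matches the 'Kit Root Collection' used by the export manager described a bit further down, and the output path is just a placeholder.

```python
import os
import bpy

EXPORT_ROOT = "D:/Exports/LongHouseKit"  # placeholder output folder

def export_collection(collection, folder):
    """Recursively walk a collection hierarchy, mirroring it as folders on disk
    and exporting each mesh object (plus its UCX_ collision meshes) to FBX."""
    os.makedirs(folder, exist_ok=True)

    for obj in collection.objects:
        if obj.type != 'MESH' or obj.name.startswith("UCX_"):
            continue  # collision meshes are exported together with their render mesh

        bpy.ops.object.select_all(action='DESELECT')
        obj.select_set(True)
        # Select any collision meshes following the UCX_<mesh name> convention.
        for col in collection.objects:
            if col.name.startswith("UCX_" + obj.name):
                col.select_set(True)

        bpy.ops.export_scene.fbx(
            filepath=os.path.join(folder, obj.name + ".fbx"),
            use_selection=True,
        )

    # Recurse into child collections, creating a matching sub-folder for each one.
    for child in collection.children:
        export_collection(child, os.path.join(folder, child.name))

export_collection(bpy.data.collections["Kit Root Collection"], EXPORT_ROOT)
```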
Once I had a script that would export my objects, I realized that I had only solved half of the problem. I would still have to import those assets into the Unreal Engine, so I was still staring down an enormous amount of work. Thankfully, the Unreal Engine editor can also be scripted with Python, so I knew that I could write a script to handle the imports. I needed a way to have the editor import not only the 3D objects themselves, but also to create the materials for those objects first. It would do no good to import 460+ objects and then have to apply materials to each and every one. This led down a rabbit-hole that resulted in the project import and export managers pictured below.
This tool (or tools if you want to count them separately) is why it took so long for a new post to be added to the website. These were a real trial to get to work, but I feel that the time spent will more than pay for itself when we are making other modular kits for ‘At the Crossroads’ as well as any other games that we create. All of the data entered into the export manager is saved in the .blend file that contains the modular kit assets. So, while it does take some time to enter that information, it only needs to be entered once.
When the data entry is complete, pressing the 'OK' button starts the export process. The tool uses the data entered for the materials and textures to generate part of a JSON file describing where these assets are on disk, as well as where they should be saved in the UE project when they are imported. It then 'walks' its way through the collections contained within the 'Kit Root Collection', discovering 3D assets as it goes. As it exports each 3D object, it finds that asset's collision geometry and exports it along with the asset, so when the import manager within the UE editor imports each 3D asset, it will have the correct collision. All of the asset-specific information is added to the JSON file as each asset is exported.
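To give a feel for what ends up in that JSON file, here is a heavily trimmed, made-up example of the kind of record the export manager might write; the real file tracks considerably more data (master material definitions, texture slots, collision info, and so on), and every name and path below is hypothetical.

```python
import json

# Hypothetical, trimmed-down shape of the manifest; real field names differ.
manifest = {
    "meshes": [
        {
            "name": "SM_LongHouse_Roof_A",
            "fbx_path": "D:/Exports/LongHouseKit/Roof/SM_LongHouse_Roof_A.fbx",
            "ue_destination": "/Game/LongHouseKit/Roof",
            "material_instance": "MI_LongHouse_RoofTiles_A",
        }
    ],
    "material_instances": [
        {
            "name": "MI_LongHouse_RoofTiles_A",
            "parent_master": "M_LongHouseKit_Master",
            "textures": {
                "BaseColor": "/Game/LongHouseKit/Textures/T_RoofTiles_A_BC",
            },
        }
    ],
}

with open("longhouse_kit_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```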
The import manager reads the JSON file and imports the textures, materials, master materials, and instanced materials. I say that it imports the materials, but it is more accurate to say that these are created from the data generated by the export manager and saved in the JSON file. The master materials that are defined in the export manager are created first, and then all of the instanced materials are inherited from these. Once this is complete, the 3D assets can be imported with all of the correct materials and/or instanced materials applied. The whole import process took just under five hours to complete; imagine how long it would take to do all of these imports individually. This will be a huge time saver in the long run, as there is very little work that needs to be done from that point.
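For reference, a bare-bones sketch of the import side using the UE editor's Python API might look something like the following. It assumes the JSON layout sketched above, assumes the master materials already exist at the given paths, and skips the texture imports and material slot assignments that the real import manager performs.

```python
import json
import unreal

# Load the (hypothetical) manifest written by the Blender-side export manager.
with open("longhouse_kit_manifest.json") as f:
    manifest = json.load(f)

asset_tools = unreal.AssetToolsHelpers.get_asset_tools()

# Create material instances and parent them to existing master materials.
for mi in manifest["material_instances"]:
    factory = unreal.MaterialInstanceConstantFactoryNew()
    instance = asset_tools.create_asset(mi["name"], "/Game/LongHouseKit/Materials",
                                        unreal.MaterialInstanceConstant, factory)
    parent = unreal.EditorAssetLibrary.load_asset(
        "/Game/LongHouseKit/Materials/" + mi["parent_master"])
    unreal.MaterialEditingLibrary.set_material_instance_parent(instance, parent)

# Import each FBX as a static mesh (its UCX_ collision comes in with the file).
for mesh in manifest["meshes"]:
    task = unreal.AssetImportTask()
    task.filename = mesh["fbx_path"]
    task.destination_path = mesh["ue_destination"]
    task.automated = True
    task.save = True
    asset_tools.import_asset_tasks([task])
```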
After the import is complete, the only thing left to do is define the actual node networks for the base materials and the master materials. This has to be done by hand because, while it is possible to map some of the material nodes from Blender to the UE editor, it would be extremely difficult to do. Some nodes in Blender, like the math node, do have equivalents in UE, but there are others that do not map across the two applications at all. And once you get into the more complex materials in either program, trying to make these basic node mappings work gets even more complicated. It was decided that it would be best to just define the node networks in the UE editor by hand. It doesn't take that much time to do, and we get to use our preferred workflow in Unreal without having to worry about how everything has to be created in Blender to make the translation process successful.
After all of this, were these tools worth it? In the short term, no. I could have exported all of the assets from Blender by hand, and then imported them into the Unreal Engine, in less time than was required to write these tools. In the long term, though, these tools will pay for themselves many times over. While the export manager does force a specific workflow on the artist, it is not that much of a constraint. And the time saved overall will make these tools valuable assets for us going forward.
Thank you for taking the time to read this, and I hope it has sparked some ideas that you may have for tools to improve your work-flow. Have a great day.
There has been so much going on since the last post. The design document has received more work, detailing some changes to the backstory that are going to directly affect the overall structure of the game. Also in the design document, we have detailed some of the more recent game features we are adding. Each major region of the game is going to have its own environmental game feature. For example, in the jungle, players will be able to obtain a grappling hook and swing through the trees. In the forest region, the player will be able to gain access to a wing suit and glide through parts of that region. This last feature led to some interesting observations and some decisions about the size of the game itself.
It was always our intention to make this a reasonably large open world, but I hadn't given much thought to just how big the main map would be. While testing the wing suit in its prototype project, I was able to glide over 0.7 km (0.43 mi) in a single glide. This led to the obvious question: do we want to nerf this feature, or do we want to go big on the landscape? Having done Capuchin Capers as a series of reasonably large islands, and having dealt with the optimizations needed for performance, I knew we needed to test truly large landscapes before making this decision.
Landscapes are a big topic, no pun intended. I spent numerous days in Unreal Engine 5 Early Access 2 because I knew that Epic wanted to focus more on open worlds with UE5. After my testing, I have decided that we are going to stick with UE4 for this game. There are some great features in UE5 that directly impact how developers go about building open worlds, but I just can't count on all of the performance issues with landscape actors in UE5 being fixed at launch. A landscape in UE4 that would run at a very comfortable 120+ fps runs in the low 60s to high 50s in UE5. I know that Epic will get that sorted out, but we can't put years of work into this project while hoping that these issues will be resolved to a reasonable degree. World Partition, Lumen, Nanite, and MetaSounds are all great features, and I really wanted to take advantage of them. That is why I spent almost four days trying various approaches to be able to use UE5. But in terms of the workflow for building landscapes, UE5 has a long way to go.
My experience with UE5 led me back to UE4 and World Composition. I had watched the live-streams for W.C. and read the documentation for the system. Unfortunately, there isn't much information on this system compared to other parts of the engine…it just doesn't get used nearly as much as the 'main' areas of the engine. But, after spending several days testing and experimenting, I feel that I am getting my feet underneath me enough to be confident that we will be able to achieve our goals with W.C. There is a lot to learn about it, and some serious quirks to the whole thing that I spent way too much time fiddling with.
The first challenge is creating a truly large area without any seams between the landscapes stored in the various levels of World Composition. I wanted a total map size of approximately 12 km², which would require a heightmap with a total size of 12,099 × 12,099 pixels (this was slightly off by a few pixels, but it didn't really make a difference). I tried to create each landscape separately, in its own sub-level in W.C., but I kept running into a common problem: I was getting very small cracks between the individual landscapes. At first I thought it was due to the heightmaps having ever-so-slight differences in their color values along these seams. That was not the case. You can do any color corrections that you want, but it won't get rid of those seams. Worse, you can't use the sculpting tools to fix these seams, because they are due to a separation of two different landscape objects. You can't 'paint' across the boundaries because, in fact, the landscapes don't share any boundaries; they just happen to sit next to one another. After watching the live-stream several times to try to get a hint of how to fix this, I noticed a really nice feature that is mentioned around the 47:36 time mark: Add Adjacent Landscape Level. Those are currently my favorite four words in the English language (that is why I highlighted them the way I did…I love those four words).
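As a sanity check on that heightmap size (and this is my own back-of-the-envelope math, not a value taken from the project settings): with the nine tiles mentioned a little further down arranged three per side, a per-tile resolution of 4033 × 4033, one of the standard UE landscape sizes, lines up with the 12,099 figure, and sharing an edge row between adjacent tiles accounts for being a few pixels off.

```python
# Back-of-the-envelope check only; the 4033 per-tile resolution is an assumption
# (a standard UE landscape size), not something taken from the actual settings.
tiles_per_side = 3         # nine tiles total, laid out 3 x 3
tile_resolution = 4033     # heightmap pixels per tile side (assumed)

print(tiles_per_side * tile_resolution)             # 12099 -> the total quoted above
print(tiles_per_side * (tile_resolution - 1) + 1)   # 12097 -> once adjacent tiles share an edge row
```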
There are two approaches to using heightmaps in Unreal. The first is during landscape creation, via the 'Import from File' option. This approach will create the landscape based on the file chosen, generating presets for the number of components, the section size, and the other settings for a landscape. This is nice if you have a single landscape in your level that isn't larger than the limit for a landscape object. However, if you are attempting to create a map on the scale of an open world, this option won't work. You inevitably get those seams between the landscapes.
The second approach to using a heightmap is to first create your landscape object, defining all of the various settings in advance and creating a totally flat landscape (see Image 2 below). Then, once the landscape has been created, you can load in a heightmap after the fact in the "Target Layers" rollout of the sculpting settings (see Image 3).
Normally, when you are loading in a heightmap for a single landscape, there is little difference between the two techniques. With either technique, you can set your landscape’s component count, section size, etc. But no matter what you do, using these techniques alone, you can’t create a single landscape large enough to be used for an open world. That is where “Add Adjacent Landscape Level” finally comes into play and all of this makes more sense.
If you first add a new level in W.C. via the 'Create New' menu in the W.C. tab, and then create your landscape tile using the 'Create New' option, you can define that first landscape with all of the settings you would need for a single tile of your open world map. In my case, those were the settings that you can see in Image 1 above. This is just one of nine tiles that make up my open world map, though, and I needed to add eight more to make the entire map. Keep in mind that I did not create that first tile from a heightmap; I predefined the landscape with the settings that I knew I wanted, and I created a flat landscape. Once that landscape was created, I used the 'Add Adjacent Landscape Level' feature to create the other eight landscape tiles that I needed. You can see in the live-stream that when you use this feature, the landscape created does in fact go into its own level, but it shares the vertices along the boundary with the other landscapes created in this way. So, while each landscape is in its own level and benefits from all of the W.C. tools that apply, the landscape objects themselves are seamless. You can sculpt or paint across the boundaries without issues.
That wraps up this rather long post. In the next post I will discuss some of the issues that I ran into while working with W.C.'s LOD system. You can see the effects of that LOD system in the main image for this post (which I am also including below). Thanks for sticking with me on this post, and I hope that it helps you build the open world of your dreams!