Many months have passed since the last update on the site. Work has continued on the development of At the Crossroads, but it has been slow-going due to real life encroaching on my time. However, I won’t let that stop me from moving forward, even if the progress is at a snail’s pace. Progress on the writing for AtC has been difficult, because you can’t force inspiration. In programming, you can overcome an issue by putting in more work to find a solution. The same can’t be said about creative efforts like writing. I do not want to rely on ChatGPT to write my story/characters for me, so that is not an option. But sometimes a break is exactly what creativity needs to recharge the batteries. Luckily, Epic has provided just such a break with the Procedural Content Generation system (or PCG for short).
The PCG system is a great addition to UE 5.2 and is going to provide a considerable degree of control over how our content can be created in our games. Previously, I used the Procedural Foliage Spawner system (or PFS for short) within Unreal to create an accurate biome. While this system is experimental (just like the PCG system), it is a really nice system that, when finely tuned, can generate fairly accurate results. This is the technique that was used in Capuchin Capers to create the foliage for all of the islands. While this system can be a great time-saver when set up correctly, and can provide a good way of realizing the data that we gain from our terrain studies in a concrete way, it definitely has its limitations.
The PFS system will generate a spawn location for a type of foliage and calculate the spread of that foliage over generations. This results in clumps of foliage types, such as ferns, where there are “older” ferns in the middle of the clumps, and “younger” ferns on the periphery to simulate the gradual spread of that plant type within its environment. While this is seen within nature, it doesn’t always play out that way. There are some types of plants that, in nature, will choke out other plants in various ways. The PFS system tends to produce these clumps, or islands, of foliage types in a higher abundance than you would witness in real life. This effect can be lessened by adjusting the values for the individual foliage types that are to be spawned. But to a certain extent, the PFS system leans in this direction no matter what you do…it is the basic premise of the system, after all. To be sure, the PFS system is still a very good tool and I loved the results that I was getting over hand-placing the foliage with the foliage paint mode in the editor. But, I always felt that I just couldn’t get the results that I was after no matter the amount of tweaking that I did. I feel that the PCG system can remedy that.
One of the aspects of the PFS system that I struggled with was getting shade-loving plants to consistently spawn within the “shade bounds” of a large tree. No matter what settings I used, I just couldn’t get a consistent result that placed the foliage types where I wanted them. This is because of the rules that are baked into the PFS system (which, unless you’re willing to change the C++ code, are immutable). Now, with the PCG system, we not only have easier access to the code responsible for spawning the foliage, we actually get to write that code! Some may see this as a regression, in that we are required to write the spawn logic for our foliage, rather than having a built-in system to aid us in this. But this is much more of an unshackling of our creativity than a ball-and-chain that increases the effort required to realize our vision. The extra work is far outweighed by the freedom that the PCG system grants us. We get almost all of the benefits of targeted foliage placement while retaining the great performance that we receive from instanced foliage. It is true that procedural placement will never be as accurate as hand-placing every single asset, but we can come much closer than we ever could before, in the least amount of time. We can do our terrain studies to find out all the information that we need about a biome, and then implement that data in our PCG graphs to produce a much more realistic result than we could ever achieve with the PFS system.
Another limitation with the PFS system is that you can’t specify a static mesh foliage type to be used for saplings or very young plants. It will scale the static mesh that is specified in the static mesh foliage asset. Not only is this unrealistic, it can lead to serious visual artifacts if a static mesh has a material that implements World Position Offset to create wind effects. The animated leaves for these scaled assets can cause severe “smearing” or stretching that is instantly noticeable and very undesirable (see Image 1 below). This can occur with PCG, of course, but only if you rely solely on scaling an asset to achieve a “younger” version of that foliage. Obviously, with PCG we are not limited to simply scaling a mesh; we can provide a completely different static mesh for these saplings/sprouts within our graphs!
I am very excited by the direction that the PCG system is taking content generation within the Unreal Engine, and by extension, gaming in general. And make no mistake about it: now that this type of open-ended procedural content generation, previously only available in software packages like Houdini, is available in UE, other engine developers will be forced to implement something similar or risk being left behind. The PCG system benefits all developers, but not equally. Small studios or individual indie developers will benefit far more than AAA studios, because those huge studios could already create their vision with precise accuracy in a (relatively) short period of time. For the indie studio, however, throwing more work-hours at a problem just wasn’t an option.
Thus far, I have spoken only of the PCG system as a means of placing foliage on a landscape, but it is capable of SO much more than that, and the examples used above are just that: examples. There are very few limits to the PCG system and I believe we will see its use throughout UE content generation/development. With further development, it will become more feature-rich and performant, which will open new doors that were previously closed in game development.
Thank you for stopping by and reading this article, and I hope that you have a great day.
The Daursynka long house is done, and we are pretty happy with it (see the featured image for this post, and Image 1). There are many variations for each piece in the kit, to reduce the ’tiling’ effect that can be seen in some games. For example, there are twelve different variations of the main roof’s tile sections, to reduce the repetitive use of each piece. The human eye spots patterns fairly well, and repeating patterns are easiest to pick up on when there are many examples of the same pattern right next to one another. So, this approach was used for all of the pieces in the kit. However, this meant we ended up with over 460 pieces for this kit alone!
When looking at the prospects of exporting each piece, along with its collision geometry, it became clear very quickly that scripting the export process was going to be mandatory. When we say scripting in Blender, that means Python. I must admit, I have never been (nor will I ever be) a huge fan of Python. I don’t like the language as a whole, and so I didn’t have much experience with it. That meant first learning Python beyond a quick overview of the language’s features. Even if the only scripts that you were ever going to write in Blender were small, relatively simple ‘helper’ scripts, the effort would absolutely be worth it. Especially while creating something as large as an entire modular kit.
After going beyond the introductory lessons on Python at W3Schools, I turned my attention to writing a script to help with the export. It became obvious very quickly that this script was going to be much more than just a simple little script. I wanted to keep my collection hierarchy in Blender and use it as the folder structure on the hard drive when exporting the assets. This meant a recursive function. I want to be completely honest here. In all of the time that I have ever done programming of any kind, I have never had to write a recursive function. So this was a first for me, and it was an…experience.
Recursive functions are usually deceptively short, and therefore look deceptively simple. But, at first, I found it very difficult to get my head wrapped around exactly how the function was going to work. I had to look at it in a completely new way (for me), which was so different from anything I had done before. The hardest part for me was thinking about how the function was going to ‘walk’ through the various collections, and the order that it was going to run in. I tried to visualize how the function would be called from the very top level collection, and how it would call itself when it discovered a collection within that top level one. I did figure it out, but I can’t say that I will ever be comfortable writing recursive functions.
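To make the shape of that recursion concrete, here is a minimal sketch of the idea (not the actual export manager): it walks a root collection, exports each mesh object as an FBX file, and mirrors the collection hierarchy as folders on disk. The export path is a placeholder, error handling is omitted, and the real tool does a great deal more.

```python
import os
import bpy

EXPORT_ROOT = r"D:/Exports/LonghouseKit"  # hypothetical output folder

def export_collection(collection, folder):
    """Export every mesh object in 'collection', then recurse into its children."""
    os.makedirs(folder, exist_ok=True)

    for obj in collection.objects:
        if obj.type != 'MESH':
            continue
        # Select only the object being exported so 'use_selection' grabs just it.
        bpy.ops.object.select_all(action='DESELECT')
        obj.select_set(True)
        bpy.ops.export_scene.fbx(
            filepath=os.path.join(folder, obj.name + ".fbx"),
            use_selection=True,
        )

    # The recursive step: each child collection becomes a sub-folder on disk.
    for child in collection.children:
        export_collection(child, os.path.join(folder, child.name))

# Kick off the walk from the top-level kit collection.
export_collection(bpy.data.collections["Kit Root Collection"], EXPORT_ROOT)
```

The key realization for me was that the function only ever has to worry about one collection at a time; the call stack keeps track of where it is in the hierarchy.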
Once I had a script that would export my objects, I realized that I had only solved half of the problem. I would still have to import those assets into the Unreal Engine, so I was still staring down an enormous amount of work. Thankfully, the Unreal Engine editor can also be scripted with Python, so I knew that I could write a script to handle the imports. I needed a way to have the editor import not only the 3D objects themselves, but also to create the materials for those objects first. It would do no good to import 460+ objects and then have to apply materials to each and every one. This led down a rabbit-hole that resulted in the project import and export managers pictured below.
This tool (or tools if you want to count them separately) is why it took so long for a new post to be added to the website. These were a real trial to get to work, but I feel that the time spent will more than pay for itself when we are making other modular kits for ‘At the Crossroads’ as well as any other games that we create. All of the data entered into the export manager is saved in the .blend file that contains the modular kit assets. So, while it does take some time to enter that information, it only needs to be entered once.
When the data entry is complete, pressing the ‘OK’ button starts the export process. The tool uses the data entered for the materials and textures to generate part of a JSON file describing where these assets are on disk, as well as where they should be saved in the UE project when they are imported. It then ‘walks’ its way through the collections contained within the ‘Kit Root Collection’, discovering 3D assets as it goes. As it exports each 3D object, it finds that asset’s collision geometry and exports it along with the asset. So, when the import manager within the UE editor imports each 3D asset, it will have the correct collision. All of the 3D-asset-specific information is added to the JSON file as each asset is exported.
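To give a feel for what the export manager writes out, here is a rough, hand-made sketch of that kind of manifest. The field names and paths below are purely illustrative assumptions, not the actual format our tools use.

```python
import json

# Illustrative manifest only; the real export manager generates this
# automatically from the data entered in its UI.
manifest = {
    "textures": [
        {"name": "T_RoofTile_BC",
         "source": "D:/Exports/Textures/T_RoofTile_BC.png",
         "destination": "/Game/LonghouseKit/Textures"},
    ],
    "master_materials": [
        {"name": "M_Wood_Master", "destination": "/Game/LonghouseKit/Materials"},
    ],
    "material_instances": [
        {"name": "MI_RoofTile", "parent": "M_Wood_Master",
         "destination": "/Game/LonghouseKit/Materials/Instances"},
    ],
    "meshes": [
        {"name": "SM_RoofSection_01",
         "source": "D:/Exports/Roof/SM_RoofSection_01.fbx",
         "destination": "/Game/LonghouseKit/Roof",
         "material_instance": "MI_RoofTile"},
    ],
}

with open("D:/Exports/LonghouseKit/kit_manifest.json", "w") as f:
    json.dump(manifest, f, indent=2)
```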
The import manager reads the JSON file and imports the textures, materials, master materials, and instanced materials. I say that it imports the materials, but it is more accurate to say that these are created using the data generated by the export manager and saved in the JSON file. The master materials that are defined in the export manager are created, and then all of the instanced materials are inherited from these. Once this is all complete, the 3D assets can be imported and have all of the correct materials and/or instanced materials applied. The whole import process took just under five hours to complete. Imagine how long it would take to do all of these imports individually. This will be a huge time saver in the long run, as there is very little work that needs to be done from that point.
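For anyone curious what the Unreal side of this can look like, here is a much-simplified sketch using the editor’s built-in Python support: it reads the manifest, creates a material instance parented to a master material, and runs an automated import task for each mesh. The paths and names match the hypothetical manifest above rather than our real project, and the actual import manager does considerably more (textures, collision, material assignment, and so on).

```python
import json
import unreal

with open("D:/Exports/LonghouseKit/kit_manifest.json") as f:
    manifest = json.load(f)

asset_tools = unreal.AssetToolsHelpers.get_asset_tools()

# Create each material instance and parent it to its master material.
for mi in manifest["material_instances"]:
    instance = asset_tools.create_asset(
        mi["name"], mi["destination"],
        unreal.MaterialInstanceConstant,
        unreal.MaterialInstanceConstantFactoryNew())
    parent = unreal.EditorAssetLibrary.load_asset(
        "/Game/LonghouseKit/Materials/" + mi["parent"])
    unreal.MaterialEditingLibrary.set_material_instance_parent(instance, parent)

# Import each static mesh with an automated import task.
for mesh in manifest["meshes"]:
    task = unreal.AssetImportTask()
    task.filename = mesh["source"]
    task.destination_path = mesh["destination"]
    task.automated = True
    task.save = True
    asset_tools.import_asset_tasks([task])
```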
After the import is complete, the only thing that is left to do is to define the actual node networks for the base materials and the master materials. This has to be done by hand because, while it would be possible to map some of the material nodes from Blender to the UE editor, doing so reliably would be extremely difficult. Some nodes in Blender, like the math node, do have equivalents in UE, but there are others that do not map across the two applications at all. Also, when you get into the more complex materials in either program, trying to make these base node mappings work gets even more complicated. It was decided that it would be best to just define the node networks in the UE editor by hand. It doesn’t take that much time to do, and we get to use our preferred workflow in Unreal without having to worry about how it all has to be created in Blender to make the translation process successful.
After all of this, were these tools worth it? In the short-term, no. I could have exported all of the assets from Blender by hand, and then imported them into the Unreal Engine, in less time than was required to write these tools. In the long-term though, these tools will pay for themselves many times over. While the export manager does force a specific work-flow on the artist, it is not that much of a constraint. And, the time saved overall will make these tools valuable assets for us going forward.
Thank you for taking the time to read this, and I hope it has sparked some ideas that you may have for tools to improve your work-flow. Have a great day.
Well, it’s been quite some time since I last updated the site, and I’ve been hard at work during that time. I have completed the majority of the work on the modding code to allow modders to create content of their own and package it all up into a mod that players could unzip into the “Mods” folder of the game. This is done via the UGC Plugin that was developed by Epic as part of their VR game, titled Robo Recall. However, there were some pretty glaring omissions in the functionality, due to the intentionally limited scope of the plugin.
The UGC Plugin was meant to be very bare-bones in functionality, allowing the developer to decide what, exactly, they wanted to allow their modders to do. The plugin could enable anything from mods as small as just re-meshing/re-skinning some of the in-game weapons, to fully side-loading entire levels with custom game modes. The largest omission of functionality was the ability to easily get players to a new level provided by the mod, and get them back to the main world. This needs to be done entirely within the mod, without the ability to place anything in the main world, and with as few other limitations as possible on the mod developer. I came up with a solution that I hope modders will find to be a good compromise…spawnable POIs.
A spawnable POI (or Point Of Interest) is a self-contained POI that can include a special trigger volume that will transport players to a separate map that the mod developer has created. This system can handle POIs that have multiple entry/exit points, without the mod developer having to jump through too many hoops to get it all to work. This would allow, for example, a cave complex with numerous entrances to it. When the player(s) enter the cave, they will spawn in at the correct entrance and, when they leave the cave complex, they will spawn back into the main world in the correct location. If they enter through a jungle cave and exit right back out the way they came in, they should be in the jungle. However, if they enter the jungle cave entrance and exit via the tundra cave entrance, they should spawn into the world in the tundra. This seems simple, but keep in mind that the mod developers will not have any way to directly place anything in the main world. Everything will have to be done via the spawnable POIs. We wouldn’t have a problem including the main world, but we are using quite a lot of licensed assets, and we do not have the legal right to distribute those assets to mod developers. Copyright issues are a tricky subject, and are best avoided whenever possible.
The last bit of functionality for our version of the UGC Plugin will be UI related. This functionality isn’t worked out yet, but it really does need to be in place when the game ships. I realized that this was missing while watching some videos on YouTube. I noticed a content creator using a mod for a game, and this mod reorganized the UI for that game. This is something that I will need to add to our version of the plugin, but I don’t anticipate any major hurdles to this…famous last words, right?
Aside from all of the work on the UGC Plugin, I have been working on some procedurally generated models that are part of a modular kit that will be used in the game. This modular kit is for a long house used by some of the peoples that inhabit the tundra region of the crossroads. These are the Daursynka people, and they are loosely modelled after the Iroquois Confederacy of the north-eastern region of North America and the Viking peoples of Scandinavia. These houses were a challenge due to their size and detail. They need to be large enough to house an entire extended family, and be detailed enough to maintain the visual fidelity that the other game assets are already at. But, I also needed a fair amount of variety in the pieces because there will be multiple longhouses at each settlement. I want to avoid obvious repetition as much as possible, while maintaining a reasonable degree of performance. The latter part of the previous sentence is key here; performance must always be considered in any real-time application.
To create the kit pieces for the longhouses, I chose to procedurally create all of them from “building block” pieces that I could easily obtain from Quixel. For example, the roof tiles seen in the featured image for this article are all positioned via geometry nodes in Blender. This allows me to randomize the individual tiles and get a nice variation between the roof sections. Please note, however, that I was lazy in the creation of the image above and I just used an “Array” modifier in Blender to duplicate the roof sections (I am sufficiently ashamed by my laziness here). The modular kit features numerous variations of the roof section, not just a single section with a single tile pattern. This approach allows me to use a set of textures for the tiles, wall slats, beams, and other individual pieces and get a level of quality that would have required much more texture space in the RAM of the player’s video card if I had gone with a more traditional approach. The traditional approach is to create all of the geometry in your software of choice (Maya, 3DS MAX, Houdini, Blender, etc.), and then import that geometry into Substance Designer or Quixel Mixer to “paint” the textures onto the geometry. With this more traditional approach, we would need to use a much larger texture to get the same visual quality. We are still using a not-insignificant amount of RAM, but nowhere near the amount that would be needed to get both this level of quality and this degree of variation in the kit.
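As a back-of-the-envelope illustration of that memory argument, here is a quick comparison with made-up but representative numbers. It ignores GPU texture compression, which changes the absolute totals but not the ratio between the two approaches.

```python
def texture_mb(resolution, maps=3, bytes_per_pixel=4):
    """Rough VRAM estimate for one texture set (e.g. albedo/normal/roughness)."""
    return resolution * resolution * bytes_per_pixel * maps / (1024 * 1024)

# "Building block" approach: a few 2K tiling sets shared by every kit piece.
shared = 6 * texture_mb(2048)

# Traditional approach: a unique 4K bake for each of, say, 40 distinct pieces.
unique = 40 * texture_mb(4096)

print(f"shared tiling sets: ~{shared:.0f} MB, unique bakes: ~{unique:.0f} MB")
```

Even with generous assumptions for the traditional approach, sharing a small library of tiling textures across every piece in the kit comes out far ahead.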
In Image 1, you can see the picture used as the featured image of this post. Each element that makes up the modular kit pieces is an individual object that is placed via geometry nodes, with its rotation randomly tweaked ever-so-slightly to break up the uniformity of just laying everything out using the modifiers available in Blender. This could also be done in an application like Houdini, which I have used before. Blender doesn’t feature the same freedom in its procedural tools as Houdini, but the geometry nodes are quite powerful, and do allow for an amazing amount to be done with them. Doing something like this by hand would take many, many more hours than I spent learning the geometry nodes in Blender. The same basic approach was taken for the wall pieces, which are made up of slats with the gaps in between being filled with tar-covered thatch. Geometry nodes can also be used to affect the vertex colors of geometry, and this was used to allow blending between a “tar” material and the material used for the wooden slats. You can see the effect of this in Image 2. The tarred thatch is represented by a simple plane that is textured to look like thatch that has been dipped into a vat of tar. At least, that is what I hope it looks like.
The image above shows a simple rendering of the front of the example longhouse. If you look at the wall that is set further back, you can see that the slats have “smears” of tar where they come close to the plane representing the tarred thatch. However, to work with the vertex colors of the generated mesh, the modifier for the geometry node network used to generate the wall piece needs to be applied. Only after that was I able to add the vertex color map and use the geometry node network that alters the vertex colors. If you look closely at the wall for the front of the foyer, you will notice that it lacks the darker smears of tar that the back wall features. This is because the foyer’s front wall hasn’t had the node network generating it applied yet. Without this, any vertex color map added to it will not be accessible to the node network designed to change the vertex colors. At least, I couldn’t get it to work, and I spent quite some time trying.
What you don’t see in the images above is the sheer volume of variety that can be obtained by creating the wall pieces (or any pieces for that matter) via the geometry node network approach. Each slat type used is a separate mesh, with its material applied to it. There are six different slats, all held in a single collection, that the geometry node network chooses from when placing each individual slat. All of the wall slats for any of the wall pieces can be randomized not only in the slat mesh chosen, but also the positioning and rotation as well. Once the vertex color network is applied to the wall piece, it is hard to believe that it is made up of nothing more than six different slat meshes randomly chosen and placed.
Another feature of these longhouses that is not visible in the images above is the thatch cards that are placed on the plane that represents the tarred thatch that is shoved in between each slat. It is highly unlikely that anyone stuffing thatch into these gaps would get it into the gap perfectly, which means that there would be a bit of thatch sticking out here and there to flutter in the breeze. That is where those little thatch cards come into play. Using a traditional modelling approach, placing each thatch card would be done by hand, probably by an intern who was questioning their life choices as they positioned each little thatch card. But, through the power of procedural modelling by virtue of geometry nodes, we can easily place thatch cards in between each wall slat. The best part is that no matter how each slat is rotated and positioned, the geometry node network for the thatched tar plane will recalculate where a thatch card can be placed without it ever being positioned where a slat is.
In Image 3, you can see a small portion of the node network used to place the thatch cards on the plane representing the tarred thatch. The entirety of the node network isn’t shown because the view would need to be zoomed so far out that you wouldn’t be able to read or see anything of note. The key idea to take away from Image 3 is the Raycast node in the network (you should be able to right-click on the image above and view it in a separate tab, allowing you to zoom in to read the node names). First, I used a “Distribute Points on Face” node to randomly place points on the tarred thatch plane. These points are where thatch cards could potentially be placed. With the Raycast node, we can do line traces and check if there is an intersection anywhere. In my case, I didn’t want to place a card anywhere that there was an intersection with the wall slats. Only where the raycast found no intersection should a thatch card be placed. The Raycast node is a bit strange to get used to, because it doesn’t work exactly the way that a line trace does in, say, the Unreal Engine. So if you’re interested in using this node, read the documentation and experiment a bit with it. It is worth your time to learn it.
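To illustrate the placement rule itself, here is a purely conceptual sketch in plain Python (not geometry node code, and the slat intervals are invented for the example): scatter candidate points on the thatch plane, test each one against the slat footprints the way the Raycast node does, and keep only the points that hit nothing.

```python
import random

# Hypothetical slat footprints along the plane's X axis, as (min_x, max_x) pairs.
slats = [(0.00, 0.08), (0.12, 0.20), (0.25, 0.33), (0.38, 0.46)]

def hits_slat(x):
    """Stand-in for the Raycast node: does a ray at this X land on a slat?"""
    return any(lo <= x <= hi for lo, hi in slats)

# Stand-in for "Distribute Points on Face": random candidate points on the plane.
candidates = [(random.uniform(0.0, 0.5), random.uniform(0.0, 2.0))
              for _ in range(200)]

# Keep only candidates where the "raycast" found no intersection; these are
# the spots where a thatch card may be instanced.
card_points = [p for p in candidates if not hits_slat(p[0])]
print(f"{len(card_points)} of {len(candidates)} candidate points accepted")
```

The geometry node network does the same filtering in 3D, which is why the cards always land in the gaps no matter how the slats are randomized.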
Well, that was a lot to take in. I hope that I was clear in my descriptions, but the topics covered in this post are very complex. Without a large number of visual aids to help, it can be difficult to get my point across. Modding is a huge feature that I felt would be a great benefit to the game. Players will not be beholden solely to us for game content. If a mod developer wants to create a completely new dimension to the game (a new level of Hell perhaps), they will be able to do so. And, with the power of procedural modelling via Blender (or some other software like Houdini), they will be able to shorten the development time needed to make the custom assets they want. Thank you for taking the time to read this post, and I hope that you have a great day.
I want to preface this entire article by stating that the information below has been gathered by experimenting with the system, and as such, is incomplete. I will need to dive into the C++ code to get a really solid idea of what is going on, but hopefully, what follows will be enough to help you along. Once I have dug into the code to see what, exactly, is happening when we press the generate button to make our levels-of-detail, I will post another article. I don’t know when that will be, so no promises.
To get started with setting up the levels-of-detail for the sublevels in World Composition (W.C. going forward), you’ll need to have at least one sublevel selected in the Levels tab and press the small level details button. I’ve circled this in red in Image 1. This will bring up the level details dialog allowing you to define all of the information that you want to use when generating your levels-of-detail. When you first open this dialog, you won’t have any levels-of-detail defined, and the only control that you will see under the “LODSettings” rollout is the one labeled “Num LOD”…not very descriptive, I know. This is the first step to defining your levels-of-detail and how they will be created.
By setting the value for “Num LOD” to 1, you will then be able to define a single LOD level for each sublevel in your W.C. map. Each of the other LOD levels will follow the exact same steps to create, but the values you enter will be different for each LOD level. If you want to have three levels-of-detail for your W.C. map, you would type 3 into the field for “Num LOD”. For LOD1, you will want to use the best quality settings that you feel you can get away with. Each game is different, and each one will have its own requirements for performance. Obviously, we don’t want a very noticeable “pop” when the player crosses the point where each LOD level transitions to the next. Which leads me to the first setting to pay attention to, which is the “Relative Distance” field.
Relative distance is the distance that this LOD level will use to transition to the next level-of-detail. This will be added to the base streaming distance. For example, the “Uncategorized” layer in W.C. has a default streaming distance of 50,000. Once the player is farther away from that sublevel than that distance, it will be removed from the player’s viewport, even if they are looking directly at it. This is where our first level-of-detail, LOD1, would be streamed in to take the place of the actual sublevel. The setting for “Relative Distance” serves the same purpose as the default streaming distance; it is the point at which we want our LOD1 to be removed from the player’s viewport and the next level-of-detail to be streamed in. However, it is very important to note that the value for “Relative Distance” is cumulative. It is added to the values preceding it in the level-of-detail settings. An example is in order here to make this a bit more clear.
If you have not defined any other layers in W.C. and all of your sublevels are contained within “Uncategorized”, their default streaming distance is 50,000. In Image 1, you can see that the relative distance is set to 418,353. This may seem like a strange number to choose, but I arrived at this value after some mathematical calculations and quite a bit of experimentation. What you can’t see in the image above is that ALL of the LOD levels defined have the same relative distance value. When the player moves more than 50,000 units away from the sublevel, LOD1 will be streamed in and once that streaming is complete, the sublevel will be removed from view and LOD1 will be displayed in its place. Then, once the player has moved more than 468,353 units away from the sublevel’s location, the engine will start to stream in LOD2, and once LOD2 is fully loaded into memory, LOD1 will be removed and LOD2 will be displayed. This is the key part to remember: the relative distance value is added to the previous total. So, when the player has moved a total of 886,706 units away from the sublevel’s location, Unreal will start to stream in the content for LOD3 and will remove LOD2 once that streaming process is complete. Why 886,706? Because the engine is doing the following math: 50,000+418,353+418,353 to come up with the distance at which LOD2 is too far away for the player to see, requiring LOD3 to be streamed in.
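A quick sanity check of those numbers (plain Python, using the values from my settings above) shows how the relative distance accumulates from one level-of-detail to the next:

```python
base_streaming_distance = 50_000  # default for the "Uncategorized" layer
relative_distance = 418_353       # same value entered on every LOD level here

total = base_streaming_distance
for lod in (1, 2):
    total += relative_distance
    print(f"LOD{lod} is replaced by LOD{lod + 1} once the player is {total:,} units away")

# Prints 468,353 for the LOD1 -> LOD2 transition and 886,706 for LOD2 -> LOD3.
```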
Most of the fields under the “Simplification Details” rollout are similar to the fields used when using the Actor Merging feature of the engine, so those won’t be covered here. There are a few key differences between the actor merging tool and the W.C. LOD generation tool.
One of the cool features of this system is that it will combine all of the static mesh objects within your sublevel in W.C. This is very similar to the Actor Merging feature, with a few differences. The first is that you do not get to choose which LOD level will be used when merging the static meshes contained in the sublevel. The system is choosing a single LOD level (which appears to be the lowest) from each static mesh asset and using that when combining them. We don’t have any control over this portion of the generation process. It is done behind-the-scenes, but we do get to specify a “Static Mesh Details Percentage”, which is the second departure from the Actor Merging feature. I am guessing a bit, but I believe that after the static meshes are merged, the result is then reduced further in an attempt to reach the percentage we specify. So, if your merged static mesh is 15,000 triangles, and you enter a value of 68.5 for the percentage, the actual merged static mesh used in the W.C. LOD asset will be approximately 10,275 triangles. Considering the fact that the system is already (apparently) using the lowest LOD level from each static mesh, the resulting combined mesh is already pretty light-weight. Further reduction via this “Static Mesh Details Percentage” field would probably result in an unusable merged mesh if pushed too far.
The next field that we really need to pay attention to is the “Landscape Export LOD” field. At first glance, the “Static Mesh Details Percentage” field may appear to be an option to use instead of “Landscape Export LOD”, but in fact they are completely different. When W.C. generates the static mesh for the landscape actor (not the static meshes within the sublevel, but the actual landscape itself), it will use the LOD level specified in this field. This field defaults to LOD7 for the landscape actor, which will result in a static mesh with approximately 2,048 triangles. If you want/need higher resolution for the static mesh generated from the landscape actor for your sublevel, you will need to enter a different value for this field. In Image 1 you can see that I chose 3 as the LOD level to use from the landscape actor when generating the static mesh for LOD1 in W.C. This resulted in a static mesh that has most of the detail that is contained in the actual landscape actor, while having significantly fewer triangles. In Image 2, you can see a screen capture of the landscape with the actual landscape actors being displayed.
In Image 3, we can see that the character has moved far enough away from one of the sublevels to cause its first LOD to be displayed. The static mesh that is being displayed instead of the landscape actor retains almost all of the detail of the actual landscape actor itself. The silhouette that can be seen against the sky matches very closely, and even if there is a little ‘pop’ when the LOD is swapped for the original, it shouldn’t be dramatic. Some experimentation will be needed to wring as much performance as possible out of each sublevel’s LODs. I would advise finding a “Landscape Export LOD” value that works for your most distinctive landscape features, and sticking with that value for all sublevels. If you have a sublevel that is relatively flat, and doesn’t have incredibly distinct skyline silhouettes, you might be tempted to set its “Landscape Export LOD” to a more aggressive value than the others around it. But remember that the edges of the surrounding proxy meshes have to match up, and using different “Landscape Export LOD” settings may result in gaps that can be seen by the player.
For the last two features that are unique to W.C.’s level-of-detail generation system, we will look at the “Bake Foliage to Landscape” and “Bake Grass to Landscape” options. These will render the foliage or grass assets to the 2D texture that will be used on the landscape proxy mesh. This is to give the player the impression that the foliage or grass is still on the landscape proxy mesh, even though the player is just seeing a 2D texture on the landscape proxy mesh. However, the system doesn’t appear to capture the color of tree leaves very well (or at all), which reveals the lack of trees very easily. The system does render the color of the tree trunk to the texture reasonably well. In Image 3, if there were trees on the slope of that mountain, it would be very obvious that the actual 3D assets were no longer there. I think there is a way around this (though, I haven’t tried it yet), and I will cover that briefly below. It may be that if you are using a stylized look, where your trees aren’t using masked materials for the leaves/branches, you could end up with a much better result. I am not sure how much better, though, because I have done very little testing using stylized assets.
One detail in Image 3 that would be hard to miss is the large difference between the LOD asset’s material and the material of the landscape actor that is adjacent to it. There is a hard edge that would be nearly impossible to hide. This was due to the default settings that I chose when setting up the “Landscape Material Settings” for the generation of this LOD asset. I did not want to incur the cost of having separate textures for specular and roughness, so I used constant values instead. When the landscape proxy mesh is generated, the material assigned to it will use these constants for specular and roughness. Because I set the values so poorly, it resulted in a very reflective surface, which is why it appears the way that it does in Image 3. When I created these LOD assets for this open world map, I actually selected all of the sublevels and set their options, resulting in all LOD1 assets sharing these constant values in their generated materials. You can see this in Image 4. The sublevel adjacent to the first one we were observing has been removed and its LOD1 asset is being shown in its place.
In Image 5, you can see the landscape proxy meshes for both sublevels with much better constant values for specular and roughness. While this isn’t a perfect match to the sublevel’s landscape material, it does provide a huge step in the right direction. This is why I highly advise you to do some testing on a single sublevel’s LOD settings and find material settings that work well with your map. Make sure to move your lighting around the same way that it might be moved in-game. This way, you will see if your lights are going to cause problems with the specular and roughness values if they are set too aggressively. Yet, if you don’t have any specularity and you set the roughness all the way to 1, you will lose any definition in your landscape proxy mesh. It will just look like a flat, 2D card that you have placed in the distance, with no highlighting of any of the landscape features. Once you have values that you are happy with, you can then use those values to generate all of the LOD assets for your sublevels in W.C.
Like Image 3, Image 4 and Image 5 have their own detail that would be impossible to miss. The tree that I placed in the middle of the small test village has been included in the merged static mesh for that LOD asset. But, no matter what settings I used, I could not get the tree to merge with the other assets in that sublevel correctly. I duplicated it in the project, and then deleted the lowest LOD for the asset, thinking that it may be the fact that LOD4 for that asset was effectively two quads turned at right angles with a texture of a few branches on them. That didn’t work. I changed the material settings in the “Static Mesh Material Settings” rollout, changing the material type to masked instead of opaque. That resulted in the same broken-looking tree after regenerating the LOD asset. But, don’t get too frustrated, because I haven’t mentioned the last aspect of W.C.’s LOD generation system that I am going to cover.
When I make a reference to the sublevel’s LOD asset, I am actually referring to a completely different sublevel that W.C. streams in and uses to replace the parent sublevel. For the sublevel being shown in Image 1, you will see that we are looking at the options for LOD1 of sublevel E5-2. When W.C. generates an LOD asset for E5-2, it actually creates a completely separate level which has the landscape proxy mesh and merged static mesh actors contained within it. It is a complete level! This is stored in a folder named E5-2LOD, and contains all of the LOD assets for the sublevel E5-2 with the name E5-2_LOD1. There are no lights within the level E5-2_LOD1, because I have my directional light in the persistent level. Actually, there isn’t much in this level to be honest, but it is a complete level. To fix the tree issue, I have exported the merged static mesh for E5-2_LOD1 and removed all of the triangles for the tree. I then set this new FBX file as the source file for the editor to use for the asset. After that, I just pressed the reimport button in the editor for that merged static mesh and the tree was gone. I know, you’re probably saying that this isn’t much of a solution; that you want your tree. But, after I removed the tree from the merged static mesh, I opened E5-2_LOD1 and placed the tree back into the level as a separate asset. Once I saved E5-2_LOD1, the replacement tree was now a part of the LOD asset for E5-2 and whenever the player moved far enough away from E5-2, the LOD1 asset would be streamed in and displayed. Sure enough, there was my replacement tree; exactly what I wanted. Because the replacement tree will still follow all the rules set out in its own LOD settings, as the tree takes up less and less of the screen space, it will use lower and lower LODs of the mesh.
With the realization that each LOD asset generated by W.C. for each sublevel is nothing more than a separate level object that is swapped in, you may be having the same idea as me. We may be able to just open these LOD assets and use the foliage mode in the editor to place simplified versions of our trees into these LOD assets. They are, after all, just levels like any other that we might work with. We would have to define different static mesh foliage assets, because we would want to use a much lower static mesh LOD for these. But I don’t believe that these static mesh foliage assets are very large, so the cost may be well worth the results. I haven’t tried this (yet), but I see no reason why this wouldn’t work.
Throughout this article, with the exception of a few places, I have been talking about LOD1 for the sublevel named E5-2. Nevertheless, everything that I have said applies to all of the sublevels in my open world test map. Not just for LOD1 either, but for all four of the levels-of-detail that W.C. will allow us to create per sublevel. There are a total of 36 sublevels making up this open world test map, and for each of these I have the maximum of four LOD assets. That brings the grand total up to 144 levels that W.C. creates for me to use as the LOD assets for the sublevels. Yes, there is still quite a bit of work that would need to be done if I was to use these, but it is a huge help that W.C. can generate these for us.
One last word of warning. Do not edit any of the LOD assets until you are sure that you are happy with your LOD settings in W.C. This is because when W.C. generates these LOD assets, it will gleefully overwrite any changes that you have made. When I removed the tree in the middle of the village, I regenerated LOD1 for that sublevel again, and the merged static mesh actor that was created had the tree right back where it was. My change was gone, but I knew that would be the case. The same is true for all of the meshes and/or materials for the LOD assets. Only alter them once you know you will not be regenerating them again.
Well, this has been one of the longest posts I have made on the site. I don’t claim to be an expert with this tool, and with UE5 moving to World Partition, we may never see any information coming from Epic about this tool again. I hope that it helps somebody. Thanks for sticking with me through this very long article. Have a great day and happy developing!
There has been so much going on since the last post. The design document has received more work, detailing some changes to the backstory that are going to directly affect the overall structure of the game. Also, in the design document, we have detailed more recent game features that we are adding. Each major region of the game is going to have its own environmental game feature. For example, in the jungle, players will be able to obtain a grappling hook and swing through the trees. In the forest area the player will be able to gain access to a wing suit and glide through parts of that region. This last feature led to some interesting observations and some decisions about the size of the game itself.
It was always our intention to make this a reasonably large open world. But, I hadn’t given much thought to just how big the main map would be. While testing the wing suit in its prototype project, I was able to glide over 0.7 km (about 0.43 miles) in a single glide. This led to the obvious question: Do we want to nerf this feature, or do we want to go big on the landscape? Having done Capuchin Capers as a series of reasonably large islands, and having dealt with the optimizations needed for performance, I knew we needed to test truly large landscapes before making this decision.
Landscapes are a big topic, no pun intended. I spent numerous days in Unreal Engine 5 Early Access 2 because I knew that Epic wanted to focus more on open worlds with UE5. After my testing, I have decided that we are going to stick with UE4 for this game. There are some great features in UE5 that directly impact exactly how developers go about building open worlds, but I just can’t count on all of the performance issues with landscape actors in UE5 being fixed at launch. A landscape in UE4 that would run at a very comfortable 120+ fps runs in the low 60s to high 50s in UE5. I know that Epic will get that sorted out, but we can’t put years of work into this project, all the while hoping that these issues will be resolved to a reasonable degree. World Partition, Lumen, Nanite, and MetaSounds are all great features, and I really wanted to take advantage of these. That is why I spent almost four days trying various approaches to be able to use UE5. But in terms of the workflow for building landscapes, UE5 has a long way to go.
My experience with UE5 led me back to UE4 and World Composition. I had watched the live-streams for W.C. and read the documentation for the system. Unfortunately, there isn’t much information for this system compared to other parts of the engine…it just doesn’t get used nearly as much as the ‘main’ areas of the engine. But, after spending several days testing and experimenting, I feel that I am getting my feet underneath me enough to be confident that we will be able to achieve our goals with W.C. There is a lot to learn about it, and some serious quirks in the whole thing that I spent way too much time fiddling with.
The first is making a truly large area without any seams between the landscapes, stored in the various levels in World Composition. I wanted a total map size of approximately 12 km², which would require a heightmap with a total size of 12,099² (this was slightly off by a few pixels, but it didn’t really make a difference). I tried to create each landscape separately, in its own sub-level in W.C., but I kept running into a common problem. I was getting very small cracks between the individual landscapes. At first I thought it was due to the heightmaps having ever-so-slight differences in their color values along these seams. That was not the case. You can do any color corrections that you want, but it won’t get rid of those seams. Worse, you can’t use the sculpting tools to fix these seams because they are due to a separation of two different landscape objects. You can’t ‘paint’ across the boundaries, because in fact, the landscapes don’t share any boundaries. They just happen to sit next to one another. After watching the live-stream several times to try to get a hint of how to fix this, I noticed a really nice feature that is mentioned around the 47:36 time mark. Add Adjacent Landscape Level. Those are currently my favorite four words in the English language (that is why I highlighted them the way I did…I love those four words).
There are two approaches to using heightmaps in Unreal. The first is during landscape creation, via the ‘Import from File’ option. This approach will create the landscape based on the file chosen, generating presets for the number of components, section size, as well as the other settings for a landscape. This is nice if you have a single landscape in your level that isn’t larger than the limit for a landscape object. However, if you are attempting to create a map on the scale of an open world, this option won’t work. You inevitably get those seams between the landscapes.
The second approach to using a heightmap is to first create your landscape object, defining all of the various settings in advance, and creating a totally flat landscape (see Image 2 below). Then, once the landscape has already been created, you can load in a heightmap after-the-fact in the “Target Layers” rollout of the sculpting settings (see Image 3).
Normally, when you are loading in a heightmap for a single landscape, there is little difference between the two techniques. With either technique, you can set your landscape’s component count, section size, etc. But no matter what you do, using these techniques alone, you can’t create a single landscape large enough to be used for an open world. That is where “Add Adjacent Landscape Level” finally comes into play and all of this makes more sense.
If you first define one of your landscape levels in W.C. via the ‘Create New’ menu in the W.C. tab and create your landscape tile using the “Create New” option, you can define your first landscape with all of the settings that you would need for a single tile in your open world map. In my case, that was the settings that you can see in Image 1 above. This is just one of nine tiles that make up my open world map, though, and I needed to add eight more to make the entire map. But make sure to keep in mind that I did not create that first tile via a heightmap. I predefined the landscape with the settings that I knew I wanted, and I created a flat landscape. Once that landscape was created, I used the “Add Adjacent Landscape Level” feature to create the other eight landscape tiles that I needed. You can see in the live-stream that when you use this feature, the landscape created does in fact go into its own level. But, it is sharing the vertices along the boundary with other landscapes created in this way. So, while each landscape is in its own level and benefits from all of the tools in W.C. that apply, the landscape objects themselves are seamless. You can sculpt or paint across the boundaries without issues.
That wraps up this rather long post, and in the next post I will discuss some of the issues that I ran into while working with W.C.’s LOD system. You can see the effects of that LOD system in the main image for this post (which I am also including below). Thanks for sticking with me on this post, and I hope that it helps you build the open world of your dreams!
Having wrapped up the bulk of the work on Capuchin Capers, preproduction planning has begun on At the Crossroads, our next title. At the Crossroads will utilize quite a few aspects of the Unreal Engine that we have yet to work with in a game. Which raises an interesting question: Should we push on into production and learn on the fly, or should we train ourselves up before production starts?
This may, at first glance, appear to be nothing more than a question based on your world-view, but there is more to it. Time is a resource. How much time will be spent developing with a new technology if you are constantly required to stop development to research how to use that technology in a specific use-case? How many times does this have to happen before you’ve spent more time in this manner than would have been spent taking a course or two on the technology? This is the situation that I have been facing since preproduction on At the Crossroads started. In particular, the Gameplay Ability System has made me rethink my approach to this key question.
The Gameplay Ability System, or GAS for short, is a system designed by Epic from the ground up to build entire sets of abilities for Action, RPG, MOBA, Battle Royale, and many other game genres. Because it was meant to be used in such a wide range of situations, the GAS is fairly complex, with many pieces that interact with one another. Attempting to watch some YouTube videos and read a few forum posts would be detrimental when trying to implement such a complex system. The number of times that we would have to stop development to scour the Internet for an answer to a problem would be large. And, the amount of time spent doing that research would quickly outpace the amount of time spent taking a course on Udemy and truly studying up on the GAS before production began.
There are many aspects of game development that will require you to ask yourself the question above. The GAS is just one example. For another example, let’s take a look at networking in Unreal. Networking, which is known as replication in Unreal-speak, is not a trivial task and requires a deeper knowledge of not only how replication itself works, but also Unreal’s Gameplay Framework. If your knowledge of the Gameplay Framework is lacking and you are making a single-player game, you can just put functions and variables wherever you want. When it comes time to implement replication, however, you are likely to be in for a world of pain and suffering. The time spent undoing a lot of work that violates some of the design principles of the Gameplay Framework could easily be longer than just going through one or more of the courses on Udemy or Unreal’s Knowledge Portal.
Technology in general, and game development specifically, seems to change at a break-neck speed. Thus, the requirement to learn these new technologies is never-ending. As game developers, we will always have to retrain ourselves to keep our skills sharp. Whether it be a new app that we need to use to create assets, or a new system in our game engine of choice, learning and working with new technologies is an endless part of our lives.
In the end, it all comes down to how we want to spend the most precious currency that we, as indie developers, have: Time. It may be tempting to jump into the deep end, and sometimes that can work for the best. But we always need to spend that most precious currency wisely. We only have a limited amount of it.
I feel that a disclaimer is required here: I am not a professionally trained AI engineer. I am self-taught, and most of my knowledge is anecdotal. What follows are my observations that were formed from the creation of the games that I have worked on. Your mileage may vary.
In most games the AI will be given, and allowed to act on, information that it shouldn’t have. In short, it’s allowed to cheat. The AI might be given the location of the player, or knowledge that the player is about to run out of ammunition. Being given information that the AI shouldn’t have isn’t the only way for the game to cheat. The approach called “rubber banding” in gaming AI is the act of tweaking the numbers that drive the AI in response to something that the player does. A game might make an AI’s aim better near the end of a map if the designers want the pressure turned up on the player. You may ask yourself why developers would do this, and the answer is simple: It makes creating a consistent, challenging experience easier. In fact, it may be impossible to make a game consistently challenging and fun without it.
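For anyone unfamiliar with the term, here is a tiny, generic illustration of rubber banding. This is not code from any of our games; the thresholds and values are invented for the example. The AI’s hit chance is simply nudged up or down based on how the player is doing and where they are in the map.

```python
def ai_hit_chance(base_chance, player_health, near_map_end):
    """Return the AI's effective hit chance after rubber banding."""
    chance = base_chance
    if player_health > 0.75:
        chance += 0.15   # player is cruising, so turn up the pressure
    elif player_health < 0.25:
        chance -= 0.20   # player is struggling, so ease off
    if near_map_end:
        chance += 0.10   # the designers want a tense finale
    return max(0.05, min(0.95, chance))

print(round(ai_hit_chance(0.40, player_health=0.9, near_map_end=True), 2))   # 0.65
print(round(ai_hit_chance(0.40, player_health=0.1, near_map_end=False), 2))  # 0.2
```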
An autonomous AI, on the other hand, can create a completely unique and unpredictable experience every play-through. A player will never know what to expect because the AI isn’t being overridden by the developer stepping in and changing values or behaviors based on what the player is doing. The AI is driving all of its own behavior internally. Plus, the developer doesn’t have to give the AI information that it isn’t supposed to have. One of the downsides to the approach, as far as I can see, is that the unpredictable nature of an autonomous AI leads to an inconsistent experience for the player.
When I set out to create Capuchin Capers, it all stemmed from the desire to learn more about AI design. I wanted to attempt to create an AI that would run around the map looking for fruit objects, but without any cheating. At no point should the AI be given the location of a fruit object. In that way, the AI would be autonomous. I added a Director AI to help make the game a bit more challenging for the player when the number of unfound fruit objects remaining reached a certain level. Another reason for the Director AI was because many games use this approach to manage the overall experience that the player is having. I knew in the future that I would want to be able to take advantage of this approach.
I have largely succeeded in my goals. For the most part, the AI will run around the map and pick up fruit objects without any outside aid. And, at times can be quite challenging to defeat. At times. The outcome of the match is heavily dependent upon the placement of the fruit objects. I chose to create a Goal Generator that would randomly place the fruit objects on the ground. In this way the player is never allowed to just memorize where the fruit is. But this also means that the AI won’t always provide a challenge to the player. This is the problem that I am having right now.
Players want a consistent experience. If they choose the easy difficulty for a game, they expect it to be easy…at least comparatively so. If they choose the hard difficulty, they naturally expect it to be much harder than it was when set to easy. A match on easy can’t be more difficult than a different match that was played on the hard setting on the same map. This is inconsistent and frustrating for a player and will probably lead the player to quit playing the game and move on to something else.
A truly autonomous AI is what I wanted to create. An AI that would act on its sensory inputs and would make basic decisions based on that information. But in creating this, I have an AI that I can’t easily tweak to provide a consistently challenging experience for the user. I had a goal at the beginning of this project, and that goal was the whole point of the project. I won’t insert code that will allow the AI to cheat. That wasn’t part of the plan, and in my eyes would constitute a complete failure for this project. By adjusting when the Director AI begins providing aid to Suzy (the capuchin monkey), I can adjust the difficulty by some degree. But it won’t provide consistency.
I will do my best to balance the levels so that the AI provides a reasonable experience, even if it is a bit inconsistent. I am sure my AI design is factoring into this issue; an AI designed by someone with more experience would likely perform much better under the circumstances. I set out to design an autonomous AI that would pick up randomly placed fruit, and that is what I have.
I guess the old adage is true: “Be careful what you wish for.”
Over the course of making Capuchin Capers, I have tried very hard to document as much as I can. I do this so that I can refer back to these documents after the game is done and have a better picture of what it takes to create a game design document. Now, if you go to Gamasutra, you can find articles presenting various developers’ views on what needs to be in a game design document and how it should be formatted. But do their design documents really fit the way that you create video games?
There is some information that needs to be in any game design document, of course. Anybody creating a design document knows that the levels themselves need to be described. Characters or creatures, including monsters, need to be detailed. There are elements that should be included, but does your design document’s formatting need to adhere to the same template that I might use? Absolutely not. At least, that is how I feel about it. If you’re submitting a proposal to a publisher, things change. Publishers have their own expectations for what needs to be in a proposal and/or design document, and you would need to follow those guidelines to give your game the best chance to succeed and get published. But when you’re developing your own games as an indie developer, nobody outside of your small team is ever going to see these documents. They can be formatted in whatever manner makes the most sense to you as a team.
When I started Capuchin Capers, I wanted to have a roadmap for the game. I wanted to know what assets I needed to create, and the overall architecture of the game. I wasn’t even close. If you view the Trello board for this game, which can be found here, you will see that I resorted to creating a ‘General’ card just to cover some of the more egregious things that I missed when first planning this game. As I discovered the many parts of the game that I had failed to plan out at the beginning, I started to write those documents post-creation. I did this, if for no other reason, to have a clearer picture of what I needed to plan for the next game. During this process, I discovered some things about the way that I create documents and what I end up putting in those documents. My documents aren’t formatted or structured the same as the documents featured in some of the aforementioned articles.
I would have worried about this when I first started creating whole games, instead of modifications to pre-existing games. But I’m not worried about this at all now that I have a little more experience. We all think differently, and that’s a good thing. So, why should my documents follow a template that another developer might use? Is my way the best way, or even a good way? It clearly isn’t the best way, since I overlooked so much; it may not be a good way for you, either. It probably isn’t, to be completely honest. But, it is a good way for me and that is what matters.
When I am done with Capuchin Capers, I will take the documents that I have generated, along with the topics that I know that I missed, and I will combine them into a single file. This will give me a good starting point for our next game. Will it be complete? No, it won’t. Will it be the right way to write a game design document? Yes, it will. Because it will be the right way to write a game design document for me.
In game development today, there seems to be no shortage of tools vying for our attention. From programming to texturing to modeling, the selection of tools can be dizzying. When Epic announced the inclusion of the modeling tools plugin for the Unreal editor, I thought that this was nothing more than a replacement for the older BSPs already available. A nice addition to be sure, but not a serious tool to be used to create game content. Then Quixel released their videos on the creation of their medieval village demo, and I found a new appreciation for the tools that Epic has generously given us.
The obvious use is for blocking out a scene, and I have mixed feelings about their use for this purpose. Every time you use a boolean operation on two objects, it creates a third object in the content folder. This can lead to a huge number of useless intermediary objects before you get to the final shape that you want for your block-out. Worse, the created objects are all named something cryptic like ‘Boolean_a2b3x092xi3202’ or some such name; the editor appears to take the name of the operation and append a UUID value to it. You can specify a name other than ‘Boolean’ in the tools, so you can use this to separate the final object you want from the intermediary objects you don’t care about. This leaves you with many unwanted objects named ‘Boolean-xxxx’ and one named with the value you provided in the UI. This is the approach that I used, and while it isn’t the most convenient, it does work. Still, this tool is far better than BSPs in my opinion, and is a welcome addition to the editor.
Where this tool really seems to shine is in an application I wouldn’t have thought it useful for, but one that is shown to great effect in Quixel’s videos mentioned above. Using preexisting assets, along with the tools to reshape them, allows for the reuse of assets in a way that would have been much harder otherwise. What I really like about this toolkit, and even BSPs to some extent, is that you are in the game level itself while using the tools. You can shape something to fit the exact position and placement that you need, with the look that you want. This could be done by creating all of your level geometry in a separate DCC, but I have never liked that approach. I want to see what my level or asset looks like in the engine, not in the renderer that ships with the DCC. No matter what settings I tweak, I have never gotten MAX, Blender, or Houdini to render my assets the same as Unreal does. There is also the overhead of having to define each material twice: once in the DCC of your choice, and again in your engine. We’ve all been there, and there is an element to this that cannot be escaped. It is a necessary evil. But it is nice that it can be lessened to a degree.
I have recently finished the bamboo hut where the player will go to initiate the start of the level in Capuchin Capers. This allows the player to explore the island a bit and get familiar with the terrain…or just sightsee if they like. Once they are ready, they will enter the hut and the level will begin. Because of this, the hut will be included in every level and is the only structure in the game. It is likely to receive quite a bit of scrutiny from the player, so it has to match the visual fidelity of the rest of the level and have no strange issues with collision or scale. I decided to use the editor’s modeling tools to block this out. Previously, I would have either used BSPs (if I could talk myself into enduring that experience), or I would have used an exported mannequin model as reference for scale. The latter would have been a big mistake.
I wanted a ramp leading up to the entrance of the hut, but the ramp needed to be long enough to clip through the terrain. I don’t yet know every location where the hut will be placed, so the model needs to account for that. At the same time, I do not want the hut as a whole to take up more space than is absolutely necessary. I was able to make the angle of the ramp steep enough to keep it compact-ish while still being walkable. This could have been done with BSPs, but that would have been a painful experience, to be sure. Aside from the ramp, I was able to easily get the overall shape of the hut the way that I wanted it. I had a specific look in mind and it was fairly easy to reach with the tools. I was still using the tools like a caveman, due to my experience with BSPs, so I could have refined the hut’s shape far more than I did in-editor. But my block-out was complete, with all the windows where I wanted them and at the correct heights. I exported this block-out to Blender to build the actual geometry for the hut.
I used geometry from preexisting assets in an attempt to maintain some continuity in the materials used. The tree trunks that make up the posts for the hut are palm trees that are actually in the levels. Similar assets were ‘repurposed’ in the same way, such as bamboo. I then used the same technique shown in Quixel’s video on the creation of the houses in their demo. Utilizing a separate UV channel to introduce mud, dirt, and grime to the hut really made all the difference. While most of the geometry used to build out the hut has shared UVs, or tiling textures, the approach Quixel demonstrates allowed me to break up the feeling that the materials are all shared. It gave each piece of geometry the feel of being a separate component in the hut, not just a bunch of copies of the same thing…which, of course, they are. I used Painter to bake out my AO, curvature, thickness, and other maps, and then to author the masks needed to achieve this effect in Unreal.
I could have used Unreal’s modeling tools for much more than I did. They are not just a toy, or a replacement for BSPs, as I originally thought. They are a valuable tool in the toolkit, and one I plan to explore further. Thanks for the read.
Work has steadily continued on Suzy’s AI, and it was necessary, too. As she was at the time of the last post, she would have been absolutely no challenge to the player. I could have simply allowed her to cheat: give her the locations of all of the fruit objects and use some form of random number generation to decide whether she ‘found’ one or not. That is the approach that some games take, and I suspect that many indie developers would have done just that. I can’t blame them; creating AI with any amount of intelligence is very hard. It would be very easy to do that and move on, especially if you are an indie developer building a commercial game. But I absolutely will not take this route. The whole point behind this entire project is to get experience and gain knowledge about making good AI. Did I think about giving in? Yes. I have thought many times about how much easier it would be, but again, that would be missing the point of this project.
So I asked myself how I might go about searching for these fruit objects, or any objects, in an environment as large as a tropical island. The first thing I would need to do to search effectively is orient myself in my surroundings by finding some landmarks so that I won’t get lost. Aha! I can have the AI travel to landmarks that are scattered all over the islands. Doing this also gives the appearance that Suzy really is using some form of intelligence to perform this search. If the player watches her, they will notice that she is traveling to a location with a very distinct landmark. Hopefully this will feel fair to the player, as they too can orient themselves by using these landmarks. Moving Suzy around the island isn’t that difficult with the custom EQS generator that was made specifically for this purpose. But how exactly should the AI choose which landmark to visit? And how should she go about visiting each one? I chose to have her randomly select a landmark to visit, but I didn’t want her to visit the same landmarks over and over without going to each one first. That was important to me, because it then feels more like she really is searching, rather than just randomly running around. To implement this, I chose to use the “Traveling Salesman” approach for visiting each location.
The idea behind the traveling salesman approach is that the ‘salesman’, Suzy in our case, will travel from her starting location to each of the destination points, or landmarks. But, she will not backtrack to a previously visited landmark. She will only go to unvisited landmarks until all of them have been visited. Only after all of the landmarks have been visited will the AI be allowed to revisit a landmark.
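In my case this isn’t an optimal-tour solver; it is just the rule described above: pick a random landmark that hasn’t been visited yet, and only allow revisits once the whole list has been exhausted. Here is a minimal sketch of that selection logic in plain C++. The real version lives in a behavior tree task, and the class and member names here are illustrative.

#include <algorithm>
#include <random>
#include <vector>

// Picks a random unvisited landmark index; once every landmark has been
// visited, the visited flags are cleared and revisits become possible.
class LandmarkSelector
{
public:
    // Assumes at least one landmark exists.
    explicit LandmarkSelector(int LandmarkCount)
        : Visited(LandmarkCount, false), Remaining(LandmarkCount) {}

    int PickNext(std::mt19937& Rng)
    {
        if (Remaining == 0)
        {
            // Every landmark has been seen; only now allow revisits.
            std::fill(Visited.begin(), Visited.end(), false);
            Remaining = static_cast<int>(Visited.size());
        }
        std::uniform_int_distribution<int> Dist(0, static_cast<int>(Visited.size()) - 1);
        int Index = Dist(Rng);
        while (Visited[Index])        // re-roll until we land on an unvisited landmark
        {
            Index = Dist(Rng);
        }
        Visited[Index] = true;
        --Remaining;
        return Index;
    }

private:
    std::vector<bool> Visited;
    int Remaining;
};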
Once at the landmark, I wanted Suzy to give the place a good search. So how did I implement this? With the same technique that was used to move her to the landmark: Traveling Salesman. I actually implemented this part first, so that I could make sure it would work the way that I wanted it to. Once she arrives at the landmark, the behavior tree task generates a random number of points within a range that can be set in the behavior tree. I felt that 3-6 points around the landmark would be fairly good, but the range can be set anywhere between 2 and 8. Once these points are generated, they are handed off to Suzy’s behavior tree so that she can run them, utilizing the traveling salesman approach. It gives Suzy the appearance of being an excitable little monkey running around trying to find these fruit objects.
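Here is a similar sketch of the search-point generation, again in plain C++ with illustrative names and radii; in the game this is a behavior tree task that also snaps each point to the navmesh before Suzy runs them with the same visit-each-once rule.

#include <cmath>
#include <random>
#include <vector>

struct SearchPoint { float X; float Y; };

// Generate a random number of points (within a configurable range, e.g. 3-6
// out of an allowed 2-8) scattered in a ring around the landmark.
std::vector<SearchPoint> MakeSearchPoints(float LandmarkX, float LandmarkY,
                                          int MinPoints, int MaxPoints,
                                          float Radius, std::mt19937& Rng)
{
    std::uniform_int_distribution<int> CountDist(MinPoints, MaxPoints);
    std::uniform_real_distribution<float> AngleDist(0.0f, 6.2831853f);       // 0..2*pi
    std::uniform_real_distribution<float> RadiusDist(Radius * 0.5f, Radius); // keep a ring shape

    const int Count = CountDist(Rng);
    std::vector<SearchPoint> Points;
    Points.reserve(Count);
    for (int i = 0; i < Count; ++i)
    {
        const float Angle = AngleDist(Rng);
        const float R = RadiusDist(Rng);
        Points.push_back({ LandmarkX + R * std::cos(Angle),
                           LandmarkY + R * std::sin(Angle) });
    }
    return Points;
}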
While Suzy’s AI still needs some work before she will be challenging enough to make this game fun, I think that I am close to having the developmental part done. If I can just make her a little more successful at finding the fruit objects quickly, it will just be a matter of balancing the numbers to get things right. I hope.