AI Autonomy in Games: Boon or Bane

I feel that a disclaimer is required here: I am not a professionally trained AI engineer. I am self-taught, and most of my knowledge is anecdotal. What follows are my observations that were formed from the creation of the games that I have worked on. Your mileage may vary.

In most games the AI will be given, and allowed to act on, information that it shouldn’t have. In short, it’s allowed to cheat. The AI might be given the location of the player, or knowledge that the player is about to run out of ammunition. Being fed information it shouldn’t have isn’t the only way for an AI to cheat, either. “Rubber banding”, as the approach is called in game AI, is the act of tweaking the numbers that drive the AI in response to something that the player does. A game might make an AI’s aim better near the end of a map if the designers want the pressure turned up on the player. You may ask yourself why developers would do this, and the answer is simple: it makes creating a consistent, challenging experience easier. In fact, it may be impossible to make a game consistently challenging and fun without it.
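As a concrete illustration of rubber banding, imagine scaling an AI’s hit chance as the player nears the end of a map. This is a hypothetical sketch; the function, names, and thresholds are all invented and don’t come from any real game:

```cpp
#include <algorithm>

// Hypothetical rubber-banding tweak: as the player approaches the
// end of the map, ramp the AI's hit chance up from a baseline
// toward a maximum, so the designers can "turn up the pressure".
float RubberBandHitChance(float BaseChance, float MaxChance,
                          float MapProgress /* 0.0 .. 1.0 */)
{
    // Only start ramping in the final quarter of the map.
    const float RampStart = 0.75f;
    if (MapProgress <= RampStart)
    {
        return BaseChance;
    }

    // Linear blend from BaseChance to MaxChance over the last stretch.
    float T = (MapProgress - RampStart) / (1.0f - RampStart);
    T = std::clamp(T, 0.0f, 1.0f);
    return BaseChance + (MaxChance - BaseChance) * T;
}
```

The point is that the knob being turned is invisible to the player; the AI didn’t get better, the developer just reached in and changed the numbers.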

An autonomous AI, on the other hand, can create a completely unique and unpredictable experience every play-through. A player will never know what to expect because the AI isn’t being overridden by the developer stepping in and changing values or behaviors based on what the player is doing. The AI is driving all of its own behavior internally. Plus, the developer doesn’t have to give the AI information that it isn’t supposed to have. One of the downsides to the approach, as far as I can see, is that the unpredictable nature of an autonomous AI leads to an inconsistent experience for the player.

When I set out to create Capuchin Capers, it all stemmed from the desire to learn more about AI design. I wanted to create an AI that would run around the map looking for fruit objects, but without any cheating. At no point should the AI be given the location of a fruit object. In that way, the AI would be autonomous. I added a Director AI to help make the game a bit more challenging for the player once the number of unfound fruit objects remaining dropped to a certain level. Another reason for the Director AI was that many games use this approach to manage the overall experience the player is having, and I knew that in the future I would want to be able to take advantage of it.

I have largely succeeded in my goals. For the most part, the AI will run around the map and pick up fruit objects without any outside aid. And, at times, it can be quite challenging to defeat. At times. The outcome of the match is heavily dependent upon the placement of the fruit objects. I chose to create a Goal Generator that randomly places the fruit objects on the ground, so the player is never allowed to just memorize where the fruit is. But this also means that the AI won’t always provide a challenge to the player. This is the problem that I am having right now.

Players want a consistent experience. If they choose the easy difficulty for a game, they expect it to be easy…at least comparatively so. If they choose the hard difficulty, they naturally expect it to be much harder than it was on easy. A match on easy can’t be more difficult than a match played on the hard setting on the same map. That is inconsistent and frustrating, and will probably lead the player to quit and move on to something else.

A truly autonomous AI is what I wanted to create: an AI that would act on its sensory inputs and make basic decisions based on that information. But in creating this, I have an AI that I can’t easily tweak to provide a consistently challenging experience for the user. I had a goal at the beginning of this project, and that goal was the whole point of the project. I won’t insert code that will allow the AI to cheat. That wasn’t part of the plan, and in my eyes it would constitute a complete failure for this project. By adjusting when the Director AI begins providing aid to Suzy (the capuchin monkey), I can adjust the difficulty to some degree. But it won’t provide consistency.

I will do my best to balance the levels so that the AI provides a reasonable experience, even if it is a bit inconsistent. I am sure my own AI design is a factor in this issue, and an AI designed by someone with more experience would perform much better under the circumstances. I set out to design an autonomous AI that would pick up randomly placed fruit, and that is what I have.

I guess the old adage is true: “Be careful what you wish for”.

Finding Your Own Process

Over the course of making Capuchin Capers, I have tried very hard to document as much as I can. I do this so that I can refer to these documents after the game is done and have a better picture of what it takes to create a game design document. Now, if you go to Gamasutra, you can find articles on various developers’ views on what needs to be in a game design document and how it should be formatted. But do their design documents really fit the way that you create video games?

There is some information that needs to be in any game design document, of course. Anybody creating a design document knows that the levels themselves need to be described. Characters or creatures, including monsters, need to be detailed. There are elements that should be included, but does your design document’s formatting need to adhere to the same template that I might use? Absolutely not. At least, that is how I feel about it. If you’re submitting a proposal to a publisher, things change. They have their own expectations for what needs to be in place in a proposal and/or design document, and you would need to follow those guidelines to ensure your game was given the best chance to succeed and get published. But when you’re developing your own games as an indie developer, nobody outside of your small team is ever going to see these documents. They can be formatted in any manner that makes the most sense to you as a team.

When I started Capuchin Capers, I wanted to have a roadmap for the game. I wanted to know what assets I needed to create, and the overall architecture of the game. I wasn’t even close. If you view the Trello board for this game, which can be found here, you will see that I resorted to creating a ‘General’ card just to cover some of the more egregious things that I missed when first planning this game. As I discovered the many parts of the game that I had failed to plan out at the beginning, I started to write those documents post-creation. I did this, if for no other reason, to have a clearer picture of what I needed to plan for the next game. During this process, I discovered some things about the way that I create documents and what I end up putting in those documents. My documents aren’t formatted or structured the same as the documents featured in some of the aforementioned articles.

I would have worried about this when I first started creating whole games, instead of modifications to pre-existing games. But I’m not worried about this at all now that I have a little more experience. We all think differently, and that’s a good thing. So, why should my documents follow a template that another developer might use? Is my way the best way, or even a good way? It clearly isn’t the best way, since I overlooked so much; it may not be a good way for you, either. It probably isn’t, to be completely honest. But, it is a good way for me and that is what matters.

When I am done with Capuchin Capers, I will take the documents that I have generated, along with the topics that I know that I missed, and I will combine them into a single file. This will give me a good starting point for our next game. Will it be complete? No, it won’t. Will it be the right way to write a game design document? Yes, it will. Because it will be the right way to write a game design document for me.

Hoping for a Storyboard Ending

Procedural Cinematic Storyboard

As I’ve said before, I’m really reaching for these article names. Anyway, I just wanted to briefly touch on the idea of Indie developers doing storyboarding for their in-game cinematics.

I don’t know how typical I am of the average indie developer (I’m talking about the lone-wolf developer doing it all alone), but I didn’t start out with any storyboards for Odin’s Eye. I thought to myself that the end-game cinematics would be so short that it would be a waste of time to do any type of storyboarding. I already had an idea of what I wanted to do there, so off I went, with my half-formed plan.

It was a disaster. There was nothing that I could use from that first attempt.

So, I knew then that I had to do more than open the editor and start to throw things together. Without any formal training, but with some basic idea of what a storyboard was supposed to look like, I started to create my first draft of the storyboard. I say “first draft” because you will need to move cells around to get the scene flow that you want. That may be a storyboard’s greatest attribute…the fact that you can shuffle the cells around if you don’t like the way each shot flows into the next.

There is another reason why I said “first draft”, and it involves a mistake I have made more than once. A little background may be helpful in understanding the mistake I am referring to. I spent quite a lot of time in art classes when I was young. I wanted to make cartoons like all the ones that I loved, so that is what I planned to do with my life. Spoiler: I didn’t do that. Anyway, I would spend hours drawing things. Anything, really. I eventually lost my love for art, but that is a bitter story that I won’t share, and it isn’t important anyway. I can draw. At one time, I could draw really well, but not great. So when it came time to create the storyboards for Odin’s Eye, I thought it would be a good opportunity to pick up my pencils and create the storyboards that way. I had at one time been talented and skilled enough to do that, so why not? Well, that part about “had at one time” is key. Compared to your average person, I can still draw well. But to attempt something like creating quality storyboards? Nope.

After wasting the better part of a day creating a portion of the storyboard for the cinematic that plays when the player fails in Odin’s Eye, I stopped in disgust. The art was terrible (for someone who spent years in art classes), and I hadn’t even finished the storyboard. The next morning I was honest with myself. I admitted that I didn’t have it anymore, and that to continue like that would waste at least one more day. That was time I didn’t have to waste.

I decided to play to my strengths, such as they are. I had animations and models right there in the editor. I knew how to use Sequencer, so I could create the cells for the storyboards using actual in-game assets: the very same assets that were going to be used in the cinematic itself. Why not just use those instead of wasting a bunch of time? Ego could have gotten the best of me, but I always try to be honest with myself. This was a case where it paid off. So, how did I make this mistake again? For the same reason I made it the first time, to be perfectly honest. For some reason, I believed that I could create the storyboards faster with my pencils than I could in Sequencer. I was wrong then, and I was wrong this time, too.

Even though I didn’t look up the proper formatting for a storyboard, if there is such a thing, it was immensely helpful to have something to refer to when laying out the cinematic in Sequencer. I review the storyboards that I create many times during the process of creating the cinematics. No matter how they are made, they will be useful to have, so don’t hesitate. You will waste more time redoing things by “winging it” than it would take to create some storyboards. They are worth the effort.

A (Game) State of Affairs

Until recently, the ‘game’ wasn’t much of a game. There really wasn’t a beginning to the level while playing through. Once the level loaded, Suzy’s AI would immediately begin to run and off she would go. But, it didn’t matter how many of the fruit objects she picked up: one, three, ten…all. It didn’t make any difference. There was no true ending to the level. That is where the GameMode comes in.

The name “GameMode” is a bit misleading, as you may not expect the game mode to handle the state of the game. But according to the documentation here, the game mode keeps track of the current level’s progress according to the rules of that particular game. In the case of Capuchin Capers, the rules are very simple. When the level starts, you are in a pre-game state and are able to run around the level to get a feel for its size and shape. While in the pre-game state, you can’t see any of the fruit objects or the game UI (which could reveal the number of fruit objects if visible). The game proper doesn’t start until you enter the grass hut located near your beginning position. Once you enter the hut, the game state changes to the in-game state.

The in-game state is where you are competing with Suzy to find the most fruit objects. The UI is visible and Suzy’s AI, as well as the Director’s AI, is enabled and she will immediately begin searching for fruit objects. The game mode will remain in this state until one of two things happens. If Suzy finds half of the fruit objects (rounded down) she wins the competition. Or, if the player finds more than half of the fruit objects, they win. As you can see, Suzy wins all ties according to these simple rules. Once one of these two things occurs, the game will transition to the post-game state.
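The rules described above fit in a few lines of code. Here is a sketch in plain C++ (outside of Unreal, with invented names, not the game’s actual code) of how such a win check might look. Checking Suzy with “greater than or equal” while the player needs a strict “greater than” is exactly what hands all ties to Suzy:

```cpp
enum class EMatchResult { InProgress, SuzyWins, PlayerWins };

// Suzy wins as soon as she has found half of the fruit (rounded
// down, which is what integer division gives us); the player has
// to find strictly more than half. Ties therefore go to Suzy.
EMatchResult CheckWinCondition(int TotalFruit, int SuzyCount, int PlayerCount)
{
    if (SuzyCount >= TotalFruit / 2)
    {
        return EMatchResult::SuzyWins;
    }
    if (PlayerCount > TotalFruit / 2)
    {
        return EMatchResult::PlayerWins;
    }
    return EMatchResult::InProgress;
}
```

With ten fruit, for example, Suzy wins at five while the player needs six; a five-to-five split goes to Suzy.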

In the post-game state, the end of the level is executed, starting by taking the player’s camera and translating it high above the player’s head. From this bird’s-eye view, the camera moves to a location high above the hut, looking down towards it. From that known location, we can start the actual cinematic to end the level. The steps to this point are necessary because we can’t know where the player is on the island when they or Suzy wins the match. We need a way to get the camera to a position that we can always guarantee, regardless of where the player was on the island at the end. Once the camera is in this “safe” location, we trigger the level sequence. The level sequence’s camera starts at the exact same location as the “safe” location, so the cinematic’s start is seamless. From there, we play the cinematic for the player as a reward for playing through the level.
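Conceptually, the camera move is just an interpolation from wherever the player ended up toward the one guaranteed “safe” location. A minimal standalone sketch of the idea (plain C++ with an invented three-float vector type, not engine code):

```cpp
// Stand-in for an engine vector type.
struct Vec3 { float X, Y, Z; };

// Blend the camera from its current position toward the known
// "safe" location above the hut. Alpha runs 0 -> 1 over the move;
// at 1 the camera is exactly at the safe location, so the level
// sequence (whose first shot starts there) can cut in seamlessly.
Vec3 StepTowardSafeLocation(const Vec3& Current, const Vec3& Safe, float Alpha)
{
    return Vec3{
        Current.X + (Safe.X - Current.X) * Alpha,
        Current.Y + (Safe.Y - Current.Y) * Alpha,
        Current.Z + (Safe.Z - Current.Z) * Alpha,
    };
}
```

The important property is the endpoint: no matter where the interpolation starts, Alpha = 1 always lands on the same known transform.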

In all of this, there are a lot of moving parts, even in a game as simple as Capuchin Capers. So, how is all of this done in the game mode? A state machine. The implementation of the state machine is actually fairly simple compared to its impact. Warning: C++ incoming. There is a base class named CCStateBase with the following declaration:

class CAPUCHINCAPERS_API CCStateBase
{
public:
	CCStateBase();
	virtual ~CCStateBase();

	/** Called when the state is first entered. */
	virtual void Enter();

	/** Called each tick for this state. */
	virtual void Update(float Delta);

	/** Called before the state is exited. */
	virtual void Exit();

	void SetParent(ACapuchinCapersGameMode* ParentParam);
	ACapuchinCapersGameMode* GetParent();

protected:
	ACapuchinCapersGameMode* Parent;
};

The states mentioned above (pre-game, in-game, and post-game) all derive from this class. The derived classes define what each of their methods should actually do. Obviously, the Enter() method for the in-game state would be completely different from that of the post-game state. This is really nice to work with, because it compartmentalizes the functionality of each state and makes it easier to visualize how everything flows. From the above code, it is easy to see that the game mode is the owner of the state machine and is set as “Parent” in each of the state machine’s objects.
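To make that concrete, here is a stripped-down sketch of a derived state in plain C++. The Unreal types and the game-mode parent pointer are omitted, and the behavior shown (toggling an AI flag, accumulating time) is invented for illustration; the real states do far more:

```cpp
// Simplified stand-in for CCStateBase, minus the parent pointer.
class StateBase
{
public:
    virtual ~StateBase() = default;

    /** Called when the state is first entered. */
    virtual void Enter() {}

    /** Called each tick for this state. */
    virtual void Update(float Delta) {}

    /** Called before the state is exited. */
    virtual void Exit() {}
};

// A hypothetical in-game state: on Enter it "enables" the AI,
// on Exit it disables it again, and Update tracks elapsed time.
class InGameState : public StateBase
{
public:
    bool bAIEnabled = false;
    float TimeInState = 0.0f;

    void Enter() override  { bAIEnabled = true; }
    void Update(float Delta) override { TimeInState += Delta; }
    void Exit() override   { bAIEnabled = false; }
};
```

The owner only ever holds a pointer to the base class, so a call like CurrentState->Update(Delta) dispatches to whichever state is currently active without the owner knowing which one that is.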

In the game mode, each of the three game states is created as an object, with a pointer to the base class CCStateBase being declared as well. That pointer, which is named “CurrentState”, is where all the magic happens. By using polymorphism we can smoothly change between states, with all the functionality that we have built into our state machine, and the game mode object never needs to be aware of the current state of the state machine. This is done, primarily, by the SetNextState() method:

void ACapuchinCapersGameMode::SetNextState(ECCGameState NextState)
{
	// If this is the first call to this method,
	// CurrentState will be nullptr. Otherwise,
	// we need to exit the current state before
	// entering the next.
	if (CurrentState)
	{
		CurrentState->Exit();
	}

	switch (NextState)
	{
	case ECCGameState::IngameState:
		CurrentState = &IngameState;
		break;

	case ECCGameState::PostgameState:
		CurrentState = &PostgameState;
		break;

	default:
	case ECCGameState::PregameState:
		CurrentState = &PregameState;
		break;
	}

	if (CurrentState)
	{
		CurrentState->Enter();

		// Broadcast the state change to any
		// listeners of this delegate.
		OnGameStateChanged.Broadcast(NextState);
	}
	else
	{
		UE_LOG(LogTemp,
			Warning,
			TEXT("CurrentState is NULL."));
	}
}

For a mechanism that controls the entirety of the game’s rules, this state machine is incredibly simple. There is quite a bit that isn’t being shown here, of course. This article isn’t meant to be a tutorial on how to implement a state machine. For that, you would definitely be better served to find a good book or tutorial aimed specifically at that. I can recommend “Game Development Patterns and Best Practices” by John P. Doran and Matt Casanova, as that is where I learned this design pattern.

In closing, I have to say that the structure of the Unreal Engine is complex, and what every part is supposed to be doing isn’t always clear when you are just starting out. I never would have thought for a moment that the state machine that controls the game would go in the game mode, while all of the data, such as player scores, would go in the AGameStateBase-derived object. This can all be a bit confusing when first learning it, but once you understand what everything is doing it becomes clearer and easier to grasp. Happy developing!

The Importance of Staying Grounded

As I am sure you’re aware, when it comes to game development, there are so many things that can break immersion for the player that it can make your head spin. They can range from a small, unrealistic twitch in a looping animation to the character being able to survive things that we know no person ever could. These and many more things can ruin all of our hard work in trying to bring the player into the worlds that we create. But there are few things that will break immersion quicker, and mark a developer as supremely lazy, than characters that float in midair as they stand on the ground.

This was the problem that I decided to tackle in the last week. After combing over various videos on YouTube showing how to do this in Blueprint, and coding everything in the editor, I had something that was reasonably good. I felt pretty good about what I had, even though I wasn’t sure how performant the code was. With all of the foliage in Capuchin Capers, I don’t have a single CPU cycle to spare. I didn’t see any real difference in FPS on the first level of the game, so I felt things were headed in the right direction with the IK system that was implemented.

I watched one of the video streams from Epic on the Paragon characters’ animations and how they were achieved. In that video, speed warping was mentioned as a way to blend two animations together even if their frame rates were completely different. I thought that would be a great way to adjust Suzy’s IdleWalkRun blend space so that her run animation could be stretched a bit at higher speeds to reduce foot slide. I knew I didn’t want to take the time to learn exactly what had to be done to implement this, let alone code it. So off I went, searching the marketplace to see if there were any plugins that would do that.

Imagine my surprise when I found that not only was there a plugin that could do speed warping, but that I already owned it! Not only could it do speed warping, called stride scaling in the plugin, but it was a full IK solution. I am embarrassed to admit this, but it never even occurred to me to look at all the assets I have from the marketplace. I didn’t remember seeing anything like this in my asset packs, so I just assumed that we had to implement this ourselves. About that plugin…

PowerIK is a plugin that Epic acquired late in 2020 and made it free for everyone. It has several very powerful IK solvers already built as graph nodes that we can use in our animation blueprints. While it doesn’t have speed warping as a stand-alone graph node, the source code is included with the plugin. I don’t have a ton of experience coding plugins for Unreal, but this isn’t a situation where I’m starting from scratch either. So I should be able to make a graph node from the code base that is already there.

This was great news because PowerIK is really powerful (no pun intended), and I was able to replace most of the Blueprint code that I had with the PowerIK Ground node. The foot-to-ground alignment seemed broken, so I used the solution that I had from my previous work. But you may note that I replaced most of what I had done over the previous week with that single node. That is a feel-bad moment. I could have had everything set up in a single day if I had just slowed down and checked all the assets that I already possessed.

This is a journey. There are going to be mistakes along the way…quite a lot of them, to be honest. The lesson here is that I shouldn’t have flown off to develop my own IK system to align the character’s legs and feet. I learned a lot, but ultimately the time wasn’t well spent. You live and you learn.

All the Modeling Tools

In game development today, there seems to be no shortage of tools vying for our attention. From programming to texturing to modeling, the selection of tools can be dizzying. When Epic announced the inclusion of the modeling tools plugin for the Unreal editor, I thought that this was nothing more than a replacement for the older BSPs already available. A nice addition to be sure, but not a serious tool to be used to create game content. Then Quixel released their videos on the creation of their medieval village demo, and I found a new appreciation for the tools that Epic has generously given us.

The obvious use is for blocking out a scene, and I have mixed feelings about their use for this purpose. Every time that you use a boolean operation on two objects, it creates a third object in the content folder. This can lead to a huge number of useless intermediary objects before you get to the final shape that you want for your block-out. Worse, the created objects are all named something cryptic like ‘Boolean_a2b3x092xi3202’ or some such name. The editor appears to take the name of the operation and append a UUID value to it. You can specify a name other than ‘Boolean’ in the tools, so you could use this to separate the final object you want from the intermediary objects you don’t care about. This leaves you with many unwanted objects named ‘Boolean-xxxx’ and one named with the value you provided in the UI. This is the approach that I used, and while it isn’t the most convenient, it does work. Still, this tool is far better than BSPs in my opinion, and is a welcome addition to the editor.

Where this tool really seems to shine is in an application where I wouldn’t have thought it useful, but which is shown to great effect in Quixel’s videos mentioned above. Using preexisting assets, along with the tools to reshape them, allows for the reuse of assets in a way that would have been much harder otherwise. What I really like about this toolkit, and even BSPs to some extent, is the fact that you are in the game level itself while using the tools. You can shape something to fit the exact position and placement that you need, with the look that you want. This could be done if you were creating all of your level’s geometry in a separate DCC, but I have never liked this approach. I want to see what my level or asset looks like in the engine, not in the renderer that is shipped with the DCC. No matter what settings I tweak, I have never gotten MAX, Blender, or Houdini to render my assets the same as Unreal does. There is also the overhead of having to define each material twice: you define it in the DCC of your choice, and you define it again in your engine. We’ve all been there, and there is an element to this that cannot be escaped. It is a necessary evil. But it is nice that this can be lessened to a degree.

The finished hut, placed in the first level. While this image doesn’t show the full detail of the hut, it gives a good impression of its look and feel. To better see the weathering written about below, see the featured image.

I have recently finished the bamboo hut where the player will go to initiate the start of the level in Capuchin Capers. This will allow the player to explore the island a bit and get familiar with the terrain…or just sightsee if they like. Once they are ready, they will enter the hut and the level will begin. Because of this, the hut will be included in every level and is the only structure in the game. It is likely to receive quite a bit of scrutiny from the player, so it has to match the visual fidelity of the rest of the level as well as having no strange issues with collision or scale. I decided to use the editor’s modeling tools to block this out. Previously, I would have either used BSPs (if I could talk myself into enduring that experience), or I would have used an exported mannequin model as a reference for scale. The latter would have been a big mistake.

I wanted a ramp leading up to the entrance to the hut, but the ramp needs to be long enough to clip through the terrain. I don’t know every location where this will be placed yet, so the model needs to account for that placement. But, I also do not want the hut as a whole to take up more space than is absolutely necessary. I was able to make the angle of the ramp steep enough to make it compact-ish, while still being able to actually walk up the ramp. This could have been done with BSPs, but that would have been a painful experience, to be sure. Aside from the ramp, I was able to easily get the overall shape of the hut the way that I wanted it. I had a specific look in mind and it was fairly easy to get to that look with the tools. I was still using the tools like a caveman, due to my experience with BSPs, so I could have refined the hut’s shape far more than I did in-editor. But my block-out was complete, with all the windows where I wanted them and at the correct heights. I exported this block-out to Blender to build the actual geometry for the hut.

I used geometry from preexisting assets in an attempt to maintain some continuity in the materials used. The tree trunks that make up the posts for the hut are palm trees that are actually in the levels. Similar assets were ‘repurposed’ in the same way, such as bamboo. I then used the same technique shown in Quixel’s video on the creation of their houses in their demo. Utilizing a separate UV channel to introduce mud, dirt, and grime to the hut really made all the difference. While most of the geometry used to build out the hut has shared UVs, or tiling textures, the approach Quixel demonstrates allowed me to break up the feeling that the materials are all shared. It gave each piece of geometry the feel of being a separate component in the hut, not just a bunch of copies of the same thing…which, of course, they are. I used Painter to bake out my AO, curvature, thickness and other maps, and then to create the masks needed to create this effect in Unreal.

I could have used Unreal’s modeling tools for much more than I did. They are not just a toy, or a replacement for BSPs, as I originally thought. They are a valuable tool in the toolkit, and one I plan to explore further. Thanks for the read.

Optimize early, optimize often

With so much that goes into making a game, it is easy to just push forward, add all the content, and worry about optimization later. I started out doing that very thing, and it can cause many problems. Now I cook/package a level and run it on my development system as well as my low-end test system. This is very important when it comes to catching performance issues early, while I can still understand exactly when the performance took a nose-dive.

For example, in the last post, I stated that the ocean system was causing a large hit in performance which would severely impact playability on lower-end hardware. While this is true of the ocean system in general, it is especially true when the wave generators are being used. If you just added in all the content and worried about it later, you may not realize that the wave generators for the ocean system are the largest contributor to the low FPS that you are seeing. By adding these things in incrementally, and testing for performance along the way, we can better understand where our performance is going.

I am currently reworking the bog area in the second level of the game. I had already tweaked everything to look exactly how I wanted it. All the foliage was simulated correctly, with the right density of large and small grasses, along with ferns, bushes, and other plants. It looked great! The only problem? Performance was low. Very low. On my development system, I was seeing frame rates in the low 40s. Keep in mind, my development system has a GTX 1080ti, 32GB of DDR3200 RAM, and a Ryzen 7 1700x overclocked to 3.8Ghz. Not a world-beater by any stretch, but far from a potato.

What happened when I ran this exact same test on my low-end system? A wonderful-looking slide show. The framerate would dip near ten frames per second, and would only average around 15fps. That was the case no matter what I chose in the settings configuration file (the UI isn’t implemented yet, so it’s the config file or nothing). Just awful performance. Now, it’s true that my test system has a lowly FX5850 and an RX-570. This is the lowest hardware that I want to use as a recommended system, and if the game runs on this system, it should run well on newer hardware. Ten frames per second is obviously unacceptable.

Had I just moved forward with the game, I may not have known where to look to claw back some performance in the bog area of the second level. Because I cooked/packaged a test game, I was able to find this problem and address it. I have made changes to the foliage spawner used for the bog biome, and while I haven’t tested it on the low-end system yet, I gained a good 20-25fps on my development rig. I would expect to see livable, if not impressive, increases on the test system as well.

Profiling code doesn’t need to be done every step of the way; packaging a test game after every change isn’t advisable; testing on your test rig every hour would waste time. But if you package a test level a few times a week, or after you have added a new game feature, you can keep performance levels reasonably high. By doing that, you will know how much ‘head room’ you have remaining while moving forward. If your game is averaging 30fps on a high-end system, and you haven’t added any MOBs yet, you’ve passed the point where you should have optimized for performance.

An Ocean of Problems

The last week has been a trying time on the project. Performance concerns are always valid, but when there are no foliage assets in the level and you’re only seeing around 30fps, that is cause for worry. Even more cause for concern is when you cook and package a test level and the game crashes on start-up. These were a few of the issues that I ran into this week.

The crashing was caused by the fruit objects that are passed to the goal generator when the level is being opened. In-editor, these Blueprint-based objects exist and work fine. However, when you cook and package the game, these assets are converted to BlueprintGeneratedClass objects. As the name implies, these are not Actor-based objects, and they will cause a crash when you attempt to use SpawnActor to create one in the level. But thanks to this post, I was finally able to get the packaged game to open correctly with no crashes. It took quite some time to figure out, because the error that is reported is similar to an error that can be seen in other games, such as GTA 5. That led me to believe that this was an issue with Windows 10 and not the game itself. Clearly that was my mistake.

As for the performance issues, they all stemmed from the ocean system being used. I knew that it was expensive, but I had no idea that it would cost upwards of 45fps, regardless of the scalability settings chosen. I thought that nativizing the Blueprints to C++ might help, and it did seem to, but I only gained a few frames from it. It was actually the nativization feature that led me to cook and package the game in the first place. After quite a bit of testing and tweaking, I have a somewhat workable solution. Unfortunately, it means that players who set the scalability settings to medium or low will not get the ocean system. They will have a fairly basic water material applied: no sea foam on the beach, no waves crashing on the shore, no dampness that ebbs and flows with the waves. These are the tradeoffs that have to be made.

On one last note, I was painting the landscape layers to define the different regions on the test map. Because I am using the original Brushify layers alongside duplicate layers that have been renamed for the foliage spawners, I was having a really difficult time telling the various layers apart. So I added a color overlay to each layer function’s output and connected these to a parameter switch that allows me to turn the effect on or off. It isn’t beneficial to have it on all the time, clearly, but when painting the layers it is a huge help. The featured image for this article shows the effect turned on. The image below shows the same terrain with the effect turned off.

Figure 1. The terrain without the overlay effect.

Without the overlay, there is no way to tell the difference between the various landscape layers, which are used to control the types of foliage that can spawn in a given location. Beach is easy to pick out, but the different types of grass all blend together, as they were meant to. Without the overlay, you can’t accurately define the biomes that the painted landscape layers control. With it, I can now easily see exactly where each layer is and how they blend together.

Game development is difficult. There is a reason that studios employ a large number of highly specialized, highly skilled people to create a game. Producing a game is now akin to producing a movie, and it takes a lot of effort to get anything worthwhile. But when everything is done and the game is what you envisioned at the start of the project, the effort will all be worth it.

A Traveling Salesman Walks into a Bar…

Work has steadily continued on Suzy’s AI, and it was necessary, too. As she was at the time of the last post, she would have been absolutely no challenge to the player. I could have just allowed her to cheat: let her know where all of the fruit objects were and use some form of random number generation to decide whether she ‘found’ one or not. That is the approach some games take, and I suspect that many indie developers would have done just that. I can’t blame them; creating AI with any real intelligence is very hard, and it would be easy to do that and move on, especially if you are an indie developer building a commercial game. But I absolutely will not take this route. The whole point of this project is to gain experience and knowledge about making good AI. Did I think about giving in? Yes. I have thought about how much easier it would be many times, but again, that would be missing the point of this project.

So I asked myself how I might go about searching for these fruit objects, or any objects, in an environment as large as a tropical island. The first thing I would need to do to search effectively is orient myself in my surroundings by finding some landmarks so that I won’t get lost. Aha! I can have the AI travel to landmarks that are scattered all over the islands. This will also give the appearance that Suzy really is using some form of intelligence to perform this search: if the player watches her, they will notice that she travels to locations with very distinct landmarks. Hopefully this will feel fair to the player, as they too can orient themselves using these landmarks. Moving Suzy around the island isn’t that difficult with the custom EQS generator that was made specifically for this purpose. But how exactly should the AI choose which landmark to visit? And how should she go about visiting each one? I chose to have her randomly select a landmark to visit, but I didn’t want her to visit the same landmarks over and over without going to each one first. That was important to me, because it makes it feel like she really is searching, rather than just randomly running around. To implement this, I chose to use the “Traveling Salesman” approach for visiting each location.

The idea behind the traveling salesman approach is that the ‘salesman’, Suzy in our case, will travel from her starting location to each of the destination points, or landmarks. But, she will not backtrack to a previously visited landmark. She will only go to unvisited landmarks until all of them have been visited. Only after all of the landmarks have been visited will the AI be allowed to revisit a landmark.

Once at the landmark, I wanted Suzy to give the place a good search. How did I implement this? With the same technique that was used to move her to the landmark: traveling salesman. I actually implemented this part first, so that I could make sure it would work the way I wanted. Once she arrives at the landmark, the behavior tree task generates a random number of points within a range that can be set in the behavior tree. I felt that 3-6 points around the landmark would be fairly good, but the range can be set between 2-8. Once these points are generated, they are handed off to Suzy’s behavior tree so that she can run them, again using the traveling salesman approach. It gives Suzy the nice appearance of an excitable little monkey running around trying to find these fruit objects.
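As an illustration of that search step, here is a hedged standalone C++ sketch (the helper names and the ring-shaped scatter are my own assumptions, not the actual behavior tree task): scatter a random number of points around the landmark, then order them greedily, always walking to the nearest point not yet visited.

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>

// Hypothetical sketch of the "search around the landmark" step.
struct Point { float X, Y; };

static float Dist(const Point& A, const Point& B) {
    return std::hypot(A.X - B.X, A.Y - B.Y);
}

// Scatter a random number of points (MinPoints..MaxPoints) in a ring
// around the landmark; the inner half of the radius is kept clear so
// points don't bunch up on top of the landmark itself.
std::vector<Point> MakeSearchPoints(Point Landmark, int MinPoints, int MaxPoints, float Radius) {
    int Num = MinPoints + std::rand() % (MaxPoints - MinPoints + 1);
    std::vector<Point> Pts;
    for (int i = 0; i < Num; ++i) {
        float Angle = (std::rand() / (float)RAND_MAX) * 6.2831853f;          // random bearing
        float R = (0.5f + 0.5f * (std::rand() / (float)RAND_MAX)) * Radius;  // ring, not disc
        Pts.push_back({Landmark.X + R * std::cos(Angle), Landmark.Y + R * std::sin(Angle)});
    }
    return Pts;
}

// Greedy "traveling salesman" ordering: from Start, repeatedly move to
// the closest point that has not been visited yet.
std::vector<Point> OrderNearestFirst(Point Start, std::vector<Point> Pts) {
    std::vector<Point> Route;
    Point Cur = Start;
    while (!Pts.empty()) {
        size_t Best = 0;
        for (size_t i = 1; i < Pts.size(); ++i)
            if (Dist(Cur, Pts[i]) < Dist(Cur, Pts[Best])) Best = i;
        Cur = Pts[Best];
        Route.push_back(Cur);
        Pts.erase(Pts.begin() + Best);
    }
    return Route;
}
```

The greedy nearest-first ordering isn’t an optimal tour, but for a handful of points around a landmark it looks natural, which is what matters for the player watching Suzy.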

While Suzy’s AI still needs some work before she will be challenging enough to make this game fun, I think I am close to having the developmental part done. If I can just make her a little quicker at finding the fruit objects, it will just be a matter of balancing the numbers to get things right. I hope.

The Path Most Followed

Well, I have just spent the better part of the last week working on a custom EQS generator that creates a cone-shaped graph of points and then uses the A* algorithm to plot a path through the graph. Having virtually no experience in this field of programming, I had no right to hope that I could produce anything of value, but thanks to Red Blob Games’ excellent articles on the subject I was able to implement it in Unreal.

To say that this was challenging (for me, at least) is an understatement of massive proportions. While I am an experienced C++ programmer, this field of programming is very demanding. I am very happy with the results I was able to achieve by studying the code examples and explanations on the Red Blob Games site. If you have any interest in how pathfinding works in games or other applications, you owe it to yourself to read every word on that site.

My implementation uses a cone as the shape of the graph when generating the initial points. Next come line traces to find any blocking volumes that may have points generated within them; these points need to be marked as ‘wall’ points so that the A* algorithm will plot a path around them. There is also terrain costing, so that path generation can take the type of terrain into account while plotting through the graph. While this hasn’t been implemented in my generator yet, there is a cost function that is already being called. It just returns a value of 1 for every point in the graph, but later a data table can be added to allow different terrain types to cost different amounts. This will require some rewriting of the code, because at the moment the generator isn’t doing any line traces to the landscape underneath each graph point to find the material at that location. This approach may not be possible for a variety of reasons, especially considering Unreal’s landscape material architecture, but even if that is the case, there is always a way to do something if you’re really determined.
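For readers curious what the pathfinding core looks like, here is a minimal A* over a small square grid in the spirit of the Red Blob Games articles. This is a standalone sketch, not the actual EQS generator: walls are impassable, and the cost function simply returns 1 for every step, just as described above, leaving room for a terrain-cost table later.

```cpp
#include <algorithm>
#include <cstdlib>
#include <functional>
#include <map>
#include <queue>
#include <set>
#include <utility>
#include <vector>

// Minimal A* sketch: uniform step cost, 4-way movement, Manhattan heuristic.
struct Node {
    int X, Y;
    bool operator<(const Node& O) const { return X < O.X || (X == O.X && Y < O.Y); }
    bool operator==(const Node& O) const { return X == O.X && Y == O.Y; }
};

// Placeholder terrain cost: every step costs 1 for now; a data table
// keyed on the material under each point could replace this later.
int StepCost(Node /*From*/, Node /*To*/) { return 1; }

std::vector<Node> AStar(int W, int H, const std::set<Node>& Walls, Node Start, Node Goal) {
    auto Heuristic = [&](Node N) { return std::abs(N.X - Goal.X) + std::abs(N.Y - Goal.Y); };

    using PQItem = std::pair<int, Node>;  // (priority, node), lowest priority first
    std::priority_queue<PQItem, std::vector<PQItem>, std::greater<PQItem>> Frontier;
    std::map<Node, Node> CameFrom;        // breadcrumb trail for path reconstruction
    std::map<Node, int> CostSoFar;        // cheapest known cost to each node

    Frontier.push({0, Start});
    CostSoFar[Start] = 0;

    while (!Frontier.empty()) {
        Node Cur = Frontier.top().second;
        Frontier.pop();
        if (Cur == Goal) break;

        const int DX[4] = {1, -1, 0, 0}, DY[4] = {0, 0, 1, -1};
        for (int d = 0; d < 4; ++d) {
            Node Next{Cur.X + DX[d], Cur.Y + DY[d]};
            if (Next.X < 0 || Next.Y < 0 || Next.X >= W || Next.Y >= H) continue;
            if (Walls.count(Next)) continue;  // 'wall' points are never expanded

            int NewCost = CostSoFar[Cur] + StepCost(Cur, Next);
            auto It = CostSoFar.find(Next);
            if (It == CostSoFar.end() || NewCost < It->second) {
                CostSoFar[Next] = NewCost;
                Frontier.push({NewCost + Heuristic(Next), Next});
                CameFrom[Next] = Cur;
            }
        }
    }

    // Walk the breadcrumbs back from the goal to rebuild the path.
    std::vector<Node> Path;
    if (!CostSoFar.count(Goal)) return Path;  // goal unreachable
    for (Node N = Goal; !(N == Start); N = CameFrom[N]) Path.push_back(N);
    Path.push_back(Start);
    std::reverse(Path.begin(), Path.end());
    return Path;
}
```

The real generator works over a cone of sampled points rather than a rectangular grid, but the frontier/came-from/cost-so-far machinery is the same shape as in the Red Blob Games write-up.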

Having a generator that creates paths that avoid blocking volumes is crucial. I had previously written another EQS generator that created its points along a straight line from Suzy to the target; it was crude, but effective in some instances…except where it sent Suzy running through the ocean. Obviously, that isn’t ideal. In the image for this post, you can see the new EQS generator has created a short path for Suzy to follow. The image doesn’t show off what the generator can really do, but it does show it in action. As an aside, the image also shows off the new female mannequin that Epic has given the community. The hair groom was a quick job in Blender to test out export settings and finally get a groom out of Blender. The tutorial is by Marvel Master and does what it claims to.

It has taken a lot of work to create this EQS generator, and while it isn’t perfect, it works very well and allows Suzy to run from one side of the test island all the way to the other. I don’t know how far that is in-game, but it has to be at least a kilometer and is likely more. This generator isn’t just a great asset to this project; it is now a permanent tool in our toolkit, and I have no doubt that the time spent creating it will be paid back ten-fold in the future. Hard work pays off.