AI Autonomy in Games: Boon or Bane

I feel that a disclaimer is required here: I am not a professionally trained AI engineer. I am self-taught, and most of my knowledge is anecdotal. What follows are my observations that were formed from the creation of the games that I have worked on. Your mileage may vary.

In most games the AI will be given, and allowed to act on, information that it shouldn’t have. In short, it’s allowed to cheat. The AI might be given the location of the player, or knowledge that the player is about to run out of ammunition. Being handed privileged information isn’t the only way for the game to cheat, either. “Rubber banding” is the practice of tweaking the numbers that drive the AI in response to something that the player does. A game might make an AI’s aim better near the end of a map if the designers want the pressure turned up on the player. You may ask yourself why developers would do this, and the answer is simple: it makes creating a consistent, challenging experience easier. In fact, it may be impossible to make a game consistently challenging and fun without it.
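To make the idea concrete, a rubber-banding tweak can be as small as scaling an aim-error value by level progress. The function below is a hypothetical sketch of my own; none of these names or numbers come from an actual game:

```cpp
#include <algorithm>

// Hypothetical rubber banding: as the player nears the end of a map
// (LevelProgress goes 0.0 -> 1.0), shrink the AI's aim error so the
// pressure ramps up. All names and constants are illustrative.
float RubberBandAimError(float BaseErrorDegrees, float LevelProgress)
{
    // Keep progress in the valid range.
    LevelProgress = std::clamp(LevelProgress, 0.0f, 1.0f);

    // Linearly interpolate the error from 100% down to 25% of its
    // base value over the course of the level.
    const float MinFactor = 0.25f;
    const float Factor = 1.0f - (1.0f - MinFactor) * LevelProgress;
    return BaseErrorDegrees * Factor;
}
```

With a base error of 8 degrees, the AI misses by the full 8 at the start of the map and by only 2 at the end, and the player never sees the dial being turned.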

An autonomous AI, on the other hand, can create a completely unique and unpredictable experience every play-through. A player will never know what to expect because the AI isn’t being overridden by the developer stepping in and changing values or behaviors based on what the player is doing. The AI is driving all of its own behavior internally. Plus, the developer doesn’t have to give the AI information that it isn’t supposed to have. One of the downsides to the approach, as far as I can see, is that the unpredictable nature of an autonomous AI leads to an inconsistent experience for the player.

When I set out to create Capuchin Capers, it all stemmed from the desire to learn more about AI design. I wanted to create an AI that would run around the map looking for fruit objects, but without any cheating. At no point should the AI be given the location of a fruit object. In that way, the AI would be autonomous. I added a Director AI to make the game a bit more challenging for the player once the number of unfound fruit objects dropped to a certain level. Another reason for the Director AI was that many games use this approach to manage the overall experience the player is having, and I knew that in the future I would want to be able to take advantage of it.

I have largely succeeded in my goals. For the most part, the AI will run around the map and pick up fruit objects without any outside aid. And, at times, she can be quite challenging to defeat. At times. The outcome of the match is heavily dependent upon the placement of the fruit objects. I chose to create a Goal Generator that randomly places the fruit objects on the ground, so the player is never allowed to simply memorize where the fruit is. But this also means that the AI won’t always provide a challenge to the player. This is the problem that I am having right now.

Players want a consistent experience. If they choose the easy difficulty for a game, they expect it to be easy…at least comparatively so. If they choose the hard difficulty, they naturally expect it to be much harder than it was when set to easy. A match on easy can’t be more difficult than a different match that was played on the hard setting on the same map. This is inconsistent and frustrating for a player and will probably lead the player to quit playing the game and move on to something else.

A truly autonomous AI is what I wanted to create: an AI that would act on its sensory inputs and make basic decisions based on that information. But in creating this, I have an AI that I can’t easily tweak to provide a consistently challenging experience for the user. I had a goal at the beginning of this project, and that goal was the whole point of the project. I won’t insert code that will allow the AI to cheat. That wasn’t part of the plan, and in my eyes it would constitute a complete failure for this project. By adjusting when the Director AI begins providing aid to Suzy (the capuchin monkey), I can adjust the difficulty to some degree. But it won’t provide consistency.

I will do my best to balance the levels so that the AI provides a reasonable experience, even if it is a bit inconsistent. I am sure my AI design is a factor in this issue; an AI designed by someone with more experience would likely perform much better under the circumstances. I set out to design an autonomous AI that would pick up randomly placed fruit, and that is what I have.

I guess the old adage is true: “Be careful what you wish for.”

Finding Your Own Process

Over the course of making Capuchin Capers, I have tried very hard to document as much as I can. I do this so that I can refer to these documents after the game is done and have a better picture of what it takes to create a game design document. Now, if you go to Gamasutra, you can find articles on various developers’ views as to what needs to be in a game design document and how it should be formatted. But do their design documents really fit the way that you create video games?

There is some information that needs to be in any game design document, of course. Anybody creating a design document knows that the levels themselves need to be described. Characters or creatures, including monsters, need to be detailed. There are elements that should be included, but does your design document’s formatting need to adhere to the same template that I might use? Absolutely not. At least, that is how I feel about it. If you’re submitting a proposal to a publisher, things change. They have their own expectations for what needs to be in a proposal and/or design document, and you would need to follow those guidelines to give your game the best chance to succeed and get published. But when you’re developing your own games as an indie developer, nobody outside of your small team is ever going to see these documents. They can be formatted in any manner that makes the most sense to you as a team.

When I started Capuchin Capers, I wanted to have a roadmap for the game. I wanted to know what assets I needed to create, and the overall architecture of the game. I wasn’t even close. If you view the Trello board for this game, which can be found here, you will see that I resorted to creating a ‘General’ card just to cover some of the more egregious things that I missed when first planning this game. As I discovered the many parts of the game that I had failed to plan out at the beginning, I started to write those documents post-creation. I did this, if for no other reason, to have a clearer picture of what I needed to plan for the next game. During this process, I discovered some things about the way that I create documents and what I end up putting in those documents. My documents aren’t formatted or structured the same as the documents featured in some of the aforementioned articles.

I would have worried about this when I first started creating whole games, instead of modifications to pre-existing games. But I’m not worried about this at all now that I have a little more experience. We all think differently, and that’s a good thing. So, why should my documents follow a template that another developer might use? Is my way the best way, or even a good way? It clearly isn’t the best way, since I overlooked so much; it may not be a good way for you, either. It probably isn’t, to be completely honest. But, it is a good way for me and that is what matters.

When I am done with Capuchin Capers, I will take the documents that I have generated, along with the topics that I know that I missed, and I will combine them into a single file. This will give me a good starting point for our next game. Will it be complete? No, it won’t. Will it be the right way to write a game design document? Yes, it will. Because it will be the right way to write a game design document for me.

Hoping for a Storyboard Ending

Procedural Cinematic Storyboard

As I’ve said before, I’m really reaching for these article names. Anyway, I just wanted to briefly touch on the idea of Indie developers doing storyboarding for their in-game cinematics.

I don’t know how typical I am of the average indie developer (I’m talking about the lone-wolf developer doing it all alone), but I didn’t start out with any storyboards for Odin’s Eye. I thought to myself that the end-game cinematics would be so short that it would be a waste of time to do any type of storyboarding. I already had an idea of what I wanted to do there, so off I went with my half-formed plan.

It was a disaster. There was nothing that I could use from that first attempt.

So, I knew then that I had to do more than open the editor and start throwing things together. Without any formal training, but with some basic idea of what a storyboard was supposed to look like, I started to create my first draft of the storyboard. I say “first draft” because you will need to move cells around to get the scene flow that you want. That may be a storyboard’s greatest attribute…the fact that you can shuffle the cells around if you don’t like the way each shot flows into the next.

There is another reason why I said “first draft”, and I actually made the same mistake again. A little background may be helpful in understanding the mistake I am referring to. I spent quite a lot of time in art classes when I was young. I wanted to make cartoons like all the ones that I loved, so that is what I planned to do with my life. Spoiler: I didn’t do that. Anyway, I would spend hours drawing things. Anything, really. I eventually lost my love for art, but that is a bitter story that I won’t share, and it isn’t important anyway. I can draw. At one time, I could draw really well, but not great. So when it came time to create the storyboards for Odin’s Eye, I thought this would be a good opportunity to pick up my pencils and create the storyboards that way. I had at one time been talented and skilled enough to do that, so why not? Well, that part about “had at one time” is key. Compared to your average person, I can still draw well. But to attempt something like creating quality storyboards? Nope.

After wasting the better part of a day creating a portion of the storyboard for the cinematic that plays when the player fails in Odin’s Eye, I stopped in disgust. The art was terrible (for someone who spent years in art classes), and I hadn’t even finished the storyboard. The next morning I was honest with myself. I admitted that I didn’t have it anymore, and that to continue like that would waste at least one more day. That was time I didn’t have to waste.

I decided to play to my strengths, such as they are. I had animations and models right there in the editor. I knew how to use Sequencer, so I could create the cells for the storyboards using actual in-game assets. The very same assets that were going to be used in the cinematic itself. Why not just use those instead of wasting a bunch of time? Ego could have gotten the best of me, but I always try to be honest with myself. This was a case where it paid off. So, how did I make this mistake again? For the same reason I made it on Odin’s Eye, to be perfectly honest. For some reason, I believed that I could create the storyboard faster with my pencils than I could in Sequencer. I was wrong then, and I was wrong again.

Even though I didn’t look up the proper formatting for a storyboard, if there is such a thing, it was immensely helpful to have something to refer to when laying out the cinematic in Sequencer. I review the storyboards that I create many times during the process of creating the cinematics. No matter how they are made, they will be useful to have, so don’t hesitate. You will waste more time redoing things by “winging it” than it would take to create some storyboards. They are worth the effort.

A (Game) State of Affairs

Until recently, the ‘game’ wasn’t much of a game. There really wasn’t a beginning to the level while playing through. Once the level loaded, Suzy’s AI would immediately begin to run and off she would go. But, it didn’t matter how many of the fruit objects she picked up: one, three, ten…all. It didn’t make any difference. There was no true ending to the level. That is where the GameMode comes in.

The name “GameMode” is a bit misleading, as you may not expect the game mode to handle the state of the game. But according to the documentation here, the game mode keeps track of the current level’s progress according to the rules of that particular game. In the case of Capuchin Capers, the rules are very simple. When the level starts, you are in a pre-game state and are able to run around the level to get a feel for its size and shape. While in the pre-game state, you can’t see any of the fruit objects or the game UI (which could reveal the number of fruit objects if visible). The game proper doesn’t start until you enter the grass hut located near your beginning position. Once you enter the hut, the game state changes to the in-game state.

The in-game state is where you are competing with Suzy to find the most fruit objects. The UI is visible, Suzy’s AI and the Director’s AI are enabled, and she will immediately begin searching for fruit objects. The game mode will remain in this state until one of two things happens. If Suzy finds half of the fruit objects (rounded down), she wins the competition. Or, if the player finds more than half of the fruit objects, they win. As you can see, Suzy wins all ties under these simple rules. Once one of these two things occurs, the game will transition to the post-game state.
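Those win conditions boil down to a single integer comparison. The helper below is my own sketch, not the game’s actual code; checking Suzy’s count first is exactly what hands her every tie:

```cpp
// Sketch of the win check described above (the names are mine, not
// from the game's source). With integer division, TotalFruit / 2
// rounds down, and testing Suzy's count first means she wins any tie.
enum class EWinner { None, Suzy, Player };

EWinner CheckWinner(int TotalFruit, int SuzyFound, int PlayerFound)
{
    const int Half = TotalFruit / 2; // rounded down

    if (SuzyFound >= Half)
        return EWinner::Suzy;       // Suzy only needs half

    if (PlayerFound > Half)
        return EWinner::Player;     // the player needs strictly more

    return EWinner::None;           // keep playing
}
```

With ten fruit objects on the map, Suzy wins the moment she finds her fifth, while the player has to find six.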

In the post-game state, the end of the level is executed, starting by taking the player’s camera and translating it high above the player’s head. From this bird’s-eye view, the camera moves to a location high above the hut, looking down towards it. These steps are necessary because we can’t know where the player will be on the island when they or Suzy wins the match; we need a way to get the camera to a position that we can always guarantee, regardless of where the player was at the end. Once the camera is in this “safe” location, we trigger the level sequence. The level sequence’s camera starts at the exact same location as the “safe” location, so the cinematic’s start is seamless. From there, we play the cinematic for the player as a reward for playing through the level.

In all of this, there are a lot of moving parts, even in a game as simple as Capuchin Capers. So, how is all of this done in the game mode? A state machine. The implementation of the state machine is actually fairly simple compared to its impact. Warning: C++ incoming. There is a base class named CCStateBase with the following declaration:

class CAPUCHINCAPERS_API CCStateBase
{
public:
	CCStateBase();
	virtual ~CCStateBase();

	/** Called when the state is first entered. */
	virtual void Enter();

	/** Called each tick for this state. */
	virtual void Update(float Delta);

	/** Called before the state is exited. */
	virtual void Exit();

	void SetParent(ACapuchinCapersGameMode* ParentParam);
	ACapuchinCapersGameMode* GetParent();

protected:
	ACapuchinCapersGameMode* Parent;
};

Each of the states mentioned above (pre-game, in-game, and post-game) derives from this class. The derived classes define what each of their methods should actually do. Obviously, the Enter() method for the in-game state would be completely different from that of the post-game state. This is really nice to work with, because it compartmentalizes the functionality of each state and makes it easier to visualize how everything flows. From the above code, it is easy to see that the game mode is the owner of the state machine and is set as “Parent” in each of the state machine’s objects.
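Stripped of the Unreal types, the pattern looks like this. The classes below are an engine-free illustration of mine, not the actual derived states; a string log stands in for real side effects such as enabling Suzy’s AI or showing the UI:

```cpp
#include <string>

// Minimal base class mirroring the Enter/Update/Exit hooks above.
struct StateBase
{
    virtual ~StateBase() = default;
    virtual void Enter(std::string& Log)  { Log += "base-enter;"; }
    virtual void Update(float /*Delta*/, std::string& Log) { Log += "base-update;"; }
    virtual void Exit(std::string& Log)   { Log += "base-exit;"; }
};

// A derived state overrides only the hooks it cares about; the real
// in-game state would enable the AI and show the UI in Enter().
struct IngameState : StateBase
{
    void Enter(std::string& Log) override { Log += "ingame-enter;"; }
    void Exit(std::string& Log) override  { Log += "ingame-exit;"; }
};

// Dispatch through a base reference, just as the game mode does
// through its CurrentState pointer.
std::string RunStateOnce(StateBase& State)
{
    std::string Log;
    State.Enter(Log);
    State.Update(0.016f, Log);
    State.Exit(Log);
    return Log;
}
```

Running an IngameState through RunStateOnce() hits the overridden Enter() and Exit() but falls back to the base Update(), which is the whole point of the pattern: each state supplies only the behavior that makes it different.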

In the game mode, each of the three game states is created as an object, with a pointer to the base class CCStateBase declared as well. That pointer, named “CurrentState”, is where all the magic happens. By using polymorphism we can smoothly change between states, with all the functionality that we have built into our state machine, and the game mode object never needs to be aware of the state machine’s current state. This is done, primarily, by the SetNextState() method:

void ACapuchinCapersGameMode::SetNextState(ECCGameState NextState)
{
	// If this is the first call to this method,
	// CurrentState will be nullptr. Otherwise,
	// we need to exit the current state before
	// entering the next.
	if (CurrentState)
	{
		CurrentState->Exit();
	}

	switch (NextState)
	{
	case ECCGameState::IngameState:
		CurrentState = &IngameState;
		break;

	case ECCGameState::PostgameState:
		CurrentState = &PostgameState;
		break;

	default:
	case ECCGameState::PregameState:
		CurrentState = &PregameState;
		break;
	}

	if (CurrentState)
	{
		CurrentState->Enter();

		// Broadcast the state change to any
		// listeners of this delegate.
		OnGameStateChanged.Broadcast(NextState);
	}
	else
	{
		UE_LOG(LogTemp,
			Warning,
			TEXT("CurrentState is NULL."));
	}
}
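The OnGameStateChanged member used above is an Unreal multicast delegate (declared with one of the engine’s macros, e.g. DECLARE_MULTICAST_DELEGATE_OneParam). Under the hood it is essentially a list of callbacks that all get invoked on Broadcast(), which can be sketched without the engine like this; every name here is my own, not Unreal’s:

```cpp
#include <functional>
#include <utility>
#include <vector>

// Engine-free sketch of a multicast delegate: listeners register a
// callback, and Broadcast() invokes every one of them with the new
// state. Unreal's real delegates add reflection and weak-pointer
// safety on top of this same idea.
enum class EGameState { Pregame, Ingame, Postgame };

struct FGameStateChangedSketch
{
    std::vector<std::function<void(EGameState)>> Listeners;

    void Add(std::function<void(EGameState)> Fn)
    {
        Listeners.push_back(std::move(Fn));
    }

    void Broadcast(EGameState NewState)
    {
        for (auto& Fn : Listeners)
            Fn(NewState);
    }
};
```

The HUD, for instance, can register a listener once and then show or hide itself as the broadcasts come in, without the game mode ever knowing the HUD exists.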

For a mechanism that controls the entirety of the game’s rules, this state machine is incredibly simple. There is quite a bit that isn’t being shown here, of course. This article isn’t meant to be a tutorial on how to implement a state machine. For that, you would definitely be better served to find a good book or tutorial aimed specifically at that. I can recommend “Game Development Patterns and Best Practices” by John P. Doran and Matt Casanova, as that is where I learned this design pattern.

In closing, I have to say that the structure of the Unreal Engine is complex, and what every part is supposed to be doing isn’t always clear when you are just starting out. I never would have thought for a moment that the state machine that controls the game would go in the game mode, while all of the data, such as player scores, would go in the AGameStateBase-derived object. This can all be a bit confusing when first learning it, but once you understand what everything is doing it becomes clearer and easier to grasp. Happy developing!

All the Modeling Tools

In game development today, there seems to be no shortage of tools vying for our attention. From programming to texturing to modeling, the selection of tools can be dizzying. When Epic announced the inclusion of the modeling tools plugin for the Unreal editor, I thought that this was nothing more than a replacement for the older BSPs already available. A nice addition to be sure, but not a serious tool to be used to create game content. Then Quixel released their videos on the creation of their medieval village demo, and I found a new appreciation for the tools that Epic has generously given us.

The obvious use is for blocking out a scene, and I have mixed feelings about their use for this purpose. Every time you use a boolean operation on two objects, it creates a third object in the content folder. This can lead to a huge number of useless intermediary objects before you get to the final shape that you want for your block-out. Worse, the created objects are all named something cryptic like ‘Boolean_a2b3x092xi3202’ or some such name. The editor appears to take the name of the operation and append a UUID value to it. You can specify a name other than ‘Boolean’ in the tools, so you could use this to separate the final object you want from the intermediary objects you don’t care about. This leaves you with many unwanted objects named ‘Boolean-xxxx’ and one named with the value you provided in the UI. This is the approach that I used, and while it isn’t the most convenient, it does work. Still, this tool is far better than BSPs in my opinion, and is a welcome addition to the editor.

Where this tool really seems to shine is an application where I wouldn’t have thought it useful, but one that is shown to great effect in Quixel’s videos mentioned above. Using preexisting assets, along with the tools to reshape them, allows for the reuse of assets in a way that would have been much harder otherwise. What I really like about this toolkit, and even BSPs to some extent, is the fact that you are in the game level itself while using the tools. You can shape something to fit the exact position and placement that you need, with the look that you want. This could be done if you are creating all of your level’s geometry in a separate DCC, but I have never liked that approach. I want to see what my level or asset looks like in the engine, not in the renderer that is shipped with the DCC. No matter what settings I tweak, I have never gotten MAX, Blender, or Houdini to render my assets the same as Unreal does. There is also the overhead of having to define each material twice: once in the DCC of your choice, and again in your engine. We’ve all been there, and there is an element to this that cannot be escaped. It is a necessary evil. But it is nice that it can be lessened to a degree.

The finished hut, placed in the first level. While this image doesn’t show the full detail of the hut, it gives a good impression of its look and feel. To better see the weathering written about below, see the featured image.

I have recently finished the bamboo hut where the player will go to initiate the start of the level in Capuchin Capers. This will allow the player to explore the island a bit and get familiar with the terrain…or just sightsee if they like. Once they are ready, they will enter the hut and the level will begin. Because of this, the hut will be included in every level and is the only structure in the game. It is likely to receive quite a bit of scrutiny from the player, so it has to match the visual fidelity of the rest of the level while having no strange issues with collision or scale. I decided to use the editor’s modeling tools to block this out. Previously, I would have either used BSPs (if I could talk myself into enduring that experience), or I would have used an exported mannequin model as a reference for scale. The latter would have been a big mistake.

I wanted a ramp leading up to the entrance to the hut, but the ramp needs to be long enough to clip through the terrain. I don’t know every location where this will be placed yet, so the model needs to account for that placement. But, I also do not want the hut as a whole to take up more space than is absolutely necessary. I was able to make the angle of the ramp steep enough to make it compact-ish, while still being able to actually walk up the ramp. This could have been done with BSPs, but that would have been a painful experience, to be sure. Aside from the ramp, I was able to easily get the overall shape of the hut the way that I wanted it. I had a specific look in mind and it was fairly easy to get to that look with the tools. I was still using the tools like a caveman, due to my experience with BSPs, so I could have refined the hut’s shape far more than I did in-editor. But my block-out was complete, with all the windows where I wanted them and at the correct heights. I exported this block-out to Blender to build the actual geometry for the hut.
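The trade-off between ramp angle and footprint is simple trigonometry, and a quick sanity check like the one below is how I think about it. The numbers are illustrative, not the hut’s actual dimensions:

```cpp
#include <cmath>

// Horizontal space occupied by a straight ramp of a given rise and
// incline: footprint = height / tan(angle). A steeper angle means a
// smaller footprint, but walkability suffers past a point.
float RampFootprint(float HeightMeters, float AngleDegrees)
{
    const float Radians = AngleDegrees * 3.14159265f / 180.0f;
    return HeightMeters / std::tan(Radians);
}
```

A two-meter rise at 30 degrees needs roughly 3.5 meters of floor space; pushing the incline to 45 degrees cuts that to 2 meters, which is the kind of saving that kept the hut from sprawling.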

I used geometry from preexisting assets in an attempt to maintain some continuity in the materials used. The tree trunks that make up the posts for the hut are palm trees that are actually in the levels. Similar assets were ‘repurposed’ in the same way, such as bamboo. I then used the same technique shown in Quixel’s video on the creation of their houses in their demo. Utilizing a separate UV channel to introduce mud, dirt, and grime to the hut really made all the difference. While most of the geometry used to build out the hut has shared UVs, or tiling textures, the approach Quixel demonstrates allowed me to break up the feeling that the materials are all shared. It gave each piece of geometry the feel of being a separate component in the hut, not just a bunch of copies of the same thing…which, of course, they are. I used Painter to bake out my AO, curvature, thickness and other maps, and then to create the masks needed to create this effect in Unreal.

I could have used Unreal’s modeling tools for much more than I did. They are not just a toy, or a replacement for BSPs, as I originally thought. They are a valuable tool in the toolkit, and one I plan to explore further. Thanks for the read.