A Traveling Salesman Walks into a Bar…

Work has steadily continued on Suzy’s AI, and it was necessary too. As she was at the time of the last post, she would have been absolutely no challenge to the player. I could have just allowed her to cheat, letting her know where all of the fruit objects were and using some form of random number generation to decide whether she ‘found’ one or not. That is the approach that some games take, and I suspect that many indie developers would have done just that. I can’t blame them; creating AI with any amount of intelligence is very hard. It would be very easy to do that and move on, especially if you are an indie developer who is building a commercial game. But I absolutely will not take this route. The whole point behind this entire project is to gain experience and knowledge about making good AI. Did I think about giving in? Yes. I have thought many times about how much easier it would be, but again, that would be missing the point of this project.

So I asked myself how I might go about searching for these fruit objects, or any objects, in an environment as large as a tropical island. The first thing I would need to do to search effectively is to orient myself in my surroundings by finding some landmarks so that I won’t get lost. Aha! I can have the AI travel to landmarks that are placed all over the islands. Doing this also gives the appearance that Suzy really is using some form of intelligence to perform this search. If the player watches her, they will notice that she is traveling to a location with a very distinct landmark. Hopefully this will feel fair to the player, as they too can orient themselves by using these landmarks. Moving Suzy around the island isn’t that difficult with the custom EQS generator that was made specifically for this purpose. But how exactly should the AI choose which landmark to visit? And how should she go about visiting each one? I chose to have her randomly select a landmark to visit, but I didn’t want her to visit the same landmarks over and over without going to each one first. That was important to me, because it would then feel more like she really is searching, rather than just randomly running around. To implement this, I chose to use the “Traveling Salesman” approach for visiting each location.

The idea behind the traveling salesman approach is that the ‘salesman’, Suzy in our case, will travel from her starting location to each of the destination points, or landmarks. But, she will not backtrack to a previously visited landmark. She will only go to unvisited landmarks until all of them have been visited. Only after all of the landmarks have been visited will the AI be allowed to revisit a landmark.
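To give a rough idea of the selection rule, the sketch below draws a random landmark from the set of unvisited ones and only refills that set once it is empty. This is not the actual behavior tree task; the FLandmarkSelector struct and its member names are hypothetical, used purely to illustrate the “visit everything before repeating” logic.

```cpp
// Hypothetical sketch of the "visit every landmark before repeating" rule.
// FLandmarkSelector and its members are illustrative names, not project code.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"

struct FLandmarkSelector
{
    TArray<AActor*> AllLandmarks;   // every landmark placed on the island
    TArray<AActor*> Unvisited;      // landmarks not yet visited this cycle

    AActor* PickNextLandmark()
    {
        // Once every landmark has been visited, start a fresh cycle.
        if (Unvisited.Num() == 0)
        {
            Unvisited = AllLandmarks;
        }

        // Choose randomly among the landmarks that have not been visited yet,
        // so Suzy never revisits one until all of them have been seen.
        const int32 Index = FMath::RandRange(0, Unvisited.Num() - 1);
        AActor* Next = Unvisited[Index];
        Unvisited.RemoveAtSwap(Index);
        return Next;
    }
};
```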

Once at the landmark, I wanted Suzy to give the place a good search. So how did I implement this? The same technique that was used to move her to the landmark: Traveling Salesman. I actually implemented this first, so that I could make sure that it would work the way that I wanted it to. Once she arrives at the landmark, the behavior tree task generates a random number of points within a range that can be set in the behavior tree. For this, I felt that 3-6 points around the landmark would be fairly good, but the range can be set between 2-8. Once these points are generated, they are handed off to Suzy’s behavior tree so that she can run to each of them, again utilizing the traveling salesman approach. It gives Suzy the nice appearance of being an excitable little monkey that is running around trying to find these fruit objects.
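For anyone curious what generating those search points might look like, here is a minimal sketch assuming UE4.20+’s UNavigationSystemV1. The function name, radius, and defaults are illustrative, not the project’s actual behavior tree task; the real task exposes the 2-8 point range as an editable property.

```cpp
// Hypothetical sketch of scattering reachable search points around a landmark.
// The function name, radius, and point range are illustrative assumptions.
#include "CoreMinimal.h"
#include "NavigationSystem.h"

static TArray<FVector> GenerateSearchPoints(UWorld* World, const FVector& LandmarkLocation,
                                            int32 MinPoints = 3, int32 MaxPoints = 6,
                                            float SearchRadius = 1500.f)
{
    TArray<FVector> Points;

    UNavigationSystemV1* NavSys = UNavigationSystemV1::GetCurrent(World);
    if (!NavSys)
    {
        return Points;
    }

    // Pick how many spots Suzy will check around this landmark.
    const int32 NumPoints = FMath::RandRange(MinPoints, MaxPoints);

    for (int32 i = 0; i < NumPoints; ++i)
    {
        // Ask the navigation system for a reachable point near the landmark,
        // so every search point can actually be walked to.
        FNavLocation Result;
        if (NavSys->GetRandomReachablePointInRadius(LandmarkLocation, SearchRadius, Result))
        {
            Points.Add(Result.Location);
        }
    }
    return Points;
}
```

The resulting array is what gets written to the blackboard and then consumed one point at a time, in the same visit-everything-before-repeating fashion as the landmarks.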

While Suzy’s AI still has some work to do before she will be challenging enough to make this game fun, I think that I am close to having the development part done. If I can just make her a little more successful at finding the fruit objects quickly, it will just be a matter of balancing the numbers to get things just right. I hope.

Coming to Our Senses

It has been quite a while since the last post here, and that is because work has been moving forward on Suzy’s AI, along with some ancillary code development for the AI Perception system and the Environment Query System (or EQS for short). Also, as can be seen in the image for this post, a test level was constructed to better represent the conditions the AI will need to operate in. This gave me a much better idea of how this AI will perform “in the wild”, as some like to say.

Suzy’s AI has come a long way, and it is now close to being implemented to the point that the AI outline describes. Whether or not it will be sufficient to make the game challenging enough is yet to be seen, but I am encouraged by the progress. With a much better understanding of all of the moving pieces in the Behavior Tree/Blackboard approach to AI design, I have been able to build up a reasonably intelligent AI that will wander looking for fruit to pick up. But, once the AI reaches a predefined level of frustration, it will seek out the player to follow them in the hopes of stealing a piece of fruit that the player may lead it to.
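As a rough illustration of the frustration mechanic, the snippet below keeps a float on the blackboard and bumps it each time a search comes up empty; a decorator in the behavior tree can then switch to the follow-the-player branch once it crosses a threshold. The “Frustration” key name and the increment are assumptions for the sake of the example, not the actual values used in the project.

```cpp
// Hypothetical helper showing one way to track frustration on the blackboard.
// The key name "Frustration" and the increment amount are illustrative only.
#include "BehaviorTree/BlackboardComponent.h"

static const FName FrustrationKey(TEXT("Frustration"));

void ReportSearchResult(UBlackboardComponent& Blackboard, bool bFoundFruit)
{
    if (bFoundFruit)
    {
        // Finding fruit calms Suzy back down.
        Blackboard.SetValueAsFloat(FrustrationKey, 0.f);
        return;
    }

    // Each failed search raises frustration; a decorator on the behavior tree
    // can branch into the "follow the player" subtree once it passes a threshold.
    const float Current = Blackboard.GetValueAsFloat(FrustrationKey);
    Blackboard.SetValueAsFloat(FrustrationKey, Current + 1.f);
}
```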

To help Suzy find fruit more easily, and to make the AI more challenging for the player, a new sense had to be created for UE4’s AI Perception system: Smell. With a sense of smell, the AI doesn’t have to actually see a piece of fruit to find it. This sense of smell respects not only the direction of the wind, but also its intensity. By taking the wind vector used in the newer atmosphere system’s material and converting it into a material parameter collection, the wind’s values can be piped into the perception system. In this way, the player will get a visual cue as to how, or why, the AI can sense them even when they remain unseen by the AI. It isn’t perfect by any means, but I feel that it is a great addition.
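The wind-aware part of the check can be sketched roughly as below. The collection asset, the “WindVector” parameter name, and the idea of storing intensity in the alpha channel are all assumptions for illustration; the real sense lives inside a custom AI Perception sense class rather than a free function.

```cpp
// Hypothetical sketch of a wind-aware smell check. The parameter name and the
// direction-in-RGB / intensity-in-A layout are assumptions for illustration.
#include "Kismet/KismetMaterialLibrary.h"
#include "Materials/MaterialParameterCollection.h"

bool CanSmellSource(UObject* WorldContext, UMaterialParameterCollection* WindCollection,
                    const FVector& SourceLocation, const FVector& SnifferLocation,
                    float BaseSmellRange = 2000.f)
{
    // Read the same wind values the atmosphere material uses, via the
    // material parameter collection.
    const FLinearColor WindParam =
        UKismetMaterialLibrary::GetVectorParameterValue(WorldContext, WindCollection, TEXT("WindVector"));
    const FVector WindDir(WindParam.R, WindParam.G, WindParam.B);
    const float WindIntensity = WindParam.A;

    const FVector ToSniffer = SnifferLocation - SourceLocation;
    const float Distance = ToSniffer.Size();

    // Scent carried downwind reaches further; scent fighting the wind reaches less far.
    const float Downwind = FVector::DotProduct(WindDir.GetSafeNormal(), ToSniffer.GetSafeNormal());
    const float EffectiveRange = BaseSmellRange * (1.f + Downwind * WindIntensity);

    return Distance <= EffectiveRange;
}
```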

Finally, in this game Suzy is using a NavMesh Invoker to create a dynamic navigation mesh around her everywhere she goes. This is much better than trying to create a huge navmesh that encompasses the entire level. At best, that would be very time consuming during development due to the need to rebuild a huge navmesh whenever objects are moved in the level. At worst, the navmesh may be too big to generate at all, which would require an entirely new set of systems to be brought into the project (such as level streaming).
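Setting the invoker up is mostly a matter of adding the component to the character, as in this minimal sketch (ASuzyCharacter and the NavInvoker member are illustrative names, and navigation invokers also need to be enabled in the project’s Navigation Mesh settings).

```cpp
// Hypothetical excerpt from Suzy's character constructor; class and member
// names are illustrative. The generation and removal radii are left at their
// editor defaults here and can be tuned on the component in the editor.
#include "NavigationInvokerComponent.h"

ASuzyCharacter::ASuzyCharacter()
{
    // The invoker keeps a dynamic navmesh region built around Suzy as she moves.
    NavInvoker = CreateDefaultSubobject<UNavigationInvokerComponent>(TEXT("NavInvoker"));
}
```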

With a navmesh invoker, we can eliminate these issues. But, and you knew that ‘But’ was coming, navmesh invokers present their own set of issues. The largest issue is that the AI can’t be given a target location to move to if that location is outside of its generated navmesh. For example, if the AI’s navmesh has a radius of 3000 units (the default) and you were to specify a location that is 4500 units away, that ‘move-to’ command would simply fail. The location is unreachable to the AI because it can’t build a path from where it is to where you are directing it to go. A solution that is still being developed is to use the EQS to generate an array of points leading from the AI’s current location to the target location. This will require multiple ‘move-to’ commands to go from the start to the end of the path, but hopefully it will mitigate, if not eliminate, this problem.

There is an issue with the fact that the EQS is generating a straight line from point A to point B, and no tests can be used to score a better path. But, given the alternative, I feel that this is a good start, if not a great solution.
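In spirit, the idea is just to break one long move into hops short enough to stay inside the navmesh the invoker has already built. The standalone sketch below subdivides the straight line from start to goal; the function name and hop distance are assumptions, and in the project the points come from the custom EQS generator rather than a helper like this.

```cpp
// Hypothetical illustration of breaking a long move into invoker-sized hops.
// The real project generates these points with a custom EQS generator; this
// standalone version simply subdivides the straight line from start to goal.
#include "CoreMinimal.h"

TArray<FVector> BuildWaypointsToward(const FVector& Start, const FVector& Goal,
                                     float MaxHopDistance = 2500.f) // kept below the 3000-unit invoker radius
{
    TArray<FVector> Waypoints;
    const float TotalDistance = FVector::Dist(Start, Goal);
    const int32 NumHops = FMath::CeilToInt(TotalDistance / MaxHopDistance);

    // Each intermediate point should sit inside navmesh the invoker has already
    // generated by the time the AI gets there, so a chain of short move-to
    // commands can succeed where a single long one would fail.
    for (int32 i = 1; i <= NumHops; ++i)
    {
        const float Alpha = static_cast<float>(i) / NumHops;
        Waypoints.Add(FMath::Lerp(Start, Goal, Alpha));
    }
    return Waypoints;
}
```

Each waypoint is then fed to its own ‘move-to’ in sequence until the final target is reached.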