Sonntag, 28. März 2010

Behavior - Alternating tasks

From now on, it is fairly simple to extend and enhance our AI's behavior:

We'll let building remain our priority, but whenever we have dropped a converter, we will pick up a power-up while our converter sinks into the ground.
Once we have picked up the power-up (or if there is no power-up available) we get into position to drop the next converter. For the time being we will leave our behavior totally deterministic and just add tasks according to the principle of prioritization.

With this small modification our pacifistic AI already behaves almost like a (somewhat greedy) human player...

Without changing our task structure any further, it can never be wrong to shoot at anything that moves! So let's destroy enemy converters and the enemy tank whenever they are in reach.

Finally, to use our nice catching behavior that we implemented earlier, let's go and hunt down the enemy once it has less than a quarter of its maximum health.

With this very basic behavior, which - now that we have all tools like pathfinding and building available - can easily be enhanced, it is already quite fun to play against the AI.
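To make the prioritization idea concrete, here is a minimal sketch in Python (Squar itself is written in C#/XNA). All task names and state predicates are invented for illustration; the real AI works on its own game state. Shooting at anything in reach happens regardless of the current task, so it does not appear here.

```python
# A minimal sketch (not the actual Squar code) of deterministic, prioritized
# task selection. All names and predicates are invented for illustration.
def get_new_task(state):
    """Return the highest-priority applicable task."""
    if state["enemy_health"] < 0.25 * state["enemy_max_health"]:
        return "HUNT_ENEMY"        # chase a badly wounded enemy
    if state["converter_sinking"] and state["powerup_available"]:
        return "COLLECT_POWERUP"   # bridge the time while the converter sinks
    return "BUILD"                 # building remains our priority

# Example: a converter is sinking and a power-up lies nearby.
print(get_new_task({"enemy_health": 80, "enemy_max_health": 100,
                    "converter_sinking": True, "powerup_available": True}))
# COLLECT_POWERUP
```

The point of the strict ordering is that the behavior stays deterministic: the same situation always yields the same task.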

Pathfinding - Infrastructure

After supplying our AI with an easy way to find its path from everywhere to everywhere, we want it to do something useful. And in order to win a game in Squar, our AI has to build some infrastructure.

Again we will do some kind of pathfinding, this time for what we call the "buildpaths". Boring? Not at all! What we are doing now will be quite different to what we did before, because of the different circumstances.

The major differences to our navigation pathfinding are the following:
a) While the walls on our map are static, the squares on which we can build towers are not! Whenever our enemy builds a tower on a hitherto neutral square, we have to build around this tower. So the pathfinding we apply for our buildpaths has to be dynamic (i.e., it has to be performed at runtime) - which means that we cannot use something as expensive as Floyd/Warshall anymore.
b) While our graph for navigation contained edges between diagonal neighbors (and I pointed out that this was indeed necessary) we don't have diagonal edges here, because our infrastructure is restricted to orthogonal connections.

While a) means that we have to do things efficiently, b) tells us that we are lucky enough to have an algorithm for this situation that runs in O(n):
As we have a single source shortest paths problem one could think that something more expensive would be needed, but in a graph where all edges have the same costs we can simply use a modified breadth-first search (BFS) as follows:

Initially, let our BFS-queue contain all squares that are somehow connected to our base. For all these squares we can store that it costs us 0 towers to connect them to our base. As long as our queue is not empty we take a square ("square") from the queue.

Starting in square, we walk up to 4 steps in all 4 (orthogonal) directions.
As long as we don't encounter something that blocks our way (e.g., an enemy tower) we know that we could build a tower here ("nextsquare"), which would raise our costs to reach this position by 1 (compared to "square"). If we have already found a cheaper way to reach nextsquare before, we do nothing. But if our new way to reach nextsquare is cheaper than anything we have found so far, we update the costs to reach nextsquare to costs[square]+1 and define buildpathpredecessor[nextsquare] = square.

There are some little exceptions:
E.g., we have to make sure that we do not step over the border of our gameboard. Or, e.g., we can build a connection under a wall (i.e., we will not break if we detect a wall while going into a direction), but we can not build a tower under a wall (i.e., we will never set a wallsquare as a predecessor).
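The whole modified BFS described above, including the border and wall exceptions, can be sketched as follows. This is Python for brevity (the game is C#/XNA), and the cell classification ("free", "wall", "blocked") as well as the reach of 4 steps are my own simplifications of the game rules.

```python
from collections import deque

INF = float("inf")
REACH = 4  # assumed maximum orthogonal distance one tower can bridge

def buildpath_costs(width, height, cell, sources):
    """Modified BFS over the gameboard.
    cell(x, y) -> 'free' | 'wall' | 'blocked'; sources are the squares
    already connected to our base (cost 0 towers)."""
    costs = {s: 0 for s in sources}
    pred = {}  # buildpathpredecessor
    queue = deque(sources)
    while queue:
        square = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            for step in range(1, REACH + 1):
                nx, ny = square[0] + dx * step, square[1] + dy * step
                if not (0 <= nx < width and 0 <= ny < height):
                    break              # never step over the border
                kind = cell(nx, ny)
                if kind == "blocked":
                    break              # e.g. an enemy tower blocks the way
                if kind == "wall":
                    continue           # connection may pass under a wall,
                                       # but we never build a tower on one
                nextsquare = (nx, ny)
                if costs[square] + 1 < costs.get(nextsquare, INF):
                    costs[nextsquare] = costs[square] + 1
                    pred[nextsquare] = square
                    queue.append(nextsquare)
    return costs, pred
```

Because every relaxation adds exactly one tower, the first time BFS reaches a square is also the cheapest, which is what makes this linear-time approach valid here.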

Once we have calculated the cheapest buildpathcosts and predecessors for all squares, we simply check which source (resp. which of its neighbors) is the cheapest to connect to our base.

Then we walk the corresponding buildpath backwards (with the help of the array buildpathpredecessor) and compare, for every square on the path, the distance of our tank to this square.

The square that is returned by this comparison will be the next square we build a tower on.

Samstag, 27. März 2010

Pathfinding - Navigation

So finally, with our distance matrix defined and initialized, we can go ahead and compute our shortest paths. The most popular approach to determine a shortest path between a source and a target is probably the A* algorithm.

However, we will do something completely different: In order to prevent the multiple application of A* for different objects at runtime, we are going to compute all shortest paths in a preprocessing step. This approach is only possible due to the fact that we will never have to take into account any waypoints or paths other than the ones among our 225 squares.

The Floyd/Warshall algorithm computes all shortest paths in a graph in O(n³). It belongs to the area of dynamic programming where solutions to large problems are found by computing the solution for smaller subproblems first and then storing this information and reusing it to find the solution for the large problem.

Given a pair of nodes "from" and "to" and assuming that we already know for all other nodes "x" the shortest paths from->x and x->to, it is clear that the shortest path from->to uses the node x as an intermediate, where the sum of the lengths of from->x and x->to is minimal.

Note: As we are not only interested in the lengths of our shortest paths but also in where we have to go to follow a shortest path, we need something other than the distance matrix to store this information. A naive idea would be to somehow store all paths for all pairs of nodes, but indeed it will be sufficient to store, given a node "from" and a node "to", where we have to go next on a shortest path. We will then simply go to this next node and look up which node is our next stop from there to the node "to", until we reach our destination. We thus introduce a two-dimensional array which will - for the time being - store the information about the intermediate node "x" from our description above.
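A sketch of Floyd/Warshall with this extra intermediate array, in Python rather than the game's C# (the matrix layout matches the distance[,] array described in the previous post):

```python
INF = float("inf")

def floyd_warshall(dist):
    """dist: n x n matrix of edge weights (INF = no edge, 0 on the diagonal).
    Returns (distance, intermediate), where intermediate[i][j] is some node x
    on a shortest path i -> j, or None if i and j are direct neighbors."""
    n = len(dist)
    distance = [row[:] for row in dist]          # don't mutate the input
    intermediate = [[None] * n for _ in range(n)]
    for x in range(n):            # allow x as an intermediate node
        for i in range(n):
            for j in range(n):
                if distance[i][x] + distance[x][j] < distance[i][j]:
                    distance[i][j] = distance[i][x] + distance[x][j]
                    intermediate[i][j] = x
    return distance, intermediate
```

Three nested loops over n nodes give the O(n³) running time mentioned above; for n = 225 that is about 11 million relaxations, which explains why a couple of seconds on the Xbox 360 is plausible.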

Running Floyd/Warshall for our 225-nodes-graph on an Xbox 360 takes only a couple of seconds, which is ok to do during the loading of a map. If we consider this too long we can think about storing the shortest path information later. The good thing is that we will never have to worry about runtime performance.

Now assume that we have 4 neighbor nodes (a, b, c, d) in a row such that none of them contains a wall. Then the shortest path from a to d obviously has length 2+2+2 = 6.
And we will find this information in distance[a,d].

Due to the way (Floyd/Warshall) we computed the intermediates however, we do not necessarily have intermediate[a,d] = b but possibly intermediate[a,d] = c. This is due to the fact that the sums a->b + b->d and a->c + c->d are the same and so it depends on which intermediate was looked at first.

In order to follow a path step by step (and not driving to an intermediate that lies behind a corner, i.e., behind a wall) we want to compute the direct successor nodes. For this task we simply look up the intermediate of from->to. If there is no intermediate, then the nodes are neighbors and we have found the direct successor. Otherwise we look up the intermediate of from->intermediate, i.e. we proceed with the "first half" of our path recursively.
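This recursive lookup can be written in a few lines (again a Python sketch; the intermediate matrix is the one produced by Floyd/Warshall above):

```python
def successor(intermediate, frm, to):
    """Direct successor of frm on a shortest path frm -> to.
    Recurses into the 'first half' of the path until the nodes are neighbors."""
    if frm == to:
        return to
    x = intermediate[frm][to]
    if x is None:
        return to        # neighbors: to itself is the direct successor
    return successor(intermediate, frm, x)

# Example: four nodes 0-1-2-3 in a row; as explained above, Floyd/Warshall
# may have recorded intermediate[0][3] = 2 rather than 1.
inter = [[None, None, 1, 2],
         [None, None, None, 2],
         [1, None, None, None],
         [2, 2, None, None]]
print(successor(inter, 0, 3))  # 1 - the neighbor we actually drive to
```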

We're almost there!! In the example of our last post we have now computed a shortest path that uses only orthogonal and diagonal ways between neighbor squares. Following this path with our tank is already a nice thing, but to finally avoid the zigzagging that may occur here, we have to apply one last method, which is called "string pulling": Assume that the white path is a string. If we take the source and destination of the path and pull, we will get a path that stretches around corners and avoids zigzagging.

The straightforward way to compute string pulling is to go through a path and check whether - given a present node "from" and a destination "to" - we have to stick with our direct successor x, or whether it is possible to go to the successor of x directly. This is the case if and only if there is a line of sight between from and the successor of x. We follow the path as long as possible, i.e. we compute the last successor y such that there is a line of sight between from and y.

Remark: To answer whether we have a line of sight, we avoid high-cost computations like collision detection. Instead, we follow the vector between two squares and check whether the squares in between contain walls. (The step distance 5 that is used in the algorithm below is about a third of a square's length.)
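Both ideas together can be sketched like this (Python instead of the game's C#; the square size of 15 world units is an assumption of mine, only the step distance 5 is from the remark above):

```python
import math

STEP = 5.0  # sampling step along the sight line, about a third of a square

def line_of_sight(a, b, is_wall):
    """Sample points every STEP units on the segment a -> b and ask
    is_wall(x, y); no collision detection needed."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    steps = max(1, int(math.hypot(dx, dy) / STEP))
    for i in range(steps + 1):
        t = i / steps
        if is_wall(a[0] + t * dx, a[1] + t * dy):
            return False
    return True

def string_pull(path, is_wall):
    """Greedy string pulling: from each kept point, jump to the last
    successor that is still in the line of sight."""
    pulled = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_of_sight(path[i], path[j], is_wall):
            j -= 1      # fall back towards the direct successor
        pulled.append(path[j])
        i = j
    return pulled
```

On a straight corridor this collapses the whole zigzag path to its two endpoints; around a corner it keeps exactly the corner point as an intermediate.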

It remains to tell our PacifistAI to use the information we have computed so far. Given our current square (from) and some destination square (to), we can simply overwrite the player's controls-vector with a vector that points from our present position to the center of the square shortestPathsIntermediate[from,to].

If we substitute the content of our method TestAI() by something as simple as...

curPlayer.chassis.thumbstick = Steer(enemy.chassis.positionAsSquare);

...we obtain a hunting behavior (on any user-generated map) that we can finally enjoy as the fruits of our work:

Pathfinding - Preliminaries

Although from a didactic point of view it might be nice to do some easy stuff first, our AI will be utterly useless without a possibility to find its way to, e.g., the next power-up it wants to collect.

Pathfinding is pretty much the same problem in any kind of game. Let us consider a Squar map from above and compare it to a map in an FPS. A bot that wants to determine the best way from room A to room B has exactly the same problem to solve as our tank:

The fact that one could almost call pathfinding an academic discipline of computer science might be a little discouraging to people who are not familiar with the area, so it is a good idea to cheer us all up a little with this video of pathfinding issues in modern games.

Note: The video was made by ai-blog and is part of a very instructive pathfinding discussion.

So although we all had a good laugh now, we also learned that pathfinding is still a problem, even for AAA titles and the professional programmers behind them. My statement from above - that pathfinding is pretty much the same problem in any kind of game - is the key to our success: Find out what makes your game different from others and use these differences to your advantage. Especially if you are developing an indie game, there is a good chance that things are different in your game.

In the case of Squar we have two major characteristics:
a) The map editor and the fact that the user may generate maps of his/her own mean that we cannot use static waypoints that we determine ourselves but need an algorithmic solution.
b) Although a Squar map may be compared to a floorplan in an FPS, our obvious advantage is that we only have 15x15 (i.e. 225) squares in our gameboard.

The naive way to benefit from aspect b) would be to use 225 waypoints - one for the center of each square. Let us try and take this approach.

Using waypoints we would abstract from the gameboard and define a graph with 225 nodes that represent (the centers of) our squares. We introduce an edge between every pair of neighbor nodes. If one of the nodes is a wall, we give the edge the distance value infinite, otherwise we give it the distance value 1.

However, if we only use the orthogonal neighbor relation, we will encounter a problem: we will no longer be able to detect shorter ways that "cross a room diagonally". As a consequence we cannot decide whether to take the white or the red path in the following picture. Even if we apply string pulling afterwards, we will have a bad solution if we start out with the red one.

The solution to this problem is to introduce edges also for pairs of squares that are diagonal neighbors. The adequate weight of such an edge would be the square root of 2 (relative to an orthogonal weight of 1), but as we want to avoid floating point operations we will simply use the weight 2 for orthogonal neighbors and 3 for diagonal ones (a ratio of 1.5 instead of about 1.414). This is exact enough, as our paths will always be relatively short and so the sum over the "inaccuracies" will be small.

So our distance matrix with which we start out for our graph computations can be sketched as follows...

This matrix will allow us to compute the following shortest path

and with string pulling we finally get...

So our final task is to put the information gathered so far into an adequate data structure. For easy and fast access to our distance information we will use a two-dimensional array distance[,]. While the class GameBoard.cs in Squar refers to a square by its row and column, we want to refer to a square by a single index, so we enumerate the 225 squares (starting with 0) from left to right and top to bottom. We supply methods to compute row and column from an index and an index from row and column.
As we often want to have a neighbor of a square, we define an enumeration of neighbor relations together with a function that returns the corresponding neighbor. This makes our code easy to read later on.
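A sketch of these helpers, in Python rather than the game's C# (the relation names are mine):

```python
N = 15  # 15 x 15 gameboard, squares indexed 0..224 left-to-right, top-to-bottom

def to_index(row, col):
    return row * N + col

def to_row_col(index):
    return index // N, index % N

# Neighbor relations as (row, column) offsets.
NEIGHBORS = {
    "N": (-1, 0), "S": (1, 0), "W": (0, -1), "E": (0, 1),
    "NW": (-1, -1), "NE": (-1, 1), "SW": (1, -1), "SE": (1, 1),
}

def neighbor(index, relation):
    """Index of the given neighbor, or None at the border of the gameboard."""
    row, col = to_row_col(index)
    dr, dc = NEIGHBORS[relation]
    row, col = row + dr, col + dc
    if 0 <= row < N and 0 <= col < N:
        return to_index(row, col)
    return None
```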

Given the auxiliary methods from above we can finally initialize our distance matrix as follows.
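The original post shows the C# initialization; a rough Python equivalent under the conventions above (15x15 board, weight 2 for orthogonal and 3 for diagonal neighbors, walls at infinite distance) might look like this:

```python
N = 15               # 15 x 15 gameboard, 225 squares
INF = float("inf")

def init_distance(is_wall):
    """Build the initial 225 x 225 distance matrix.
    is_wall(index) says whether a square contains a wall."""
    size = N * N
    dist = [[INF] * size for _ in range(size)]
    for i in range(size):
        dist[i][i] = 0
        row, col = divmod(i, N)
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                r, c = row + dr, col + dc
                if not (0 <= r < N and 0 <= c < N):
                    continue          # no neighbor beyond the border
                j = r * N + c
                if is_wall(i) or is_wall(j):
                    continue          # edges to wall squares stay infinite
                dist[i][j] = 2 if dr == 0 or dc == 0 else 3
    return dist
```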

Developing an AI from scratch

I guess most people who come across this blog probably know me from the XNA Creators' Club or followed a link from the Squar website.

Here, I will try and develop an AI from scratch, improve it step by step and keep you updated with videos of how the AI evolves. The idea is to make the process of developing an AI transparent and perhaps to strike up a conversation with fellow hobby developers, both about technical aspects as well as how we feel an AI should behave in order to allow for a satisfying gaming experience.

As all algorithmic considerations will be very specific for the game Squar, people who are not familiar with the project should definitely check out the above mentioned site and see what Squar is about.

As already mentioned, the idea is to get something up and running as soon as possible and to determine by playtesting (and of course by twisting our mind) how to extend and enhance the behavior of the AI.

And here we go:

Basically a player in Squar has only a few simple tasks - and yet he has much more to do than in a simple FPS: He should extend his infrastructure to tap energy sources, he should collect and use power-ups, and he should attack the enemy and his infrastructure whenever possible. The standard way to think of such an AI is a finite state machine, which means that we have one state for each of our desired behaviors (or tasks) and that we switch from one state to another depending on the way our environment evolves.

That being said, it is clear that a simple enumeration of behavior names together with a switch statement can be used to allow for different behaviors. Each behavior should then contain instructions for what the AI has to do to fulfil the present task, and a check whether the present task has been completed or should be abandoned. For this case we provide a function GetNewTask() that "evaluates the situation" and determines our next task.

We start out with 3 states for our (hitherto pacifistic) Squar AI, which is perfectly happy to extend its infrastructure and collect power-ups.
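The enumeration-plus-switch idea can be sketched as follows, in Python rather than the game's C# (the three state names and the world predicates are invented for illustration; only the GetNewTask() idea is from the text):

```python
from enum import Enum, auto

class Task(Enum):
    BUILD = auto()
    COLLECT_POWERUP = auto()
    MOVE_TO_BUILD_POSITION = auto()

def get_new_task(world):
    """'Evaluate the situation' and determine our next task."""
    if world.get("converter_sinking") and world.get("powerup_available"):
        return Task.COLLECT_POWERUP
    if world.get("can_build_here"):
        return Task.BUILD
    return Task.MOVE_TO_BUILD_POSITION

def update(task, world):
    """One tick: dispatch on the current task, switch state when it is done."""
    if task is Task.BUILD:
        if world.get("converter_dropped"):
            return get_new_task(world)   # task completed, re-evaluate
    elif task is Task.COLLECT_POWERUP:
        if not world.get("powerup_available"):
            return get_new_task(world)   # nothing left to collect
    elif task is Task.MOVE_TO_BUILD_POSITION:
        if world.get("can_build_here"):
            return get_new_task(world)   # in position, pick the next task
    return task                          # keep working on the present task
```

Each branch of the dispatch plays the role of one case in the switch statement: do the work for the present task, and hand control back to GetNewTask() once its completion check fires.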

What's this blog about?

Hi all. As I have already mentioned to some of you, I am going to blog about my XNA hobby projects starting with the AI development process for the XNA indie game Squar.

My motivation for this is twofold: On the one hand, writing about my AI ideas is a nice way to reconsider decisions I have made so far. On the other hand, I thought about starting a blog about the general Squar development back in 2008 already, and I regret that I didn't, because I find it so interesting how the game (resp. its mechanics) evolved. It would be nice to have some kind of documentation, and so I want to make sure that I'll have one for the AI.

Of course, I also hope that people reading my blog enjoy it and maybe can take one or two ideas with them for their own projects. So have a good time, anyone!