In order to define a framework that allows us to specify exactly what functionality a micro behavior should provide, we have to abstract from the tasks a player has to accomplish. And I don't mean tasks like saving Sandy or getting the rest of those alien bastards. I mean tasks like moving, dropping converters and activating upgrades.

Or even more abstract: pulling the right trigger, clicking the left stick, and so on. And that's exactly the way I suggest: use your control scheme to go back to the very roots of what a player has to do! In Squar, you have to...

- Drop converters (LT)

- Fire your tank gun (RT)

- Activate converter upgrades (LB)

- Activate tank upgrades (RB)

- Move (LS)

- Aim (RS)

- Switch between tank gun and base weapon (Click RS)

- Aim with the base gun (RS)

- Fire the base gun (RT)

Considering things this way, we see that we involuntarily assumed a certain scheme when we implemented the micro behavior for the energy ray. (OK, that was not our headline when we did it, but I wrote in my last post that we could interpret our hitherto implemented AI that way.)

We integrated movement and firing, for example, in every state of our finite automaton:

What we called "building" was actually building-oriented movement with firing as a secondary task (i.e., at targets within line of sight).

What we called "hunting" was enemy-oriented movement with no secondary task at all. However, we could have dropped a converter once in a while!

In a first step we identify the questions that we have to answer (these are derived with little effort from the controls list above):

In a second step we reason which of these questions we should integrate in the macro behavior (what should we do?) and which of them belong to the micro behavior (how can we do it?).

Now we get to the meat of the matter: in order to allow for intelligent decisions, we need a lot of information about the environment. Let's introduce some parameters that describe our situation in the game (where we want each of these values for both enemy and ally objects):

- chance that a tank is easily destroyed

- value of a tank

- chance that a tower is destroyed

- value of a tower

- chance that we will be able to build a tower

- value of a potential tower

- chance that we will be able to pick up a power-up before the enemy does

- value of a power-up (for us and for the enemy)

Note that in the end, all that can happen in a game situation is that we (or the enemy) gain or lose a tower, gain or lose a tank, gain or lose a power-up.
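One natural (though by no means the only) way to make these chance/value pairs operational is to score each possible objective by its expected value, i.e., chance times value. The game itself is written in C#/XNA; the following is just a minimal Python sketch, and all names and numbers in it are invented for illustration:

```python
# Hypothetical sketch: score each possible objective by its expected
# value, i.e. (chance of success) * (value of the outcome). The names
# and numbers are invented; the real chances and values would be
# derived from the game state.

def expected_value(chance, value):
    """Expected gain of pursuing an objective."""
    return chance * value

def best_objective(objectives):
    """Pick the objective with the highest expected value.

    objectives: list of (name, chance, value) tuples.
    """
    return max(objectives, key=lambda o: expected_value(o[1], o[2]))[0]

# E.g., a near-certain power-up grab beats a risky tower kill:
choice = best_objective([
    ("destroy tower", 0.3, 10),   # expected value 3.0
    ("grab power-up", 0.9, 4),    # expected value 3.6
])
```

With numbers like these, the AI would prefer the safe, modest gain over the risky, large one, which is exactly the kind of trade-off the parameter list is meant to capture.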

On the other hand we have our current upgrades which we can consider our "bank account". For each situation we have to determine whether we want to save more power-ups or whether we prefer to spend them to enhance our chances to accomplish our present task.

Our next step will be to realize this parameter-guided concept. The very best example (respectively starting point) I can imagine for this purpose is the mighty converter upgrade together with the existence of blocked squares. Will we be able to describe with our paramters, that:

- A potential tower on a blocked square has (on an adequate map) a high importance?

- We should thus save our converter power-ups?

- We should protect our tank well once we have saved several power-ups?

## Friday, April 2, 2010

### Behavior - Micro vs. Macro

As we saw in the last video, the AI already behaves somewhat reasonably most of the time. However, it is obvious that we can't let our AI behave deterministically. A deterministic AI is likely to behave "stupidly", e.g., our hitherto implemented AI would chase a player forever if he drives around a long wall.

It is a good idea to add some fuzziness to the AI's behavior. Our AI decides to hunt a player down if his health drops below a quarter of the maximum. We could instead say that it only does so 80% of the time.

Or, to prevent the pitfall from above (and similar cases), let's say that whenever a certain objective has been pursued for more than 1 minute without success, we try something else. Or every second let there be a chance of 1% to do something else. Or define a curve with the Curve Editor for a more differentiated way to determine the chance to abort the present task.
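These fuzziness mechanisms are easy to prototype. Here is a minimal Python sketch (the game code is C#/XNA; the thresholds are just the illustrative numbers from the text, not tuned values):

```python
import random

# Sketch of the heuristics above:
# - hunt only in 80% of the cases where the trigger condition holds,
# - abort any task pursued without success for more than 60 seconds,
# - otherwise abort with a 1% chance per elapsed second.

def should_hunt(enemy_health, max_health, rng=random.random):
    """Fuzzy trigger: hunt in only 80% of the low-health cases."""
    return enemy_health < max_health / 4 and rng() < 0.8

def should_abort(seconds_on_task, rng=random.random):
    """Give up after 60 s without success, or with 1% chance per second."""
    if seconds_on_task > 60:
        return True
    return rng() < 0.01
```

Passing the random source in as a parameter keeps the fuzziness testable; a curve from the Curve Editor would simply replace the constant 0.01 with a time-dependent value.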

However, before we finally add fuzziness to our behavior, let's try to make the behavior as good as possible without it.

What will be our next step?

A) One idea would be to implement a bunch of "micro behaviors" that simulate human player tactics. In a team shooter like Battlefield or Counter-Strike, that would be something like hide & snipe, or using a tank to attack.

In Squar, the micro behaviors significantly depend on the currently available power-ups. In fact, we have until now implemented some kind of micro behavior for an enemy that has collected no power-ups at all. Once it has collected a tank power-up, it could activate the grenades and start destroying enemy towers. To define the micro behavior for using grenades, we could again consider a priority chain that determines which target to attack first. It makes no sense to attack the enemy tank because we probably won't hit it with grenades. So we could start with converters as primary targets and proceed to towers as secondary targets. It is straightforward to define a routine that lets the AI activate the grenades at once when it picks up a tank power-up and start firing according to this prioritization.
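Such a priority chain boils down to an ordered lookup. A hedged Python sketch (the dict-based target representation is an assumption for illustration; in the C# code this would be a loop over the game's object lists):

```python
# Grenade target priority as described above: converters first, towers
# second, and the enemy tank deliberately excluded (we probably won't
# hit it with grenades anyway).

GRENADE_PRIORITY = ("converter", "tower")

def pick_grenade_target(targets_in_range):
    """Return the first target of the highest-priority kind, or None."""
    for kind in GRENADE_PRIORITY:
        for target in targets_in_range:
            if target["kind"] == kind:
                return target
    return None
```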

But should we keep building (and collecting) and fire only at targets within range, or should we give priority to destroying enemy towers? This brings us to the second way of proceeding with the AI implementation.

B) While one could call A the bottom-up approach, we will now consider the top-down approach. Instead of implementing micro behaviors and determining later which to choose when, we could try to specify more abstract guidelines first.

The advantage of developing the AI bottom-up is that we can think about every plan of action the AI could take before we have to put them together. That way, we may have certain ideas of when it is advantageous to, e.g., activate a certain upgrade and when it is not.

The advantage of developing the AI top-down is that this approach makes it less likely that we have to modify our micro behaviors later when we find out that we should organize things differently (e.g., integrate or separate movement and shooting in a micro behavior).

=> I would suggest starting top-down, especially if it's your own game you are developing the AI for, because in this case you should already have quite a good idea of what your micro behaviors should look like.

Whenever you feel like you aren't getting ahead this way, switch to implementing micro behaviors, test them, and try out when it is advantageous to activate which of them. I will henceforth refer to this question as the macro behavior.


Labels:
AI,
behavior,
macro behavior,
micro behavior,
Squar,
XNA

## Sunday, March 28, 2010

### Behavior - Alternating tasks

From now on, it is fairly simple to extend and enhance our AI's behavior:

We'll let building remain our priority, but whenever we have dropped a converter, we will pick up a power-up while our converter sinks into the ground.

Once we have picked up the power-up (or if there is no power-up available), we get into position to drop the next converter. For the time being we will leave our behavior totally deterministic and just add tasks using the principle of prioritization.

With this small modification, our pacifistic AI already behaves almost like a (somewhat greedy) human player...

Even without changing our behavior any further, it can never be wrong to shoot at anything that moves! So let's destroy enemy converters and the enemy tank when they are in reach.

Finally, to use the nice catching behavior that we implemented earlier, let's go and hunt down the enemy once it has less than a quarter of its maximum health.
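The whole deterministic scheme is just a fixed-order check. A Python sketch (the state keys are hypothetical stand-ins for the real game queries; shooting at targets in reach stays a secondary task that runs in parallel):

```python
# Priority order from the posts above: hunt a weakened enemy first,
# otherwise collect a power-up right after dropping a converter,
# otherwise keep building. Shooting at anything in reach is handled
# separately as a secondary task.

def choose_task(state):
    if state.get("enemy_health_fraction", 1.0) < 0.25:
        return "hunt enemy"
    if state.get("just_dropped_converter") and state.get("powerup_available"):
        return "collect power-up"
    return "build"
```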

With this very basic behavior, which can easily be enhanced now that we have all tools like pathfinding and building available, it is already quite fun to play against the AI.


### Pathfinding - Infrastructure

After supplying our AI with an easy way to find its path from everywhere to everywhere, we want it to do something useful. And in order to win a game in Squar, our AI has to build some infrastructure.

Again we will do some kind of pathfinding, this time for what we call the "buildpaths". Boring? Not at all! What we are doing now will be quite different from what we did before, because of the different circumstances.

The major differences to our navigation pathfinding are the following:

a) While the walls on our map are static, the squares on which we can build towers are not! Whenever our enemy builds a tower on a hitherto neutral square, we have to build around this tower. So the pathfinding we apply for our buildpaths has to be dynamic (i.e., it has to be performed at runtime), which means that we cannot use something as expensive as Floyd/Warshall anymore.

b) While our graph for navigation contained edges between diagonal neighbors (and I pointed out that this was indeed necessary) we don't have diagonal edges here, because our infrastructure is restricted to orthogonal connections.

While a) means that we have to do things efficiently, b) tells us that we are lucky enough to have an algorithm for this situation that runs in O(n):

As we have a single-source shortest paths problem, one could think that something more expensive would be needed, but in a graph where all edges have the same cost we can simply use a modified breadth-first search (BFS) as follows:

Initially, let our BFS-queue contain all squares that are somehow connected to our base. For all these squares we can store that it costs us 0 towers to connect them to our base. As long as our queue is not empty we take a square ("square") from the queue.

Starting in square, we walk up to 4 steps in all 4 (orthogonal) directions.

As long as we don't encounter something that blocks our way (e.g., an enemy tower), we know that we could build a tower here ("nextsquare"), which would raise our costs to reach this position by 1 (compared to "square"). If we have already found a cheaper way to reach nextsquare, we do nothing. But if our new way to reach nextsquare is cheaper than anything we have found so far, we update the costs to reach nextsquare to costs[square]+1 and define buildpathpredecessor[nextsquare] = square.

There are some little exceptions:

E.g., we have to make sure that we do not step over the border of our gameboard. Or, e.g., we can build a connection under a wall (i.e., we will not break off when we detect a wall while walking in a direction), but we cannot build a tower under a wall (i.e., we will never set a wall square as a predecessor).

Once we have calculated the cheapest buildpath costs and predecessors for all squares, we simply check which source (or rather which of its neighbors) is the cheapest to connect to our base.

Then we walk the corresponding buildpath backwards (with the help of the array buildpathpredecessor) and compare, for every square on the path, the distance of our tank to this square.

The square that is returned will be the next square we build a tower on.
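The modified BFS described above can be sketched as follows (Python for brevity; the reach of 4 steps, the set-based board representation, and all names are assumptions, not the actual Squar code):

```python
from collections import deque

REACH = 4  # assumed tower connection range, in squares

def buildpath_bfs(connected, walls, blocked, size=15):
    """Modified BFS over (row, col) squares.

    connected: squares already connected to our base (cost 0)
    walls:     squares we may build *under* but not *on*
    blocked:   squares that interrupt a connection (e.g. enemy towers)
    Returns (costs, predecessor) dictionaries.
    """
    costs = {sq: 0 for sq in connected}
    pred = {}
    queue = deque(connected)
    while queue:
        square = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = square
            for _ in range(REACH):
                r, c = r + dr, c + dc
                if not (0 <= r < size and 0 <= c < size):
                    break            # stepped over the border of the board
                if (r, c) in blocked:
                    break            # an enemy tower interrupts the path
                if (r, c) in walls:
                    continue         # may pass under a wall, not build on it
                nxt = (r, c)
                if nxt not in costs or costs[square] + 1 < costs[nxt]:
                    costs[nxt] = costs[square] + 1
                    pred[nxt] = square
                    queue.append(nxt)
    return costs, pred
```

Because every "edge" costs exactly one tower, squares are discovered in order of increasing cost, which keeps the search close to a plain BFS and thus linear in the number of squares.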


Labels:
AI,
pathfinding,
Squar,
XNA

## Saturday, March 27, 2010

### Pathfinding - Navigation

So finally, with our distance matrix defined and initialized, we can go ahead and compute our shortest paths. The most popular approach to determine a shortest path from a source to a target is probably the A* algorithm.

However, we will do something completely different: in order to avoid applying A* multiple times for different objects at runtime, we are going to compute all shortest paths in a preprocessing step. This approach is only possible due to the fact that we will **never** have to take into account any waypoints or paths other than the ones among our 225 squares.

The Floyd/Warshall algorithm computes all shortest paths in a graph in O(n³). It belongs to the area of dynamic programming, where solutions to large problems are found by first computing the solutions to smaller subproblems, storing this information, and reusing it to find the solution to the large problem.

Given a pair of nodes "from" and "to", and assuming that we already know for all other nodes "x" the shortest paths from->x and x->to, it is clear that the shortest path from->to uses as an intermediate the node x for which the sum of the lengths of from->x and x->to is minimal.

Note: As we are not only interested in the **lengths** of our shortest paths but also in where we have to go to follow a shortest path, we need something besides the distance matrix to store this information. A naive idea would be to somehow store all paths for all pairs of nodes, but in fact it is sufficient to store, given a node "from" and a node "to", where we have to go next on a shortest path. We then simply go to this next node and look up which node is our next stop from there to the node "to", until we reach our destination. We thus introduce a two-dimensional array which will - for the time being - store the information about the intermediate node "x" from our description above.
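For reference, here is a compact Python sketch of Floyd/Warshall extended with exactly this intermediate array (the matrix layout and names are illustrative; the game itself is C#/XNA):

```python
INF = float("inf")

def floyd_warshall(dist):
    """All-pairs shortest paths in O(n^3).

    dist: n x n matrix of edge weights (INF where there is no edge),
    modified in place. Returns (dist, intermediate), where
    intermediate[i][j] is some node on a shortest i->j path, or None
    if no intermediate was needed (i.e. the direct edge is optimal).
    """
    n = len(dist)
    intermediate = [[None] * n for _ in range(n)]
    for x in range(n):              # x = newly allowed intermediate node
        for i in range(n):
            for j in range(n):
                if dist[i][x] + dist[x][j] < dist[i][j]:
                    dist[i][j] = dist[i][x] + dist[x][j]
                    intermediate[i][j] = x
    return dist, intermediate
```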

Running Floyd/Warshall for our 225-node graph on an Xbox 360 takes only a couple of seconds, which is OK to do during the loading of a map. If we consider this too long, we can think about storing the shortest path information later. The good thing is that we will never have to worry about runtime performance.

Now assume that we have 4 neighbor nodes (a, b, c, d) in a row such that none of them contains a wall. Then the shortest path from a to d obviously has length 2+2+2 = 6.

And we will find this information in distance[a,d].

Due to the way we computed the intermediates (Floyd/Warshall), however, we do not necessarily have intermediate[a,d] = b but possibly intermediate[a,d] = c. This is because the sums a->b + b->d and a->c + c->d are the same, so it depends on which intermediate was looked at first.

In order to follow a path step by step (and not drive toward an intermediate that lies behind a corner, i.e., behind a wall), we want to compute the **direct successor nodes**. For this task we simply look up the intermediate of from->to. If there is no intermediate, then the nodes are neighbors and we have found the direct successor. Otherwise we look up the intermediate of from->intermediate, i.e., we proceed with the "first half" of our path recursively.
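The recursion is only a few lines. A Python sketch, assuming the intermediate matrix described above:

```python
def direct_successor(intermediate, a, b):
    """First node to visit on a shortest path from a to b.

    If there is no intermediate, a and b are neighbors and b itself is
    the direct successor. Otherwise, recurse into the first half of the
    path (a -> intermediate[a][b]).
    """
    if intermediate[a][b] is None:
        return b
    return direct_successor(intermediate, a, intermediate[a][b])
```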

We're almost there! In the example from our last post we have now computed a shortest path that uses only orthogonal and diagonal ways between neighbor squares. Following this path with our tank is already a nice thing, but to finally avoid the zigzagging that may occur here, we have to apply one last method, which is called "string pulling": Assume that the white path is a string. If we take the source and destination of the path and pull, we will get a path that stretches around corners and avoids zigzagging.

The straightforward way to compute string pulling is to go through a path and check whether - given a present node "from" and a destination "to" - we have to stick with our direct successor x, or whether it is possible to go to the successor of x directly. This is the case if and only if there is a line of sight between from and the successor of x. We follow the path as long as possible, i.e., we compute the last successor y such that there is a line of sight between from and y.

Remark: To answer whether we have a line of sight, we avoid high-cost computations like collision detection. Instead, we follow the vector between two squares and check whether the squares in between contain walls. (The step distance 5 that is used in the algorithm below is about a third of a square length.)
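Both the sampled line-of-sight test and the string pulling itself fit in a few lines. A Python sketch (coordinates in world units; the wall predicate is a stand-in, and the step distance of 5 units follows the remark above):

```python
import math

def line_of_sight(p, q, is_wall_at, step=5.0):
    """Sample the segment p->q every `step` units; fail on any wall hit."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    samples = max(1, int(math.hypot(dx, dy) / step))
    for i in range(samples + 1):
        t = i / samples
        if is_wall_at((p[0] + t * dx, p[1] + t * dy)):
            return False
    return True

def string_pull(path, is_wall_at):
    """From each node, jump to the farthest path node still in sight."""
    pulled = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not line_of_sight(path[i], path[j], is_wall_at):
            j -= 1
        pulled.append(path[j])
        i = j
    return pulled
```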

It remains to tell our PacifistAI to use the information we have computed so far. Given our current square (from) and some destination square (to), we can simply overwrite the player's controls-vector with a vector that points from our present position to the center of the square shortestPathsIntermediate[from,to].
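In Python terms, this steering idea looks roughly like the following (the actual implementation is C#/XNA and writes into the thumbstick vector; `center_of` and the matrix lookup are assumed names):

```python
# Sketch: steer toward the center of the next square on the shortest
# path. `intermediate` is the direct-successor matrix computed during
# preprocessing; `center_of` maps a square index to world coordinates.

def steer(position, current_square, target_square, intermediate, center_of):
    """Return a (dx, dy) vector pointing at the next path square's center."""
    next_square = intermediate[current_square][target_square]
    if next_square is None:        # already neighbors: head for the target
        next_square = target_square
    cx, cy = center_of(next_square)
    return (cx - position[0], cy - position[1])
```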

If we substitute the content of our method TestAI() with something as simple as...

curPlayer.chassis.thumbstick = Steer(enemy.chassis.positionAsSquare);

...we obtain a hunting behavior (on any user-generated map) that we can finally enjoy as the fruits of our work:


Labels:
AI,
pathfinding,
Squar,
XNA

### Pathfinding - Preliminaries

Although from a didactic point of view it might be nice to do some easy stuff first, our AI will be utterly useless without a way to find its path to, e.g., the next power-up it wants to collect.

Pathfinding is pretty much the same problem in any kind of game. Let us consider a Squar map from above and compare it to a map in an FPS. A bot that wants to determine the best way from room A to room B has exactly the same problem to solve as our tank:

The fact that one could almost call pathfinding an academic discipline of computer science might be a little discouraging to people who are not familiar with the area, so it is a good idea to cheer ourselves up a little with this video of pathfinding issues in modern games.

Note: The video was made by ai-blog and is part of a very instructive pathfinding discussion.

So although we all had a good laugh now, we also learned that pathfinding is still a problem, even for AAA titles and the professional programmers behind them. My statement from above - that pathfinding was **pretty much** the same problem in any kind of game - is the key to our success: find out what makes your game different from others and use these differences to your advantage. Especially if you are developing an indie game, there is a good chance that things **are** different in your game.

In the case of Squar we have two major characteristics:

a) Given the map editor and the fact that the user may generate maps of his/her own, we cannot use static waypoints that we determine ourselves but need an algorithmic solution.

b) Although a Squar map may be compared to a floorplan in an FPS, our obvious advantage is that we only have 15x15 (i.e. 225) squares in our gameboard.

The naive way to benefit from aspect b) would be to use 225 waypoints - one for the center of each square. Let us try and take this approach.

Using waypoints, we would abstract from the gameboard and define a graph with 225 nodes that represent (the centers of) our squares. We introduce an edge between every pair of neighbor nodes. If one of the nodes is a wall, we give the edge the distance value **infinite**; otherwise we give it the distance value 1.

However, if we only use the orthogonal neighbor relation, we will encounter a problem: we will no longer be able to detect shorter ways that "cross a room diagonally". As a consequence, we cannot decide whether to take the white or the red path in the following picture. Even if we apply string pulling afterwards, we will have a bad solution if we start out with the red one.

The solution to this problem is to introduce edges also for pairs of squares that are diagonal neighbors. The adequate weight of such an edge would be the square root of 2, but as we want to avoid floating point operations, we will simply use the weight 2 for orthogonal neighbors and 3 for diagonal ones. This is exact enough, as our paths will always be relatively short and so the sum of the "inaccuracies" will stay small.

So our distance matrix with which we start out for our graph computations can be sketched as follows...

This matrix will allow us to compute the following shortest path

and with string pulling we finally get...

So our final task is to put the information so far into an adequate data structure. For easy and fast access to our distance information we will use a two-dimensional array distance[,]. While the class GameBoard.cs in Squar refers to a square by its row and column, we want to refer to a square by a single index, so we enumerate the 225 squares (starting with 0) from left to right and top to bottom. We supply methods to compute row and column from an index and an index from row and column.

As we often want to get a neighbor of a square, we define an enumeration of neighbor relations together with a function that returns the corresponding neighbor. This makes our code easy to read later on.

Given the auxiliary methods from above we can finally initialize our distance matrix as follows.
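A Python sketch of these data structures (the game code is C#; `is_wall` is a hypothetical predicate on square indices, and the weights follow the 2/3 convention above):

```python
# 15x15 squares indexed 0..224, an 8-neighbor helper, and the initial
# distance matrix: 2 for orthogonal neighbors, 3 for diagonal ones,
# infinity otherwise or for edges into walls.

SIZE = 15
N = SIZE * SIZE
INF = float("inf")

def row_col(index):
    return index // SIZE, index % SIZE

def index_of(row, col):
    return row * SIZE + col

def neighbors(index):
    """Yield (neighbor_index, weight) for all neighbors on the board."""
    r, c = row_col(index)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < SIZE and 0 <= nc < SIZE:
                weight = 2 if dr == 0 or dc == 0 else 3
                yield index_of(nr, nc), weight

def init_distance(is_wall):
    dist = [[INF] * N for _ in range(N)]
    for i in range(N):
        dist[i][i] = 0
        if is_wall(i):
            continue                 # no edges out of a wall square
        for j, w in neighbors(i):
            if not is_wall(j):
                dist[i][j] = w
    return dist
```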

Pathfinding is pretty much the same problem in any kind of game. Let us consider a Squar map from above and compare it to a map in an FPS. A bot that wants to determine the best way from room A to room B has exactly the same problem to solve as our tank:

The fact that one could almost call pathfining an academic discipline of computer science might be a little bit disincentive to people who are not familiar with the area, so it is a good idea to cheer us all up a little with this video of pathfinding issues in modern games.

Note: The video was made by ai-blog and is part of a very instructive pathfinding discussion.

So although we all had a good laugh now we also learned that pathfinding is still a problem, even for AAA titles and the professional programmers behind them. My statement from above - that pathfinding was

**pretty much**the same problem in any kind of game - is the key to our success: Find out what makes your game different from others and use these differences to your advantage. Especially if you are developing an indie game gives you a good chance that things**are**different in your game.In the case of Squar we have two major characteristics:

a) Given the map editor and the fact that users may generate maps of their own, we cannot use static waypoints that we determine ourselves but need an algorithmic solution.

b) Although a Squar map may be compared to a floor plan in an FPS, our obvious advantage is that we only have 15x15 (i.e., 225) squares on our gameboard.

The naive way to benefit from aspect b) would be to use 225 waypoints - one for the center of each square. Let us try and take this approach.

Using waypoints we would abstract from the gameboard and define a graph with 225 nodes that represent (the centers of) our squares. We introduce an edge between every pair of neighbor nodes. If one of the nodes is a wall, we give the edge the distance value **infinite**; otherwise we give it the distance value 1. However, if we only use the orthogonal neighbor relation, we will encounter a problem: we will no longer be able to detect shorter paths that "cross a room diagonally". As a consequence we cannot decide whether to take the white or the red path in the following picture. Even if we apply string pulling afterwards, we will end up with a bad solution if we start out with the red one.

The solution to this problem is to introduce edges also for pairs of squares that are diagonal neighbors. The exact weight of such an edge would be the square root of 2, but as we want to avoid floating point operations we will simply use the weight 2 for orthogonal neighbors and 3 for diagonal ones. This is accurate enough, as our paths will always be relatively short and so the sum of the "inaccuracies" will remain small.

So the distance matrix with which we start out for our graph computations can be sketched as follows...

This matrix will allow us to compute the following shortest path
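The shortest-path computation on such an edge-weight matrix can be sketched with Dijkstra's algorithm. This is a Python sketch (the project itself is C#/XNA), and the function name and matrix representation are my own assumptions, not taken from the Squar code:

```python
import heapq

def shortest_path(dist, start, goal):
    """Dijkstra on a square edge-weight matrix; dist[u][v] is the direct
    edge weight between squares u and v (float("inf") if not adjacent).
    Returns the list of square indices on a shortest path, or None."""
    n = len(dist)
    best = [float("inf")] * n  # best known distance from start
    prev = [None] * n          # predecessor on the best path
    best[start] = 0
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            break
        if d > best[u]:
            continue  # stale heap entry
        for v in range(n):
            nd = d + dist[u][v]
            if nd < best[v]:
                best[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if best[goal] == float("inf"):
        return None
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

With only 225 nodes, scanning all candidate neighbors per step is cheap enough that no fancier data structure is needed.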

and with string pulling we finally get...
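A string-pulling pass can be sketched as follows: drop intermediate waypoints as long as the line of sight between the remaining ones stays clear. This is a Python sketch under my own assumptions; in particular, the crude sampling-based line-of-sight test stands in for whatever collision test the actual game uses:

```python
def has_line_of_sight(a, b, is_wall, board_size=15, samples=50):
    """Check walkability by sampling points on the segment between the
    centers of squares a and b (a crude stand-in for a proper grid test)."""
    (r0, c0), (r1, c1) = divmod(a, board_size), divmod(b, board_size)
    for k in range(samples + 1):
        t = k / samples
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        if is_wall(r * board_size + c):
            return False
    return True

def string_pull(path, is_wall, board_size=15):
    """Greedily skip to the farthest waypoint still visible from the
    current one, producing a shorter, straighter path."""
    if not path:
        return path
    pulled = [path[0]]
    i = 0
    while i < len(path) - 1:
        j = len(path) - 1
        while j > i + 1 and not has_line_of_sight(path[i], path[j], is_wall, board_size):
            j -= 1
        pulled.append(path[j])
        i = j
    return pulled
```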

So our final task is to put the information gathered so far into an adequate data structure. For easy and fast access to our distance information we will use a two-dimensional array distance[,]. While the class GameBoard.cs in Squar refers to a square by its row and column, we want to refer to a square by a single index, so we enumerate the 225 squares (starting with 0) from left to right and top to bottom. We supply methods to compute row and column from an index and an index from row and column.
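The index conversion might look like this Python sketch (the project itself is C#/XNA; the function names are my own, not from the Squar code):

```python
BOARD_SIZE = 15  # Squar uses a 15x15 gameboard, i.e. 225 squares

def to_index(row, col):
    """Map (row, col) to a single square index,
    enumerating left to right, top to bottom, starting with 0."""
    return row * BOARD_SIZE + col

def to_row_col(index):
    """Map a square index back to (row, col)."""
    return index // BOARD_SIZE, index % BOARD_SIZE
```

So square 0 is the top-left corner and square 224 the bottom-right one.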

As we often want to access a neighbor of a square, we define an enumeration of neighbor relations together with a function that returns the corresponding neighbor. This makes our code easier to read later on.
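A Python sketch of such an enumeration (the names and the None-for-off-board convention are my own assumptions):

```python
from enum import Enum

BOARD_SIZE = 15

class Neighbor(Enum):
    # (row offset, column offset) for the eight neighbor relations
    N  = (-1,  0)
    NE = (-1,  1)
    E  = ( 0,  1)
    SE = ( 1,  1)
    S  = ( 1,  0)
    SW = ( 1, -1)
    W  = ( 0, -1)
    NW = (-1, -1)

def neighbor_index(index, relation):
    """Return the index of the given neighbor of a square,
    or None if it lies off the board."""
    row, col = divmod(index, BOARD_SIZE)
    dr, dc = relation.value
    r, c = row + dr, col + dc
    if 0 <= r < BOARD_SIZE and 0 <= c < BOARD_SIZE:
        return r * BOARD_SIZE + c
    return None
```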

Given the auxiliary methods from above we can finally initialize our distance matrix as follows.
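The initialization can be sketched like this in Python (the project itself is C#/XNA; the `is_wall` callback and all names are my own assumptions). Direct edges get weight 2 for orthogonal neighbors and 3 for diagonal ones, as motivated above; everything else starts out infinite:

```python
BOARD_SIZE = 15
N = BOARD_SIZE * BOARD_SIZE  # 225 squares
INF = float("inf")

ORTHOGONAL = [(-1, 0), (1, 0), (0, -1), (0, 1)]
DIAGONAL   = [(-1, -1), (-1, 1), (1, -1), (1, 1)]

def init_distance(is_wall):
    """Build the 225x225 matrix of direct edge weights: 2 for orthogonal
    neighbors, 3 for diagonal ones, infinite for walls and non-neighbors."""
    dist = [[INF] * N for _ in range(N)]
    for i in range(N):
        dist[i][i] = 0
        if is_wall(i):
            continue
        r, c = divmod(i, BOARD_SIZE)
        for (dr, dc), w in [(d, 2) for d in ORTHOGONAL] + [(d, 3) for d in DIAGONAL]:
            rr, cc = r + dr, c + dc
            if 0 <= rr < BOARD_SIZE and 0 <= cc < BOARD_SIZE:
                j = rr * BOARD_SIZE + cc
                if not is_wall(j):
                    dist[i][j] = w
    return dist
```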

Labels: AI, pathfinding, Squar, XNA

### Developing an AI from scratch

I guess most people who come across this blog know me from the XNA Creators' Club or have followed a link from the Squar website.

Here, I will try and develop an AI from scratch, improve it step by step and keep you updated with videos of how the AI evolves. The idea is to make the process of developing an AI transparent and perhaps to strike up a conversation with fellow hobby developers, both about technical aspects as well as how we feel an AI should behave in order to allow for a satisfying gaming experience.

As all algorithmic considerations will be very specific for the game Squar, people who are not familiar with the project should definitely check out the above mentioned site and see what Squar is about.

As already mentioned, the idea is to get something up and running as soon as possible and to determine by playtesting (and of course by racking our brains) how to extend and enhance the behavior of the AI.

And here we go:

Basically, a player in Squar has only a few simple tasks - and yet he has much more to do than in a simple FPS: he should extend his infrastructure to tap energy sources, collect and use power-ups, and attack the enemy and his infrastructure whenever possible. The standard way to structure an AI is a finite state machine, which means that we have one state for each of our desired behaviors (or tasks) and that we switch from one state to another depending on how our environment evolves.

That being said, it is clear that a simple enumeration of behavior names together with a switch statement can be used to allow for different behaviors. Each behavior should then contain orders describing what the AI has to do to fulfil the present task and a check whether the present task has been completed or should be abandoned. For the latter case we provide a function GetNewTask() that "evaluates the situation" and determines our next task.

We start out with 3 states for our (hitherto pacifistic) Squar AI, which is perfectly happy to extend its infrastructure and collect power-ups.
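The enumeration-plus-switch idea can be sketched as follows. This is a Python sketch (the project itself is C#/XNA); the concrete state names, the situation parameters, and the dispatch bodies are my own placeholders, not taken from the Squar code:

```python
from enum import Enum, auto

class Task(Enum):
    # hypothetical names for the 3 initial, pacifistic states
    EXTEND_INFRASTRUCTURE = auto()
    COLLECT_POWERUP = auto()
    IDLE = auto()

class SquarAI:
    def __init__(self):
        self.task = Task.IDLE

    def get_new_task(self, powerup_available, energy_source_untapped):
        """Counterpart of GetNewTask(): evaluate the situation
        and determine the next task."""
        if powerup_available:
            return Task.COLLECT_POWERUP
        if energy_source_untapped:
            return Task.EXTEND_INFRASTRUCTURE
        return Task.IDLE

    def update(self, powerup_available=False, energy_source_untapped=False,
               task_done=True):
        # re-evaluate whenever the current task is completed or abandoned
        if task_done:
            self.task = self.get_new_task(powerup_available,
                                          energy_source_untapped)
        # dispatch on the current state, like a switch statement in C#
        if self.task is Task.EXTEND_INFRASTRUCTURE:
            pass  # drop converters toward an untapped energy source
        elif self.task is Task.COLLECT_POWERUP:
            pass  # steer toward the power-up
```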


### What's this blog about?

Hi all. As I have already mentioned to some of you, I am going to blog about my XNA hobby projects starting with the AI development process for the XNA indie game Squar.

My motivation for this is twofold: on the one hand, writing about my AI ideas is a nice way to reconsider decisions I have made so far. On the other hand, I already thought about starting a blog on the general Squar development back in 2008, and I regret that I didn't do it, because I find it so interesting how the game (or rather its mechanics) evolved. It would be nice to have some kind of documentation, and so I want to make sure that I'll have one for the AI.

Of course, I also hope that people reading my blog enjoy it and can maybe take one or two ideas with them for their own projects. So have a good time, everyone!

