AI

For a little while now, I’ve been dabbling with my computer-controlled AI. I touched on this subject before with a simple racing game, but it wasn’t overly complex: a one-way race track eases a lot of the decision making, since the general direction you want to go is forward.

My recent FPS, however, has required a bit more complexity. My first attempt at tackling the open environment was creating sonar-based AI. Each Unit was given a specific set of Rays that would be cast out each update cycle. The AI then read the results in order to determine its movement.
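
A minimal sketch of what I mean; the names here (UnitSensors, CastRay, and so on) are mine for illustration, not the actual code, and the real version would call whatever ray/collision query the engine provides:

// A minimal, engine-agnostic sketch of a Unit's "sonar" sensors.
class UnitSensors
{
    // Fixed fan of ray directions, stored as yaw offsets from the Unit's facing.
    readonly float[] rayAngles = { -60f, -30f, 0f, 30f, 60f };
    const float MaxDistance = 20f;

    // Latest distance reading per ray; MaxDistance means "nothing hit".
    public readonly float[] Readings;

    public UnitSensors()
    {
        Readings = new float[rayAngles.Length];
    }

    // Called once per update cycle: cast every ray and store the results.
    public void Update(float unitYawDegrees)
    {
        for (int i = 0; i < rayAngles.Length; i++)
        {
            float worldYaw = unitYawDegrees + rayAngles[i];
            Readings[i] = CastRay(worldYaw, MaxDistance);
        }
    }

    // Stand-in for the engine's actual physics query: returns the distance
    // to the first hit, or maxDistance if the ray is clear.
    float CastRay(float yawDegrees, float maxDistance)
    {
        return maxDistance;
    }
}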

In order for this to work, the Units had to move in a sort of objective-styled manner. Since the movement was based on constant environment readings, making a move per reading would cause very sporadic movement. Instead, the Units would read the environment and choose the most appropriate destination. This created a general search mode: the Unit would move from destination to destination, seeking out any enemies.
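
Roughly, the “choose the most appropriate destination” step comes down to scoring each reading and keeping the best. Here is a sketch under my own assumptions; Destination, ChooseDestination, and the “prefer open space” score are stand-ins, not the actual code:

// Hypothetical sketch: turn the latest ray readings into a candidate destination.
struct Destination
{
    public float Yaw;      // direction to head toward
    public float Distance; // how far the ray saw before hitting something
    public float Score;    // how "appropriate" this destination is

    public bool IsBetterThan(Destination other)
    {
        return Score > other.Score;
    }
}

static Destination ChooseDestination(float unitYaw, float[] rayAngles, float[] readings)
{
    Destination best = new Destination();
    for (int i = 0; i < readings.Length; i++)
    {
        Destination candidate = new Destination();
        candidate.Yaw = unitYaw + rayAngles[i];
        candidate.Distance = readings[i];
        // Placeholder score: prefer open space. The real scoring would fold in
        // whatever the Unit actually cares about (enemies sighted, and so on).
        candidate.Score = readings[i];

        if (candidate.IsBetterThan(best))
        {
            best = candidate;
        }
    }
    return best;
}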

In order to customize the AI and give each Unit a unique sort of personality, I threw in some members to track different characteristics. For example, the amount of time a Unit would spend reading the environment was based on a float representing time spent searching. The higher the value, the longer it would turn, looking for the best destination.


Destination currentDest;
// ...
if (searchTime > 0)
{
    // Cast the sensor Rays and score a candidate destination.
    Destination testDest = ReadEnvironment();

    // Keep whichever destination scores better.
    if (testDest.IsBetterThan(currentDest))
    {
        currentDest = testDest;
    }

    searchTime -= tSeconds; // tSeconds = elapsed seconds this update
}

Here’s where some of the fun stuff comes into play. What happens if a Unit chooses the best destination based on its search, but misses a better one because the circumstances for it have yet to occur? Maybe the player was about to come around the corner with his back turned; a perfect opportunity, but missed due to unforeseeable circumstances.

You could attempt to read the player’s general direction and speed to help the computer in its decision, but frankly, that would be cheating and would put the computers at an unfair advantage. It’s not unheard of to have AI that cheats; it can sometimes be a good workaround in terms of efficiency, but I personally tend to stray from that. I find it frustrating when I’m faced against computers that have such advantages; not external advantages such as stronger weapons, being outnumbered, etc., but those that take place inside the AI. If the computer can’t see you, it shouldn’t know that you’re there.

To work around this issue without giving away the player’s ability to sneak, I added another float that represented the amount of time a Unit would second-guess itself; I also added a bool that determined whether or not a Unit was in the process of second-guessing.

Once the Unit found its next destination, it would start second-guessing for the given time; that is, it would continue searching just in case. If it found a better location, the Unit would take it and go; otherwise, the Unit would turn back to its previous destination.
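
In code it came out along these lines. This is only a sketch that reuses the hypothetical Destination and ReadEnvironment names from the earlier search snippet, with member names of my own choosing:

float secondGuessTime;  // how long this Unit keeps doubting a choice
bool isSecondGuessing;  // currently re-checking a choice?
float guessTimer;       // counts down the current second-guess window

void OnDestinationChosen(Destination chosen)
{
    currentDest = chosen;
    isSecondGuessing = true;
    guessTimer = secondGuessTime;
}

void UpdateSecondGuess(float tSeconds)
{
    if (!isSecondGuessing) { return; }

    // Keep reading the environment, just in case something better turns up.
    Destination testDest = ReadEnvironment();
    if (testDest.IsBetterThan(currentDest))
    {
        // Found a better spot mid-doubt: take it and go.
        currentDest = testDest;
        isSecondGuessing = false;
        return;
    }

    guessTimer -= tSeconds;
    if (guessTimer <= 0)
    {
        // Time's up: commit to the destination we already had.
        isSecondGuessing = false;
    }
}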

Using these, the Units could differ based on their surety: how quickly they would make a decision. If I wanted a Unit that used a run-and-gun sort of style, I would set its second-guess value very low; I could also set it high to create Units that were more strategic.
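
For example, something like this, where the unit names and the numbers are completely made up for illustration:

// Run-and-gun type: barely searches, barely doubts itself.
rusher.searchTime = 0.5f;
rusher.secondGuessTime = 0.2f;

// Strategic type: searches longer and keeps doubting its pick.
sentry.searchTime = 2.0f;
sentry.secondGuessTime = 1.5f;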

Overall, this was an interesting method; however, it did have its drawbacks: reading the environment isn’t easy. Detecting ramps and stairs turned out to be rather simple; two to three central rays stacked on top of one another can offer slope readings.
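
As a rough illustration of the slope reading from two stacked forward rays; this is my own sketch of the idea, not the actual code:

// Estimate the slope ahead from two horizontal rays cast at different heights.
// bottomHeight/topHeight are the rays' origin heights above the Unit's feet;
// bottomHit/topHit are the distances each ray travelled before hitting.
static float EstimateSlope(float bottomHit, float topHit,
                           float bottomHeight, float topHeight)
{
    // On a ramp the lower ray hits sooner than the upper one; the ratio of
    // the height gap to the hit-distance gap gives the grade (rise over run).
    float run = topHit - bottomHit;
    float rise = topHeight - bottomHeight;
    if (run <= 0.0001f)
    {
        return float.PositiveInfinity; // both rays hit at the same distance: a wall
    }
    return rise / run;
}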

A big problem was doorways and hallways. I was able to detect them by storing the previous “sensor states” and comparing them with the current. A previous blocked state with a current open state can represent an opening, but what kind? A door? A window? A cubby?
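
Something along these lines, where SensorState is just my name for a per-update snapshot of which rays were blocked (again a sketch, not the real members):

// Snapshot of the sensors for one update: true = that ray hit something.
struct SensorState
{
    public bool[] Blocked;
}

SensorState previousState;

void DetectOpenings(SensorState currentState)
{
    if (previousState.Blocked != null)
    {
        for (int i = 0; i < currentState.Blocked.Length; i++)
        {
            // Blocked last update, open now: we just passed some kind of gap.
            if (previousState.Blocked[i] && !currentState.Blocked[i])
            {
                ClassifyOpening(i); // but a gap of what kind?
            }
        }
    }
    previousState = currentState; // assumes a fresh Blocked array each update
}

void ClassifyOpening(int rayIndex) { /* see the classification sketch below */ }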

To further determine what type of opening, I used the Units’ layering of Rays. Just as the layering can detect sloped footing, it can determine what type of opening. Generally speaking, I broke openings up as follows (a rough sketch of the classification follows the list):

Step – bottom ray hit
Hurdle – bottom and middle rays hit
Gutter – middle and top rays hit
Overhang – top ray hit
Wall – bottom, middle, and top rays hit
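
Here is that breakdown as code; the enum and its cases are just my naming of the list above:

enum OpeningType { Clear, Step, Hurdle, Gutter, Overhang, Wall }

// Classify what's ahead from which of the three stacked rays hit something.
static OpeningType Classify(bool bottomHit, bool middleHit, bool topHit)
{
    if (bottomHit && middleHit && topHit) return OpeningType.Wall;
    if (bottomHit && middleHit)           return OpeningType.Hurdle;
    if (middleHit && topHit)              return OpeningType.Gutter;
    if (bottomHit)                        return OpeningType.Step;
    if (topHit)                           return OpeningType.Overhang;
    return OpeningType.Clear; // nothing hit, or the middle-only case the list doesn't cover
}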

On the whole, this worked, but the overhead built up so much that I found it unbearable. As you can imagine, the number of rays needed for quick precision adds up, multiplied across the number of Units. I eventually abandoned this method, but I’ve carried some of the concepts over into my current method, Rally Points. This is still a method, however, that I will likely come back to. The overall effect was very nice compared to predefined movements, but I would likely only use it for a more constrained player-versus-Unit confrontation.
