
Artificial Intelligence in Gaming: Innovations & Trends


 USING ARTIFICIAL INTELLIGENCE IN VIDEO GAMES

This article will look at the various uses of artificial intelligence in video games and also where AI contributes to the field of artificial life. Areas covered include traditional AI applications for a variety of game genres, such as motor racing, first-person shooters, and fighting and puzzle games.

We also tackle the notion that AI-driven behavior in video games is reasonably prescribed, a fact that is lamented by reviewers but necessary in many cases, where an emergent pattern, learned by the player through trial and error, reveals a way to win. At its worst, prescribed intelligent behavior can lead to undesirable behavior when the rules fail to cope with unintended situations.

The term prescribed, as used in this book, covers all the situations that are attached to more common terminology such as scripted, deterministic, hardcoded, or hand-coded. All of these are examples of AI behavior that is prescribed, although in different ways. For example, scripted AI behavior is prescribed, as is hard-coded behavior; however, the script might be open to customization by a game designer or even a player, whereas the hard-coded AI behavior might not be.

 Where a specific kind of prescribed AI is being used, it will, of course, be duly identified as belonging to one of the aforementioned subgroups of what I call prescribed AI. 

If strict AI is applied to its fullest extent, then there is the possibility that the game might be too challenging. Achieving the correct AI game balance between “too tough” and “too predictable” is very difficult. The topic is presented in a way that will illustrate obvious solutions for most genres. 

A-Life provides an alternative; the behavior patterns are predictable, not because they are always the same, but because they respect the same kind of flexible logic that governs behavior that arises from a combination of instinct and reapplication of knowledge. In real life, we can sometimes predict how people will react, but that does not make it any less of a challenge to deal with them.

First, we introduce AI in video games and some of the ways that it can be applied in various genres. We will also cover some traditional AI techniques that are based on well-understood paradigms.

Since conventional AI techniques require an understanding of some basic concepts and paradigms, these will be covered as their use is encountered. What this book will not do is provide a catalog of AI science; it covers only how AI can be applied. Many paradigms are simply too resource-consuming to be included in video games. Some of the more common paradigms that will be covered are the following:

  •  Finite state machines
  •  Fuzzy logic
  •  Daemon AI
  •  Neural networks
  •  The A* algorithm and other weighted decision-making algorithms
 Many of the items covered will provide the basis for our discussion of A-Life and AI and their deployment in video games. It is vital that the general theories be understood at this stage, if not the actual implementations of those theories.

AI IN VIDEO GAMES

Artificial intelligence has been used in almost all video games, from the first attempts to modern-day epics. These efforts have not always been successful. A leading U.K. games magazine, EDGE, has often lamented the lack of good AI in games. The behavior of one game in particular has been described as reminiscent of “demented steeplejacks” encountering a ladder. Even as recently as 2007, EDGE’s review of Transformers: The Game had this to say:

 “Transformers’ AI makes GTA’s barking mad pedestrians seem like HAL 9000 by comparison; stand on a hillside and you’ll see cars ploughing into each other at every junction, completely irrespective of your actions.” [EDGE01]

So, although AI has been around for a while, its deployment has not always been “intelligent.” There might be a number of reasons for this. Maybe it was bolted on as an afterthought, or perhaps there are simply not enough resources in the gaming system to allow for sophisticated AI implementation—sophisticated, that is, in relation to the rest of the game. Even today’s most rudimentary game AI would have been beyond belief 20 years ago. Perhaps developers deserve the tongue-lashing they get when AI performs under par; after all, more power is at our fingertips than ever before.

 As more features and options are piled into the rest of the game, however (vehicles, weapons, additional intelligent behavioral models for the AI to follow, interactive environment, and so on), the AI itself gets left behind. It can even run the risk of becoming no more than a slightly augmented script-following engine. 

This partially explains why the AI in some games, such as soccer simulations, tends to be better implemented than the AI in more open environments. Games such as soccer are different because the developers have the advantage of being able to apply and reapply the same AI to essentially the same game, in a restricted environment, over and over again.

 As John Anderson, one of three AI developers at Atomic Games dedicated to the development of AI for Microsoft’s Close Combat 2, notes in an e-mail to the GameAI.com Web site:

 “AI will almost always be shirked by the software developers/producers, at least in the initial release. This I feel is because most AI cannot be written to be effective until late in the development cycle as it needs a functional game environment to see the effects of the AI.” [GAMEAI01]

Nonetheless, many of the elements in our paradigms list will find their way into video game control at a variety of levels, often depending on whether the in-game actor, entity, or object is on the player’s side or not. Enemy AI and player-cooperative AI are (and should be) often implemented differently, allowing the possibility for the enemy to be overcome, rather than providing an insurmountable barrier.

However, this kind of pseudo-AI is not always implemented intelligently—hence the criticism. Part of the issue here relates to the balance that must be struck between good AI and good gameplay. The two do not necessarily work together to provide value for the player. John Anderson explains this:

“Then the developer is faced with a choice of spending several more months to get the AI right, or release a fully functioning game with limited AI capability. Most choose the latter.” [GAMEAI02]

There are, therefore, instances where the developer’s initial intent for a certain kind/level of game AI simply has to be shelved for the sake of the development cycle. Of course, the AI must be replaced by something, and more often than not, simple hierarchical rules-based systems are implemented, which use game universe knowledge that they should not logically have. This removes the necessity of sophisticated “guessing” AI, but in terms of the gameplay experience, it is not always to the player’s benefit.

The choices appear stark. Either the AI has to be made sophisticated enough that, in the face of uncertainty, it can still reason (like a human can), or it has to be made into a system that can guess the probable best solution from a collection of possible solutions, or it is entirely stochastic. Underpinning these is the possibility that the AI can have access to in-game information that helps it to decide, providing a shortcut that allows the developers to remove some of that guesswork.

For example, sometimes knowledge of the environment is tantamount to cheating. Now, we are not suggesting for a moment that entities can see through walls, or that their vehicles do not obey the laws of physics, but it stands to reason that the virtual driver of a virtual car is more in tune with that car than the real player who has imprecise feedback from his own vehicle, via a control pad. If we coupled together the various algorithms that developers put in place to govern the physical movement of all vehicles in the game, as well as to provide thresholds for the virtual drivers, the net result could be a perfect opponent. But it must also be balanced by the need to be fair and the mandate of providing an enjoyable experience.

Bearing this in mind, there are four general categories of AI that will be covered in this chapter, broken down roughly by genre. The hope is to produce a reasonable overview of AI as it is currently used in the industry and as building blocks for the development of the theme of this book: usable AI and A-Life techniques. These categories are:

  •  Movement: The manipulation of NPCs within the game universe.
  •  Planning: The high-level control of behavioral threads within the game universe.
  •  Interaction: Managing the interface between the game universe (and entities) and the player.
  •  Environmental: The appearance and objects of the game universe as manifested toward the player.
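To make the relationship between these categories more concrete, the following sketch (in Python, purely illustrative) wires the four layers into a single update loop. The class and function names are invented for the example and do not correspond to any particular engine.

# Hypothetical illustration of the four AI categories cooperating in one loop.
class PlanningAI:
    def decide(self, npcs):
        # Hand each NPC a goal; a real planner would weigh strategy here.
        return {npc: "advance" for npc in npcs}

class MovementAI:
    def steer(self, npc, goal):
        print(f"{npc} moves to satisfy goal: {goal}")

class InteractionAI:
    def respond(self, player_input):
        print(f"Adjusting feedback and player aids for input: {player_input}")

class EnvironmentalAI:
    def react(self, npcs):
        print(f"Game universe reacts to {len(npcs)} active entities")

def update(planning, movement, interaction, environment, npcs, player_input):
    orders = planning.decide(npcs)         # high-level plan first
    for npc, goal in orders.items():
        movement.steer(npc, goal)          # low-level positioning per entity
    interaction.respond(player_input)      # manage the player-facing layer
    environment.react(npcs)                # the game universe itself responds

update(PlanningAI(), MovementAI(), InteractionAI(), EnvironmentalAI(),
       ["guard_1", "guard_2"], "throttle+steer_left")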

Not every game will use all of these categories, as we shall see, and some rare games will need to employ at least one technique in each area in order to maintain the assumed reality of the game universe. However, AI should not be used gratuitously. It should either provide a challenge to the player or enhance the experience. AI for the sake of AI must be avoided. If the developer is tempted to add something that seems like a clever bit of coding but does not advance the game or enhance the player’s experience, it is unnecessary. It wastes CPU resources and development resources and could see the project abandoned entirely.

Movement AI

Any game in which the in-game agents (opposition, NPCs, co-operative characters, and so on) move needs some form of control so that movement can be farmed out to routines that provide intelligent behavior. This is true of practically all genres; movement is a key part of the video game experience. Having said that, some games require more intelligence than others. In shoot-em-up games, for example, the incoming patterns of flight enhance the experience rather than detract from it. The player’s ability to read and counteract these patterns is part of the genre’s appeal. We can mix in some clever in-flight decision-making—such as attack, evade, and so forth—but the actual movement of the entities should remain prescribed.

The counterpoint to this is the AI needed to control loose flight, as in space combat games. Here, the movement AI is constantly looking for ways to position an enemy attack on the player’s craft, while staying out of trouble. Patterns or standard maneuvers can be employed, but they just form part of the AI, rather than being strictly prescribed. This is also common to racing games, such as Wipeout, Burnout, and Formula 1. The driver AI has to achieve goals, avoid traffic, and remain reasonably challenging, but not be superhuman.

When the movement AI takes on the management of a collection of common goal-centered entities—team-oriented sports simulations, for example—it becomes reliant on the planning AI. In other words, the movement AI implements low-level positioning on an entity-by-entity basis, according to the instructions of the planning AI.

Planning AI

Planning is a two-part affair. On the one hand, there are strategic engines designed to create an overall scenario in which the game tries to achieve a general goal. On the other hand, there is the planning associated with restricted movement. In Civilization, for example, the AI needs to plan the eventual overthrow of the player by an opposing force. Concurrently, however, instances might be necessary where movement or action must be planned at a lower level within a universe bound by hills, rivers, and time factors.

In soccer and other team-based sports simulations, planning can also work at two levels: the immediate play (which may change, depending on the player’s countermoves) and the general strategy for the match being played. The immediate in-game planning occurs at two levels: each individual and groups of individuals, be that the team as a whole or subgroups within the team, such as defense, attack, forwards, and so on. This is real time, where the plan might have to shift, change, and adapt, depending on the actions of the player and his cooperatively controlled characters.

Even a soccer sim such as LMA Manager 2006 needs this kind of planning. The player can only direct, through the actions of the manager agent within the game, and the agents that are under this superficial control are entirely autonomous. However, the player has given them tactics to follow and can call these into play during the game. There may even be longer-term planning AI, which keeps abreast of the team’s progression through a season and intelligently manages injuries, transfers, and training activities. These can be broken down into three discrete areas:

  •  Immediate planning: Reaction within a tactical plan.
  •  Tactical planning: The current goal as a subgoal of the main strategy.
  •  Strategic planning: The general long-term plan.

Whether or not to use this level of AI granularity within the system will be a design decision that is also affected by the genre. It is important to make the appropriate choices initially when creating an AI-driven game, because it is usually hard to change approaches late in the development phase.

Interaction AI

This is the most complex video game AI category. It involves everything from the actual mechanism of interaction between the player and the game to the way that the game universe feeds information back to the player. In a simple text adventure, for example, it might be paramount for the player to communicate with NPCs via a textual interface that emulates conversations. This implies some kind of communication AI—an area that is understood quite well, but which is often less successfully implemented. We might understand the challenges of conversational AI, and we definitely understand why it is so difficult to achieve, but we are unable to solve it. Were this an area of AI that had been solved, the Turing Test (and the competition that goes with it) would already have been passed. We do, however, have a collection of sophisticated models to deploy that ought to work reasonably well within the context of a game.

Games that use immediate-control methods, such as driving and other sports simulations, might implement intelligent control methods, such as the driving aids often found in Formula 1 simulations. These need to use AI that is similar to the opponent AI in order to help the player (especially casual or new players) cope with an extremely complex simulated environment.

Then, in games where the player controls a team leader and the interaction of other team members is handled by AI (if only temporarily), management must also be maintained in an intelligent fashion. As in soccer simulations, this will likely be a mixture of prescribed AI and flexible management algorithms.

Of course, AI exists in everything in between. All genres will have some form of smart interaction. The previous three examples are only a small taste of the possibilities. A single game will likely employ a mixture of approaches that cater to the player’s interaction with the game universe, as well as the opposition and the game universe itself. These will likely be layers of approaches—the buildup of a complex behavioral model from simple decisions based on AI routines. This sounds complex, but it is easy to understand if we consider games that include smart weapons with automatic aiming (low-level layer) or the interaction of squad members in achieving a task (medium-level layer).

This concept of layers of AI will be expanded upon as the book continues. For quick reference, it can be equated to the span of control of the system. So, low-level AI might be concerned with single items (objects or agents) within the game universe. Medium-level AI might deal with groups of low-level items working together. High-level AI might deal with organizing the general strategy of the AI responsible for providing opposition to or cooperation with the player’s agents. These layers are not static, however, and a game will define its own spans of control according to the general granularity of the control systems. It may be that the lowest form of control is a squad, or that, as in Creatures, the highest-level layer is a Norn, or in-game creature.

As always, there are two sides to this—such as routines to help level the playing field for the player by counteracting the perfect-opponent syndrome, as well as obstructing (hindering) the player’s success in a measured fashion. We don’t want a game that is impossible to complete; but then again, we don’t want a game in which simple memory is enough to take the player through to victory.

Environmental AI

Environmental AI is a tricky proposition. It doesn’t deal with opponents, but with the game universe itself—for example, in the game SimCity. Often categorized as best used in games in which the player is, in a sense, playing against himself, environmental AI is generally employed in simulations and god games. Environmental AI essentially handles the game universe’s reaction to the player’s actions. This might manifest itself through movement AI or interaction AI and might contribute to the input required by the planning AI.

With this in mind, we also need to discuss (at a high level) some basic AI theories that can be used to implement these four areas. Since there is no sense in reinventing the wheel, if the developer can find some tried-and-tested techniques, then this will save implementation time, which can be better used for designing AI into the game, rather than trying to bolt it on afterward.

Common AI Paradigms

Much of AI programming in video games is related to the ability to find a solution within a search space of possible solutions. This solution could manifest itself in a number of ways—chess moves, behavioral responses, or textual output—and employ everything from alpha-beta pruning to Hopfield nets (we’ll get to these shortly). There are also some neural network (NN) possibilities within search strategies, especially when providing learning algorithms to augment the basic behavior of the system.

The search strategy employed has to be efficient within the context being used, as well as good enough to provide an answer that is acceptable to the system. We need a strategy to identify possible solutions within a solution space and then select one that satisfies the criteria, sometimes taking into account the opponent’s own moves or limitations within the game universe. This means that sometimes a straight calculation is not good enough, and we need to do a simulation to figure out the answer. Therefore, ways of doing this must be determined that are computationally inexpensive—ranging from simple statistical analyses to neural networks and fuzzy logic.

Behavioral AI need not necessarily depend on searching for solutions. It can also employ techniques to determine the general behavior from external stimuli. This can be anything from a simple finite state machine (FSM), in which the input determines an action and end state based on the current state of the FSM, to a neural network that has been trained to perform an action by using positive feedback. These techniques will become building blocks that we can use to model AI in a video game and provide a basis for adding A-Life to video games, as well.

Simple Statistical Analysis

Statistical analysis is still widely used in AI routines, especially in video game development. Analyzing statistical feedback from the game universe is a good way to determine the behavior of a component within the system. Statistical analysis is used in conjunction with state machines and other traditional methods to empower the AI within the system. Feedback and analysis of that feedback still have a large role to play, however, and should be made the most of, in the opinion of the author.

One way to look at it is that statistical analysis takes observed probabilities, applies them to the evolving situation, and derives behavior. For example, if we determine that a given chess board state can be generated from the current state, and that it produces a winning situation more than 50% of the time, we might choose it over a state that has produced a winning situation less than 50% of the time. However, by itself, this would be unlikely to generate good chess gameplay. Instead, we need to apply some analysis to the games and decide whether 50% is an appropriate benchmark. In doing so, we might determine that a value of less than 99% is not statistically significant. In other words, just because we won half the time from a new chess position, it does not mean that we will win this time. However, if we have won 99 times out of 100, the new chess position is a good one to take in the next move.

This is one form of statistical analysis that can be used. Typically, statistical analysis can also provide input for most other kinds of AI, such as NN, FSM, or fuzzy state machines (FuFSM), and allow us to use statistics to generate behavioral responses.
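As a small illustration of the chess example above, the following Python fragment picks a candidate move from observed win rates, but only trusts a record once enough games back it up. The moves, numbers, and threshold are invented for the example.

# Observed results per candidate move: (wins, games played). Invented data.
history = {
    "push_pawn":   (55, 100),
    "trade_queen": (9, 10),     # high win rate, but too few samples to trust
    "castle":      (99, 100),   # high win rate with strong support
}

def score(wins, games, min_games=30):
    # Ignore records with too few games; otherwise use the observed win rate.
    return wins / games if games >= min_games else 0.0

best = max(history, key=lambda move: score(*history[move]))
print(best)  # "castle": a well-supported 99% record beats a thin 90% record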
This kind of decision-making via probabilistic knowledge helps to reduce complexity and increase the observed efficiency of AI algorithms. Statistical analysis techniques also produce evolving behavior in which probable actions are based on weighting outcomes balanced against probabilities. This allows the AI engine an additional level of sophistication—and very cheaply. The analysis is fast, and the result can augment the basic decision-making process.

Finite State Machines

FSMs are the building blocks for some quite complex AI routines. They are also useful for modeling discrete behavior where the set of outcomes is known. If we know the number of states that are possible and how we can transition from one state to another based on an action and generate an action as a result, then we can model it with an FSM.

A state of an in-game object (agent, avatar, and so on) is a terminating behavior. For example, a guard might be in a state of patrolling, or a soccer player in a state of dribbling with the ball or marking an opponent. The AI system can signal that the in-game objects or agents can move between states, which they might do with or without an accompanying direct effect on the game universe. At any given moment in time, the machine is in a possible state and can move to another state, based on some tested input. If a guard is playing cards, and he hears a noise, then we can change his card-playing state to a searching state, in which he will try to locate and neutralize a perceived threat.

Within an active system, states can be arbitrarily added, depending on the application of other AI routines; but this is less common. However, scripting interfaces to FSMs are actually quite common, as we shall see. It provides a good way to create intelligent behavior, so long as the associated actions and triggers are well defined in AI terms.

Usually, each state will also have an action associated with it, and we typically go from one state to another via a given action. These actions can be simple or more complex. A simple action in a fighting game might just be to move forward or backward. A more complex action might lead to a transition to another FSM (or to run one in parallel), such as an entire attack sequence, triggered by a single state change. In this way, the control granularity can be changed, and the level at which actions are handled depends on the implementation. For example, we could decide that a given action sequence, like fighting, is too complex to be represented by a collection of FSMs. Therefore, a hard-coded AI routine would be provided, rather than an FSM that provides transition details from the current state to the fighting state. This moves beyond an AI routine into scripted behavioral patterns, with the AI used to choose between such scripts.
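The card-playing guard can be captured in a handful of states and triggers. The following Python fragment is a minimal sketch of such a machine; the states and events are invented for the example.

# Minimal FSM for the card-playing guard described above (illustrative only).
TRANSITIONS = {
    ("playing_cards", "hears_noise"): "searching",
    ("searching", "finds_intruder"):  "attacking",
    ("searching", "all_clear"):       "playing_cards",
    ("attacking", "intruder_down"):   "playing_cards",
}

class Guard:
    def __init__(self):
        self.state = "playing_cards"

    def handle(self, event):
        # Move to the next state if a transition exists; otherwise stay put.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

guard = Guard()
print(guard.handle("hears_noise"))     # searching
print(guard.handle("finds_intruder"))  # attacking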

Alpha-Beta Pruning

Alpha-beta pruning is, in a sense, the combination of two game theories. It uses a minimax test, which attempts to establish the value of a move based on minimizing the maximum gain from that move by an opponent. Based on a plethora of possible moves at a given depth of a move tree, minimax is designed to return, for a given player, a balanced move that offers good safeguards against a player winning. However, on its own, minimax requires analysis of the entire search tree, and therefore its use needs to be optimized in an environment where time is an issue, such as in video games. Alpha-beta pruning offers that optimization. We can prune out those subtrees that yield results that do not advance our search for the best move.

Commonly, alpha-beta pruning is used in strategy games to help plan a sufficiently efficient (in game terms) path through the various game states. Think of chess, for example, where it is possible for the AI to create a look-ahead of every potential board state from one position to another. This is a waste of resources; alpha-beta pruning can reduce this by discarding those states that are less efficient or actively wasteful. This could be used in almost any genre where there is the possibility that a path through the various game states can be abstracted as a series of intermediate states. Whether this has an actual application in a shooter, for example, is up to the game developer; it may not be worth the effort or may not have an application.

In a video game, let us assume that the AI is capable of analyzing a new state from an existing state and can give it a score. The score attributed to the state, from the game’s point of view, is optimized toward the player’s failure; the worse off the player becomes, the better it is for the game (opposition). When using alpha-beta pruning, the trick in the minimax algorithm is that the AI also takes into account the opposition’s moves (in this case the player’s), as well as its own. So a given move is examined to find out if it is better or worse than any previously known move. If either of these is true, we move on and discard this move. After all, if the move is really good, then the opposition will do everything to avoid it; if it is worse than the current worst move, then it is just a bad move. Anything in between is a possible move.

Then we pass the move along and swap the worst (alpha) and best (beta) scores and take a look from the point of view of the opposition, having first adjusted the alpha score to match. This is our current best move. When a point is reached in which we can go no further—that is, the resulting move ends the game—then this move becomes our choice for the next play. In selecting it, the unlikeliest and worst moves have been avoided, and the search time of the game tree has been reduced. There are two caveats: First, we need to be sure that the nodes are being evaluated in the correct order; second, we need a game tree to search. As long as both of these are satisfied, an appropriately good move will be selected.
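The following Python sketch shows the shape of the algorithm on a tiny, hand-built game tree, where leaf values are scores from the maximizing side's point of view. The tree and scores are invented for the example.

# Minimax with alpha-beta pruning over a small hand-built tree (illustration).
# Leaves are scores; internal nodes are lists of child subtrees.
def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):      # leaf: return its evaluation
        return node
    if maximizing:                          # the AI picks the best for itself
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # opponent will never allow this line
                break                       # prune the remaining children
        return value
    else:                                   # the opponent minimizes the AI's gain
        value = float("inf")
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

tree = [[3, 5], [6, [9, 2]], [1, 2]]
print(alphabeta(tree, maximizing=True))  # 6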

A*

The A* search algorithm is also a game tree search algorithm, but without the caveat that the entire game tree needs to be known at the time it is executed. The game tree is built as the algorithm goes along, and it is designed to return a path through that tree. Like the alpha-beta algorithm, A* will discard any path that is worse than the current path, but it does not take into account the alternative two-player nature in the same way.

The most common illustration of A* is its use as a navigation (mapping plus planning) algorithm. Assume that we have two points on a map: A and B. We know that the best route from A to B is also the shortest one—a straight line (that is, as the bird flies). This does not take into account any information regarding terrain. In an equal-cost world, where movement from one point to another carries no particular cost or blocking artifact, the best route from point A to B is indeed a straight line. However, our map shows us that there are some good paths, bad paths, and impossible paths. The A* algorithm is designed to return the best path, taking all of the map’s possible paths into consideration.

So, our A* game tree is one of possibilities, where each possibility is weighted by the approximate cost of selecting it in terms of the most efficient path and any costs implied by its selection (a hill, for example). In essence, the A* algorithm will select the most efficient path through the search space, based on an evaluation of these two variables. Of course, there are many ways to decide what constitutes the “best” choice, depending on the nature of the game. Therefore, the chosen path is the result of the shortest path plus the anticipated cost of taking that route. When the shortest (most efficient in terms of in-game cost) path is not known and is set to zero, it becomes a straight analysis of cost. While this is still valid, it might not produce an appropriate result. The standard terminology for the A* algorithm is that it is a heuristic search algorithm.

Neural Networks

“Neural network” (NN) is a buzzword in both AI and video game design. It refers to modeling in a way that mimics human brain organization, using memories and learned responses. Imagine a soup of “neurons” that are able to take inputs and post outputs to other free neurons within the system—connected by their inputs—allowing a trainable network of neurons to be built. In a sense, the connections themselves represent the learning experience.

The network can be trained by adding neurons with specific qualities, for use either in-game or during the play session. Paths through the network can also be reinforced by training, as well as new paths formed. In a pattern-recognition network, for example, we first train it with connections of neurons that fire according to sensory input that is represented by a shape, along with the information about what that shape is. We then give it random shapes and let the network decide whether the new shapes are the same or not. The more it gets right, the better the network will be trained toward recognizing the target shape. The exact nature of the neural network and individual neurons within it will depend entirely on the game being designed. This only holds for those NNs that are allowed to continually adjust themselves once the initial training period has finished.
If the NN is provided statically trained (that is, its weights are set by training, or hard-coded, and it is not allowed to evolve), then, of course, no matter how many of the random shapes it gets right, it will not progress further. In my opinion, this is one of the greatest errors of NN implementations; a fully trained NN can be allowed to evolve by adding other mechanisms that alter its output. The original trained NN can be kept as a template, and subsidiary NNs can then be used to adjust, on a continual basis, the choices made by the NN so that the game can adjust to new information. After all, we humans do not stop learning just because we leave school.

One thing that is common to all implementations is that the NN needs to be initialized at the start of the play session, either in code or as a data file. This training might contain general strategies that can be modified during the play session to match the machine’s success against the player’s, which provides a level of adaptive experience. In this way, once an NN has been trained in the development stages, it can be supplied as a blank network of weighted nodes that is then initialized with the results of training, which restores it to its trained state. Following this, the NN can then be allowed to adapt further, augmented with other NNs to allow an adaptive network to be simulated, or kept static, depending on the requirements of the game.

Expert System

An expert system can be described as an example of knowledge-based reasoning. It is a well-understood and fairly mature technology. Generally speaking, an expert system comes with built-in knowledge. It recommends courses of action based on question-asking that narrows down choices or opens up new avenues of possible solutions. The system can also be improved by adding to the knowledge during a play session, depending on available processor resources. In the majority of cases, however, training an expert system needs to be done manually. This can be done by using a design tool to code the knowledge base or (less commonly) by extensive play-testing to extend the system’s capabilities from a basic starting point.

Unlike neural networks, expert systems are mostly simple rule-based systems. The goal is to take inputs and decide on an outcome. In the real world, this is like medically screening toward the diagnosis of a specific ailment. An expert system is, therefore, built by hand (or procedure) in such a way that the questions are logically linked in a path through the system. An NN, as we have seen, is by contrast a black box that is allowed to learn on its own within the parameters of the game universe.

The expert system asks questions, allows the user to select from the options, and then recommends one or more courses of action. This information can be fed back into the system either directly—for example, “9 out of 10 people agreed, so choose option B”—or indirectly via another AI technique. The indirect AI might need some form of memory implemented as a neural network to reinforce the given course of action that resulted in an improved condition. Expert system implementations can become very complex and processor intensive, so care needs to be taken. They are also generally quite inflexible rule-based systems that are not well suited to general problem-solving. Having said that, an expert system is very useful within a video game development project because it deals with data rather than behavior; therefore, it can be adapted to different data sets.
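A toy version of such a rule-based system might look like the following Python sketch, where each rule inspects the known facts and contributes a recommendation. The rules and facts are invented for the example.

# Tiny rule-based "expert system" sketch: facts in, recommended action out.
def recommend(facts):
    # Each rule is a (condition, recommendation) pair, checked in order.
    rules = [
        (lambda f: f["enemy_visible"] and f["ammo"] == 0,   "retreat and reload"),
        (lambda f: f["enemy_visible"] and f["health"] < 25, "fall back to cover"),
        (lambda f: f["enemy_visible"],                      "engage the enemy"),
        (lambda f: not f["enemy_visible"],                  "patrol the area"),
    ]
    for condition, action in rules:
        if condition(facts):
            return action
    return "idle"

print(recommend({"enemy_visible": True, "ammo": 0, "health": 80}))    # retreat and reload
print(recommend({"enemy_visible": False, "ammo": 12, "health": 80}))  # patrol the area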
Fuzzy Logic

Another buzzword, fuzzy logic allows us to choose the inclusion of different sets of values at the same time. In other words, if our video game is creating a population of entities to form an army, we might decide that they could be aggressive, courageous, and intelligent, but at varying levels. For example, we want one entity to be somewhat aggressive, not very courageous, and quite intelligent—all at the same time—while another entity of the same type will be very aggressive and courageous but not too intelligent. We can therefore give approximate rather than precise values for each criterion. This is different from, say, Boolean logic, which is based on precise binary values—the entity in question is aggressive (or not), courageous (or not), and clever (or not).

Fuzzy logic also allows the grouping of more than one (possibly contradictory) set. In the previous case, although our entity is quite intelligent, it might make less-intelligent decisions occasionally. So a decision can be made based on several inputs, and that decision can be fuzzy in its outcome. For example, if the entity is feeling aggressive but a bit cowardly, and it is unable to make a right decision, it will attack only a little bit (low level of aggression). This leads us to fuzzy state machines, in which we discard the crisp decision-making of a finite state machine in favor of being able to introduce a weighted probability or pure chance of taking a specific course of action over another. This is akin to mandating that if our entity is feeling aggressive and clever, then it should attack most of the time.

Hopfield Nets

A Hopfield net is a special kind of neural network in which each node attempts to attain a specific minimum value, taking into account its current value and the values of its neighbors. Each node in the network is visited at random until none of the nodes change state or an arbitrary limit has been reached. Hopfield nets can be used for almost anything, from pattern recognition to trained behavioral responses, and are very useful in video game design because they can be easily trained; simply modify the minimum value that each node aspires to. At the same time, because they are based on a deterministic algorithm, each node’s behavior is easy to model.

A slight modification to the general algorithm is known as a stochastic Hopfield network. In this variation, an entity that needs to select from various possible actions tries each one in turn (simulated) and picks the best outcome or another random possibility. This randomness gives us the stochastic behavior. Hopfield nets are useful in providing some natural logic, but evaluating one in real time can be time consuming, despite the fact that they are easily trained and easy to model. A Hopfield net also relies on a limited (or at least finite) problem space, so it is not appropriate for cases in which the player might do something unexpected.

APPLYING THE THEORIES

Having looked at some examples of the theories behind video game AI, we can now turn to several genres and discuss examples in which AI has been used and, more importantly, which techniques might be employed in current projects in order to implement AI behavior. Note that much of what is presented here is conjecture—intelligent guesswork combined with common knowledge. The more successful a technology is, the less willing a developer might be to share it with the competitive video game marketplace.

However, some basic techniques and technologies that have been implemented over the years are well understood by most developers. Independent developers are also occasionally willing to share their experiences through articles, Web sites, and interviews. In addition, through conferences on AI and books, the basic techniques are shared and honed, and in a professional capacity, developers are often more than willing to share their own personal triumphs of game AI design. For laypeople, these remain out of reach, even though there is an active exchange of ideas.

Motor-Racing AI

When creating AI for a motor-racing game, the enemies are the opposite numbers on the track, and everyone is fighting against each other, the clock, and the track conditions. Therefore, the AI will generally be geared toward planning, movement, and interaction. Of course, the player might also be able to tune his vehicle; in this case, AI is needed to provide a behavioral model that can be modified, such as using straight simulation and calculation of the effect of forces, or it could be something more adaptable.

The basic steering algorithm could feasibly be based on “radar-style” feedback from the game engine, giving the AI positional information to work with. From this, the planning AI needs to decide what the best route through the track might be, although the racing line could be encoded for each track and the AI allowed to follow it. In either case (and the latter is less computationally expensive), the AI still has the task of negotiating the course, taking into account the positions of other vehicles, and managing speed, cornering, and obstacle avoidance.

Interaction AI, such as helper routines to aid the player in steering, braking, and throttle control, is mainly reserved for Formula 1 simulations. But it might also appear in games where the racing experience is technical and should be tailored to the evolving experience level of the player. In addition, each driver’s behavior also needs to be modeled. This might include routines that allow the AI to display aggression or prudence and in-game vengeance against the player for previous actions. Different drivers might have different traits; therefore, the AI may make different decisions, based on the situation being evaluated.
This kind of behavioral modeling might need quite advanced fuzzy logic to be deployed, along with some way to remember and learn behavior, based on an observation of the player (and even other drivers). For example, these algorithms might lead to aggression combined with nerves of steel, possibly equating to recklessness on the part of the AI-controlled driver, or they could alternatively lead to prudence. An aggressive driver might be tough to beat but prone to vengeful attacks on other drivers, which might lead to mistakes. A prudent driver, however, might be easy to beat, as long as the player holds his nerve.

The S.C.A.R. (Squadra Corse Alfa Romeo) racing game also deploys another trick—the ability to induce a “panic attack” if the player drives up very close to another driver (tailgates). This will lead to a “wobble,” in which the driver loses some control. What is important is that any driver can induce these wobbles—be they AI or human controlled. The end result is a temporary suspension of regular AI driving skills in favor of a “highly stressed driver” behavioral model. This might not be implemented as a behavioral model at all—just exhibited as a twitch in the steering controls—but it is still an example of worthwhile AI.

In addition to the behavioral model, which may or may not be implemented, the AI also needs position information to steer the vehicle. This includes deciding where to be in relation to other vehicles, both in general and at close quarters. For example, rubber-banding can be used to keep the car a certain distance from its neighbors, thereby reducing the threshold for more aggressive drivers. The AI also needs to have built-in scope for over-compensation errors and other traits that might lead to mistakes in behavior. Otherwise, the player will fall victim to the “perfect driver syndrome” and find that no matter how hard he practices, he cannot beat the opponents.

Based on these concepts, the planning and behavioral modeling AIs now have a path to follow. This is where physics comes into play—the handling model, when applied to computer-controlled vehicles, will mirror the options offered to the player; in other words, no cheating, except when allowed to help the player (such as to slow down the lead vehicles). The accurate physics model must then be coupled with the AI in order to keep the driver’s car on the road while following the direction of the planning AI. This might also include errors in judgment or mistakes in the driving model. These can be linked to the player’s own behavior—in other words, worse (more mistakes) for less-experienced human players. This provides an alternative to simply slowing down the lead vehicles, which can appear artificial.

The AI model that does the driving should also be able to be manipulated with feedback from the road so that decisions can be made based on the relationships between steering, power, speed, and road traction, as well as other factors, such as wind resistance. We might have a prescribed driving algorithm that allows the correct compensation of actions versus the actual conditions, which is then augmented by the AI to produce the behavior. This “feeling” for road conditions needs to be mapped into action and take into account that conditions might arise that change this feeling and possibly the handling model. In other words, driving conditions that affect the player should also affect computer-controlled drivers.
This includes vision, wet surfaces, wheel types, and obstacles in the road, as well as power-ups and weapons for futuristic racing games such as Wipeout: Fusion. The processing of this information leads to certain actions being performed, which is the output from the AI routines. We could model these at the finite state level (speeding up or slowing down), but we also need fuzzy control mechanisms to allow nondiscrete decisions within gradients of possibilities.

Examples

The aforementioned S.C.A.R. manages to imbue drivers with different personalities, thanks to an RPG-style capability matrix that weights decision-making, probably using fuzzy logic algorithms to model driver behavior. It is quite successful, and the same weighted matrix is also applied to the player. By experimentation, it might be possible to use the matrix to augment the behavior of the control system and vehicle, again using some fuzzy logic. The end effect is that as the player progresses and improves his score (in the values) within the matrix, his experience also improves and evolves.

Each driver that is pitted against the player reacts differently when its score is compared to the player’s in each category. For example, if the player notices some drivers with low aggression, he might choose an intimidation strategy to achieve a better score or driving position. (Before the race begins, the player is given the opportunity to examine the other drivers on the grid through a user interface.)

The Formula One Grand Prix 2 developers claim to have taken this one step further and have modeled their AI on real Formula One drivers. Their data model contains information pertaining to the behavior of each driver so that the AI can make the appropriate decisions during the race. This approach also needs feedback and memory during a play session. More aggressive drivers will tend to retaliate, which again might make use of AI techniques to modify the default behavioral model in a neural network. This could result in a particular driver AI “learning” more aggressive driving modes in certain situations, such as when encountering an entity that had cut in front of the driver at some point in the race. In addition, there are “career” modes. Drivers with less to lose take bigger risks; drivers with bigger engines learn to accelerate harder out of turns but slow down when entering them because they know they can make up the time elsewhere.
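One way such a capability matrix might weight decisions is sketched below in Python. The traits, numbers, and actions are invented for the example and are not taken from S.C.A.R. itself.

import random

# Invented example of trait-weighted decision-making for an AI driver.
# Each driver has fuzzy trait levels in [0, 1] rather than yes/no flags.
drivers = {
    "hothead": {"aggression": 0.9, "prudence": 0.2},
    "veteran": {"aggression": 0.4, "prudence": 0.8},
}

def choose_action(traits, gap_to_leader):
    # Blend traits and the race situation into weights for each option.
    weights = {
        "dive_for_overtake": traits["aggression"] * (1.0 if gap_to_leader < 5 else 0.3),
        "hold_racing_line":  traits["prudence"],
        "defend_position":   0.5 * (traits["aggression"] + traits["prudence"]),
    }
    actions, w = zip(*weights.items())
    return random.choices(actions, weights=w)[0]   # weighted choice, not deterministic

for name, traits in drivers.items():
    print(name, choose_action(traits, gap_to_leader=3))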

Potentially the most difficult part of driver AI will be selecting the best path, based on the positions of other drivers, the state of the car, and so on. This requires maintaining a certain speed and distance from other cars within the parameters of the environment, such as the track and road conditions.

Action Combat AI

AI in action combat games has to strike a balance between perfection and incompetence, which is extremely difficult to get right. This genre includes first-person shooters (FPSs) such as DOOM, Quake, and Half-Life, as well as mech-oriented games such as Heavy Gear, Tekki, and the slightly ill-fated Transformers: The Game. If the AI is overdone, then the result can lead to impossible opponents, such as the DOOM Nightmare Mode; if it is done badly, the result can range from the comical to the irritating. This is especially true if the AI is supposedly cooperative but ends up being more of an obstruction.

One aspect of FPS AI in particular is the ease with which targets can be acquired and dispatched. On the one hand, the player needs to be able to locate a target and shoot it with an appropriate level of difficulty; on the other hand, the AI has to ensure that the enemy targets act in a way that puts them in a position to attack the player. More often than not, the actual implementations lead to predictable, dependable, and easy-to-confuse AIs that either fail to account for their environments and are unable to deal with new situations, or follow such a strict pattern that the enemies’ positions are unrealistic, but they are easy to shoot. Two examples spring to mind—the slightly odd behavior of the pedestrians in Grand Theft Auto and the suicidal behavior of the traffic in Transformers: The Game. In both cases, the AI completely fails to model intelligent behavior, leading to comical effects with no real drawbacks, aside from slightly embarrassed developers.

As with the driver AI, information to be processed includes items such as the position of opponents in relation to the game universe and the player’s position, internal status (damage, weapons), and the strategic plan handed down by the planning AI. The position of opponents can be based on a radar-style system—line of sight (visual) coupled with predictive AI—to track the player in an intelligent fashion. The AI will likely combine data on possible paths with some form of statistical measurement of progress, especially in free-roaming games or flying/shooting environments.

The behavioral AI needs to take into account the status of the actor, which leads to behavioral modeling. This can be represented as FSMs or fuzzy logic, enabling the AI to make a decision based on the temperament and status of the entity. In other words, if the entity is hurt, it might decide to hide and take no risks; but if not hurt, the entity might decide to cause as much damage to the player as possible. Managing this incoming information and making decisions needs an objective AI with responsibility for setting short-term goals with respect to the rest of the known status data (Where are we going? What do we have to do?). This might be a combination of A* pathfinding with some fuzzy logic to discover the best solution, or it could just be a rule-based adaptive model.
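A minimal sketch of this kind of status-driven goal selection is shown below in Python, blending a fuzzy notion of “hurt” with a bravery trait. The thresholds and names are invented for the example.

# Invented sketch: choose a short-term combat goal from entity status.
# Fuzzy memberships are used rather than a hard hurt/not-hurt threshold.
def hurt_degree(health):
    # 1.0 = badly hurt, 0.0 = unhurt (clamped to the [0, 1] range).
    return max(0.0, min(1.0, (60 - health) / 60))

def choose_goal(health, bravery):
    hurt = hurt_degree(health)
    hide_score = hurt * (1.0 - bravery)
    attack_score = (1.0 - hurt) * bravery
    return "hide and recover" if hide_score > attack_score else "press the attack"

print(choose_goal(health=15, bravery=0.3))   # hide and recover
print(choose_goal(health=90, bravery=0.7))   # press the attack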

It might be tempting to cheat the player a little as an alternative to implementing complex AI. For example, opponents should be using line of sight in order to determine whether the player is in view and where he might be going. This needs an observation and prediction AI algorithm. However, it would appear easier to just retrieve location data as required from the system, because the game knows where all of the elements are. In fact, this is the same information that would be used for radar-style views (though radar might not be part of the game universe or scenario). If radar is not part of the equipment, then the AI routine needs to take into account the fact that radar cannot be used, and that algorithms derived from other information (such as shots fired, or visual or other input) must be used to guess at the player’s position, or it must adjust the data to negatively compensate for cheating. The design needs to support one or the other.

In short, the player and opponents should be on the same level. The AI can be tough, but it should remain fair, unless some form of unfairness is supported by the scenario. For example, the DOOM Nightmare Mode includes a resource boost for enemies, making it nearly impossible for all but the best players to overcome.

Within the AI, there are several basic behavioral possibilities that can be implemented at the low or high level—such as run, hide, shoot, track, or avoid. A system can be modeled in which the basic states are chosen by the AI algorithms, but each one is encoded as part of the game engine; or we can give more scope to the AI, enabling it to control movement in a scripted fashion. The granularity of the behavioral modeling allows for implementation of the behaviors supported by script. For example, in Quake, the AI is reasonably fine-grained and allows access to the various entities' AI routines. This gives the player the ability to play with the AI and tweak it. This level of granularity adds an extra dimension to the game while providing a good platform for sculpting the AI behavior during development.

Feedback from the game environment will also lead to actions. Here the AI is playing a reactive role rather than a proactive one. So instead of making a decision in the next cycle based on what the AI wants to do (planning, objective, or behavioral), a reaction is modeled on something that has happened that is beyond the control of the game AI. Observation of the game universe (or the entity’s own state) might also lead to a change in the current behavior. If a state machine is being used to manage general behavior, this might be as simple as a monster running away after being mortally wounded. Of course, we might augment this by using a fuzzy state machine that allows the monster greater freedom to decide on an action, coupled with some fuzzy logic to model its actual behavioral tendencies. The state “flee” might still be the end decision; the AI routines that actually manage the task of finding somewhere to hide are encoded not as behavior, but as algorithms inside the program code.

The same can be applied to weapons management—such as select, fire, or reload—in which the AI has an opportunity to make the perfect choice each time. Again, this would be unfair unless the player had proven himself to be sufficiently skillful to possibly be victorious. This behavioral change with respect to player skill is an AI technique that has been implemented with only marginal success thus far.
Some games manage it better than others, as we shall see; but the overriding theory is that certain aspects of behavior can be changed, depending on the player’s skill. We might choose to alter the accuracy of a weapon in the hand of an opponent or have it choose an arbitrary weapon rather than a correctly ranged one. As in the driving AI example, this can simply be embodied in a series of mistakes, or it might be something less easily identified, depending on the design of the game.

Examples

The game Half-Life has monsters with many sensory inputs, such as vision, smell, sound, and so forth. These monsters can track the player intelligently based on that input. Whether this is handled as modified game-universe state data (taking position data and altering it slightly) or as modeling of senses (or a combination of these) is not known. However, it does seem to work in certain circumstances. In addition, the game’s monsters are aware of their states, and help is sought if they feel that they will not prevail in a given situation. This behavior seems to work as described by the developers. Half-Life also uses A-Life-style behavior modeling (which we will come to in Chapter 3), such as flocking, to deliver a more realistic gaming experience. The combination of finite state machines (a monster is chasing, attacking, retreating, hiding) is balanced with fuzzy logic, creating flexible decision-making and modular AI that allows that flexibility to manifest itself in a variety of ways.

Examples of scripting can be found in Interstate ’76. AI programmer Karl Meissner e-mailed GameAI.com with a few observations on AI and scripting that offer some great tips for budding AI programmers:

“The script specified high level behavior of the cars such as an attack, flee or race. This allowed the designers to make missions with a lot of variety in them.” [GAMEAI03]

This is an inkling of the inner workings of the AI routines in the game engine; only the very high-level behavior is exposed to the scripting interface. The scripting was implemented via an efficient virtual machine:

“The overhead of the virtual machine is probably less than 1/2% of the CPU time, so it is efficient.” [GAMEAI04]

The virtual machine that is referred to here is a kind of interpreter within the game system that is supplied with scripted commands. It evaluates them and passes the result back to the game universe. It is known as a virtual machine because it is a machine within the machine, but simulated in software. The benefit of a scripted interface to the AI is that development can be done faster and (arguably) better. It also allows level designers to have a hand in creating the game; if it were all hard coded, then they would not have that opportunity. As Meissner notes:

“This meant the designers spent a lot of the development time writing scripts. But it was worth it in the end because it made each mission unique and a challenge.” [GAMEAI05]

The flip side to this is that the developers must implement a proper scripting language, teach the staff how to use it, and then let them experiment. This all takes time—as does testing the implemented scripts within the confines of the gameplay and universe management. If a coarse-grained interface has been used, then the other AI routines that have been implemented to produce the actual behaviors must also be tested. Again, this requires time—time that might be scarce on a large project.
Interstate ’76 uses finite state machines that are implemented in a simple scripting language:

<state>
    <action>(<actor>) if (<condition>) goto <state>

In the example, we might have multiple states that are similarly coded, and all test specific conditions. The flow of control is simple:

state ➔ action ➔ condition ➔ new_state

The action might be continuous, and “new_state” might be the same as “state.” The internal API then exposes certain possible functions, such as locating the player, attacking, checking to see if the entity or player is dead, and so on. These low-level functions are encoded into the game itself, but they could also have been implemented or run as a script. Remember that layers of scripting reduce efficiency slightly, but there is a payoff in added flexibility—the developer might not be willing to pay the price of lower performance. For the non-programmer, the explanation is simple: The scripts need to be executed inside an interpreter, which is not as fast as straight computer code. Even when the script is reduced to very simple and easy-to-execute low-level encoding, there will still be some level of inefficiency. As Meissner explains:

“Originally, there was much more fine grain control over the attacks. As the deadline approached, this all got boiled down to the function, attack (me, him), and then the Sim programmer (me) did all the work.” [GAMEAI06]

So action combat games generally make good use of AI—pathfinding, behavior, adaptive behavior, reasoning, tracking, and so on—via hard-coded AI routines and extensible scripting interfaces. There is also ample scope for improving the existing implementations, with negligible performance impact, through the use of A-Life techniques such as flocking and other emulations of natural behavior.
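For illustration, the following Python sketch interprets a small table of states in the spirit of the form shown above. The syntax, state names, and condition flags are invented; this is not the actual Interstate ’76 scripting language or virtual machine.

# Illustrative interpreter for a tiny script table: state -> (action, condition, next).
# All names are invented for the example.
SCRIPT = {
    "patrol": ("scan_for_player", "player_spotted", "attack"),
    "attack": ("fire_at_player",  "player_dead",    "patrol"),
}

def run_step(state, world):
    action, condition, next_state = SCRIPT[state]
    print(f"[{state}] perform {action}")
    # Low-level actions and condition tests would be native engine code, not script.
    return next_state if world.get(condition, False) else state

state = "patrol"
state = run_step(state, {"player_spotted": False})  # stays in patrol
state = run_step(state, {"player_spotted": True})   # transitions to attack
print(state)  # attack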

Fighting AI

In one sense, implementing fighting AI ought to be easy; it takes place in a small game universe with prescribed moves. It is also sometimes part of a larger game storyline in which fighting provides a means to an end, rather than the actual focus of the game. In this case, there is less scope for real AI that can adapt to the player. Still, players have come to expect some level of intelligent behavior. In this case, learning is hindered by infrequent fighting interaction with the player, so rule-based AI, with some small level of randomness in the decision-making process, will probably be the most likely implementation.

On the other hand, when the player and AI are always fighting, there are more opportunities for the system to learn from the player and adapt itself according to the player’s style. This is a fairly advanced use of AI that has become a reality in some more recent games. The upcoming Street Fighter IV, for example, will by all accounts make use of a very adaptable AI. Many arcade fighting games also seem to have this approach wired in—from Virtua Fighter to the original Street Fighter and Mortal Kombat machines. However, in these, there always seemed to be a killer move that would work every time, something that should be avoided in a game if at all possible by deploying adaptive AI.

Fighting AI is characterized by very discrete goals in the short term, coupled with reactionary mechanisms that are designed to counteract the player’s own moves. The simple actions and movements are then augmented by some tactical planning and possibly a strategic element that ties each bout to the next via a storyline, with information retained between individual fights. So we would like some gut-level reactive AI, some tactical AI, and possibly some strategic AI, depending on the exact nature of the game. This could be coupled with pattern recognition, adaptive behavior, and learning systems, perhaps using simple neural networks.

If we consider the fighting AI in isolation from the rest of the game, each bout can be managed as a series of FSMs at a high level to create sequences; FSMs can be combined to enable medium-level behavior that is dependent on the goal set by the managing AI. This could be as simple as just beating the player to a pulp and obtaining a knockout (or killing the player) or winning some other way. The retreat or surrender mechanism could be part of a larger game (or scenario), but what is important is that this behavior is managed from outside the fighting AI and offered as a goal to the planning AI that controls computer-opponent movement.

Other strategic elements that will affect the AI are point and knockout wins in classic fighting games. A canny computer player might hold out for a points win if it feels unable to guarantee a knockout. This kind of behavior must account for the perceived status and characteristics of the player-controlled fighter. This input information can be augmented by pattern recognition that serves two purposes: learn from the player to acquire new tactics to reuse, or learn how to counteract player attacks for use in future fights. Of course, any neural network or expert system that has been used for training will need to be prepared with some basic behavioral models out of the box.
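As a purely illustrative sketch (not drawn from any particular fighting game), pattern recognition of this kind can start as simply as counting the move sequences the player actually lands and looking up a counter for the most frequent one. The move names and counter table below are invented.

from collections import Counter

# Hypothetical move vocabulary and counter table (illustrative only).
COUNTERS = {("low_kick", "uppercut"): "block_high",
            ("jump", "punch"):        "sweep"}

class ComboTracker:
    """Remembers two-move sequences the player has landed and suggests a counter."""
    def __init__(self):
        self.seen = Counter()
        self.last_move = None

    def observe(self, move, landed):
        if landed and self.last_move is not None:
            self.seen[(self.last_move, move)] += 1
        self.last_move = move

    def suggest_counter(self):
        if not self.seen:
            return "idle"
        combo, _ = self.seen.most_common(1)[0]
        return COUNTERS.get(combo, "step_back")   # fall back to a neutral move

tracker = ComboTracker()
for move, landed in [("low_kick", True), ("uppercut", True), ("low_kick", True), ("uppercut", True)]:
    tracker.observe(move, landed)
print(tracker.suggest_counter())   # -> "block_high"

A real implementation would sit underneath the higher-level goal-setting FSMs described above; the tracker only supplies candidate counters, which the managing AI is free to ignore or flaw in order to stay beatable.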

The system receives movement information that can be translated into sequences of moves (or combos), and this can be used to augment any pre-installed movement sequences. Some reaction AI models will have to be included to deal with defensive mechanisms. So the movement information model is not just kick/punch/duck; it will also include movement to the left or right, as well as jumping, and could be used to build networks of behavior that are held together by adaptable FSMs. In this way, the behavior can be changed for a particularly gifted human opponent, while remaining beatable.

One aspect of this is that this information is available to the system before being used in the game to alter the visible game universe. In other words, the machine knows before it animates the player character what the move might be. As in other implementations, it is tempting to use that information as a shortcut, rather than create properly balanced AI. However, using this information can be difficult to avoid; our counteraction is to flaw the AI somehow, as was illustrated in the previous section. As long as we compensate for any predictive AI that was left in, it is not considered cheating. But remember: The moment that the machine uses information before it is visible to the player or before it is used to update the player character, a level of unfairness creeps in.

AI also has a role in providing feedback on the state of the player, information that is sent to the player by altering the way that the player character reacts. A punch-drunk boxer under the player’s control might be managed by clever AI routines that emulate the effects of the boxer’s condition, and this feedback is also sent to the data store of the opponent, who may or may not act on this new knowledge. So the system also needs information on the avatar (player), which gives rise to the possibility of some interesting player dynamics. If behavior is modeled accurately, the player might be able to lull an opponent into a false sense of security before landing a knockout blow, for example. Some fighting games already implement similar mechanisms, allowing the player to play badly right up until the end and then unleash an attack with a knockout blow. This is an unintended consequence of adaptive behavior that exposes a flaw in this approach over strict rule-based AI.

Examples

There are fighting AI engines implemented in almost every game that contains some version of “player versus computer.” Swordplay abounds in games such as Prince of Persia, and there are many fight-oriented martial arts games, from Street Fighter II to the Virtua Fighter franchise. However, few have attempted to implement AI that is capable of adapting to the player while maintaining a unique playing style. They all allow for difficulty ramping, where the computer challenger improves as the player gains confidence and skill. More often than not, these take the form of simple speed and damage-resistance boosts, rather than true behavioral changes. Also, the entity might gain the ability to use different attacks over time, which is halfway between AI and adaptive systems, and something of a workaround.

There is some evidence that Virtua Fighter 2 tried to offer adaptive play and unique character styles (beyond special moves) but seems not to have delivered it properly. At least, reviewers were unimpressed. Fighting Wu-Shu also claimed to be a fighting game that delivered this kind of AI behavior. It is an approach that is clearly better than cheating.
Characters move incredibly fast, intercept player moves, and dodge out of the way, providing more interesting gameplay than when a static rulebook is employed. Players like to be able to read an opponent, though, so perhaps the true solution lies somewhere in between. This would entail some form of FSM to drive the AI process, such as a semistatic rule book of combo moves that reflects the challenger’s style, coupled with some fuzzy logic to dictate the circumstances under which these moves are hooked together. In addition to adaptability based on the entity’s state, some learning algorithms can be placed on top to enable the entity to copy the player’s moves or combos that resulted in a hit or win. Due to the fast-moving nature of these games, there is no time to perform in-depth searches of the problem space, so split-second reaction AI has to be favored over pondering, strategic AI routines. There seems to be room for better AI in this area, however, judging by reviews of fighting AI in the marketplace.

Puzzle AI

Puzzle games are traditionally an area that computers do very well at; they are usually based on logic and lateral thinking, which can be easily programmed. In addition, we can often calculate a large problem-search space, using an A*, alpha-beta pruning, or minimax routine to enable the computer to select a possible best move from a selection of lesser alternatives. In addition, new games and mini-games are being added to the existing glut of board and card games already on the market. All of them use some kind of AI—from strict chess-playing algorithms to the looser behavioral models used in some poker games.

In video games, puzzles sometimes form the foundations of larger games or just serve as games within games, where the puzzle AI is probably less important. Nonetheless, it is a useful and interesting gaming AI field, if only because it illustrates some of the classic AI routines very well. In fact, many puzzles have been “solved” by AI, such as chess, Connect 4, gomoku, and possibly tic-tac-toe. All of these games are based upon strict rules that make them easy to predict and evaluate. This is helped by the restricted environment and restricted move search space. We know that the possible range of moves is finite, even if it is a huge set. These games have been around a long time and are well understood, which means that they are open to being implemented as algorithms that can compete with humans. If a computer can repeatedly beat a human under competition conditions, then we can say that AI has solved that game.

These are quite sweeping statements, and some of the AI is very complex and outside the scope of this discussion. What we can discuss are the definitions of action and output as they relate to video game creation. The puzzle game exists within a relatively easily defined information space, in which both sides’ movements are restricted by the game rules. This is a predictable, limited problem domain—as opposed to a game like Grand Theft Auto, where the player is free to go anywhere and do anything he wants; actions cannot be planned. A restricted game universe, such as a puzzle game, on the other hand, delivers information that provides the basis for various actions (moves) that are arrived at after careful analysis of all the possibilities. It is normal for the puzzle-playing system to simulate the game universe (board and/or pieces) in future positions, based on possible/probable moves by either side.
The eventual action is borne out of this analysis and should be the result of careful evaluation of each possibility. In our discussion of alpha-beta pruning, we saw that the machine will attempt to find a move that satisfies conditions relating to an evaluation of each player’s best and worst outcomes. This form of look-ahead takes each problem domain representation one step forward at a time and then picks one possibility as the move to make. Chess is a complex example, but a game such as Connect 4 might prove easier to examine.

In Connect 4, a player has the choice of eight places to put his piece, so a look-ahead of one level reveals only eight possibilities. Each one of those then has eight possibilities for the opposing player (if we are using the alpha-beta method). So a look-ahead of two levels has 8 × 8 possible decisions: 64 in total. In this way, the player’s moves can be modeled along with the game’s response, and an appropriate move can be chosen from all the possibilities, using a variety of different algorithms, including A*, alpha-beta pruning, and Hopfield nets. For simple search spaces, A* planning might work quite effectively, as long as we can estimate the efficiency of the ideal solution algorithm at each stage—another example of finding the least costly or most beneficial solution when using a problem-space algorithm.

The look-ahead is very effective in terms of prediction, but it also involves some quite extensive number crunching. Anything that will reduce the number of comparisons we have to make will help; the game designer will have some important decisions to make when developing the game information and search space representation. For example, specific opening books can be used to augment the initial sequences because these have predictable or known response patterns. When designing a new game, situations can also be invented in which the player or computer will be restricted to certain moves, thereby removing the need to analyze all possibilities.

Once the set of possible moves has been pared down, some kind of move analysis must be performed—a weighted comparison—in order to ensure that the best move has been chosen. Any game that involves a puzzle can use weighted comparisons of solutions in this way to determine the best response to a given move. In non-puzzle games, we can also use this technique, as we will see in our strategy AI discussion. In a sense, some of these same theories and practices occur in non-puzzle games, and many of the technologies can be reused in new environments.
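A minimal sketch of the look-ahead idea follows. It assumes a hypothetical game object that can list legal moves, apply them, and score a position; this is generic minimax with alpha-beta pruning, not the engine of any particular game mentioned above.

import math

# Generic minimax with alpha-beta pruning (sketch). The game interface
# (legal_moves, apply, is_over, score) is assumed, not taken from a real engine.

def alphabeta(game, depth, alpha, beta, maximizing):
    if depth == 0 or game.is_over():
        return game.score(), None             # static evaluation of the position
    best_move = None
    if maximizing:
        value = -math.inf
        for move in game.legal_moves():
            child = game.apply(move)           # returns a new position
            score, _ = alphabeta(child, depth - 1, alpha, beta, False)
            if score > value:
                value, best_move = score, move
            alpha = max(alpha, value)
            if alpha >= beta:                  # opponent will never allow this line
                break
    else:
        value = math.inf
        for move in game.legal_moves():
            child = game.apply(move)
            score, _ = alphabeta(child, depth - 1, alpha, beta, True)
            if score < value:
                value, best_move = score, move
            beta = min(beta, value)
            if beta <= alpha:
                break
    return value, best_move

# Usage, given some Connect 4-like position object:
# score, move = alphabeta(position, depth=4, alpha=-math.inf, beta=math.inf, maximizing=True)

The pruning is what keeps the 8 × 8 × … growth of the look-ahead manageable: whole branches are skipped once it is clear the opponent would never allow them.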

However, humans are still better at some things, due to our superior spatial awareness and environmental perception. We excel over computers in some things, and vice versa—for example, humans can visualize a problem solution better, but a computer is better at performing in-depth analyses of all the possibilities. Using A* for a problem-space deep search takes a long time. Even the most optimized alpha-beta pruning methods are not appropriate for a look-ahead that is all encompassing. The famous chess program Deep Blue, which successfully beat Garry Kasparov, uses a 12-level look-ahead—over 3 billion possible moves.

Adventure and Exploration AI

In a sense, adventure and exploration AI problems are similar to those in puzzles. While the actual game is not really puzzle oriented, the same AI techniques must be deployed in order to be able to counteract gifted or persistent players. AI is used in many RPGs and other game genres, such as MMORPGs (Massively Multiplayer Online Role-Playing Games), to provide behavioral modeling and forms of goal-setting (scenario), communication, and interaction AI. Nonplayer characters often inhabit the game space along with players, and they need to be controlled in an autonomous fashion by the computer. The AI itself may consist of several parts, depending on the autonomy allowed to machine-controlled NPCs within the game universe. The AI must be able to plan and map, as well as interact and sometimes do battle with the player or other in-game entities.

Planning and Mapping AI

Part of the planning AI will use goals set by the system at a high level to set the scene for a given chain of events. Some of these plans might be prescribed, and some might be developed as part of the AI process in order to challenge the player. The planning AI behavior reflects progress against the plan; in other words, it needs to be adaptive in the sense that new plans can be created to manage unforeseen circumstances. If the system is managed in terms of point-by-point movement, this is relatively easy to achieve, but most games also require the AI to manage spatial awareness and inventory at the same time.

Spatial awareness is often embodied in mapping algorithms. The principal idea is to implement an algorithm that is capable of knowing where the NPC has been and maintaining an accurate map of this information. This information can then be used to chase or avoid the player, depending on the behavioral model being deployed by the entity’s AI at a given moment in time. This behavioral AI will likely be managed by a separate action or combat AI, as previously described, with the exploring or adventuring AI providing high-level goals. This mapping information can be discovered and retained or, more likely, passed on to the NPC by the game as a shortcut—an alternative to using memory-intensive AI routines. In addition, the various tasks or goals that the machine tries to complete are maintained as part of that process. These might range from simple “block the player” tactics to more substantial strategies, such as in Half-Life.
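As an illustrative sketch of the “remember where the NPC has been” idea (the grid representation, names, and sizes are assumptions, not any specific game’s implementation):

# Sketch of a simple NPC map memory: mark visited cells on a grid and prefer
# unexplored neighbors when wandering. Grid size and movement rules are invented.

VISITED, UNKNOWN = 1, 0

class MapMemory:
    def __init__(self, width, height):
        self.grid = [[UNKNOWN] * width for _ in range(height)]

    def visit(self, x, y):
        self.grid[y][x] = VISITED

    def unexplored_neighbors(self, x, y):
        candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        return [(nx, ny) for nx, ny in candidates
                if 0 <= ny < len(self.grid) and 0 <= nx < len(self.grid[0])
                and self.grid[ny][nx] == UNKNOWN]

memory = MapMemory(8, 8)
memory.visit(3, 3)
print(memory.unexplored_neighbors(3, 3))   # cells the NPC has not yet seen

Whether such a map is genuinely discovered or simply handed to the NPC by the game as a shortcut is the design trade-off described above.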

The AI itself might need to strike a balance between planning and mapping in order to maintain efficiency. In other words, a very low-resolution map might be used, freeing resources to enable better behavior and intermediate planning. On the other hand, mapping might be more important, so we could rely more on the development of instinct-style planning, but with a perfect map. Once again, information provided to the AI takes the form of position, state, and environmental information. At a high level, the game universe sends information, and the actor receives that information.

Behavioral AI

The behavioral AI model (passive versus active) will dictate whether the management AI plays a role in keeping the actor informed of the game universe, or whether the actor is allowed to roam freely. There is an implication here that concerns processing overhead. If the actor is allowed to roam freely, then the controlling machine needs to model its behavior much of the time. If it is the game universe that prompts the behavior, there is much less scope for the NPC to find itself in a position not prescribed by the game engine, and therefore it can be activated based on player action.

Coupled with this is the status of the player vis-à-vis the various NPCs in the system. Their behavior could also change, depending on the player’s status; clearly, a passive role is going to be easier to modify than an active one, where advanced AI techniques (involving weighted matrices) will be needed. The NPC behavioral models will fall into either cooperation or obstruction patterns, following lines similar to combat action AI.

Among other data available to the AI is the position information of the player, as well as other visible in-game artifacts, such as other NPCs. On the one hand, we could model them all as individuals, but with similar behavioral patterns that follow a rule-based system—such as in DOOM, where beings rarely cooperate against the player—or we could allow them to communicate and cooperate. This could be engineered in a coincidental, swarm-based fashion, such as in Robotron, where beings seem to cooperate. This is the easiest algorithm, but it is not really an example of AI as such; rather, it lies somewhere between instinct-style homing and point-and-shoot AI. For example, in real cooperative AI, actors have to worry about not killing each other, something that neither DOOM nor Robotron takes into account. On the other hand, more advanced games like Half-Life do implement this kind of AI.

The pathfinding AI is yet another example of search-space analysis that uses algorithms to determine the correct choices. In this case, pathfinding often uses an A* algorithm to find the path with the lowest movement cost. Using this algorithm, it is also reasonably easy to work planning data into the evaluation process. The result is that the path chosen is the best possible (lowest movement cost), all else being equal, such that a path that follows a plan is chosen over one that does not. Some of these assumptions will change with respect to the anticipated cost of encountering the player or with the various behavioral matrices of the entity.
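A minimal sketch of A* on a grid follows, assuming uniform movement costs plus an optional per-cell “plan bonus” to illustrate how planning data can be worked into the evaluation; the names, weights, and grid layout are invented.

import heapq

# Sketch of A* pathfinding on a 4-connected grid. Cells on a precomputed plan are
# slightly cheaper, so planned routes win ties (illustrative only).

def astar(start, goal, walls, width, height, plan=frozenset()):
    def h(cell):
        # Manhattan distance, scaled so it never overestimates even on discounted plan cells.
        return 0.9 * (abs(cell[0] - goal[0]) + abs(cell[1] - goal[1]))

    open_set = [(h(start), 0.0, start, [start])]
    best_g = {start: 0.0}
    while open_set:
        _, g, current, path = heapq.heappop(open_set)
        if current == goal:
            return path
        x, y = current
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nx < width and 0 <= ny < height) or (nx, ny) in walls:
                continue
            step = 0.9 if (nx, ny) in plan else 1.0   # planned cells are slightly cheaper
            ng = g + step
            if ng < best_g.get((nx, ny), float("inf")):
                best_g[(nx, ny)] = ng
                heapq.heappush(open_set, (ng + h((nx, ny)), ng, (nx, ny), path + [(nx, ny)]))
    return None                                        # no route to the goal

path = astar((0, 0), (4, 4), walls={(2, 1), (2, 2), (2, 3)}, width=5, height=5)

The same evaluation could be biased further, for example by raising the step cost of cells where an encounter with the player is anticipated, which is exactly the kind of adjustment described above.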

Some fuzzy logic will likely need to be employed to determine whether the entity will choose a path that will bring it into contact with the player, assuming that the combat and action AI contain sufficient intelligence to be able to predict where the player might be. Finally, networks of FSMs can also be used to implement behaviors such as “patrolling-style” movement. These could be coupled with basic learning algorithms to allow reuse of the behavior in different situations. For example, a wall-hugging FSM could be reused to develop a path in memory that could then be employed by the AI to determine, with an A* implementation, possible variations with respect to the changing game universe.

Having evaluated all the information, there are plenty of possible actions that can be taken, depending upon several factors. For example, the freedom to move around and collect items or weapons will have an effect on the range of actions permitted. The designers must decide exactly what NPCs are allowed to do and what they cannot do, because these actions will become part of the underlying AI implementation. There are some shortcuts that can be taken to allow a high level of interaction, such as smart objects with triggers that promote behavioral responses.

Environmental AI

Environmental AI is similar to an environmental trigger—like the smart tiles encountered in the first section of this chapter, which are designed to induce a reaction in the AI. Using these kinds of techniques, we can allow the player to put things into the environment and get responses from NPCs. Some of these responses might be simple, prescribed actions or reactions, as in Ravenskull, for example, which uses environmental tiles to guide the growth of fungi, as did Repton. While these are not examples of advanced AI, they are techniques that can be deployed effectively. These days, we can also have more clever environmental triggers that guide the AI processes, rather than just providing an automatic response. The kind of AI that we are striving for is comparable to basic instinct, versus more powerful problem-solving or knowledge-based systems.

Most movement algorithms are not based on look-aheads. In other words, we don’t let the NPCs cheat; they need to move just like the player. However, some analysis of the best possible outcome in a given situation might still be applied in order to mimic lifelike behavior. Since the AI has to cope at several levels, it might end up being encoded as a simple FSM that governs which AI routine will be used in a given situation. This is an example of using an FSM for behavioral patterns, as opposed to just using it to model static behaviors.

Interaction AI

Finally, we have actions that relate to the interaction between players and NPCs within the confines of the game design. This could include the exchange of items or conversations on a point-and-click level (or something more advanced, as in Ultima Online). In-game conversation needs its own brand of AI based on language processing, which will be covered in Chapter 3, “Uses for Artificial Life in Video Games.” At a more basic level, the AI might include triggers that elicit various responses under the guise of true communication. This level of interaction can use any of the techniques discussed so far, from smart objects to FSMs and neural networks, depending on the exact nature of the underlying game.
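As a rough sketch of the smart-object triggers mentioned above (the object types, triggers, and responses are invented for illustration): the object itself advertises the reaction it provokes, so the NPC logic stays simple.

# Sketch of smart objects: each object carries the trigger and the response it
# provokes, so the NPC only has to ask nearby objects what to do (illustrative).

class SmartObject:
    def __init__(self, name, trigger, response):
        self.name, self.trigger, self.response = name, trigger, response

    def react_to(self, event):
        return self.response if event == self.trigger else None

objects = [
    SmartObject("alarm_panel", trigger="player_spotted", response="sound_alarm"),
    SmartObject("doorway",     trigger="fleeing",        response="close_door"),
]

def npc_response(event, nearby_objects):
    for obj in nearby_objects:
        response = obj.react_to(event)
        if response:
            return response
    return "default_behavior"    # nothing nearby knows how to react

print(npc_response("player_spotted", objects))   # -> "sound_alarm"

The attraction of this shortcut is that new interactive props can be added to a level without touching the NPC AI at all.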
If we want to strictly control the outcome of the interaction AI based on the input data, we can categorize that data into one of three groups:

  •  Hinder the player,
  •  Mislead the player, or
  •  Help the player.

Most interactions serve to further one of these objectives, and there are examples of each in most games. For example, hindering the player can take the form of simply attaching some environmental modification, such as a door closing, which will prevent the player from progressing further. Misleading the player can take several forms, but could also be some kind of modification to the environment, such as planting evidence that will persuade the player to take an incorrect path. Direct interaction with the player could lead to textual lies or half-truths designed to prevent the player from solving a puzzle—or some form of Gollum-induced delusion that will prevent the player from correctly evaluating the sequence of events.

Finally, help can come from several corners—from helpful NPCs, such as the doctor in Half-Life, or NPCs that follow their own scripts, such as the police in Getaway: Black Monday. The difference in Getaway is that the police are fairly predictable, while any helpers designed to interact and be directed will likely need some kind of personality of their own in order to be of any real use. This also tends to make them difficult to debug and test, and they are also potentially prone to unstable behavior. In Half-Life, the helpers seem to exhibit a good balance of self-determination while being scripted and fairly interactive.

Examples

Great examples of scripted NPCs can be found in Baldur’s Gate, a hit with RPG fans worldwide. The scripting, which can also be changed by the player, offers new features and challenges to players. The scripting itself seems to be rule based, with top-down processing that employs weighting to control the probability of some actions taking place. Again, as in other scripted AI engines, the actual AI remains hidden, with scripts reduced to FSMs. The A-Life used in Baldur’s Gate is incredibly sensitive to the player’s own actions, extending to revenge wrought upon a player that robs an NPC; the other NPCs will subsequently ignore the player for a time. Criminal behavior is also punished, to the extent that guards will eventually hunt the player down. The guards are, by the way, invincible.
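A tiny sketch of rule-based, top-down script processing with weights, in the spirit of what is described above (the rules, conditions, and weights are invented, not Baldur’s Gate’s actual script format):

import random

# Sketch of weighted, top-down rule processing: the first rule whose condition
# matches is considered, and its weight is the probability that it actually fires.

rules = [
    {"condition": lambda s: s["robbed"],          "action": "ignore_player", "weight": 0.8},
    {"condition": lambda s: s["crime_level"] > 3, "action": "call_guards",   "weight": 1.0},
    {"condition": lambda s: True,                 "action": "idle",          "weight": 1.0},
]

def evaluate(rules, state):
    for rule in rules:                       # top-down: earlier rules take priority
        if rule["condition"](state) and random.random() < rule["weight"]:
            return rule["action"]
    return "idle"

print(evaluate(rules, {"robbed": True, "crime_level": 0}))

The weighting is what keeps the behavior from being perfectly predictable even though the underlying script is just an ordered list of rules.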

Simple examples of this can also be seen in Elite, a space trading and fighting game by David Braben and Ian Bell. In this game, the player’s actions can trigger the release of police craft, which then hunt the player down, as in Baldur’s Gate. This might seem like an example of knee-jerk AI, but it does add to the experience of the player and creates an additional layer of immersion, however simple the mechanism. According to GameAI.com, scripts can perform basic tasks, such as selecting weapons and choosing to fight or flee. The actual attacking AI is still controlled centrally, but presumably there are interface commands that allow the NPCs to move upon demand as well as find out information about their environment.

As an alternative, Battlecruiser: 3000 AD developer Derek Smart revealed on GameAI.com how neural networks are used in the game to control all NPCs. This is something of an achievement, and his comments, lifted from GameAI.com, are quite telling:

“The BC3K Artificial Intelligence & Logistics, AILOG, engine, uses a neural net for very basic goal oriented decision making and route finding when navigating the vast expanse of the game’s galaxy.” [GAMEAI07]

This is a classic example of planning and mapping AI in action. In the referenced game universe, some of the navigation can be quite complex, as it is unrestricted space travel. Apparently, this navigation also employs a “supervised learning algorithm” that, according to Smart, “employs fuzzy logic … where a neural net would not suffice” to try and learn its way through the game universe. Along the way, Smart notes that “Some training … is basically done at a low level,” and the entire movement management also includes “threat identification and decision making” to help it navigate against the player’s moves.

Since much of the control is farmed out to neural network-managed NPCs, some training is also provided. This was supplied through “pre-trained scripts and algorithms programmed to ‘fire’ under the right conditions,” which is an example of a starting-point neural network that is designed to be augmented with experience.

One final comment: Smart states that he “simply took what I had and … made it work for my game,” which is great advice. The game development cycle is complex enough without trying to re-invent existing paradigms or invent new ones. History is full of examples (many classics documented, often with e-mail correspondence between the maintainer and the game’s authors, on http://www.gameai.com/games.html) in which games abandoned new or unique AI at the last minute because of complexities in the development process. The final game is usually the poorer for the omission, especially in places that called for a more innovative solution. The designers would have been better off using good-enough AI and adding a few twists to that, rather than trying something completely new.

Strategy AI

Strategy AI exists in a distinct genre between puzzles and simulations. It has much in common with games like chess, is often based on simulated realities, and lacks the first-person perspective or action element that would place it in either the adventure or combat genre. Strategy AI will contain all kinds of different AI algorithms as it tries to outwit the human player on several levels. Unlike strict puzzle or board games, the rules and environment might be open to interpretation and modification during a play session.
Therefore, an AI is required that can cope with planning, mapping, and strategy within a changing playing environment. Subsequently, there is plenty of scope for feedback and rule-based AI, as well as search-space algorithms such as alpha-beta pruning and A* pathfinding. The individual units used as “pieces” will be given some degree of freedom, and their AIs also need to be implemented. This is rather like giving a chosen chess piece the freedom to decide where to move. The planning unit will use rules and feedback from the game universe to decide which unit (or piece) should move and what its goal should be. Therefore, all the elements of the planning AI are also necessary.

Information and Game Rules

In a sense, strategy AI pulls together all the other aspects of video game AI, from balanced combat to vehicular and unit movement, as well as autonomous and scripted behavior. The game rules and status of the player provide the decision-making process with the necessary framework and will dictate the kind of AI algorithms used. This requires that the designers allow for the maintenance of the machine’s status and game universe (including other NPCs) with respect to the player. These analyses are all part of the AI processing. This helps when it comes to planning the next moves with respect to what the player might do, either based on statistical analysis or using a neural network to provide first learning and then anticipation. When the game is deployed, we can modify the system’s behavior to react to strategies that the player has devised. These observations are made easier by the presence of game rules that restrict possible options, but larger, more complex and populous game universes become more difficult. This means that strategy AI, more than in most other genres, has a lot of information that needs to be processed in order to carry out game actions.

The possible range of actions depends on the game universe and the rules that govern it; they can be local, global, or environmental. For example, a local action can affect a small area, a global action will have wider-reaching consequences, and an environmental action will change the way the game is played at a fundamental level. Actions often just amount to hurdles for the player, rather than direct actions against him (like shooting at him), which are more common in other genres. Nonetheless, the actions can still be modeled by FSMs or other kinds of action networks in which a state change and action can be used to generate other changes/actions. Generally speaking, every action’s goal will be to prevent the player from achieving something—another application for alpha-beta pruning and related techniques.

Games such as DEFCON, where strategy is the key element, take place at a slow pace, and each move made by the computer is preventative, rather than a direct attack. Board games such as Othello and Reversi are other examples where there is no direct attack, just some vague sense of “prevention,” as opposed to chess, where elimination of the player’s pieces is high on the agenda and is key to winning the game. Even Scrabble has some strategy mixed in; playing the right tiles with the right points at the right time, on the right squares, goes beyond simple puzzle-solving. It needs a more heuristic approach.

Examples

Age of Empires developer Dave Pottinger (Ensemble Studios) describes his approach toward augmenting AI with learning that can carry over play information from session to session.
Even though the final outcome in Age of Empires did not quite live up to his expectations, the initial intention is embodied in a statement posted on GameAI.com:

“This has helped the quality of the AI out a lot. Well enough, in fact, that we’ll be able to ship the game with an AI that doesn’t cheat.” [GAMEAI08]

When the game is played for the first time, the sides are even, and the AI needs some way to progress along with the player. This gives the machine some ability to counteract the player, rather than just follow the rules that dictate its initial behavior. Learning is the key to achieving this noble goal. As Pottinger notes, “If the AI does something markedly different … then you get a more enjoyable experience.” This is one of our key goals in implementing game AI: Entertainment or added value is as important in some games as increased difficulty.

So one challenge is to create AI that does not simply repeat the same actions when combating the player’s strategy, and another is to ensure that the player cannot find a strategy to beat every possible machine strategy. This is a tricky, almost impossible proposition, because the player is the ultimate opponent. Pottinger goes on to say: “I’ve yet to see an AI that can’t be beat by some strategy that the developer either didn’t foresee or didn’t have the time to code against.” Given that we cannot foresee everything, and we do not have time to code for all possibilities, we need to implement some kind of learning and analysis into the strategic AI. Neural networks coupled with alpha-beta pruning would probably be able to handle this reasonably efficiently.

However, in the end, Age of Empires failed to ship with built-in learning and apparently employs an expert system combined with finite state machines to create the behavior. This has brought the game some criticism, but on the whole it works fairly well.
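One simple, hypothetical way to keep a strategy AI from repeating itself (this is not how Age of Empires works) is to track how well each scripted strategy has fared against this player and to bias selection away from overused or unsuccessful ones. The strategy names and numbers below are invented.

import random

# Sketch: pick the next machine strategy with weights that favor strategies that
# have worked against this player and penalize ones that have not (illustrative).

class StrategyPicker:
    def __init__(self, strategies):
        self.stats = {s: {"wins": 1, "uses": 1} for s in strategies}   # optimistic start

    def pick(self):
        weights = [self.stats[s]["wins"] / self.stats[s]["uses"] for s in self.stats]
        return random.choices(list(self.stats), weights=weights, k=1)[0]

    def record(self, strategy, won):
        self.stats[strategy]["uses"] += 1
        if won:
            self.stats[strategy]["wins"] += 1

picker = StrategyPicker(["rush", "turtle", "boom"])
choice = picker.pick()
picker.record(choice, won=False)        # losing lowers this strategy's weight next time

Persisting the win/use counts between sessions is what turns this from per-game variety into the kind of carried-over learning Pottinger describes aiming for.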

We can also see evidence of multilayered AI in strategic AI implementations in games such as Close Combat. Developer Bryan Stout describes some of the strategic AI’s workings on GameAI.com:

“The SAI is comprised of three main systems: the location selector, the path planner, and the target selector.” [GAMEAI09]

Close Combat relies on player input and uses a system based on hierarchy, with the player giving the high-level orders and the strategic AI determining the medium-level goals, including target selection, and then carrying them out. The same AI is then deployed for the other side, but with a computer AI giving the high-level orders. Gary Riley, another Close Combat developer, picks up the thread:

“When dealing with enemy teams, the location selector hypothesizes the location of enemy units rather than cheating by looking at the real positions of teams that it should not be able to see.” [GAMEAI10]

Here again are the negative connotations of cheating. This time it is to the benefit of the player, and we can only assume that the enemy teams operate together under a guided AI system where high-level goals are generated, whereas the human player is not modeled along the same lines. Either way, there is a clear possibility that cheating might be an option, but the developer chooses fairness and simulation over a shortcut. Judging by the look and feel of the playing methods in Advance Wars, a similar method is used for employing tiles, pathfinding, goal selection, and target acquisition and destruction. It is also likely that actions and movements are based on weighted values of enemy units, as well as probable movement of other units (the player’s) around the “board.”

Riley explains how this was implemented:

“For example, the simulator determines line of sight to a team by tracing from individual soldier to individual soldier, but the high-level AI has to have some type of abstraction which divides the map into locations and provides information about whether a team can fire from one location at another. If the abstraction didn’t work well, you’d either have teams moving to locations from which they couldn’t attack the enemy or moving out of location from which they could.” [GAMEAI11]

Clearly the information representation is a major issue, as was previously noted. Choosing an appropriate abstraction will also have an impact on the solutions selected for determining how a unit should move. Riley continues:

“The solution we ended up with was to iterate over all locations on the map deploying teams into the divided map locations and then have the simulator determine whether a line of sight existed (which took a considerable amount of time).” [GAMEAI12]

This is a fairly time-consuming solution to an interesting and vital problem in strategy AI, and there may have been better solutions available, but it is likely that the pressures of meeting deadlines meant that the final decision was rendered less than efficient out of necessity. Alternatives might have been Hopfield nets or alpha-beta pruning using line of sight to register locations appropriately.

Finally, another example of layered strategic AI is evident in SWAT 2, published by Sierra FX. Christine Cicchi (Sierra) offers some excellent insights on the GameAI.com pages:

“What makes SWAT 2 different from most other simulation games is that challenging, realistic gameplay requires a high degree of coordination among the non-player characters. A SWAT element must move together and act as a team, timing their actions with those of the other elements. Likewise, the terrorists have scenario-dependent goals and actions to pursue without the confines of a linear script.” [GAMEAI13]

These scenarios are likely hard coded and allowed to develop within a broad scope as the game progresses. Underneath this layer, there is the tactical AI:

“Tactical AI gives units their individual personality and intelligence. It tells them how, when, and whether to perform advanced behaviors like finding cover, taking hostages, or running amok.” [GAMEAI14]

This is embodied in low-, medium-, and high-level behaviors that can range from simple movement to shooting and multistage movement, right up to combination-based behaviors such as “advance under cover.” In a given situation, each unit is then managed by AI routines that account for internal settings in tandem with a stochastic algorithm:

“The unit’s response depends upon its personality, but with a slight random element to bolster replayability and realism.” [GAMEAI15]

Fuzzy logic is used to handle the four main characteristics—aggression, courage, intelligence, and cooperation—and produces an action based on following a simple set of hierarchical rules: “If courage is low, run away.” This provides a great way to add realistic AI for very little overhead, as long as the options are available; we just need to choose between them. Fuzzy logic comes into play when an actor has a personality that tends toward a certain temperament but still has elements of others. Subsequently, even a courageous unit might conceivably run away (all else being equal) or be pressured by circumstances into running because its courage is overridden by other factors. This approach is geared toward achieving the right behavioral balance in an otherwise strict rule-based AI system. The end result deploys fuzzy logic and fuzzy state machines to make sure that the rigidity of the system has a counterpoint that makes it more playable and ensures that the player cannot repeat the same behavior and win each time.
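A rough sketch of the idea of personality traits overriding a simple rule follows, assuming made-up trait values, weightings, and thresholds rather than SWAT 2’s actual system.

import random

# Sketch of fuzzy-ish trait blending: "if courage is low, run away", but other
# traits and a small random element can override the rule (all values invented).

def choose_action(traits, under_fire):
    # Blend courage with aggression and a little noise instead of a hard threshold.
    resolve = 0.7 * traits["courage"] + 0.3 * traits["aggression"]
    resolve += random.uniform(-0.1, 0.1)          # slight randomness for replayability
    if under_fire and resolve < 0.4:
        return "run_away"
    if traits["aggression"] > 0.6:
        return "advance_under_cover" if traits["intelligence"] > 0.5 else "charge"
    return "hold_position"

unit = {"courage": 0.3, "aggression": 0.5, "intelligence": 0.7, "cooperation": 0.6}
print(choose_action(unit, under_fire=True))

Because the decision is a weighted blend rather than a single threshold, a nominally timid unit will usually flee under fire but can occasionally stand its ground, which is the behavioral balance described above.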

Simulation AI

Finally, and almost as an extension to strategic AI, we have simulation AI. This is generally linked to environmental simulation, although there may be some entities managed by the AI system that have an indirect effect on the success of the player as well as the general game universe. Examples include SimCity, Theme Hospital, and Railroad Tycoon, as well as the seminal Black & White, which is a god game with some real AI and A-Life implementations. (We’ll look into these in Chapter 3.)

The key here is that the player is, in a sense, just playing against himself and the environment. There is no real opposition beyond the game rules; they are there for the player to master, but generally the player’s downfall will be greed or mismanagement of the game universe. The rules are all interconnected, so this mismanagement or greed can cause the environment to rebel and have a direct effect on the player. Conversely, part of the attraction is that the player can have an effect on the environment and observe that effect as dictated by the game rules. Sometimes things go the player’s way—mostly they do not, and the play session becomes a series of little crises to be resolved.

Information and Game Rules

As in our other examples, the game experience is basically just the control of variables with little direct action. Strategic AI offers some basic, direct action in games that involve combat; but in simulations, this is generally not the case. So it becomes more important than ever to get the data representation and monitoring right. The actions in the system are therefore the manipulation of events within the game universe.

In SimCity, we can point out areas to build things, like roads, houses, and so on. Sometimes these things appear, and other times they are dependent on the system’s game rules. For example, unless there is a growth in population (an event), no houses will be built in the residential zones. Without the houses and citizens, there is no need for commercial zones for them to shop in. Furthermore, without industrial activity, there will be no jobs for the citizens, and therefore they will leave. The system’s input variables provide an alternative to the direct action favored in other AI models. This also means that players are quick to recognize and predict certain behavior, which is part of the problem with most simulation AIs; but arguably, predictability also forms part of the appeal for players.

Problems and Challenges

In order for the game to be successful, we need to find ways to make it fun, while also allowing the AI to work on a strict rule-based platform. Without the rules, we cannot model behavior; but at the same time, this might restrict the game universe too much. The key is to avoid micromanagement but still allow flexibility. This allows the AI some scope to make decisions on behalf of the player, which is key to the simulation side of the AI. It can be performed by real simulation or just statistical analysis and calculations.
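A toy sketch of how such interconnected rules might be updated each tick follows, with entirely invented coefficients and variable names; it is not SimCity’s model, only an illustration of rules feeding one another.

# Toy simulation update: population, housing, jobs, and commerce feed each other.
# All rules and coefficients are invented purely to illustrate interconnected rules.

def tick(city):
    # Housing is only built when population pressure exists and zones are available.
    if city["population"] > city["houses"] * 4 and city["residential_zones"] > 0:
        city["houses"] += 1
        city["residential_zones"] -= 1
    # Jobs come from industry; without jobs, citizens leave.
    jobs = city["factories"] * 10
    if jobs < city["population"]:
        city["population"] -= (city["population"] - jobs) // 10
    else:
        city["population"] += 2
    # Commerce only thrives when there are housed citizens to shop.
    city["shops"] = min(city["houses"], city["population"] // 8)
    return city

city = {"population": 20, "houses": 3, "residential_zones": 5, "factories": 3, "shops": 0}
for _ in range(10):
    city = tick(city)
print(city)

Even in this tiny form, the chained conditions show why mismanaging one variable (no factories, say) eventually drags the rest of the simulation down, which is exactly the crisis-by-crisis experience described above.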

These issues are more about actual gameplay than they are about AI, except that the gameplay in a simulation is, in a sense, only really about the AI. Games such as Creatures, Beasts, and Pets also fall into this category, but they are more A-Life than AI and will be discussed in future chapters.

SUMMARY

As we have seen, prescribed AI abounds in today’s video games. When we say “prescribed AI,” we mean that the behavior can tend toward the predictable over time. It can be overused in gaming systems that are based on fairly inflexible rules and that predict a certain behavior that the developer has decided is challenging enough. Prescribed AI can be augmented to give the appearance of adaptable behavior and sometimes even manifests true adaptable behavior, but often this is an illusion created by clever algorithms. The intelligence in these cases is only apparent at first. In time it becomes clear that 90% of AIs do not learn from the player’s actions, and that a single, reasonably successful strategy can be found that will win for the player every time.

In many cases this is not a bad thing; it is just that the AI used today can be improved without sacrificing quality elsewhere, while also offering more value to the player. Of all the examples we have seen in this chapter, the most successful ones have deployed advanced techniques, but without undue impact on the rest of the system. So the question is: Is bad AI better than no AI at all? Judging by reviews, observations, and experience, it is fair to say that AI could be done better in the majority of cases, even with a few tweaks. That is not to say that the AI is necessarily bad—just that it could be so much better. Of course, there is also a minority of cases that just get it plain wrong; the AI has been misunderstood, to the detriment of the game itself.

It is necessary to have a certain level of AI, but not always “intelligent” AI per se; we can often get away with the appearance of intelligence by using a combination of different algorithms, as we have seen in the previous examples. In the end, we must decide what it is we are trying to achieve. Should the game be harder, easier, or just more natural? Or should it be different every time we play? Is the AI something to be turned on and off as a kind of difficulty-ramping mechanism, or is it fundamental to the game itself?

A game without AI, whose behavior is based on strict pattern-following and rules—like the ghosts in Pac-Man or the aliens in Galaga—is still fun and challenging, after all. However, such games can lack the longevity that games with high replay value through applied AI can offer. SimCity, on the other hand, with its detailed AI-centric approach, is an example of a sim that will have lasting value and extreme longevity of gameplay and interest. So games with AI are not necessarily harder to play, but they can feel more natural, making them more immersive. The natural next step from realism is augmented realism, where we can make the game more accessible to the player with additional AI for things like driving support or, for example, automated soccer-player AI.

Balancing the AI

The balance between tough adaptability and predictable AI can be a hard one to strike. On the one hand, scripted behavior is often not quite good enough to challenge the player, and repeatable AI routines always tend to perform identically. This can make the game slightly boring, especially if the behavior remains constant throughout. However, adaptive AI, combined with augmented environmental senses, can be impossible to beat. The perfect opponent is as frustrating as a poor playing experience (unless the player specifically chooses to play that way).

Perhaps balancing AI is a question of allowing it to be turned on and off in the same way we can turn other in-game features on and off, such as steering and aiming help. This would at least give the player the flexibility to choose the experience that he would like. It could be combined with extensible AI scripting to allow the player to dumb down or smarten up in-game NPCs, as desired. If we had a scripting model in which the game could choose smarter or dumber versions of itself dynamically, we would be able to adapt to the skill level of the player in a fashion that would provide a balanced experience, without the intervention of the player.

AI and A-Life in Video Games

The next step is to combine these techniques with the notion that A-Life allows a flexible logic that mimics patterns found in nature. This can be used to provide a playing experience that is often touted, but that rarely manifests itself. In a sense, it is the combination of “modeled instinct” and reapplication of knowledge that takes the best of AI theory and augments it with some new ideas. It is important to understand that the foundation of A-Life is the AI; A-Life is just a way to express the AI in a more natural embodiment—and in a way that lets us achieve much more with very little increase in overhead. For example, A-Life makes mistakes naturally, while AI has to be programmed (scripted) to make mistakes.

We saw how sometimes the result of an AI algorithm must be augmented in order to address the “perfect opponent” syndrome. But if we use A-Life techniques, this becomes part of the system modeling that creates the in-game behavior. A parallel can be found in the creation of electronic music using MIDI or even good samples. Instruments that have a natural resonance, such as guitars and drums, sound unnatural when produced electronically. This is usually because each time the sample is played, it produces exactly the same waveform. Real-life instruments have variances in their sounds due to the immediate environmental conditions and due to imperfections in the metals/wood that the instrument is constructed of—imperfections that give each guitar/drum a special resonance of its own. No matter how badly it is played, a band playing a rock track will sound like a band playing a rock track, and it will still sound better than its electronic equivalent unless the technician has taken time to tweak the mix to make it sound more natural.
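As a hedged sketch of the dynamic balancing idea mentioned earlier in this section (the metrics, thresholds, and script names are invented), the game could track how the player is faring and swap in a smarter or dumber behavior script accordingly:

# Sketch of dynamic difficulty: pick a "smarter" or "dumber" behavior script based
# on a rolling measure of how well the player is doing. All numbers are invented.

SCRIPTS = {"dumb": 0.4, "normal": 0.6, "smart": 0.8}   # e.g., opponent accuracy

def select_script(recent_player_wins, recent_bouts):
    win_rate = recent_player_wins / max(recent_bouts, 1)
    if win_rate > 0.7:
        return "smart"      # player is cruising: ramp the AI up
    if win_rate < 0.3:
        return "dumb"       # player is struggling: ease off
    return "normal"

script = select_script(recent_player_wins=8, recent_bouts=10)
opponent_accuracy = SCRIPTS[script]    # fed into the (hypothetical) combat routines

The point is only that the selection happens without the player’s intervention; the A-Life techniques discussed next offer a less mechanical way to achieve the same natural variation.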

Some AI in games is bad because a perfect response is produced each time. Conversely, when that response has been badly implemented, we get the imperfect response every time; and it is all the more noticeable because it is persistently repeated. We cannot just dumb it down, because that will lead to mistakes. On the other hand, if we leave it intact, it will be either too tough or too patterned. Some real (or observed) life should be mixed in to break up the repetition and also keep the AI routines from making too many silly decisions.

In the end, the rules of the game must still be respected, so we need to make sure that AI is used to enforce them, with A-Life used to generate behavioral patterns within those rules. This is why we need both AI and A-Life and why AI is an important building block in the process. As we shall see in the next chapter, A-Life provides a naturalness that can make it applicable in many different areas—not just the game itself, but in development, testing, multiplayer design, and continual evolution through mimicry. A-Life is both a tool for ensuring that the system is well developed, implemented, and tested, and a paradigm for creating in-game behavior. It is up to the developer to decide exactly how much A-Life he wants to use or how little he has resources for.

REFERENCES

[EDGE01] Review of Transformers: The Game in EDGE magazine, September 2007, No. 179, p. 90.
[GAMEAI01/02] http://www.gameai.com/games.html, Close Combat 2 section; quoted e-mail from AI developer John Anderson.
[GAMEAI03/04/05/06] http://www.gameai.com/games.html, Interstate ’76 section; quoted e-mail from AI programmer Karl Meissner of Activision.
[GAMEAI07] http://www.gameai.com/games.html, Battlecruiser: 3000 AD section; quoted e-mail from developer Derek Smart.
[GAMEAI08] http://www.gameai.com/games.html, Age of Empires I/II section; quoted e-mail from Dave Pottinger of Ensemble Studios.
[GAMEAI09/10/11/12] http://www.gameai.com/games.html, Close Combat section; quoted e-mail (reprinted on GameAI.com with permission from Atomic Games) from developer Gary Riley.
[GAMEAI13/14/15] http://www.gameai.com/games.html, S.W.A.T. 2 section; quoted e-mail from Christine Cicchi of Sierra FX.















