“Autonomous Steering Behaviours”, as first defined by Craig Reynolds in his presentation at the 1999 Game Developers Conference, are a simple yet effective method of creating realistic movement in computer-controlled characters. As such, they are often considered an important part of “Game A.I.”, despite bearing little relation to traditional A.I. techniques.
Using Reynolds’ model, the movement behaviour of an autonomous character can be described using active verbs that represent certain desires:
- A homing missile seeks its target
- A police car pursues a fleeing criminal
- A horde of orc minions follows its orc leader
- An outlaw evades the sheriff’s bullets
- A convict may hide from the prison warden
- etc. etc.
Although they represent a range of different movements, each of these behaviours can be implemented following the same basic pattern, which can be summarised as follows:
- Determine the desired target location.
- Calculate the desired velocity vector, which points from the character in the direction of the target, truncated to the maximum allowed speed of the character (assuming that the character wants to reach the target location as quickly as possible)
- Compare the desired velocity to the current velocity, and calculate the acceleration required as the difference between them.
- Apply a steering force to the character, in the direction of the desired acceleration, truncated to the maximum allowed force produced by the character.
This is illustrated below, for the example of a car seeking a target (notice how the steering force is parallel to the difference between the desired velocity and the current velocity)
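The four steps above can be sketched in code. This is a minimal illustration using plain tuples rather than any particular engine's vector type; the names `max_speed` and `max_force` are illustrative parameters for the character's limits.

```python
import math

def truncate(v, max_length):
    """Scale vector v down so its length does not exceed max_length."""
    length = math.hypot(v[0], v[1])
    if length > max_length:
        scale = max_length / length
        return (v[0] * scale, v[1] * scale)
    return v

def seek(position, velocity, target, max_speed, max_force):
    # 1. The desired target location, relative to the character.
    to_target = (target[0] - position[0], target[1] - position[1])
    # 2. Desired velocity: towards the target, truncated to the maximum speed.
    desired = truncate(to_target, max_speed)
    # 3. Required acceleration: desired velocity minus current velocity.
    steering = (desired[0] - velocity[0], desired[1] - velocity[1])
    # 4. Truncate to the maximum force the character can produce.
    return truncate(steering, max_force)
```

For instance, a stationary character at the origin seeking a target at (10, 0), with a maximum speed of 5 and a maximum force of 2, receives a steering force of (2.0, 0.0).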
There are plenty of good online demonstrations of different individual steering behaviours (such as here or here), and, at some point, I’ll probably get round to writing up my own examples. However, for this post, I’m going to look at different methods to handle situations in which there may be two or more competing behaviours – you want to flee from an attacker but also avoid obstacles in your path, or you want to stay close to your teammates, but not so close that you’re stepping on their toes: how should we resolve these conflicting desires to come up with a single steering force in a given situation?
As an example for the methods discussed below, consider the steering forces shown in the following diagram generated by three different behaviours, named simply as A, B, and C, representing the vectors (1,4), (3,2), and (1,-2), respectively:
1. Priority Arbitration
Under this scheme, the various enabled steering behaviours are assigned a priority and, at any given time, only the behaviour with the highest priority is given control over the character’s movement. Priorities can be set dynamically based on the current environment so that, for example, when a character’s health is low, the desire to seek a health pack may have greater priority than the desire to steer for cover, while at other times the desire to steer for cover has greater priority and is therefore given control. To be more specific, arbitration should choose the highest-priority behaviour that produces a non-zero force. “Obstacle avoidance”, for example, might always be given top priority, but it only needs to produce a steering force in situations where a collision would otherwise occur. If no corrective action needs to be taken, obstacle avoidance produces zero force, and arbitration should move on to consider the next-highest priority behaviour, and so on.
For the example above, assuming that behaviour A has highest priority, the resulting steering force is therefore (1,4):
There are some limitations to this approach: if the “avoid collisions” behaviour only kicks in once it is given top priority, that is probably the point at which the character is facing an imminent collision, one that could have been avoided earlier, and more gracefully, if the behaviour had been allowed to take slight corrective action when it first became relevant (but it was never given priority to do so). Priority arbitration also does not allow for opportunistic behaviour: when the current priority is to seek to a target, the character will not make a slight detour, suggested by one of the other behaviours, to pick up a great power-up along the way, which a human player would probably have done in the same situation.
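The arbitration loop itself is short. In this sketch, each behaviour is assumed to be a zero-argument callable returning a steering vector, with (0, 0) meaning it has nothing to contribute; that representation is my own simplification for illustration.

```python
def arbitrate(behaviours):
    """behaviours: list of callables, in descending order of priority.
    Returns the force of the first behaviour producing a non-zero force."""
    for behaviour in behaviours:
        force = behaviour()
        if force != (0.0, 0.0):
            return force
    return (0.0, 0.0)

# The running example: A has the highest priority and produces a
# non-zero force, so its vector wins outright.
a = lambda: (1.0, 4.0)
b = lambda: (3.0, 2.0)
c = lambda: (1.0, -2.0)
print(arbitrate([a, b, c]))  # → (1.0, 4.0)
```

If A instead returned (0, 0), say because no obstacle was in the way, control would fall through to B.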
2. Weighted Blending
In contrast to arbitration, “blending” involves considering all of the competing desires acting upon the character, weighting them, and adding them together to create a combined force suggesting the appropriate steering direction. This can result in smoother behaviour than arbitration, since every desire is considered at every step, rather than the character switching abruptly from one steering behaviour to the next. Weighted blending is often used in flocking behaviours, where the desires for separation, cohesion, and alignment with other members of the flock are blended together.
For the preceding example, if the behaviours were weighted (A:0.25, B:0.5, C:0.25), the resulting force would then be (1*0.25 + 3*0.5 + 1*0.25 , 4*0.25 + 2*0.5 – 2*0.25 ) = (2, 1.5):
The problem with blending is that you can end up with a compromise that suits nobody. Suppose you were trying to seek towards a target, but also to evade an enemy that stands directly in the way. At some point, the weighted forces generated by these two desires would cancel each other out into an equilibrium, leaving the character motionless: torn between the desire to reach the goal and the desire not to get any closer to the enemy. Blending can also be computationally expensive, as every force must be calculated on every frame.
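Blending reduces to a weighted vector sum. This sketch reuses the same illustrative behaviour-as-callable convention; note that every behaviour runs on every call, which is where the per-frame cost comes from.

```python
def blend(weighted_behaviours):
    """weighted_behaviours: list of (behaviour, weight) pairs.
    Every behaviour is evaluated; its force is scaled by its weight
    and added into the combined total."""
    total = (0.0, 0.0)
    for behaviour, weight in weighted_behaviours:
        fx, fy = behaviour()
        total = (total[0] + fx * weight, total[1] + fy * weight)
    return total

# The running example with weights A:0.25, B:0.5, C:0.25.
a = lambda: (1.0, 4.0)
b = lambda: (3.0, 2.0)
c = lambda: (1.0, -2.0)
print(blend([(a, 0.25), (b, 0.5), (c, 0.25)]))  # → (2.0, 1.5)
```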
3. Prioritised Dithering
This approach combines elements of the above two, while also introducing an element of random chance. Under prioritised dithering, the various behaviours are once again assigned a priority, but each is also given a probability, expressed as a value between 0 and 1. At each step, a random number between 0 and 1 is generated. If the probability assigned to the top-priority behaviour exceeds that random number (and, when executed, the behaviour generates a non-zero force), then that behaviour is given control and no other behaviours are considered. Otherwise, a new random number is generated and tested against the probability of the next-highest priority behaviour, and so on down the list in descending order of priority, until a non-zero force is generated by some behaviour.
The result of the previous example, if we assume that behaviour A is not chosen by random selection, and then behaviour B is selected, is (3, 2):
This is an interesting approach, which solves some of the problems of the previous two. Because only one steering behaviour actually gets given control, it is relatively cheap on CPU, whilst still giving every behaviour some opportunity to influence movement at every step. However, tweaking the appropriate values of probability and priority for each behaviour can take some work.
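A dithering step might be sketched as follows, again treating behaviours as callables. The injectable `rng` parameter is my own addition so the random rolls can be made deterministic for testing; a fresh number is rolled for each behaviour considered.

```python
import random

def dither(prioritised, rng=random.random):
    """prioritised: list of (behaviour, probability) pairs, in
    descending order of priority. The first behaviour that both passes
    its probability check and produces a non-zero force wins outright."""
    for behaviour, probability in prioritised:
        # Roll a fresh random number for each behaviour in turn.
        if rng() < probability:
            force = behaviour()
            if force != (0.0, 0.0):
                return force
    return (0.0, 0.0)

a = lambda: (1.0, 4.0)
b = lambda: (3.0, 2.0)
c = lambda: (1.0, -2.0)
```

With rolls of 0.9 and then 0.1 against probabilities of 0.7 and 0.5, behaviour A fails its check, B passes, and the result is (3.0, 2.0), matching the example above.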
4. Weighted Prioritised Truncated Sum
Apart from having a silly name, this is my favoured approach, as it is a nice balanced hybrid method that makes use of both weight and priority assigned to each behaviour. Once again, behaviours are considered in priority order, with the highest priority behaviour being considered first. Any steering force generated by this behaviour is multiplied by its assigned weight and added to a running total. Then, the total (weighted) force accumulated so far is compared to the maximum allowed force on the character. If the maximum allowed force has not yet been reached, the next highest priority behaviour is considered, the steering force it generates is weighted, and then this is added onto the cumulative total. And so on, until either all of the steering behaviours have been considered, or the total maximum steering force has been assigned, whichever occurs first. If at any point there is still some surplus steering force left, but not enough to allocate the whole of the desired steering force from the next highest-priority behaviour, the extra force is truncated and what can be accommodated is added to the total (hence the weighted prioritised truncated sum).
When applied to the example using the same weights as before, and, assuming that the maximum force is exceeded after the steering force from behaviour B has been applied, the result is (1*0.25 + 3*0.5, 4*0.25 + 2*0.5) = (1.75, 2):
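One way to implement the accumulation is sketched below. The bookkeeping here, tracking the remaining budget as the maximum force minus the magnitude of the force accumulated so far, is one reasonable interpretation (similar in spirit to the force accumulator in Mat Buckland’s Programming Game AI by Example); other budget schemes are possible, and with non-collinear forces the exact cut-off point depends on which you choose.

```python
import math

def truncate(v, max_length):
    """Scale vector v down so its length does not exceed max_length."""
    length = math.hypot(v[0], v[1])
    if length > max_length:
        scale = max_length / length
        return (v[0] * scale, v[1] * scale)
    return v

def accumulate_steering(weighted, max_force):
    """weighted: list of (behaviour, weight) pairs, in descending order
    of priority. Adds weighted forces until the max_force budget is
    spent, truncating the final contribution to whatever budget remains."""
    total = (0.0, 0.0)
    for behaviour, weight in weighted:
        remaining = max_force - math.hypot(total[0], total[1])
        if remaining <= 0.0:
            break  # budget spent: lower-priority behaviours are ignored
        fx, fy = behaviour()
        part = truncate((fx * weight, fy * weight), remaining)
        total = (total[0] + part[0], total[1] + part[1])
    return total
```

To see the truncation clearly, take two collinear forces (4, 0) and (3, 0), both with weight 1, and a maximum force of 5: the first is added in full, the second is truncated to (1, 0), and the result is (5, 0).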
So there you have it – four different methods, producing four different results. They each have their own strengths and weaknesses, so there’s no “correct” answer, but it’s worth considering the different options available when deciding how to adjust an autonomous character’s movement behaviour.