One of our earliest design goals for Astrobase Command was to make believable AI characters with personalities, moods, likes and dislikes. Not only does it increase a player’s attachment to their crew, but it also creates new situations to manage.
Do you want to give your most gifted science officer a team to run, even if he’s the most irritating person in the known universe? The team might benefit from his knowledge, but will also most likely be united by their hate for him.
Anytime a character is making a decision, whether going about their daily routine or responding to an order from the player, we want their personality to come out in the choices they make.
Three different characters walk to work: one goes straight there and arrives on time, one stops to chat with friends on the way, another stops by the bar. These small variations say something about the character.
The trick is to provide enough distinct decisions to make the similarities and differences between characters clear. At the same time, the system has to make it easy to add new actions, be lightweight enough to run on dozens of characters at once and, most importantly, be believable.
Character decision making in Astrobase Command is handled by a utility system. Instead of designing a specific logical sequence (“If hungry, eat. If sleepy, sleep. Otherwise, wander.”), we provide it with three kinds of information: AI variables, utilities, and actions.
AI variables are the most basic components of character AI. They represent values that change in some predictable way over time, like tiredness and hunger.
Utilities define the wants and needs a character has based on their AI variables. These are things like DesireToEat or DesireToSleep. They take variables as input and output a score between 0 and 1. For example, DesireToSleep may be based on tiredness but also on morale.
A utility comes up with its score by running its variables through a mathematical equation. Choosing the shape of that curve determines how the utility score changes as the variable value changes.
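As a minimal sketch (the game itself isn't written in Python, and names like `desire_to_sleep` and its weights are invented for illustration), a utility can be a small function that maps AI variables to a score in [0, 1]:

```python
def clamp01(x):
    """Clamp a value to the [0, 1] range."""
    return max(0.0, min(1.0, x))

def desire_to_sleep(tiredness, morale):
    """Score rises with tiredness; low morale nudges it up further.
    Both inputs are assumed to already be in [0, 1]."""
    # Weight tiredness heavily, with a smaller contribution from low morale.
    return clamp01(0.8 * tiredness + 0.2 * (1.0 - morale))
```

Here the "curve" is just a weighted sum, but any function of the variables works, which is exactly why the choice of curve shape matters.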
The image above is an example of a logit function. If we wanted to design a FearOfThreat utility based on distance to the threat, a logit function would be perfect.
To use an example from renowned game AI developer Dave Mark, if you see someone in the distance holding a knife, your fear goes up quickly no matter how far they are. If they approach you, your fear will increase further but not as rapidly as becoming aware of the person in the first place. If the person manages to get close enough that they could actually hurt you, your fear will increase much more rapidly again.
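That knife scenario can be sketched as a logit-shaped response curve. This is a hypothetical Python illustration, not the game's code; `max_distance` is a made-up tuning value.

```python
import math

def fear_of_threat(distance, max_distance=50.0):
    """Logit-style response curve: steep at both extremes, flat in the middle.
    Fear jumps quickly on first noticing the threat, creeps up slowly as it
    approaches, then spikes again once it is close enough to hurt you."""
    # Convert distance into proximity in [0, 1]: 1 means right on top of you.
    proximity = 1.0 - min(distance, max_distance) / max_distance
    # Keep the input away from 0 and 1 so the logit stays finite.
    p = min(max(proximity, 0.001), 0.999)
    # logit(p) ranges over roughly [-log(999), +log(999)]; rescale to [0, 1].
    bound = math.log(999.0)
    return (math.log(p / (1.0 - p)) + bound) / (2.0 * bound)
```

Swapping in a different curve (linear, quadratic, logistic) for the same inputs produces a noticeably different personality, which is the point of separating the curve from the variables.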
Actions define the actual behavior to be performed and, when queried, report a score to the AI using the appropriate utilities. So, a GoToQuarters action would likely use a DesireToSleep utility.
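Putting the three pieces together, decision making reduces to scoring every action and picking the winner. A hedged sketch, with invented action names and a plain list standing in for whatever structure the game actually uses:

```python
def desire_to_sleep(tiredness):
    return min(max(tiredness, 0.0), 1.0)

def desire_to_eat(hunger):
    return min(max(hunger, 0.0), 1.0)

# Each action pairs a behavior name with a scoring function over character state.
actions = [
    ("GoToQuarters", lambda c: desire_to_sleep(c["tiredness"])),
    ("GoToMessHall", lambda c: desire_to_eat(c["hunger"])),
]

def choose_action(character):
    """Score every available action for this character and pick the highest."""
    return max(actions, key=lambda a: a[1](character))[0]

tired_crewman = {"tiredness": 0.9, "hunger": 0.3}
print(choose_action(tired_crewman))  # → GoToQuarters
```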
A Beautiful Mind
The beauty of a utility system is that each action is evaluated in a self-contained way. But, since they use utilities for scoring, multiple actions can rely on the same utilities without being interdependent.
Even better, utilities can also use other utilities as input variables. So, a JobSatisfaction utility may factor in other utilities relating to morale, tiredness and how well their skills match up with their assigned job.
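A JobSatisfaction utility along those lines might look like the sketch below. The sub-utilities and weights are illustrative guesses, not the game's actual tuning:

```python
def morale_utility(morale):
    return min(max(morale, 0.0), 1.0)

def rest_utility(tiredness):
    # Invert tiredness: a well-rested character scores high.
    return 1.0 - min(max(tiredness, 0.0), 1.0)

def skill_match_utility(skill, job_difficulty):
    # Full score when skill meets the job's demands, scaling down below that.
    return min(skill / job_difficulty, 1.0)

def job_satisfaction(character):
    """A utility whose inputs are other utilities, each already in [0, 1].
    The weights are arbitrary tuning values."""
    return (0.4 * skill_match_utility(character["skill"], character["job_difficulty"])
            + 0.4 * morale_utility(character["morale"])
            + 0.2 * rest_utility(character["tiredness"]))
```

Because each sub-utility already normalizes to [0, 1], composing them is just a weighted blend, and any action can reuse JobSatisfaction without knowing what feeds into it.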
An important bonus to representing decision making as a list of scored actions is that the list itself can be dynamic.
I don’t need to think about a FightFire action if the action isn’t part of the list. When a fire occurs, the fire itself will add the FightFire action to the action lists of the characters that encounter it. In this way, situational actions only appear when they become relevant.
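That injection pattern can be sketched like so (again a hypothetical illustration, with made-up class and action names):

```python
class Character:
    def __init__(self, name):
        self.name = name
        # Baseline actions every character always considers.
        self.actions = ["GoToQuarters", "GoToMessHall"]

class Fire:
    """A situational event that injects its own action into the
    action lists of characters that encounter it."""
    def on_encounter(self, character):
        if "FightFire" not in character.actions:
            character.actions.append("FightFire")

crewman = Character("Jones")
Fire().on_encounter(crewman)
print(crewman.actions)  # → ['GoToQuarters', 'GoToMessHall', 'FightFire']
```

Once the fire is out, the same object can remove FightFire again, so the scoring loop never wastes time on actions that can't apply.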
This may still be a little too abstract at the moment. But have no fear! I’ll make sure to provide funny AI anecdotes as they occur. If you have any questions, you know where to ask them (It’s in the forums)!