
Writing Tabletop AI


Recently I’ve been working on a project with a fairly prominent piece of cardboard AI in it, and it got me thinking about writing this blog, since it’s an interesting subject. One of the curious things about tabletop AI is that a lot of the time it sneaks in without you noticing it, to the degree that I wasn’t sure whether I should write this blog because I’d forgotten that both of the games I’ve launched on Kickstarter had AI in them.

To my mind there are three general forms of cardboard AI: Full AI, Hollow AI and Game AI. I’ll try to explain each of them and suggest a few things to be aware of when writing them. First, though, I’d like to suggest a few general rules that apply to all AI systems.

Freedom is Constriction

The first and most important rule for an AI system is that the player should never, ever, have to make a choice for the AI. Even the simplest of choices, and even with general guidance on the choice from the game, should never be left up to the player. All actions taken by the AI should be clearly delineated and based on questions of observable fact. The reason for this is that choosing for the AI puts the player into an impossible and unpleasant space: either they choose in their own favour and suspect that they ‘cheated’ by having the AI act poorly, or they don’t, and if they lose they will always feel that they did so because they played to ‘theme’ rather than mechanics.
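As a hypothetical illustration of an ‘observable fact’ decision, an enemy targeting rule might read: attack the player with the lowest health; on a tie, the closest; on a further tie, the earliest in turn order. Every term is observable, so any two people resolving the same board state get the same answer. A minimal sketch (all the names and stats here are invented for the example):

```python
# A hedged sketch: an AI targeting rule resolved purely from observable
# facts, so the player never has to choose on the AI's behalf.
from dataclasses import dataclass

@dataclass
class Investigator:
    name: str
    health: int
    distance: int    # spaces from the enemy
    turn_order: int  # seat position, 1 = first player

def choose_target(investigators):
    """Lowest health first; ties broken by distance, then turn order."""
    return min(investigators, key=lambda i: (i.health, i.distance, i.turn_order))

party = [
    Investigator("Agnes", health=3, distance=2, turn_order=2),
    Investigator("Roland", health=3, distance=1, turn_order=1),
    Investigator("Daisy", health=5, distance=1, turn_order=3),
]
print(choose_target(party).name)  # -> Roland (tied on health, but closer)
```

The point is the shape, not the specific priorities: an ordered tuple of observable facts leaves no gap for the player to fill in on the AI’s behalf.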

Additionally, players should have the freedom to be smart about their own choices, but if they are in control of their own threats and you let them be smart about that too, you’ll quickly find that clever players wreck your difficulty curve. You really can’t allow good players too much control, because each lever you give them to pull widens the gap between them and average players. Give them too many levers at opposite ends of the equation and you’ll soon find that keeping them challenged means crushing more average players. I recently tested a prototype of a Legacy game, which suggested that a very skilled group should be looking at a two-thirds win rate, with an experienced group; the game let us make choices for its AI, and we never came close to losing a game.

A version of this problem can be seen in Arkham Horror: The Card Game’s ‘Grim Rule’, which states that if a choice is to be made for the game, the worst choice is always taken. Now, Arkham Horror is a sprawling game, as is very much Fantasy Flight’s wont, and covering every possible interaction it throws up is quite a task. Even so, the Grim Rule is a prime example of the trouble with forcing the player to make unclear choices for the AI. The biggest reason is that it forces the player to break character constantly. For example, if I have no real plan to spend resources, then losing resources is less painful than losing cards, but I only know that because I’ve planned ahead. So, when I choose the worst option for the game, do I have access to the knowledge that I planned ahead? Am I to be penalised for future planning? And if I am, should I make some sort of Big Brother doublethink effort not to notice what might be wise in the future, in an attempt to protect myself? It’s a horrible thing to put a player through and should be avoided.

Better to Stay Silent

An AI that is inactive looks better than one that blunders. You will, thank god, never be able to construct an AI that matches a human being for cunning and ability. However, you will be able to make one that matches a human being for blundering, bone-headed stupidity. If your AI feels a little dead, that’s a pity, but people will forgive it; it is, after all, dead. If it self-destructs, though, that’s harder to forgive. If you’re not sure that a behaviour will make your AI smarter, take it out and don’t replace it.

Soloists are patient

Less of a rule and more something to bear in mind: rules that could never fly in a multiplayer game due to complexity are accepted, and sometimes preferred, by a solo player. Solo gaming is a much more internal experience than shared gaming, for obvious reasons, and significantly more in the way of steps and processes is acceptable. That doesn’t mean that people should be overloaded with complexity, but if a step results in something interesting happening and you would usually remove it in favour of simplicity, give it a second thought for inclusion.

Full AI

The first form of AI is full AI, which is to say a set of rules that the player can follow on behalf of the game which attempt to re-create the actions of a moderately capable player. These will often come in the form of a decision tree, a list of behaviours or sometimes a deck of actions. The greatest strength of this form of AI is that it gives the clearest sense of a multiplayer experience, if that’s what’s desired, because the game is playing with you by proxy, in a sort of Chinese Room arrangement.

As a philosophical aside, the Chinese Room thought experiment imagines a room containing a man who speaks only English, along with a big book that contains a range of Chinese phrases. Chinese phrases are posted into the room; the man looks them up in the book, copies out the Chinese phrases listed next to them, then posts the new card back out of the room. The book could even be translating the Chinese phrases into another language that the man does not speak, such as Japanese. The question becomes: if the phrases are questions that are answered, or are successfully translated into Japanese, who at that point is answering the questions, or translating the phrases?

The problem is that Full AI is, ironically, usually the dumbest form of AI. Unless the instructions are extremely complex or the game very simple, it will rarely come close to the capabilities of a skilled human opponent.

The most important thing to consider when writing full AI is that the decision tree or list of behaviours should be a full and closed loop. That is to say, every if/then statement should cover all eventualities, the AI player should not be left wasting resources at various points, and no decision path should trail off or fail to complete.
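A closed loop of this sort can be sketched as an ordered list of conditions ending in an unconditional fallback, so that no board state leaves the AI without an action. The conditions and action names below are invented purely for illustration:

```python
# A minimal sketch of a 'closed loop' full-AI behaviour list: each rule
# is tried in order, and the final rule has no condition, so every
# possible game state resolves to exactly one action.
def full_ai_turn(state):
    if state["gold"] >= 5 and state["buildings_available"]:
        return "build"             # prefer developing the engine
    if state["enemy_adjacent"]:
        return "attack"            # otherwise respond to threats
    if state["gold"] < 5:
        return "collect"           # gather resources toward building
    return "move_toward_enemy"     # unconditional fallback: never stall

print(full_ai_turn({"gold": 2, "buildings_available": True, "enemy_adjacent": False}))
# -> collect
```

The fallback line is the part that matters: delete it and a rich player with no buildings left to buy and no adjacent enemy falls off the end of the tree, which is exactly the ‘decision path that trails off’ failure described above.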

The Hollow Man

Hollow player AI, sometimes called Automata, is probably the most popular form of AI used in games now, for a few reasons. Essentially this form of AI involves figuring out only what the other player in a game does from the perspective of a player, and simply re-creating that perspective, without caring why it is happening. For example, if we imagine a worker placement game where another player taking up a slot means that you cannot use that slot, we can create a form of Hollow AI by simply having a worker randomly placed in a slot each turn. It is called ‘Hollow’ because there is nothing going on inside the opponent; all that is needed is that, from the player’s perspective, it appears to be doing the sorts of things an opponent would do, a sort of AI zombie.

There are two main reasons that this is the most prolific form of AI in games now. The first is that it is the usual accidental form of AI. What I mean by that is that many games offer a ‘score attack’ solo mode as a simple solo version, which is a basic form of hollow AI. When I’m playing a relatively low-interaction game, often the only thing my opponents do that affects me is set a score to be beaten; if the game sets that score, and rates various levels of it, that is a form of AI. Score attack games will quite often simply tell players to beat their own score, setting up a situation where they actually become their own hollow opponent AI. Similarly, solo modes will often set a turn limit on a game that would generally end when a player performs a certain act; this is another very basic form of hollow AI.

The second reason is that it is extremely easy to re-create extremely complex behaviours effectively by this method, and what’s more, it creates a set of instructions that are very easy for a player to follow. Imagine that our simple worker placement game above is a game with a good deal of hidden information in the form of cards held in players’ hands. From a player’s perspective this sort of hidden information might well make it difficult or impossible to predict exactly what an opponent will do on a given turn. It might be that some of the spaces offer more basic and often-taken actions than others. So, imagine that we create a six-sided die that lists the two most popular spaces on two faces each and the two least popular spaces on the two remaining faces; each turn we roll the die and that is where the AI places its worker. The AI now never needs to draw cards or place workers to a purpose, and it has fairly reasonably created the feel of playing against another human being with a process that is incredibly easy to repeat each turn. If we have the player discard a few cards from the draw deck each turn, to represent that they would not have access to all the cards because their opponent would use some, and post a score for the AI that’s at a reasonable level, we have a fairly accurate facsimile of playing against someone without needing to follow decision trees and complex rules.
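The die in that example is just a weighted random table plus a deck-thinning step. A sketch of the whole turn, with the space names invented for illustration (the two popular spaces occupy two faces each):

```python
import random

# Hollow AI worker placement as a weighted d6: popular spaces appear on
# two faces each, unpopular spaces on one face each. No hand of cards,
# no plan; just the visible symptom of an opponent taking up slots.
PLACEMENT_DIE = ["market", "market", "quarry", "quarry", "shrine", "docks"]

def hollow_ai_turn(open_slots, deck, rng=random):
    """Place the AI worker and thin the deck, as the rules above describe."""
    space = rng.choice(PLACEMENT_DIE)
    if space in open_slots:
        open_slots.discard(space)  # the slot is now blocked for the player
    if deck:
        deck.pop()  # simulate the opponent 'using up' a card
    return space

slots = {"market", "quarry", "shrine", "docks"}
deck = list(range(20))
hollow_ai_turn(slots, deck)
print(len(deck))  # -> 19: one card gone, one slot blocked
```

The whole per-turn procedure is one roll and one discard, which is exactly why this style is so easy for a solo player to administer.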

One of the advantages of this sort of AI is that human beings naturally create stories and explanations. It is part of our make-up that if we roll a die and it places a worker on a space that we really wanted, even if it only happens once in a game, we will be certain that it always happens, that the game knew we wanted that space, and any number of other explanations. Ironically, when given a set of full AI instructions that take decisions in a way crafted and designed to actually steal the spaces we want, this sort of interpretation is far less likely; because we can see the workings going on and know that they are just on a list, we tend not to ascribe internal workings to them.

The major problem with these forms of AI is that they are very clearly not an equivalent to human decision making, even by proxy; they are a very different experience. Although players might choose to ascribe stories to the AI’s actions, they realise that in any game with, for example, engine building, no such engines or processes are really being built. These systems seek to recreate the symptoms of another person playing a game rather than the presence of another person. In a game with open information, or any level of necessary predictability in the other player, the results can be wildly swingy and extremely disappointing; abstract tactical games, for example, would struggle to use such a system.

In creating a Hollow AI system, the first thing to do is to pin down exactly, and only, the points at which the actions of another player interact with or affect the choices of their opponent, and then to simulate those actions. It is important to remember that such a system simulates those actions rather than re-creating them. Suppose that on my turn I draw a card from a deck, examine my hand, and play the card which gives me the most VP, which averages out at 1VP per card drawn. To recreate that, we would need a decision tree that asks the player to draw a card on behalf of the AI, then look at the AI’s hand and play the card with the highest number in the ‘VP gained’ box. To simulate it, we simply ask the player to discard the top card of the draw deck and give the AI 1VP whenever the AI would draw a card. Such a system should, out of necessity, junk all processes that do not directly affect the human player or players; they are the province of the full AI.
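The recreate-versus-simulate distinction can be made concrete. Both routines below stand in for the same opponent: one actually resolves a hand, the other skips straight to the average outcome. The card values and the 1VP-per-draw average come from the example above; the function names and deck contents are mine:

```python
def recreate_draw(deck, hand, ai_vp):
    """Full AI: draw, hold a hand, play the highest-VP card."""
    hand.append(deck.pop())
    best = max(hand)          # each card's value is the VP it grants
    hand.remove(best)
    return ai_vp + best

def simulate_draw(deck, ai_vp):
    """Hollow AI: discard the top card, award the 1VP average."""
    deck.pop()                # the card is gone, but never examined
    return ai_vp + 1          # the known average yield per draw

deck = [0, 1, 2, 1, 0, 2]
vp = 0
for _ in range(3):
    vp = simulate_draw(deck, vp)
print(vp)  # -> 3 VP from three simulated draws
```

The simulated version produces the same pressure on the shared deck and the same scoreboard effect over time, with none of the hand management; everything the human player never sees has been junked.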

Hate the Game

The previously outlined AIs attempt to recreate, to some degree, the experience of playing against a human player in an otherwise oppositional game, but not all AI is built that way. Many co-op or solo games have their own form of AI, which I’ll call ‘Game AI’. Game AI is a form of AI that the game is built for; the entire engine is built to service it in order to challenge the other player, and it’s generally where the most interesting cardboard AI challenges stem from, which is interesting because it’s not really a form of AI at all. By way of analogy, a person can stop you from getting through a door by judging which way you’re going to try to dodge, judging whether it is best to go for legs, waist or head, and other such forms of intelligent interaction. Equally, a sturdy lock on the door can stop you getting through it at least as effectively, if not more so; it’s just that the face-off between two people is a fairer contest than the face-off between a person and a metre-thick steel vault door.

I’ve read much on the subject suggesting that co-op games are extremely difficult to write, the perception being that, since non-co-op games can rely heavily on the existence of other players to balance difficulty issues, they are considerably easier. Personally, I’ve not found this to be the case. I can imagine that a game with total symmetry of power and position is easier to write for this reason, but I’ve found that any asymmetry of position is tough because it needs to be carefully balanced. The benefit of writing co-op games is that oppositions can be wildly imbalanced; the game can hold all the cards to the degree that even perfect play from the human player gets a less than 20% win rate, especially if a range of challenges can be offered such that introductory enemies can be beaten easily while final bosses present a brutal challenge. For that reason I actually find writing co-op games in many ways less challenging, certainly less challenging than asymmetrical, balanced, open-information oppositional games.

When writing game AI it is important to cut out elements that do not increase the apparent threat or intelligence of the game and to build elements that provide the potential for emergent storytelling. Oppositional games always have a story, since they always allow you to struggle with a friend; your victory is their defeat, and such an exchange has pathos and interest built into it. To have the same effect in a co-op game there must be a perceived reason for actions, a sense of direction from the enemy. Remember that you are creating all parts of the world you build at these points; there is nothing that has to be in there and nothing that is not allowed. If something isn’t making your game more threatening or comprehensible, cut it out.

So, I’ve written various forms of AI between my co-op game SSO, my deck builder Moonflight, which has both a score attack and a full AI mode, and now the built-for-solo wargame Perilous Tales. All are a challenge, but immensely rewarding to create, and they open a game up to a whole new audience. Have you introduced an AI to any of your designs? If so, which form did you follow? Did you take a more hybrid route, or do you think there is a significantly divergent form of AI available on the tabletop?
