Linear Algebra, Cats and Dogs, and Markov Unchained

How can linear algebra be useful when designing a slot machine, other than indirectly through affine transformations when doing animations?

When creating the mathematical model of a slot machine, most calculations are rather straightforward: sometimes complex, yet usually easy to survey. In some situations, however, a mathematical trick or two may come in handy.

Suppose, for instance, that the slot machine features a pick’n’click bonus game in which the player gets to pick a number of boxes to reveal her wins. The bonus game is triggered when three or more DOG symbols appear in the slot machine window in one spin, and the number of picks is determined by how many CAT symbols have been collected prior to entering the bonus game: 0 or 1 cats give one pick, 2 or 3 cats give two picks, and 4 or more cats give three picks. It is then essential to find, not the average number of collected CAT symbols at the time the bonus game is triggered, but the probabilities of entering the bonus game with 0 or 1, with 2 or 3, and with at least 4 collected CAT symbols.

Or consider the case where three or more BONUS symbols appearing in the slot machine window trigger the free game feature, in which the player is given 10 free games with multiplier 2x. Whenever a PEARL symbol appears in the middle position of the middle reel, the multiplier increases by 1x for subsequent spins, up to at most 5x. We would then need to find, for each of spins 1, 2, …, 10, the probability that it is played with multiplier 2x, 3x, 4x or 5x.

It is, of course, possible to run simulations to estimate the sought probabilities. This is, however, not only unsatisfying in itself; more importantly, some jurisdictions require that we calculate the exact parameters related to the RTP (return to player) of the game.

One way of doing this is to set up a Markov chain (or Markov process). Briefly, this is a system made up of states between which we move with certain probabilities from one observation point to the next. These transition probabilities depend only on the current state; the system lacks memory, in the sense that the transitions are not affected by anything that happened before the current state was reached.

The second example above, translated into the language of Markov chains, has four states and ten observation points: the states are “Multiplier m”, where m = 2, 3, 4, 5, and the observation points are the ten free game rounds.

Collecting the transition probabilities in a matrix, we can use matrix multiplication to find the probability of being in a certain state after any number of observation points (this is where the linear algebra plays its role!). If the probability of getting a PEARL symbol at the middle position of the middle reel is, say, 1/8, then each column of the following matrix contains the probabilities of transferring from one state to each of the others.

           from 2x   from 3x   from 4x   from 5x
  to 2x      7/8        0         0         0
  to 3x      1/8       7/8        0         0
  to 4x       0        1/8       7/8        0
  to 5x       0         0        1/8        1

The first spin is always played with multiplier 2x, so the corresponding initial state vector looks like this:

  (1, 0, 0, 0)ᵀ

Performing the matrix multiplication repeatedly, we end up with the following table, showing the probability of playing the nth spin with multiplier m.

[Table: probabilities of playing the nth spin (n = 1, …, 10) with multiplier m (m = 2x, …, 5x)]
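As a sketch of how such a table can be produced, here is a short Python version of the computation, using exact fractions rather than floating point (in the spirit of the exact-RTP requirement above). The 1/8 PEARL probability is the one from the example; the variable names are my own:

```python
from fractions import Fraction

p = Fraction(1, 8)  # probability of a PEARL in the middle position of the middle reel

# Transition matrix: P[i][j] is the probability of moving from state j to
# state i, where states 0..3 correspond to multipliers 2x..5x.
# From each non-top state: stay with probability 1 - p, step up with
# probability p; the 5x state is absorbing.
P = [[Fraction(0)] * 4 for _ in range(4)]
for j in range(3):
    P[j][j] = 1 - p
    P[j + 1][j] = p
P[3][3] = Fraction(1)

# The first spin is always played with multiplier 2x.
v = [Fraction(1), Fraction(0), Fraction(0), Fraction(0)]
table = [v]
for _ in range(9):  # spins 2..10
    v = [sum(P[i][j] * v[j] for j in range(4)) for i in range(4)]
    table.append(v)

# Print one row per spin: probabilities of multipliers 2x, 3x, 4x, 5x.
for n, dist in enumerate(table, start=1):
    print(n, [str(q) for q in dist])
```

Since each column of the matrix sums to 1, every row of the table sums to 1 as well, which is a handy sanity check on the computation.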

It is left as an exercise to make the necessary modifications for the CAT and DOG example above.
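As a hint for the exercise, here is a hedged sketch under assumptions of my own: suppose (hypothetically, these numbers are not given in the text) that each base-game spin triggers the bonus with probability 0.02 and, if it does not, collects a CAT symbol with probability 0.05. The states are then the number of collected CATs, capped at 4, and we iterate until (almost) all probability mass has entered the bonus game:

```python
p_trigger = 0.02  # hypothetical probability that a spin shows 3+ DOG symbols
p_cat = 0.05      # hypothetical probability of a CAT on a non-triggering spin

# alive[k]: probability of having collected k CATs (k = 4 meaning "4 or more")
# without having triggered the bonus yet.
# entered[k]: probability of entering the bonus game with k collected CATs.
alive = [1.0, 0.0, 0.0, 0.0, 0.0]
entered = [0.0] * 5

for _ in range(10_000):  # iterate until the remaining mass is negligible
    new = [0.0] * 5
    for k, mass in enumerate(alive):
        entered[k] += mass * p_trigger          # bonus triggered on this spin
        survive = mass * (1 - p_trigger)
        new[min(k + 1, 4)] += survive * p_cat   # collected another CAT
        new[k] += survive * (1 - p_cat)         # nothing happened
    alive = new

one_pick = entered[0] + entered[1]    # entered with 0 or 1 CATs
two_picks = entered[2] + entered[3]   # entered with 2 or 3 CATs
three_picks = entered[4]              # entered with 4 or more CATs
print(one_pick, two_picks, three_picks)
```

As a check, the probability of entering with 0 CATs has the closed form p_trigger / (1 − (1 − p_trigger)(1 − p_cat)), which the iteration converges to.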