Linear Algebra, Cats and Dogs, and Markov Unchained

How can linear algebra be useful when designing a slot machine, other than indirectly through affine transformations when doing animations?

When creating the mathematical model of a slot machine, most calculations are rather straightforward: sometimes complex, yet usually easy to keep an overview of. In some situations, however, a mathematical trick or two may come in handy.

Suppose, for instance, that the slot machine features a pick’n’click bonus game in which the player gets to pick a number of boxes to reveal her wins. The bonus game is triggered when three or more DOG symbols appear in the slot machine window in one spin, and the number of picks is determined by how many CAT symbols have been collected prior to entering the bonus game: 0 or 1 cat – one pick, 2 or 3 cats – two picks, 4 or more cats – three picks. It is then essential to find, not the average number of CAT symbols at the time when the bonus game is triggered, but the probabilities of entering the bonus game with 0 or 1, with 2 or 3, and with at least 4 collected CAT symbols.

Or, we could consider the case where the appearance of three or more BONUS symbols in the slot machine window triggers the free game feature, in which the player is given 10 free games with multiplier 2x. Whenever a PEARL symbol appears in the middle position of the middle reel, the multiplier increases by 1x for subsequent spins, up to at most 5x. We would then need to find, for each of the spins 1, 2, …, 10, the probabilities that it is played with multiplier 2x, 3x, 4x or 5x.

It is, of course, possible to run simulations to estimate the sought probabilities. This is, however, not only personally unsatisfying; more importantly, some jurisdictions require that we calculate the exact parameters related to the RTP (return to player) of the game.

One way of doing this is to set up a Markov chain or Markov process. Briefly, this is a system made up of states between which we move with certain probabilities from one observation point or period to the next. These transition probabilities depend only on the current state; the system lacks memory, in the sense that the next transition is not affected by what happened before the current state.
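In standard notation: if X_0, X_1, X_2, … denote the states at successive observation points, the Markov property says that

P(X_{n+1} = x | X_n = x_n, X_{n-1} = x_{n-1}, …, X_0 = x_0) = P(X_{n+1} = x | X_n = x_n),

that is, once the current state is known, the earlier history adds nothing.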

The second example above, translated into the language of Markov chains, has four states and ten observation points, the states being “Multiplier m”, where m=2,3,4,5 and the observation points being the ten free game rounds.

Collecting the transition probabilities in a matrix, we can use matrix multiplication to find the probability of being in a certain state after any number of observation points (this is where the linear algebra plays its role!). If the probability of getting a PEARL symbol at the middle position of the middle reel is, say, 1/8, then each column of the following matrix contains the probabilities of transferring from one state to each of the others.

(columns: current multiplier; rows: multiplier on the next spin)

            from 2x   from 3x   from 4x   from 5x
  to 2x       7/8        0         0         0
  to 3x       1/8       7/8        0         0
  to 4x        0        1/8       7/8        0
  to 5x        0         0        1/8        1

The first spin is always played with multiplier 2x, so the corresponding initial state vector looks like this.

(1, 0, 0, 0)^T, i.e. probability 1 for the state 2x and 0 for the states 3x, 4x and 5x.

Performing matrix multiplication repeatedly, we end up with the following table, showing the probabilities for playing the nth spin with multiplier m.

 b05table01
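To make the calculation concrete, here is a minimal C sketch (my own illustration; the variable names are arbitrary) that produces such a table by repeatedly applying the transition matrix above to the initial state vector, assuming the 1/8 PEARL probability:

#include <stdio.h>

#define STATES 4   /* multipliers 2x, 3x, 4x, 5x */
#define SPINS  10  /* number of free games */

int main(void)
{
    const double p = 1.0 / 8.0;            /* probability of a PEARL in the middle position */
    /* T[i][j] = probability of moving from state j to state i; each column sums to 1 */
    double T[STATES][STATES] = {
        { 1 - p, 0,     0,     0 },
        { p,     1 - p, 0,     0 },
        { 0,     p,     1 - p, 0 },
        { 0,     0,     p,     1 }         /* 5x is an absorbing state */
    };
    double v[STATES] = { 1, 0, 0, 0 };     /* spin 1 is always played at 2x */

    for (int spin = 1; spin <= SPINS; ++spin) {
        printf("spin %2d: 2x=%.4f 3x=%.4f 4x=%.4f 5x=%.4f\n",
               spin, v[0], v[1], v[2], v[3]);

        double w[STATES] = { 0 };          /* w = T * v, the distribution for the next spin */
        for (int i = 0; i < STATES; ++i)
            for (int j = 0; j < STATES; ++j)
                w[i] += T[i][j] * v[j];
        for (int i = 0; i < STATES; ++i)
            v[i] = w[i];
    }
    return 0;
}

Printing before each transition means the figures for spin n are those after n-1 transitions, matching the fact that the multiplier of a spin is determined by the PEARL symbols of the preceding spins.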

It is left as an exercise to make the necessary modifications for the CATS and DOGS example above.

That’s a Bingo!

Creating the math model for a player vs. player Bingo game is easy. What is left of the pot when the commission has been deducted is distributed over the winning tickets, for example according to an hour-glass shape, perhaps something like the table below.

bingoTable01

Or maybe more of a funnel shape.

bingoTable02

Mission accomplished?

Well, in most cases not. We may want to add a jackpot which is paid out should a ticket obtain a 1-row Bingo within, say, 13 drawn numbers. Or give out free tickets for the next round when the marked numbers on a ticket form a certain pattern. In order to control and fine-tune the playing experience, in particular the hit rates and average amounts paid for these bonus features, we need to know the probabilities of the respective events occurring.

For simplicity, let’s consider a Mini-Bingo, where each ticket consists of a 3×3 square grid of (nine) numbers between 1 and 21 inclusive. We want to calculate the probability of filling one row within k called numbers, where k ranges from 0 to 21. It does not matter which one of the three horizontal rows we fill; any of them will do.

Obviously, for k<3 the probability is 0, and for k>18 it is 1 (with at least 19 numbers called, at most two squares on the ticket are unmarked, and two unmarked squares can block at most two of the three rows). For general k, we may perform the calculations in two steps, step A and step B.

A. Given that k numbers have been called, where k=0,1,…,21, find the probability that exactly j of them are present on the ticket.

B. Given that exactly j numbers have been marked on the ticket, find the probability that at least one row has been filled. Here, j ranges between 0 and 9.

We can then combine the two steps to find the probability that at least one row has been filled after k called numbers.

Step A is an exercise in elementary combinatorics involving binomial coefficients, while step B is the one that requires some more thought.
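To spell this out, writing C(n, k) for the binomial coefficient “n choose k” and using P_A, P_B as shorthand introduced here for convenience: the ticket holds 9 of the 21 numbers, so the number of hits among k called numbers follows a hypergeometric distribution, and step A gives

P_A(j | k) = C(9, j) * C(12, k - j) / C(21, k),

the probability that exactly j of the k called numbers are among the nine on the ticket. If P_B(j) denotes the step B probability of having filled at least one row given j marked squares, the two steps combine as

P(at least one row within k called numbers) = P_A(0 | k) * P_B(0) + P_A(1 | k) * P_B(1) + … + P_A(9 | k) * P_B(9).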

Consider first the case j=3, where three of the nine squares are marked. There are 84 possible patterns made up of three marks, and since each row is formed by three squares, exactly three of them qualify, so the sought probability is 3/84 = 1/28.

When j=4, out of the 126 different patterns, the qualifying ones are those where three marks cover one of the three rows while the outlier may end up in any of the six remaining positions. Thus 3*6/126 = 1/7 is the number we are looking for. The case j=5 is similar: three of the marks cover one of the three rows while the other two can be distributed freely over the remaining six squares. The number of qualifying patterns is therefore 3*15 = 45 out of 126, so our probability is 45/126 = 5/14.

The same reasoning cannot be used when j=6, however. Since the three outliers may cover a row by themselves, we would count some qualifying patterns twice, namely those which cover precisely two rows. We end up with 3*20 - 3 = 57 qualifying patterns consisting of six marks, out of a total of 84 patterns, giving the probability 57/84 = 19/28.

For j>6, all patterns qualify, and for j<3 there is no pattern which could ever fill an entire row. We end up with the following probability table.

j (marked squares):      0    1    2    3      4     5      6       7    8    9
P(at least one row):     0    0    0    1/28   1/7   5/14   19/28   1    1    1

Combining with the result from step A and performing similar calculations for 2 and 3 rows, we get the complete table of Bingo probabilities for this mini scale game.

bingoTable03

Also added is a column for the probabilities of covering the four corners. In addition to the prizes paid for 1, 2, and 3 row Bingo, we could maintain a progressive jackpot funded by 5% of all bets and paid when a ticket fills the corners within 5 called numbers. With a ticket price of $1, such a hit occurs once every 1200 tickets sold on average, and the average pot when hit is about $60.
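As a quick check of these figures: the probability that all four corners of a given ticket are among the first 5 called numbers is

C(4, 4) * C(17, 1) / C(21, 5) = 17 / 20349 ≈ 1/1197,

i.e. roughly one hit per 1200 tickets sold, and with 5% of each $1 ticket feeding the pot, the average pot at the time of a hit is about 0.05 * 1197 ≈ $60.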

Note that, as a byproduct of these calculations, we have the basis for a player vs. bank type of Bingo game, where the probabilities are a prerequisite when stitching the win table together.

In the full-scale case, where each ticket has 25 squares with numbers between 1 and 75, it is of course inconvenient to do the calculations in step B by hand. I would normally write a computer program working with bit operations, where one bit is assigned to each square in the grid. In a for loop, we let an integer mask range from 0 to 2^25-1. In each step, the number of bits set in mask is our j. If the integer p is the bitmask for a certain win pattern, e.g. a row, we increment the appropriate counters when p & ~mask = 0. Here, & denotes bitwise AND while ~ denotes bitwise NOT. The code may look something like this.

bingoCode01
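The original listing is not reproduced here, but a minimal sketch along those lines could look as follows (my own code, scaled down to the 3×3 Mini-Bingo so that the counts can be checked against the step B table above; the full-scale game only changes the grid size and the list of win patterns):

#include <stdio.h>

/* number of bits set in x */
static int popcount(unsigned int x)
{
    int n = 0;
    while (x) { n += x & 1u; x >>= 1; }
    return n;
}

int main(void)
{
    enum { SQUARES = 9, PATTERNS = 3 };                       /* 3x3 grid, three horizontal rows */
    const unsigned int row[PATTERNS] = { 0x007u, 0x038u, 0x1C0u }; /* bitmasks of the three rows */
    long total[SQUARES + 1] = { 0 };                          /* total[j]: masks with j bits set  */
    long hits[SQUARES + 1]  = { 0 };                          /* hits[j]: those covering a row    */

    for (unsigned int mask = 0; mask < (1u << SQUARES); ++mask) {
        int j = popcount(mask);
        total[j]++;
        for (int r = 0; r < PATTERNS; ++r) {
            if ((row[r] & ~mask) == 0) {   /* row r is completely covered by the marks */
                hits[j]++;
                break;                     /* count each mask at most once */
            }
        }
    }

    for (int j = 0; j <= SQUARES; ++j)
        printf("j = %d: %ld qualifying out of %ld patterns\n", j, hits[j], total[j]);
    return 0;
}

The break statement ensures that each mask is counted at most once, i.e. we count masks covering at least one row, which is exactly what step B asks for.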