Linear Algebra, Cats and Dogs, and Markov Unchained

How can linear algebra be useful when designing a slot machine, other than indirectly through affine transformations when doing animations?

When creating the mathematical model of a slot machine, most calculations are rather straightforward: sometimes complex, yet usually easy to keep an overview of. In some situations, however, a mathematical trick or two may come in handy.

Suppose, for instance, that the slot machine features a pick’n’click bonus game in which the player gets to pick a number of boxes to reveal her wins. The bonus game is triggered when three or more DOG symbols appear in the slot machine window in one spin, and the number of picks is determined by how many CAT symbols have been collected prior to entering the bonus game: 0 or 1 cat – one pick, 2 or 3 cats – two picks, 4 or more cats – three picks. It is then essential to find, not the average number of CAT symbols at the time when the bonus game is triggered, but the probabilities of entering the bonus game with 0 or 1, with 2 or 3, or with at least 4 collected CAT symbols.

Or we could consider the case where three or more BONUS symbols appearing in the slot machine window trigger the free game feature, in which the player is given 10 free games with multiplier 2x. Whenever a PEARL symbol appears in the middle position of the middle reel, the multiplier increases by 1x for subsequent spins, up to at most 5x. We would then need to find, for each of the spins 1, 2, …, 10, the probability that it is played with multiplier 2x, 3x, 4x or 5x.

It is, of course, possible to run simulations to estimate the sought probabilities. This is, however, not only personally unsatisfying; more importantly, some jurisdictions require that we calculate the exact parameters related to the RTP of the game.

One way of doing this is to set up a Markov chain or Markov process. Briefly, this is a system made up of states, between which we move with certain probabilities from one observation point or period to the next. These transition probabilities depend only on the current state; they lack memory, in the sense that they are not affected by what happened before the system reached its current state.
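In symbols, writing p_ij for the probability of moving from state i to state j, this memorylessness reads

$$P(X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_0 = i_0) = P(X_{n+1} = j \mid X_n = i) = p_{ij}.$$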

The second example above, translated into the language of Markov chains, has four states and ten observation points, the states being “Multiplier m”, where m=2,3,4,5 and the observation points being the ten free game rounds.

Collecting the transition probabilities in a matrix, we can use matrix multiplication to find the probability of being in a certain state after any number of observation points (this is where the linear algebra plays its role!). If the probability of getting a PEARL symbol at the middle position of the middle reel is, say, 1/8, then each column of the following matrix contains the probabilities of transferring from one state to each of the others.

$$T = \begin{pmatrix} 7/8 & 0 & 0 & 0 \\ 1/8 & 7/8 & 0 & 0 \\ 0 & 1/8 & 7/8 & 0 \\ 0 & 0 & 1/8 & 1 \end{pmatrix}$$

(rows and columns ordered by multiplier 2x, 3x, 4x, 5x, with the 5x state absorbing)

The first spin is always played with multiplier 2x, so the corresponding initial state vector looks like this:

$$v_1 = \begin{pmatrix} 1 \\ 0 \\ 0 \\ 0 \end{pmatrix}$$
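The state distribution for spin n is then obtained by repeated multiplication with the transition matrix:

$$v_n = T\,v_{n-1} = T^{\,n-1} v_1, \qquad n = 1, 2, \ldots, 10.$$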

Doing so for n = 1, …, 10, we end up with the following table, showing the probability of playing the nth spin with multiplier m.

[Table: probability of playing spin n (n = 1, …, 10) with multiplier 2x, 3x, 4x or 5x]
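The table can be reproduced with a few lines of code. The following is a minimal sketch, assuming the 1/8 PEARL probability stated above and the column-stochastic convention used for the matrix; it is not tied to any particular game engine.

```python
from fractions import Fraction

# Transition matrix, column-stochastic: entry T[i][j] is the probability of
# moving from state j to state i, where states 0..3 correspond to the
# multipliers 2x, 3x, 4x and 5x.
p = Fraction(1, 8)   # probability of a PEARL in the middle position of the middle reel
q = 1 - p
T = [
    [q, 0, 0, 0],
    [p, q, 0, 0],
    [0, p, q, 0],
    [0, 0, p, 1],    # the 5x state is absorbing
]

def apply(matrix, vector):
    """Return matrix * vector, with the vector given as a plain list."""
    return [sum(matrix[i][j] * vector[j] for j in range(4)) for i in range(4)]

# Spin 1 is always played at multiplier 2x.
v = [Fraction(1), Fraction(0), Fraction(0), Fraction(0)]

print("spin" + "".join(f"{m:>9}" for m in ("2x", "3x", "4x", "5x")))
for spin in range(1, 11):
    print(f"{spin:>4}" + "".join(f"{float(x):9.4f}" for x in v))
    v = apply(T, v)
```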

It is left as an exercise to make the necessary modifications for the CATS and DOGS example above.

Gauss, Berry and Esseen went to Monte Carlo


Normal (or Gaussian) probability distributions are very convenient to work with, due to their regular properties. They are widely used in statistical applications such as hypothesis testing and prediction. They become even more useful thanks to the Central Limit Theorem, which, translated into the language of slot machines and somewhat simplified, states that if we hit the spin button sufficiently many times, the distribution of our average win can be approximated by a Normal distribution. All that is needed to find the best approximating Normal distribution for a certain number of game rounds is the mean value (RTP) and the variance (volatility).
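In quantitative terms: if a single game round has mean return m (the theoretical RTP per unit bet) and standard deviation σ, then the average return over n rounds is approximately

$$\bar{X}_n \approx \mathcal{N}\!\left(m, \frac{\sigma^2}{n}\right).$$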

 

For instance, many people have seen expressions like

P( m-d < X < m+d ) = 1-a.

It states that the probability is 1-a that the variable X falls within distance d of its mean value m. The significance level a is typically 0.1 (10%) or 0.05 (5%), corresponding to confidence levels of 90% and 95%. If X is the RTP, this formula gives an idea of what to expect regarding the player return after a certain number of played game rounds. In the formula above, m is the theoretical RTP, and in view of the Central Limit Theorem, d can be approximated by the corresponding parameter of a Normal distribution, which is easy to find.
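Concretely, under the Normal approximation the half-width d of the interval is

$$d \approx z_{1-a/2}\,\frac{\sigma}{\sqrt{n}},$$

where σ is the standard deviation of a single game round, n the number of rounds played, and z_{1-a/2} the standard Normal quantile (about 1.645 for a = 0.1 and 1.96 for a = 0.05).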

The approximation by Gaussian distributions is also heavily relied upon in the testing phase of a game. A simple statistical method for finding bugs is to calculate the standard deviation for one game round and use expressions similar to the above formula to make predictions regarding the RTP after, say, a million game rounds. If too few or too many groups of a million game rounds have an RTP outside the interval (m-d, m+d), then it’s a good idea to start looking for mistakes in the code or the configuration. This is also useful for finding situations with several bugs whose effects on the RTP cancel each other out to some extent – if the bugs affect the variance, it should show when grouping the game rounds and comparing the partial results.
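A minimal sketch of such a check is given below. The per-round mean and standard deviation are placeholder values, not taken from any real game, and the batch RTPs are assumed to come from a simulation log.

```python
import math

# Placeholder figures for the game model (not from any real game).
mu = 0.962          # theoretical RTP per game round
sigma = 5.4         # standard deviation of a single game round, in bet units
n = 1_000_000       # game rounds per batch
alpha = 0.05        # significance level
z = 1.959964        # standard Normal quantile for 1 - alpha/2

# Interval within which the RTP of a million-round batch should usually fall.
d = z * sigma / math.sqrt(n)
low, high = mu - d, mu + d

def count_outliers(batch_rtps):
    """Count batches whose RTP falls outside (mu - d, mu + d).

    With a correct implementation we expect roughly alpha * len(batch_rtps)
    batches outside the interval; a clearly larger or smaller count is a
    reason to go bug-hunting in the code or the configuration.
    """
    outside = sum(1 for r in batch_rtps if not (low < r < high))
    return outside, alpha * len(batch_rtps)

# Example usage (batch_rtps would come from simulation output):
# outside, expected = count_outliers(batch_rtps)
```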

So far, so good. But what is “sufficiently many”? The Berry-Esseen Theorem complements the Central Limit Theorem and says, again rather simplified, that the more skewed the distribution is, the more game rounds we may need to play before the Normal distribution becomes a good enough approximation. And the probability distributions of games tend to be rather skewed.
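For reference, the quantitative statement: if one game round has mean m, standard deviation σ and third absolute central moment ρ = E|X - m|³, and F_n denotes the distribution function of the standardized average of n rounds, then

$$\sup_x \bigl| F_n(x) - \Phi(x) \bigr| \le \frac{C\,\rho}{\sigma^3 \sqrt{n}},$$

where Φ is the standard Normal distribution function and C is an absolute constant (of order one half). A large ratio ρ/σ³, typical for the skewed payout distributions of games, therefore pushes up the number of rounds needed for a given accuracy.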

As an example, we consider a scratch ticket game with a virtual batch of 200’000 tickets. We repeatedly simulate one million game rounds, which thus corresponds to five full cycles of the game, and compare the result with the best possible Normal distribution. In the diagram below, the Monte Carlo method has been applied 10’001 times to 1’000’000 game rounds, so more than ten billion game rounds have been simulated in total.

[Figure: Approximation by Normal distribution, part 1 – inverse cumulative distribution functions for the Monte Carlo simulation and the Normal approximation]

The curves show the inverse of the cumulative distribution function for the respective methods. The Monte Carlo curve is of course more jagged than the Normal distribution curve, but they seem to match pretty well. Zooming in at 95%, however, shows a discrepancy of more than one percent. And we all know the importance of one percent!

[Figure: Approximation by Normal distribution, part 2 – the same curves zoomed in around the 95% level]
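A comparison of this kind can be sketched as follows. The prize table below is purely hypothetical (the real batch composition is not given here), and drawing tickets independently with replacement is a simplification of playing from a fixed 200’000-ticket batch.

```python
import numpy as np
from statistics import NormalDist

# Hypothetical prize table for a 200'000-ticket batch (bet = 1 unit per ticket).
prizes = np.array([0.0, 1.0, 2.0, 5.0, 20.0, 100.0, 1000.0])
counts = np.array([140_000, 40_000, 12_000, 6_000, 1_800, 190, 10])
probs = counts / counts.sum()

n_rounds = 1_000_000     # game rounds per batch
n_batches = 10_001       # number of Monte Carlo repetitions

# Mean and standard deviation of a single ticket, from the prize table.
mu = float(probs @ prizes)
sigma = float(np.sqrt(probs @ (prizes - mu) ** 2))

# Monte Carlo: total return of each million-round batch, drawn with replacement.
rng = np.random.default_rng(1)
ticket_counts = rng.multinomial(n_rounds, probs, size=n_batches)
batch_rtp = ticket_counts @ prizes / n_rounds

# Compare a few quantiles of the simulated RTPs with the Normal approximation.
normal = NormalDist(mu=mu, sigma=sigma / n_rounds ** 0.5)
for q in (0.05, 0.50, 0.95, 0.99):
    print(f"{q:.2f}   simulated {np.quantile(batch_rtp, q):.5f}"
          f"   normal {normal.inv_cdf(q):.5f}")
```

The diagrams above compare the full inverse cumulative distribution functions; the sketch only prints a handful of quantile levels, but the idea is the same.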

For a multi-line slot machine, the situation is a bit more complicated. It is usually easy to calculate the variance for the base game when playing only one payline. It gets a bit trickier to find the variance for the entire game if bonus features are involved. Most importantly, since two separate paylines are not statistically independent, we cannot calculate the variance for play on all lines by simply dividing the one-line variance by the number of paylines.
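In symbols: if X_i is the win on payline i and L lines are played, the variance of the win per unit bet is

$$\mathrm{Var}\!\left(\frac{1}{L}\sum_{i=1}^{L} X_i\right) = \frac{1}{L^2}\left(\sum_{i=1}^{L}\mathrm{Var}(X_i) + \sum_{i \neq j}\mathrm{Cov}(X_i, X_j)\right),$$

which reduces to the one-line variance divided by L only if all the covariance terms vanish – and for paylines sharing the same reel stops, they generally do not.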

The Central Limit Theorem is certainly very useful for those of us working with games, but I would recommend using it with a bit of care!