Computing Poker Win Rate From Sessions
In the last article, we briefly touched on poker win rates, which are what many (cash) poker players use to measure their level of success. The win rate is usually given as how many big blinds (a measure of table stakes) a player wins per 100 hands, on average. I think any poker tracking software will prominently display this for you, but I don’t run any poker tracking software. An acquaintance asked how I estimated my win rate.
The online casino I play on does provide quite detailed history, but I have not found (nor, to be honest, looked very hard for) any export functionality. So what I did instead was create a spreadsheet where I record three things from each session I play:
- The stakes (size of big blind),
- The profit/loss (change in money), and
- The number of hands played in that session.
A selection of rows of this spreadsheet looks like this1 Note that the cash amounts are denominated in sek, and a sek is roughly a tenth of a usd or eur. I cannot afford big blinds of $1…:
Stakes (sek) | Cash delta (sek) | Hands |
---|---|---|
0.2 | +30.21 | 51 |
0.2 | −31.87 | 58 |
1 | +38.13 | 10 |
1 | +6.72 | 26 |
To estimate the win rate from this table, we first convert each cash delta into a big blind delta by dividing it by the stakes. Then we can delete the columns for stakes and cash delta, because the information in those columns has been folded into the bb delta column.
Hands | bb delta |
---|---|
51 | +151.05 |
58 | −159.35 |
10 | +38.13 |
26 | +6.72 |
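If you prefer code to prose, here is a minimal R sketch of that conversion; the vector names (`stakes`, `cash_deltas`, `hands`) are my own, mirroring the spreadsheet columns:

```r
# The sample rows from the spreadsheet above
stakes      <- c(0.2, 0.2, 1, 1)               # big blind size, in sek
cash_deltas <- c(30.21, -31.87, 38.13, 6.72)   # profit/loss per session, in sek
hands       <- c(51, 58, 10, 26)               # hands played per session

# Convert cash deltas into big blind deltas by dividing by the stakes
bb_deltas <- cash_deltas / stakes              # 151.05, -159.35, 38.13, 6.72
```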
To estimate our win rate, we sum up all big blind deltas (giving 36.55) and divide by the total number of hands (which is 145). This gives us a profit of 36.55/145=0.25 big blinds per hand.
This tells us that, given the information above, our win rate is somewhere in the ballpark of 25 bb/100.
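The same arithmetic as a short R sketch, reusing the made-up vector names from above:

```r
hands     <- c(51, 58, 10, 26)
bb_deltas <- c(151.05, -159.35, 38.13, 6.72)

# Total big blinds won divided by total hands, scaled to 100 hands
100 * sum(bb_deltas) / sum(hands)   # about 25.2 bb/100
```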
We need to figure out the size of the ballpark
Many people stop there. Some even go so far as to start bragging about their ridiculous win rate2 Nobody has a 25 bb/100 win rate. That win rate would make one rich in no time at all. Even at low stakes, with a reasonable level of multi-tabling, that would be a monthly income of $9,000 just from full-time online poker. I’m sure some people make that sort of money on online poker, but not at low stakes, which is what we are discussing., which would be a mistake, as we will soon see.
Since we are pretending to be real statisticians, we want to figure out the error of our estimate.
Rephrasing division of sums as weighted means
In order to do that, we can cast the computation above in a slightly different light. Instead of thinking of it as summing all deltas and dividing by the total hand count, we can first compute the big blinds earned per hand in each session:
Hands | bb delta | bb/hand |
---|---|---|
51 | +151.05 | +2.96 |
58 | −159.35 | −2.75 |
10 | +38.13 | +3.81 |
26 | +6.72 | +0.26 |
We are looking for the average bb/hand across all sessions. It can be tempting to take the average of the bb/hand values for each session, arriving at 1.1 bb/hand. We know already from above that this is wrong – the true figure should be 0.25 bb/hand.
One does not simply take the average of averages and expect a meaningful result3 Well, one does that only if the groups that are averaged are of approximately equal size. If we had played roughly the same number of hands in each session, then we could have used the plain average of averages and it would be about the same as the total average.. We have to perform a weighted average. The intuition behind this is that the first session has contributed 51 hands at +2.96, while the third session has contributed just 10 hands at +3.81, so we want the first session to be more influential in our average than the third – because it was in the real data!
Thus, the weights in the weighted mean will be given by what proportion of hands were played in each session:
\[w_i = \frac{h_i}{\sum_j h_j}\]
So for the first session, the weight would be 51/145=35 %, whereas the third session carries a weight of only 10/145=7 %.
The easiest way to perform the weighted mean is to multiply each bb-per-hand value by its weight, and then sum it all up:
\[\bar{x} = \sum \left( w_i x_i \right)\]
When we run the numbers, this results in a value of – you guessed it – 0.25 bb/hand.4 Proving that this indeed gives the exact same result can be a fun exercise in symbolic manipulation, but excessive for this article.
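To see the difference concretely, here is a small R sketch contrasting the naive average of averages with the weighted mean (same made-up vectors as before):

```r
hands       <- c(51, 58, 10, 26)
bb_deltas   <- c(151.05, -159.35, 38.13, 6.72)
bb_per_hand <- bb_deltas / hands

mean(bb_per_hand)          # naive average of averages: about 1.07

w <- hands / sum(hands)    # weight: each session's share of the hands played
sum(w * bb_per_hand)       # weighted mean: about 0.25, same as the pooled estimate
```

Base R also ships a `weighted.mean(bb_per_hand, hands)` helper that computes the same thing.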
Variance of a weighted mean
Now that we are looking at this as a weighted mean, it becomes more obvious how to compute its variance. When we substitute in the expression for the weights5 And apply a notational shorthand where each un-indexed variable is implicitly indexed by its nearest outer sum., the equation we have been using to compute our average bb/hand across all sessions becomes:
\[\bar{x} = \sum \left( \frac{h}{\sum h} x \right)\]
Given this expression,
\[\mathrm{Var}[\bar{x}] = \mathrm{Var}\left[ \sum \left( \frac{h}{\sum h} x \right) \right]\]
At this point we can get to work exploiting the variance laws, which tell us that
- \(\mathrm{Var}(x + y) = \mathrm{Var}(x) + \mathrm{Var}(y)\) when \(x\) and \(y\) are independent, and
- \(\mathrm{Var}(kx) = k^2 \mathrm{Var}(x)\)
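(If you want to convince yourself of these two laws, a quick throwaway simulation in R with independent normal samples does the trick:)

```r
set.seed(1)
x <- rnorm(1e6)            # independent samples with variance 1
y <- rnorm(1e6, sd = 2)    # independent samples with variance 4

var(x + y)                 # close to var(x) + var(y) = 5
var(3 * x)                 # exactly 9 * var(x), so close to 9
```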
First, we can move the variance inside the sum, thanks to the first law.
\[\mathrm{Var}[\bar{x}] = \sum \left( \mathrm{Var}\left[ \frac{h}{\sum h} x \right] \right) \]
Then since the denominator \(\sum h\) will be the same for all sessions (it is the total number of hands across all sessions, after all), we can think of that as a constant \(k\) and apply the second law.
\[\mathrm{Var}[\bar{x}] = \sum \left( \frac{1}{\left(\sum h\right)^2} \mathrm{Var}\left[ h x \right] \right) \]
Then we have the fact that multiplication distributes over addition, i.e.
- \(ka + kb = k(a + b)\)
which lets us simplify the variance further to
\[\mathrm{Var}[\bar{x}] = \frac{1}{\left(\sum h\right)^2} \sum \left( \mathrm{Var}\left[ h x \right] \right) \]
And this is where we get stuck. We don’t know the variance of each product \(h_i x_i\). What we can do, however, is assume that sessions and hands are somewhat independent6 If basketball players don’t have hot hands, then surely poker players also do not., and treat the between-hands variance as (a) stable, and (b) equal to the between-sessions variance.
If it is stable, we can replace the sum with a multiplication by \(n\), the number of sessions:
\[\mathrm{Var}[\bar{x}] = \frac{ n \mathrm{Var}\left[ h x \right] }{\left(\sum h\right)^2} \]
If the between-hands variance is equal to the between-sessions variance, then the variance of \(h_i x_i\) is the same as the variance we have observed in \(h_i x_i\) across our sessions:
\(h_i\) | \(x_i\) | \(h_i x_i\) |
---|---|---|
51 | 2.96 | 151.05 |
58 | -2.75 | -159.35 |
10 | 3.81 | 38.13 |
26 | 0.26 | 6.72 |
The variance of the last column (feel free to compute it manually or use a spreadsheet) is about 16,500. Thus, to get the variance of the number of big blinds per hand, we plug the numbers we’ve figured out into the big expression:
\[\mathrm{Var}[\bar{x}] = \frac{ 4 \times 16500 }{145^2} = 3.14 \]
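As a cross-check, the same computation in R (the 16,500 above is a rounded figure, so this comes out marginally lower):

```r
hands     <- c(51, 58, 10, 26)
bb_deltas <- c(151.05, -159.35, 38.13, 6.72)   # these are the h_i * x_i products

n <- length(hands)                             # number of sessions
n * var(bb_deltas) / sum(hands)^2              # about 3.13, i.e. 3.14 up to rounding
```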
Multiplying to get a win rate credible interval
Since this is the per-hand variance, we need to multiply by \(100^2\) to get the variance of the win rate7 Which is expressed as the average per 100 hands, as you may recall.. Then we take the square root of that and we get the standard error of the win rate.
\[\sqrt{3.14 × 100^2} = 178\]
Thus, while our estimated win rate was a mighty impressive 25, we now see that its standard error is about 180. A very rough 90 % credible interval for our win rate would span
\[25 \pm 1.645 × 180\]
i.e. from roughly −270 to +320.
Put differently, there’s still something like a 44 % chance that the true win rate is less than zero, and we’ve just been lucky in the sessions we’ve had so far.
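These last steps are just as easy to replicate in R; a small sketch, assuming the same normal approximation as above (the `var_per_hand` and `se` names are mine):

```r
var_per_hand <- 3.14                 # per-hand variance from the previous section
se <- sqrt(var_per_hand * 100^2)     # standard error of the bb/100 win rate, about 177

25 + c(-1, 1) * qnorm(0.95) * se     # rough 90 % credible interval, roughly (-266, +316)
pnorm(0, mean = 25, sd = se)         # chance the true win rate is negative, about 0.44
```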
Numerical methods are always available
This is the sort of problem where numerical methods are also very easy to apply. We first estimated our win rate by summing the big blind deltas and dividing by the total number of hands. We can do the same thing, except on sessions drawn at random with replacement – this is the essence of the bootstrap.
```r
hands <- c(51, 58, 10, 26)
bb_deltas <- c(151, -159, 38, 7)

# Resample session indices with replacement
draw_sessions <- function() sample(1:length(hands), size = length(hands), replace = TRUE)

# Pooled win rate (bb/100) over the drawn sessions
compute_winrate <- function(sessions) 100 * sum(bb_deltas[sessions]) / sum(hands[sessions])

replications <- replicate(5000, compute_winrate(draw_sessions()))
```
If we now ask R for the mean and standard deviation of the replications variable, which contains the sampling distribution, it will report what we already know: 25 bb/100 with a standard error8 Did you know standard error is shorthand for “standard deviation of the sampling distribution”? Now you do! of 180.
But the neat thing about this technique is that we can also draw the sampling distribution, now that we have it! As a reminder, this distribution represents our best guess for the possible true win rates, given the sessions we have observed so far. It doesn’t look so impressive anymore, does it? Could be practically anywhere between −200 and +300.
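If you want to reproduce the summary and the plot, one way to do it with the `replications` vector from the bootstrap code above is:

```r
mean(replications)                      # around 25 bb/100
sd(replications)                        # around 180: the bootstrap standard error
quantile(replications, c(0.05, 0.95))   # a rough 90 % interval

# Draw the bootstrapped sampling distribution of the win rate
hist(replications, breaks = 50, main = "Bootstrapped win rate", xlab = "bb/100")
```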
Then one may ask: why not just run the numerical computation? I like doing things in spreadsheets because that means I can do them on my phone. So that’s why I did it the long way around: I can fill those formulae into cells in the spreadsheet. It’s possible to use resampling techniques in spreadsheets, but spreadsheets are not really built for it.
Let me know what mistakes I made!