How many poker hands are there?

I've been posting a lot of philosophical geekery lately, so today I'll balance that with some math geekery. The math term of the day is equivalence class. The basic idea is simple: if you have a big set of things, you can reduce it to a smaller set of subsets, or “classes”, each collecting things that are “equivalent” in some well-defined way.

Here's an example: how many five-card poker hands are there? Well, we can pick any card from a 52-card deck, then pick a second card from the 51 remaining, and so on, five times. This gives us 52×51×50×49×48 hands, or 311,875,200. Those 300 million hands include A♥4♦A♣9♥4♣ and also 4♣9♥A♣A♥4♦ (the same five cards in a different order), so we can immediately reduce that by a factor of 120 by noting that poker rules don't care what order the cards are in. So we collect all those together and reduce our number to 2,598,960.
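The arithmetic above is easy to check with Python's math module; this quick sketch contains nothing beyond the numbers already stated:

```python
from math import comb, factorial, perm

ordered = perm(52, 5)        # 52 × 51 × 50 × 49 × 48 ordered five-card deals
arrangements = factorial(5)  # ways to order the same five cards
unordered = comb(52, 5)      # distinct hands once order is ignored

print(ordered)       # 311875200
print(arrangements)  # 120
print(unordered)     # 2598960
```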

But we can go further still. That 2.6 million counts our two hands above (along with other combinations like 9♥4♦A♥4♣A♣) as equivalent, but it counts separately the hand A♠A♥4♥4♦9♠, which is the “same” hand in the sense of being identically valued: it is “two pair, aces and fours, nine kicker”, just like the first two. So how many poker hands are there, only counting those that are actually of different value in the game? As it turns out, only 7,462. Number 1 at the top of that list of 7,462 is simply “royal flush”, which accounts for four of our 2.6 million hands, and 480 of our 300 million. Number 2 is “king high straight flush”, and so on down to number 7,462, which is “no pair, 7-5-4-3-2” (which accounts for 1,020 of our 2.6 million, or 122,400 of our 300 million).

Notice a major difference between our two reduction operations: in the first case, we reduced the big set into subsets that were all the same size. Each of our 2.6 million unordered hands corresponds to exactly 120 of our 300 million ordered deals, 120 being the number of different ways you can arrange 5 cards. As a consequence, the probability of each of those 2.6 million hands is exactly the same, just as is the probability of each of the 300 million. The 7,462 sets, however, are of different sizes. There are hundreds of times as many ways to get 7-5-4-3-2 as there are to get a royal flush, so the probability of each is different.
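A sketch of the two class sizes just mentioned: a royal flush has one hand per suit, while 7-5-4-3-2 gets one hand per assignment of suits to its five cards, minus the four assignments that would make a flush.

```python
from math import comb

total = comb(52, 5)    # 2,598,960 unordered hands
royal_flushes = 4      # one per suit
seven_high = 4**5 - 4  # suit assignments for 7-5-4-3-2, minus the 4 flushes

print(seven_high)                   # 1020
print(seven_high // royal_flushes)  # 255 times as many ways
```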

One common application of equivalence classes is in computer science: sometimes you need to do something to a very large set of inputs, and you can simplify and speed up the operation by reducing them to a smaller set. If you ask Google for pages about “poker”, not only would you expect it to return pages that mention “Poker” and “POKER”, but Google would save time and disk space by indexing those only once.
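As an illustration only (a real search engine does far more than case-folding), grouping terms into case-insensitive equivalence classes might look like this sketch:

```python
# Collapse raw terms into equivalence classes keyed by their case-folded form.
terms = ["poker", "Poker", "POKER", "Chips", "chips"]

classes = {}
for term in terms:
    classes.setdefault(term.casefold(), []).append(term)

print(classes)
# {'poker': ['poker', 'Poker', 'POKER'], 'chips': ['Chips', 'chips']}
```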

This can be applied to life as well. Perhaps there is a large set of things you'd like to improve about your life in some way. If you can group them by things that might have a similar cause or similar solutions, not only will you reduce the number of things to think about, you might notice that some groups are much larger than others, giving you guidance about what to focus on.

What were the odds of that?

As the token math geek at the casino where I work, I am usually the one asked to calculate probabilities. Right after some remarkable event happens at the table, I am often asked, "What were the odds of that?" Sometimes I can interpret the question in a way that is meaningful and therefore has an answer. But sometimes I can't. That's not because I don't know the math, but because the question itself doesn't contain enough information to identify a single answer. In this essay I hope to explain that more clearly. A warning, though: the details of math and philosophy ahead may be a bit heavy, although they are not specifically aimed at the propeller-beanie set. If your eyes glaze over at such things, then you'll just have to skip it and take my word for it when I don't have an answer. But then you probably wouldn't have asked me for exact odds in the first place.


Words like odds, probability, likelihood, chances, and such are used to ask these questions and are thought of as interchangeable in informal speech. But answering them requires us to use different words for subtly different concepts, and to define those things precisely. Here are my definitions for this essay (they are pretty standard among math geeks): The frequency of an event is simply how often that event happens compared to all events of the kind. It's a simple ratio, but may require careful definitions and good sources of information. Probability is a measure of knowledge: how certain we can be that an event is going to happen (or that we will discover it has already happened). This depends on the frequency of the event, but also upon when the question is asked, and even who is asking. Odds are the fair payout on a bet, and are just a different way of expressing the probability of winning a bet from the point of view of the bettor. Most bets you can make in a casino, it should be no surprise, pay less than fair odds (a notable exception being the craps bet called “odds”, which is exactly fair).


Frequency is simply the ratio of the number of things we're interested in (called the “event” or “sample set”) to the total number of things of that kind (called the “universe” or “population”), both of which we have to define carefully. Notice that this fraction will always represent a number between zero (the event never occurs) and one (the event occurs every time).

For example, the frequency of tattoos among Americans is about 14%. This is a simple ratio: about 45 million Americans have at least one tattoo, and the US population is about 312 million (Pew Research Center, 2012). 45 divided by 312 is 0.14423..., rounding off to about 14%. Notice that we carefully defined the two groups: First, the event is “a person having at least one tattoo”. If we chose an event such as a more specific kind of tattoo, we'd have a smaller number. If we chose an event like “tattoo or piercing” we'd have a bigger number. Second, the universe of our event is “all Americans”. If we chose a different universe such as Americans aged 18-40, we'd have a higher number. If we chose Mississippi residents over 55, we'd have a lower number. Frequency is simply the ratio of these two numbers.

Sometimes it's not obvious what the sample set or population are. Let's say a Texas Hold'em player just flopped four queens, and he asks me the infamous question after the hand. What's the event? Getting exactly four queens, or any four of a kind, or four of a kind or better, on the flop, or after all the cards are out? What's the universe? All possible deals of hold'em that could have been dealt that hand, all possible boards already given that he had a pair, all games as played (accounting for possible hands over before the flop), or games dealt to the end (or rabbit-hunted)? Sometimes it's possible to make reasonable assumptions about what the asker really wants to know, but sometimes it isn't. In this case, I would probably tell him that out of 133,784,560 combinations of seven cards (his two, plus five on the board, rabbit hunting if necessary), 224,848 of those are four of a kind. That includes those where he has a pair, those in which he only holds one card, and those with quads on the board. We probably don't want to count quads-on-board, so subtract 156. That makes the frequency 224,692-in-133,784,560, or about 1 in 595. You will be dealt quads in hold'em about every 600 hands. You'll fold many of them (for example, the 7-2 that would have seen three deuces on board, or the kings you raised and made everyone else fold), so if you play poker for five hours each weekend, averaging 40 hands an hour, you might expect to have quads two or three times a year.
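The quads arithmetic above, sketched with math.comb (the 156 quads-on-board cases are taken from the essay's own count):

```python
from math import comb

universe = comb(52, 7)         # all seven-card combinations
with_quads = 13 * comb(48, 3)  # choose the quad rank, then any 3 of the other 48 cards
counted = with_quads - 156     # drop the quads-on-board cases

print(universe)                   # 133784560
print(with_quads)                 # 224848
print(round(universe / counted))  # 595 -- about 1 hand in 595
```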

Sometimes we can even make a reasonable future prediction based on the frequency of past events. When someone asks “What are my odds of dying in a plane crash?”, one reasonable answer would be something like the frequency ratio of fatal crashes to total flights over recent history. Choosing the appropriate event and universe to answer the question may take some thought. I think crashes-per-flight is better than crashes per flight-mile or per passenger-mile because most crashes occur at takeoff and landing, making all flights equally dangerous regardless of flight length or plane capacity. Since 2000 there have been eleven airline crashes with passenger fatalities in about 130 million flights (National Transportation Safety Board, Bureau of Transportation Statistics). That's a frequency of less than one in ten million. With such a small sample size and something so unpredictable, it would be foolish to try to predict the future with much precision based on these numbers, but we can make reasonable estimates. But predicting the future brings us to the idea of probability.
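The crash-frequency figure is a one-line ratio, using the essay's counts of eleven fatal crashes in about 130 million flights:

```python
crashes, flights = 11, 130_000_000
frequency = crashes / flights

print(frequency)  # about 8.5e-08, i.e. less than one in ten million
```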

Probability and Odds

For many purposes, probability and frequency are the same thing (it will get weird later). Probability is also a number between zero and one just like a frequency ratio. Imagine that I thoroughly shuffle a standard deck of cards, then slide the top card off by itself, placing a chip on it. Then I ask you, “What is your probability that the card under the chip is a facecard?” You know that there are twelve facecards in a standard deck of 52 cards, so the frequency ratio is 12/52, or about 23%. And this is indeed your probability as well. You have no information to change your estimate from the simple frequency ratio.

Odds are based on the same numbers. They are expressed as a ratio of not-event to event rather than event to universe, so while the probability was 12 in 52, the odds are 40-to-12 (40 being the number of non-facecards in the deck rather than the total number of cards). Every probability can be expressed as odds just by keeping the same event count on top of the fraction, changing the bottom of the fraction by subtracting the top number, then flipping it over. Odds are handy because they determine the fair payout on a bet (they also simplify some Bayesian calculations, but that's another essay). If you wanted to bet that the card was a facecard, I would have to offer you a payout of 40-to-12 (or equivalently, 10-to-3) to make the bet fair. Let's say you bet $3, and I agreed to pay $10. If you did this a million times, you'd expect to see a facecard about 230,769.23 times, winning $10 for each, for a total win of $2.3 million. And you'd expect to lose your $3 the other 769,230.77 times, for a total loss of ... $2.3 million. Your actual win or loss would of course be subject to the vagaries of chance, but overall the bet would be fair.
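The probability-to-odds recipe and the fairness check can be sketched exactly with fractions:

```python
from fractions import Fraction

def prob_to_odds(event, universe):
    """Fair odds against, expressed as (not-event, event)."""
    return (universe - event, event)

print(prob_to_odds(12, 52))  # (40, 12), i.e. 10-to-3

# A $3 bet paying $10 on a facecard has zero expected value:
p = Fraction(12, 52)
ev = p * 10 - (1 - p) * 3
print(ev)  # 0
```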

People sometimes confuse probability and odds because when the numbers get very large (as they do for the kinds of very rare things we like to talk about), they are not very different. If the probability of something is one in a million, the odds are 999,999-to-1, so if someone says the odds are “a million to one”, he's not really off by much. But when the numbers are smaller it makes a big difference. The probability of rolling a total of nine on a pair of dice before you roll a total of seven is 2-in-5, or 40%. That makes the odds 3-to-2. Mistakenly paying 5-to-2 on such a bet would quickly break the bank. A craps table correctly pays 3-to-2 for odds on your nine. You may have read books on Texas Hold'em that say the probability of being dealt pocket aces is 1-in-221. You may also have read that the odds are 220-to-1. Both are exactly correct, and they're not rounding off. They're just saying the same thing in two different ways. Alas, not all authors are as meticulous as I've been here in using “in” for probabilities, “to” for odds, and flipping them around the right way.
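The nine-before-seven figure follows from counting dice totals; only rolls of nine and seven matter, since everything else is rolled again. A sketch:

```python
from fractions import Fraction

def ways(total):
    """Number of ways two dice can sum to the given total."""
    return sum(1 for a in range(1, 7) for b in range(1, 7) if a + b == total)

p_nine_first = Fraction(ways(9), ways(9) + ways(7))
print(p_nine_first)  # 2/5, making the odds 3-to-2
```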

Where it gets weird

The reason that probability can differ from frequency is that it doesn't represent merely the frequency of an event in its universe, but how certain a person can be that an event will happen (or be revealed). This is based upon the information that person has at his disposal when the question is asked. Changing the information you have about an event—without changing the event itself—can change your probability.

Let's go back to my facecard bet. Now let's say I take the twelve cards from the bottom of the deck and turn them face up. By chance, five of them are facecards. Now what is your probability that the card under the chip is a facecard? The chosen card is still sitting there under the chip. It hasn't changed from when I first put it there and your odds were 10-to-3 (probability about 23%). But if I offered you the same bet now paying 10-to-3, you should refuse. Why? Because now you have more information. In fact, for a $3 bet I would now have to pay you more than $14 to make it fair. If we had known back when we placed the top card under the chip that the twelve bottom cards included five faces, we would have calculated that our odds were 33-to-7, since there are seven faces and 33 non-faces in the cards remaining. And that's exactly what our odds are now: 33-to-7 (or a probability of 7-in-40, or 17.5%). If by chance there had been only two faces among the twelve bottom cards, you would happily have taken my 10-to-3 offer; with ten faces left among the forty remaining cards, the fair payout on a $3 bet would be only $9, so I would be losing money at $10. If you're still having a hard time convincing yourself that merely seeing these extra cards changes the probability of the card already under the chip, imagine that by chance all twelve of the revealed cards are faces. Now you know with certainty that the card under the chip is not a facecard. The probability has gone to zero.
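A sketch of the fair-payout arithmetic for a $3 stake, given how many of the forty remaining cards are faces:

```python
from fractions import Fraction

def fair_payout(faces_left, cards_left, stake=3):
    """Payout that makes the bet's expected value zero."""
    p = Fraction(faces_left, cards_left)
    return stake * (1 - p) / p

print(fair_payout(7, 40))   # 99/7, a bit more than $14
print(fair_payout(10, 40))  # 9 -- with ten faces left, $9 is fair
```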

This may also help you understand, for example, why some of the probabilities you see in books or websites or on TV for various hold'em hands sometimes differ. For example, if you hold two spades in your hand, and there are two spades among the four cards on board, what is your probability that the river card will complete your flush? 9-in-46, or about 19.6%. So then you see a hand on television where one of the players is in exactly this situation. They show his hand, his opponent's hand, and the board to you, and the graphic on the screen says that he has a 20.4% chance of making the flush. Are they wrong? No. From the player's point of view, he still has only a 19.6% chance, but from your point of view, he has 20.4%, because you can see his opponent's cards, and you know that they are not spades. So given the information you know, his probability is 9/44, not 9/46. That also explains why some people are confused about why the probabilities aren't based on the number of cards remaining in the stub. After all, if ten players were dealt in, with four cards on board and two burns, there are actually only 26 cards remaining in the stub. But the denominator of our fraction is not the number of cards in the stub; it's the number of cards we don't know. The TV player only knows six cards, leaving 46 unknown. You know eight, leaving 44. If the TV showed you some other player's hand, there would be only 42 cards unknown to you. If neither of those cards were spades, our lucky player's chances go up to 21.4%. If they are both spades, they go down to 16.7%, from your point of view.
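The viewpoint-dependent flush numbers all reduce to spade outs over cards unknown to the viewer; a sketch:

```python
# Probability of completing the flush = spade outs / cards unknown to the viewer.
viewpoints = {
    "player (knows his 2 cards + 4 on board)": (9, 46),
    "TV viewer (also sees 2 non-spade cards)": (9, 44),
    "viewer seeing 2 more non-spades":         (9, 42),
    "viewer seeing 2 more spades":             (7, 42),
}

for who, (outs, unknown) in viewpoints.items():
    print(f"{who}: {outs}/{unknown} = {outs / unknown:.2%}")
```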

A little weirder

Let's make that bet again. I shuffle the cards, place one under the chip, and offer to pay you $10 if it's a facecard for a $3 bet. Should you accept? It doesn't matter. My offer is exactly fair, so you should be completely indifferent to it. Let's say you decline. I take the twelve bottom cards and look at them without showing them to you, and put them back. Now I offer you $11 for a $3 bet that the card under the chip is a face. Should you accept my bet? No! The card hasn't moved, and none of your information has changed, but mine has. I might have decided earlier that I would only offer that bet to you if I had seen five faces among the bottom twelve. If I had seen only two, I would have kept silent.

OK, you were smart and said no. I shuffle again, place the top card under a chip, and make you a promise: in exactly two minutes, I am going to offer to pay you $11 for a $3 bet that the card under the chip is a face. Wait—don't accept yet. I peek at the twelve bottom cards again without showing them to you and return them. Two minutes are up, and I make you the offer. Now should you accept? Yes! I may have more information than you now, but I committed to the offer back when I had exactly the same information as you. I might have seen five faces, in which case I'm making you a bad offer. But I might have seen only two, in which case I'm making you a great offer. You don't know, but neither did I when I made the promise, so you should base your decision on the information available to you when you make the bet and the information available to me when I committed to the bet.

You accept. Congratulations, you made a good bet. But before we look, without touching any cards in any way, I offer you double-or-nothing: $22 for a $6 bet that the card is a face. Should you double? No! Neither you nor I gained any information, but I might have made the second offer only if I had seen lots of paint in those bottom cards.

To be completely honest, I made an unstated assumption above that might have affected your choice. I assumed that I was honest, and would not make the promise to offer you the $11 and then break it if I had seen no faces. So the real answer to whether or not you should accept that offer is “Do you think I'm the kind of person who would make that promise and then renege?” In this case, your probability is affected not only by your knowledge of the cards, and my knowledge of the cards, but by your knowledge of my character.


Monty Hall

The example above might also help you understand the notorious “Monty Hall” problem that even many competent mathematicians get wrong. The problem goes like this: Behind one of three doors is a fabulous luxury car, and behind each of the other two is a goat. Monty has you choose one of the doors, and you do. Now, Monty opens one of the doors you didn't choose and shows that there was a goat behind it. He then makes you an offer: you can keep what's behind the door you've already chosen, or you can switch to what's behind the other unopened door. Should you switch?

Yes! You should always switch. When you made your first choice, your probability of picking the car was 1/3. By switching you raise your probability to 2/3. By using the always-switch strategy you win the car whenever you initially picked a goat, because Monty shows you the only other goat. You only lose if you were so unlucky as to pick the car initially.
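If the two-thirds figure still feels suspicious, a short simulation of both strategies bears it out (a sketch; the door numbering is arbitrary):

```python
import random

def play(switch, trials=100_000, seed=0):
    """Simulate the game; return the fraction of trials that win the car."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        doors = [0, 1, 2]
        car = rng.choice(doors)
        pick = rng.choice(doors)
        # The host opens a goat door that is not the player's pick.
        opened = rng.choice([d for d in doors if d != pick and d != car])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(play(switch=False))  # close to 1/3
print(play(switch=True))   # close to 2/3
```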

After Monty shows you the goat, it's just like revealing the bottom cards of the deck in our bet above. He's given you new information, and you should make your choice based on that new information. Another confusing part of the problem is that you also need to assume (which is perhaps not stated clearly in the problem) that you correctly understand Monty's motives: he always shows a goat, and he always makes the offer. If instead we get evil Monty, who only shows you the goat and makes the offer when he knows you've chosen the car (counting on you to switch because of the argument above), then switching doesn't work.

In summary

So if you see something remarkable happen and ask me what the odds were, I might give you a reasonable answer and I might not. What is your description of what happened, exactly? What's the sample set? What is the population of things like that? When did you notice it or learn about it? From whose point of view should I give you odds, and at what time? What are your motives for asking me? Even that might be relevant.

This should also give you some perspective about all the remarkable coincidences or unlikely events that you may hear about. Without specifying ahead of time exactly what kind of events we are looking for, we can't possibly guess how unlikely it was for us to have seen them. There's a famous argument called “miracle a month” that goes something like this: The human nervous system takes a few hundred milliseconds to acquire a sensory input, process it in the brain, become aware of it, and perhaps to respond. So let's round up to a full second and say that it takes us about a second to notice something. It could be a sound, a texture, a word, maybe a name or place, the action of another person or an animal or a machine, a taste, a smell, anything. Let's then define a “miracle” as any event that, if its prior probability had been calculated before we noticed it, would have rated a million to one or more. A typical person experiences more than a million waking seconds each month. Therefore, we should expect a miracle to happen to us about once a month.
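The seconds arithmetic checks out, assuming sixteen waking hours a day over a thirty-day month:

```python
waking_seconds_per_month = 30 * 16 * 60 * 60
print(waking_seconds_per_month)  # 1728000 -- comfortably over a million
```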

Personally, I think the miracle-a-month argument is too conservative. Far more unlikely things happen to us all the time. And lest you think I am jaded by my math or my employment at a casino, I do still marvel at them. I just don't place more significance in them than they deserve. Here's a recent one: my casino allows a side bet on blackjack that the dealer's cards will be red, with higher payouts for more cards. No, this player didn't make the bet. C'est la vie.