
False Key Errors in McDowell’s Blackjack Ace Prediction

Effects of Shuffle Tracking Errors on False Key Rates

by Radar O’Reilly
(From Blackjack Forum XXIV #2, Spring 2005)
© 2005 Blackjack Forum

In his book Blackjack Ace Prediction, David McDowell has made a serious error in his ace prediction hit rate calculation that has not yet been addressed. I am addressing it because I expect it to jump out at readers some time soon, and it may lead them to overestimate their advantage using his methods.

On p. 111 of Blackjack Ace Prediction, McDowell provides ace hit rate calculations as part of calculating an overall win rate estimate for players. He estimates, based on shuffle studies, that an Ace can be expected to land on the predicted betting spot 38% of the time.

I am not, at this time, going to address his estimate of 38%.

McDowell goes on to explain in Blackjack Ace Prediction that this 38% hit rate assumption must be reduced by the probability of broken sequences and false keys. There are serious problems with the probability he provides for false keys. I will address these in a moment. For now, I want to point out that the way he makes his adjustment for broken sequences and false keys is wrong. McDowell subtracts his overall probability of broken sequences (.15) and false keys (.10) from his .38 hit rate instead of multiplying and then subtracting the product. This is wrong because a share of the broken sequences and false keys properly belongs to the aces that land on the other betting spots.

By subtracting .15 and .10 (.25) from .38 he comes up with an estimated 13% hit rate on his ace bets.

Instead, he should have multiplied .38 by .25.

.38 x .25 = .095

Then, he should have subtracted .095 from .38.

.38 – .095 = .285

If McDowell’s false key probability were correct, this would give him a 28.5% hit rate on his ace bets instead of 13%.
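
To make the correction concrete, here is a minimal arithmetic sketch in Python; the probabilities are McDowell's own figures, and only the order of operations changes:

```python
# McDowell subtracts the failure probabilities directly from the hit rate.
# They should instead be applied only to the 38% of aces that actually land
# on the predicted spot, i.e., multiplied first and then subtracted.
hit_rate = 0.38
failures = 0.15 + 0.10   # broken sequences + false keys (McDowell's figures)

mcdowell_hit = hit_rate - failures               # 0.13  -- incorrect
corrected_hit = hit_rate - hit_rate * failures   # 0.285 -- same as .38 * (1 - .25)

print(mcdowell_hit, corrected_hit)
```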

Since he specified that he is splitting the aces with the dealer, that would have given both the player and the dealer 20.8 additional aces each per 100 ace bets, beyond the 7.7 accidental aces they will receive.

20.8 x .51 = 10.608 expectation for player when he gets the keyed ace on his ace bet

20.8 x .34 = 7.072 negative expectation for player when the dealer gets the keyed ace when the player has an ace bet out.

10.608 – 7.072 = 3.536% expectation for player on hands where keyed ace hits either dealer or player hand.

To this you must add the expectation on the 58.4 hands where the 7.7 per 100 accidental or random aces will be appearing (7.7 each for both dealer and player). This means 58.4 hands per 100 ace bets played at the house edge of roughly .5%.

58.4 x -.005 = -.292

3.536 – .292 = 3.244% edge on player bets for the keyed ace.

Again, this is assuming that the author’s false key probability is correct, which I will be disputing below.

But this is still not an overall win rate. Assuming the player is able to bet 3 aces per shoe (another assumption that needs to be challenged) in the 66% penetration game the author specifies, where 1/3 of the cards are cut out of play, the player will be playing roughly 33 hands per shoe heads up at the house edge. If he uses a spread of 100 to 1000:

$3000 x .03244 = $97.32 per shoe on his ace bets

$3300 x -.005 = -$16.50 per shoe on his waiting bets

97.32 – 16.50 = 80.82 player profit per shoe

80.82/6300 action = 1.28% win rate.

So, if McDowell’s false key assumptions were correct, and if his assumption that you could use visual tracking to locate three aces per shoe were correct, on this heads-up game where he split aces with the dealer, and used a 1-10 spread, he would have an overall win rate of around 1.28%. It’s less than 1/3 of the 4% win rate McDowell provides, and it would require more than three times the bank, but at least it would be a respectable card counter level of win rate.
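
For readers who want to rerun these numbers, here is a minimal sketch of the whole heads-up calculation in Python. It simply strings together the arithmetic above; every default parameter is a figure from this article, and the false key probability is left as an argument because I dispute McDowell's .10 figure below:

```python
def heads_up_win_rate(false_key=0.10, broken_seq=0.15, raw_hit=0.38,
                      random_aces=7.7, player_ev=0.51, dealer_ev=-0.34,
                      house_edge=-0.005, aces_per_shoe=3, hands_per_shoe=33,
                      big_bet=1000, small_bet=100):
    hit = raw_hit * (1 - broken_seq - false_key)  # corrected hit rate
    extra = hit * 100 - random_aces               # keyed aces per 100 ace bets, each spot
    waiting = 100 - 2 * extra                     # hands left playing at the house edge
    ace_edge = (extra * player_ev + extra * dealer_ev + waiting * house_edge) / 100
    shoe_profit = (aces_per_shoe * big_bet * ace_edge
                   + hands_per_shoe * small_bet * house_edge)
    action = aces_per_shoe * big_bet + hands_per_shoe * small_bet
    return shoe_profit / action

print(f"{heads_up_win_rate(0.10):.2%}")  # ~1.28%, McDowell's false key figure
print(f"{heads_up_win_rate(0.40):.2%}")  # ~0.31%, the corrected figure argued below
```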

Unfortunately, McDowell’s assumptions are not correct. I have already addressed the problem of visually tracking three aces per shoe in my Spring 2005 Blackjack Forum article. To summarize, I make the case that McDowell cannot visually track three aces per shoe in the shuffles he describes. A more realistic assumption is one ace per shoe in the more difficult shuffles (a highly skilled tracker might be able to visually track two per shoe in the simpler ones). I also show that McDowell needs to allow for a visual tracking error rate that depends on the size of the slug he is attempting to track. McDowell allows for no visual tracking error at all.

McDowell’s assumptions about false keys are also a serious problem. McDowell’s shuffle analysis on p. 72 has us tracking cards 50 to 62 in the pre-shuffle stack to positions 200 through 250 in the post-shuffle stack. He also uses his analysis to track cards 80 through 92 to a full deck in the post-shuffle stack. On p. 77, he says that “the ordinal post-shuffle position of a card is not easily obtainable in actual play,” and says “simply knowing where this slug of thirteen cards ends up after the shuffle is enough.” These statements are accompanied by a chart showing that a pre-shuffle slug of thirteen cards will end up spread over a full deck post-shuffle.

Yet, on p. 101, when it comes to the important calculation of the probability of false keys, McDowell suddenly has us visually tracking our ace to a half deck, rather than a full deck. He does this specifically to cut his false key probability in half. Then, on p. 103, he cuts his false key probability in half again, to one quarter of what it should be, by claiming that we can use “pointer cards” to halve the number of false keys we bet. Specifically, he claims that trackers can use asymmetric face designs (the number of pips facing up versus down) or the space between the index on a card and its edge to distinguish between a false key and our real key.

Let’s look harder at this assumption.

Regarding the 22 cards that have asymmetrical pip patterns, this accounts for 42% of the possible key cards (22/52). Assuming that false keys would be turned half one way and half the other, we could eliminate 21% of the false keys (those that were turned the wrong way). This would assume that the player is able to follow the orientation of the pip pattern through the dealer’s pick up, placing the discards into the discard tray, removing the discards from the discard tray, turning one half of the cards before the shuffle, as is the procedure in the majority of U.S. casinos, then continuing to follow the orientation through the tip-over, cut, and replacing the cards into the shoe. And this is assuming the player can remember the orientation of multiple key cards to begin with.

Regarding the asymmetrical gap between the index and the edge, unless there is a huge cutting error in the manufacturing process, most decks will not show any easily detected difference from the index on one corner to the index on the other corner. The prospect of a player having six or eight decks that are all this badly miscut is very slim. There may be a card here and there noticeably miscut, but there is only value if one of these cards happens to fall as a key card, and again, the player must follow that card orientation through the entire pick-up, shuffle, and placement into shoe procedure.

Then, on p. 104, McDowell says “Predicting Aces at the table is easier than the theory makes it look.”

McDowell’s automatic assumption that players will reduce their false keys by a full 50%, and his use of this number in his win rate calculation as standard, is naive at best.

Anyone planning to actually try McDowell’s ace location methodology at an actual casino table should use a false key probability of roughly four times the number he provides (roughly .40 instead of .10). Remember, McDowell is using only a single card to key each ace, not a two- or three-card sequence, which most ace trackers use in order to greatly reduce false keys.

What does a false key probability of .40 do to the expected win rate?

First, multiply the .38 theoretical hit rate by .55 (.15 probability of broken sequences plus .40 probability of false keys).

.38 x .55 = .209
Then, subtract this product from the 38% hit rate.

.38 – .209 = .171

This means a 17.1% hit rate on his ace bets.

Since we have been analyzing a heads-up game, and McDowell specified that he is splitting the aces with the dealer, this would give both the player and the dealer 9.4 additional aces each beyond the 7.7 random aces they will receive.

9.4 x .51 = 4.794 expectation for player when he gets the keyed ace on his ace bet

9.4 x -.34 = -3.196 expectation for player when the dealer gets the keyed ace when the player has an ace bet out.

4.794 – 3.196 = 1.598% expectation for player on hands where keyed ace hits either dealer or player hand.

To this you must add the expectation on the 81.2 hands where the 7.7 per 100 accidental or random aces will be appearing (7.7 each for both dealer and player). This means 81.2 hands per 100 ace bets played at the house edge of roughly .5%.

81.2 x -.005 = -.406

1.598 – .406 = 1.192% edge on player bets for the keyed ace.

And this is assuming that the player never makes a visual tracking error, and never has a shoe in which the tracked ace does not make it to the expected post-shuffle deck.

But this is still not an overall win rate. Assuming the player is able to bet 3 aces per shoe (again, another assumption that I have challenged) in the 66% penetration game the author specifies, where 1/3 of the cards are cut out of play, the player will be playing roughly 33 hands per shoe at the house edge. If he uses a spread of 100 to 1000:

$3000 x .01192 = $35.76 per shoe on his ace bets
$3300 x -.005 = -$16.50 per shoe on his waiting bets

35.76 – 16.50 = 19.26 player profit per shoe

19.26/6300 action = 0.3% win rate.

What can a player do to increase this win rate? For one thing, he has to address that 50/50 split on the aces with the dealer in this game. But in order to do this, he must either spread to multiple hands, and add in their costs, or play at a crowded table, and sharply reduce the number of aces he is able to bet per hour. In his p. 122 calculations of expected return, McDowell assumes 4 bets per hour at a full table. But to get this number of bets per hour, he assumes that the player will visually track 4 aces per shoe, and that none of these tracked aces will be cut out of play. Again, these are very unrealistic assumptions.

Assuming a 17.1% hit rate on our ace bets, and a full table where the dealer gets nothing beyond his accidental aces, the player will receive 9.4 keyed aces per 100 ace bets beyond his 7.7 accidental aces.

9.4 x .51 = 4.794% expectation for player when he gets the keyed ace on his ace bet

To this you must add the expectation on the 90.6 hands per hundred where the 7.7 per 100 accidental or random aces will be appearing (for both dealer and player). This means 90.6 hands per 100 ace bets played at the house edge of roughly .5%.

90.6 x -.005 = -.453

4.794 – .453 = 4.341% edge on player bets for the keyed ace.

Again, this is assuming that the player never makes a visual tracking error, and never has a shoe in which the tracked ace does not make it to the expected post-shuffle deck.

But this is still not an overall win rate. Assuming the player is able to bet one ace per hour (again, a generous assumption given visual tracking at the crowded game with 66% penetration the author specifies), the player will be playing roughly 59 hands per hour at the house edge. If he uses a spread of 100 to 1000:

$1000 x .04341 = $43.41 per hour on his ace bets

$5900 x -.005 = -$29.50 per hour on his waiting bets

43.41 – 29.50 = 13.91 player profit per hour

13.91/6900 action = 0.2% win rate.
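
The full-table, per-hour arithmetic can be sketched the same way, again using only the figures above (at the full table the dealer is assumed to receive no keyed aces):

```python
hit = 0.171                  # corrected hit rate with the .40 false key probability
extra = hit * 100 - 7.7      # 9.4 keyed aces per 100 ace bets, all to the player
ace_edge = (extra * 0.51 + (100 - extra) * -0.005) / 100   # ~.0434

hourly_profit = 1000 * ace_edge + 59 * 100 * -0.005  # one ace bet + 59 waiting bets
hourly_action = 1000 + 59 * 100

print(f"${hourly_profit:.2f} per hour")                 # ~$13.91
print(f"{hourly_profit / hourly_action:.2%} win rate")  # ~0.2%
```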

And again, this makes no allowance whatsoever for visual tracking error.

Note that correcting McDowell’s arithmetic on false keys and broken sequences credits him with a higher ace hit rate than he himself estimated. But this is still not a high-enough win rate expectation for consideration by professional gamblers, especially after McDowell’s false key probabilities are corrected.

The problem for real-life gamblers who want to locate aces for profit in casinos is that David McDowell’s single-key, visual-tracking methodology is not a good one, even in the one-pass riffle-and-restack shuffle, with two riffles, that he uses for calculating his win rate (p. 111).♠


Poker Tournament Rebuy Advice

Response to Mason Malmuth on the Rebuy Advice in The Poker Tournament Formula

by Arnold Snyder
(From Blackjack Forum, Summer 2006)
© Blackjack Forum 2006


[Editor’s note: This article is a response to a post by Mason Malmuth at his twoplustwo.com Web site in which he purports to refute the argument behind my rebuy advice in The Poker Tournament Formula. —A.S.]

A player recently alerted me to an argument that Mason Malmuth had posted at his twoplustwo web site which purports to refute the logic I present on optimal rebuy strategy in Chapter Ten of The Poker Tournament Formula.

Mason quotes sections from my book, then presents his contrary opinion. First, Mason describes how I show that in a coin-flipping contest, where neither player has an advantage, if one player starts with twice the chips of the other player by paying twice as much for those chips, then neither player has an advantage. Mason agrees with me on this point.

He disagrees, however, with my analysis of the effect of a player advantage when one player makes a rebuy. Here’s what Mason says:

The next step is to look at what happens if Player A has a 10 percent playing advantage. Without going through the details, Snyder now shows that Player A expects to win $10 per tournament if both he and Player B each have one $100 chip. Again I agree.

Then it gets a little more interesting. Snyder now has Player A start with two $100 chips and Player B sticks with his one $100 chip. Since A has a 10 percent playing advantage, we expect him to show a profit, but what happens is that his profit now increases to $17.50 per tournament (as opposed to the original $10) since the average tournament will now last longer because Player B must win twice in a row to win the tournament. Thus it’s pretty clear that the more chips Player A has the larger his expectation will be since he is the better player….

The model that Snyder is using does a pretty good job of representing a winner take all poker tournament. It does not do a good job of representing a percentage payback poker tournament where the prize pool gets divided up among many players, and most of today’s poker tournaments are of the percentage payback structure.

Let’s go back to Snyder’s coin flipping model where Player A has a 10 percent playing advantage over Player B, but this time the winner of the tournament gets 60 percent of the prize pool and the loser gets the remaining 40 percent. (I think everyone will agree that this more accurately represents what happens in a poker tournament than the winner take all model.)

Now without showing the math, the expectation for Player A is $2 when both he and Player B each start with one $100 chip. Notice that this is not as good as the original $10 expectation as before, but it is still a good bet and Player A would probably like to play a bunch of these tournaments.

Now let’s suppose that Player A starts with $200 in chips meaning that the total prize pool is now $300. For him to have an expectation of $17.50 before, it means that he is winning this tournament 72.5 percent of the time. But what happens now when there is a 60-40 split?

First off, Player A will still win the coin flipping tournament 72.5 percent of the time. That’s because his 10 percent playing advantage has not changed. But his expectation is now negative $36.50. Furthermore, since his original expectation was to win $2 (with only one $100 chip) the purchase of the second $100 chip (for $100) has cost Player A $38.50. This makes a huge difference since we can now see that a more accurate model does not behave in the way Snyder’s original model behaved. In fact, it behaves just the opposite and clearly implies that many of the conclusions should be different.

Well, this certainly sounds like a great argument, Mason. You allege that this 60-40 payout structure is a more realistic example of a real world tournament than the winner-take-all format I used. But let’s look more closely at this hypothetical tournament that you set up for your model.

There are only two players, who buy in for $100 each. Each player gets a $100 chip for his buy-in. The total prize pool, assuming neither player makes a rebuy, is $200, and the 60-40 payout structure ensures that the winner will get $120 and the loser will get $80.

So, in your 60-40 tournament, the most either of these players can win is $20, and the most either player can lose is $20. In fact, there really isn’t any reason for these players to be buying in for $100, since what happens at the end is that each player is immediately refunded $80 of his $100 buy-in, while the tournament winner gets the $40 remaining in the prize pool, the only money that is being contested.

Your 60-40 tournament is really a $20 buy-in tournament for which each player gets a $100 chip to play with. You do note that Player A, who has the 10% advantage in this hypothetical 60-40 split tournament, has (assuming no rebuy) an actual dollar return expectation of exactly $2 per tournament. I would think this might have given you a clue: a player with a 10% advantage and a $2 expectation is actually playing a $20 buy-in tournament. Somehow, this escaped you.

Then, you analyze what would happen if Player A made a rebuy for $100 to get a second $100 chip! In other words, his first $100 chip costs him only $20, but his rebuy chip costs him $100! In your tournament, he must pay five times more for his rebuy chip than he pays for his initial buy-in!

And, you come to the conclusion that if he makes the rebuy in this tournament, instead of a $2 win, he will have a $36.50 loss. But what does this have to do with any real world tournament? I’ve never played in or seen any tournament where all players were guaranteed to get 80% of their initial buy-in back even if they lose, and I’ve never heard of a tournament where the cost of the rebuy chips is 500% of the cost of the initial buy-in chips. I especially like the part of your argument where you say:

Well, in the world of mathematical statistics, something that I use to do professionally many years ago, it’s important to have the problem well defined. Put another way, when doing mathematical modeling, you would like a model (such as a coin flipping contest) that is simple to understand but at the same time does a pretty good job of representing the more complex phenomenon (such as a poker tournament). If this is the case, you can often draw valid conclusions about how to proceed in the more complex situation.

And then you provide a model that has absolutely nothing to do with the real life situation. In your attempt to refute my rebuy advice, you state that the player who rebuys will lose $36.50 simply because it’s not a winner take all format. No, Mason, he loses that much money because he’s paying 500% of the cost of his initial buy-in for his rebuy chip. His cost per chip is substantially larger than his competitor’s cost per chip, and he’s increasing his competitor’s prize far beyond what his measly 10% advantage will deliver.
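
To make the structure of this example explicit, here is a short Python sketch that reproduces Mason's own numbers (the 72.5% win probability for the two-chip stack is taken from his post as given; I dispute the model, not the arithmetic):

```python
def tournament_ev(p_win, prize_win, prize_lose, chips_cost):
    """Expected profit for a player who paid chips_cost for his chips."""
    return p_win * prize_win + (1 - p_win) * prize_lose - chips_cost

# One $100 chip each, $200 pool split 60/40; A wins a flip 55% of the time:
print(tournament_ev(0.55, 120, 80, 100))    # 2.0 -- really a $20 buy-in event

# A rebuys: $300 pool split 60/40, but A has now paid $200 for his chips:
print(tournament_ev(0.725, 180, 120, 200))  # -36.5
```

The $38.50 swing comes from paying $100 for a rebuy chip in a contest where only $20 per player is actually at stake, not from the percentage payout structure itself.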

In fact, it’s not difficult to create unrealistic models in which my rebuy strategy would be wrong. This would be true, for instance, if Player A were the only player making the rebuy in a small-enough multiple-player tournament, especially if Player A’s advantage over his competitors was only 10%, as in my PTF example.

In a four-player coin-flip elimination tournament, where each player buys in for $100, and only Player A makes a $100 rebuy for an extra chip, while Players B, C, and D make no rebuy, with the $500 prize pool going only to the top two players, divvied up 60-40 ($300 and $200) between first and second place, it can be shown that Player A would lose on average $2.45 per tournament even with his 10% advantage.

But an example like this has nothing to do with real world tournaments. In single-rebuy multi-table tournaments, the vast majority of players make the allowed rebuy. For example, in the Orleans Friday and Saturday night single-rebuy events, 95+% of all players make the allowed rebuy. From my experience, that level of rebuy participation is the norm. In multiple rebuy tournaments, most players make multiple rebuys. At Orleans, the average player in their multiple-rebuy tournament makes three to four rebuys. In the WSOP $1K rebuy event, in which rebuys are not discounted, the average player makes two rebuys. Also, in real-world multi-table tournaments, skilled players are not playing with puny 10% advantages. They play with advantages ranging from 100% to 300% or more. I used 10% in my coin-flip example just to keep it simple and explain the logic.

In my description of “rebuy maniacs” in PTF, I do explain that it is possible to make too many rebuys by playing too loosely, because you would be buying too many chips for your competitors and you would be unlikely to have a sufficient skill advantage to overcome the cost you are paying for your own chips.

The coin-flip examples in Chapter Ten are provided to make simple points of rebuy logic that would be true in almost all real-world tournaments. Further, I tried to isolate these points in such a way that non-mathematicians could grasp the logic. I have an example that shows that discounted chips can create an advantage for a player by lowering his cost per chip. I have an example that shows that if a player has an advantage over his competitors then even full-price rebuy chips can raise his dollar return. And I point out that less-skilled competitors cannot nullify a skilled player’s advantage by purchasing rebuy chips themselves. I even have an example that shows that if the rebuy chip purchase funds do not go into the prize pool—as in the case of “dealer bonus chips”—the purchase of the chips will still in most cases add value to the skilled player.

So, you have provided not only an unrealistic mathematical model, but also a tournament format that is completely irrational. To be sure, there are a number of real differences between a coin-flip model and a real-world tournament. But these differences tend to enhance the value of rebuys for skilled players. For one thing, in real-world multi-table tournaments, the vast majority of the players (generally 90%) will not finish in the money at all.

Also, I pointed out that one of the real-world tournament factors that a coin-flipping contest does not mimic is the intimidation value of chips. In real tournaments, players with more chips more easily steal pots without confrontation. Any player with any significant amount of real-life experience in tournaments will immediately recognize that this is true. This means that having more chips actually raises a player’s percentage advantage, even when that player has the same level of skill. Your model fails to address this factor. My model also leaves out the intimidation value of chips, but I take care to mention this real but difficult-to-quantify factor as it raises the mathematical value of more chips beyond what we can deduce from coin-flip logic.

Another difference between the coin-flip examples and a real-life tournament is that the coin-flip examples consist of only a few bets, with one player all-in on every single flip. In a real tournament, your extra rebuy chips will force unskilled competitors to play against you for many hours, not for just a few flips. The longer a skilled player can keep unskilled competitors playing against him, the greater the dollar value of his chips.

In conclusion, I want skillful players to be aware that they should follow the rebuy advice in The Poker Tournament Formula. The logic of my examples is valid, and David Sklansky is incorrect in his Tournament Poker for Advanced Players when he writes: “I think a decent rule of thumb would be to add on if you have less than the average number of chips at that point, and not otherwise.” Sklansky and I both state that discounted rebuy chips should almost always be purchased by skilled players, as they will lower your cost-per-chip and raise your overall dollar return. But what Sklansky’s advice shows he does not understand, and what I show in The Poker Tournament Formula, is that having more chips also forces your less-skilled opponents to give you more action, and that this gives you a greater dollar return on your skill.

Sklansky’s advice also shows that he doesn’t understand the strategic and psychological advantage of a bigger stack, and its effect on increasing your percentage advantage. Always buy as many chips as you can, as soon as you can. And remember that chips have an intimidation value that raises their mathematical value beyond their cost. When you have the chips, use them. They allow you to afford greater variance and you should take advantage of this. In no-limit tournaments, chips are a major weapon. There are a few exceptions to these general rules, when rebuy chips ought not be purchased, and these exceptions are discussed in the book.

If a tournament ever materializes where all players are refunded a large portion of their initial buy-in, or where rebuy chips are sold for 500% (or any other percent greater than 100%) of the cost of the initial buy-in chips, I’ll be happy to provide a detailed analysis.  ♠


The Implied Discount and Tournament Chip Value

The Implied Discount: New Insights Into Optimal Poker Tournament Strategy

By Arnold Snyder
(From Blackjack Forum, Fall 2006)
© Arnold Snyder 2006

This article will dispel a number of unsound theories about poker tournaments that have been around for decades. These theories have led tournament authors to promote weak and even losing strategies that are the reason why so many smart and experienced poker players have found it impossible to make money in tournaments. Arguments on poker discussion boards at various web sites, including this one, with regard to the rebuy strategy I propose in The Poker Tournament Formula were the impetus for this article. But the implications of this article will go beyond rebuy strategies and deal with the fundamental realities of how you make money in poker tournaments.

Among the topics I will address are the logic of sound rebuy strategy, and specifically why rebuys are almost always the correct strategy for a skilled player, even when the rebuy chips must be purchased at the full price of the initial buy-in. I will also show why a player should add on even when he has a lot of chips, and why it is often wrong for a player to purchase the add-on when he is short-stacked.

Most importantly, this article will address the theory that the fewer chips you have the more each chip is worth, and the more chips you have the less each chip is worth, and show that this relationship is true only in very particular instances at some final tables, and is completely inadequate for understanding true chip value throughout a tournament, or devising overall tournament strategy. The peculiar idea that this theory can be applied throughout a poker tournament can be traced back to Mason Malmuth’s 1987 book, Gambling Theory and Other Topics. It has been embraced by other prominent poker authors as well as players, and has been used to guide players toward bad tournament decisions and strategies for many years. The concept is wrong because, as I will show, chip value is primarily based on chip utility (the various ways in which chips can be used), and, in the hands of a skilled player using optimal strategy, chip utility goes up with stack size.

The First Great Misconception

Much of the advice in The Poker Tournament Formula was written specifically to address errors in the existing poker tournament literature, especially a number of serious errors put forward over roughly a twenty-year period by Mason Malmuth and David Sklansky. One of the most widely held and erroneous concepts of poker tournament logic is the concept that the fewer chips you have, the more each of your chips is worth, and the more chips you have, the less each of your chips is worth, and that as a tournament progresses, all chips lose value. David Sklansky presents this idea in his Tournament Poker for Advanced Players (p. 44-5), though he credits Mason Malmuth’s 1987 Gambling Theory and Other Topics as the original source of this notion. I am not aware of any reputable poker authority having ever disputed this claim, and it has come to be accepted as the common wisdom.

In a nutshell, the theory says that at the start of a $10,000 buy-in tournament, all chips are worth their face value. At the final table, however, since many of the smaller prizes have already been distributed to players who busted out in the money, the total payout to those at the final table will be less than the initial cost of the chips in play. As a recent example, the winner of the WSOP main event this year received a $12 million prize. When he beat his last competitor, however, he was holding more than $87 million in chips, for which $87 million had been paid by the 8,700+ competitors. So, the individual chips in his stack were worth less per chip to him than their initial cost.

Malmuth carries this idea further to conclude that the individual chips in a small stack are worth more per chip than the individual chips in a big stack. Let’s say a small buy-in tournament is down to the last two players, who are heads up at the final table. The remaining prizes are $3,500 for the winner, and $1,800 for second place. The player who is the chip leader at this point has $70,000 in chips, while the player in second place has only $10,000. Clearly, the value of each of the chips of the second place player is much greater than the value of each of the chips of the chip leader. Assuming the blinds are so high at this point that both players are simply all-in before every flop regardless of cards (making this a coin-flip situation for all intents and purposes), the player in second place has an 87.5% chance of taking the second place prize, and a 12.5% chance of doubling up enough times to take first place. This makes his $10,000 in chips worth more than $2,000 in prize money. So, the short-stacked player’s chips are worth more than 20 cents each. The chip leader, however, will find that his $70,000 in chips have a prize value of less than $3,300. The chip leader’s chips are worth less than 5 cents each. And ultimately, the more chips the second place player loses to the first place player, the greater the value of each of his individual chips. In fact, if he gets down to a single chip, that chip will have a value of more than $1,800, while the chip leader’s chips ain’t worth even a nickel each.
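
Here is a minimal sketch of that endgame arithmetic, under the stated assumption that every hand is a fair all-in coin flip, so each player's chance of first place is simply his share of the chips (the standard gambler's ruin result):

```python
chips_short, chips_big = 10_000, 70_000
first, second = 3_500, 1_800

p_short_first = chips_short / (chips_short + chips_big)          # 0.125
ev_short = p_short_first * first + (1 - p_short_first) * second  # $2,012.50
ev_big = (1 - p_short_first) * first + p_short_first * second    # $3,287.50

print(ev_short / chips_short)  # ~$0.20 per chip in the short stack
print(ev_big / chips_big)      # ~$0.047 per chip in the big stack
```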

This is a true fact, and I don’t dispute it. But the problem is that Sklansky and Malmuth have gone on from this premise to devise rebuy strategies, and in Malmuth’s case, even entire tournament strategies, based on the idea that the chips in big stacks are worth less than the chips in small stacks. In Gambling Theory and Other Topics, p. 232, Mason Malmuth states: “As the poker tournament essays have stressed, in a percentage payback tournament, the less chips you have, the more each individual chip is worth, and the more chips you have, the less each individual chip is worth. This idea is the major force that should govern many of your strategy decisions in a poker tournament.”

I will show that extending this chip value idea to a dominant factor in devising overall poker tournament strategy is a humongous error in logic. And, because it has led to more bad tournament strategies than just about any other “truth” about tournaments ever revealed to the public, it has been a tremendously costly error in logic to players. It is the basis of the whole conservative, sit-and-wait-for-a-hand approach to tournaments, as well as bad advice on just about every aspect of tournament play from how to play a short stack to final table play to optimal rebuy strategy to satellite strategies, and more.

The reason it’s a humongous error in logic is because it starts from the assumption that all players have equal skill, and it ignores the value of a bigger stack in the hands of a skilled player. In the hands of a skilled player, all chips essentially have greater value. Indeed, as I will show, all chip purchases made by a skilled player using optimal tournament strategy are essentially made at an implied discount. This implied discount has a substantial impact on rebuy and add-on decisions.

Why More Chips Equals More Value per Chip: The Implied Discount

In order to make money at gambling, you have to actually gamble. That is, you must place money at risk on wagers on which you have an edge. The more money you can afford to wager with an edge, which is to say the more money you can put in action, the more money you will make. Sitting on your chips like a hen on an egg is not the way to make money gambling, especially in a tournament, which is, essentially, a race for all the chips.

It is incorrect to convert chips to dollar values with no consideration for how individual players might use those chips. It may seem logical that we could assign dollar values to chips since chips are initially purchased with dollars. But once the tournament begins, they cease to be dollars or even to represent dollars. You can’t cash them out. You can’t sell them, trade them, or buy anything with them. They are simply tools that are provided to players for competing in a contest. When the tournament director says, “Shuffle up and deal!” a battle has begun, and chips are nothing more nor less than ammunition.

The ground rules of this battle are pretty simple. If you run out of ammo, you’re dead. But if you outlast enough of your enemies, you will get some portion of the prize money depending on exactly how many of your enemies you bested. The player who survives the longest gets the biggest prize.

But you can’t survive by just hoarding your ammo. Your position will be attacked at regular intervals (the blinds), forcing you to spend some of your ammo even if you are attempting to avoid confrontations at all costs. These attacks on your position will start small, but they will escalate, costing you larger amounts of your ammo as the battle progresses. The only way for you to survive is to acquire more ammo, and the only way for you to do this is to engage in confrontations with your enemies and continually capture some (or all) of their ammo to add to your stockpile.

If a chip is a bullet, and I have 500 bullets, and you have 4500 bullets, you can utilize your ammo in many ways that I cannot. You can fire test shots to see if you can pick up a small pile of ammo that none of your enemies are all that interested in defending. You can engage in small speculative battles to try and pick up more ammo, and you can back out of these little skirmishes if necessary without much damage to your stockpile. Most importantly, because all of your enemies can see your huge stockpile, you can get them to surrender ammo to you without fighting, even in battles they would have won, were it not for their fear of losing everything.

So, intrinsically, each of your bullets has a greater value than each of mine purely as a function of its greater utility. This is due directly to the fact that you have so much more ammo than me.

The more chips you have, the more each chip is worth.

The only time chips do not have more value in a bigger stack is when the bigger stack is in the hands of a player who does not know how to use them. For example, any player who plays according to Harrington’s M strategy will not gain the full available advantage from having a bigger stack of chips. When you are waiting for hands, primarily playing your cards, and taking so little advantage of the edges available from other types of poker moves, your bigger stack will not be in action enough to earn you this greater value.

So, when I say that the more chips you have the more each chip is worth, that assumes that the player will be deploying his chips in such a way as to extract their full potential earning value.

Even Sklansky, despite his mistaken advice on rebuys, clearly recognizes that the dollar value of chips can be greatly increased by skill. On page 44 of Tournament Poker for Advanced Players he provides an example of a bystander, at the start of a $10,000 event, who is a highly-skilled player but who has arrived at the event too late to buy-in. Sklansky comments: “It might be worth it for him to buy your original $10,000 in chips for $30,000 because of his great skill.”

With these words, Sklansky demonstrates that he understands the guiding principle of all of the strategy in my book, The Poker Tournament Formula. This guiding principle is the theory of the implied discount. What is an implied discount? An implied discount comes from the fact that a skilled player can earn more with his chips than an unskilled player can earn. And how do I arrive at an implied discount from this fact?

If the player in Sklansky’s example had arrived on time to buy-in for this tournament, then (according to what Sklansky is telling us) this player would have essentially purchased chips valued at $30,000 to him, based on his skill, for only $10,000. When a player is getting $30,000 in value for $10,000 in cash, it is the same as getting his chips for one-third the price of the unskilled player.

Another way of saying this is to say that the unskilled player will have to buy in three times as often to get the same amount of winnings (not profits, but prize money) as the skilled player. (In fact, an unskilled player will never be able to profit, and may never even be able to make the same amount of prize money as a skilled player, no matter how many times he buys in. The point is that the skilled player is essentially getting his chips at a discount.)
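
In arithmetic terms, the implied discount is just the ratio of what the chips cost to what they are worth in the skilled player's hands, using Sklansky's hypothetical valuation:

```python
cost = 10_000
value_to_skilled = 30_000           # Sklansky's hypothetical valuation

price_per_dollar_of_value = cost / value_to_skilled   # 1/3: the implied discount
buyins_for_same_value = value_to_skilled / cost       # unskilled player needs 3 buy-ins

print(price_per_dollar_of_value, buyins_for_same_value)
```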

And what are the implications of this implied discount? In The Poker Tournament Formula, I show that there is a huge value to any player in making a rebuy or add-on at a discount in an even playing field. Sklansky agrees with this concept (see Tournament Poker for Advanced Players, p. 94.) The implied discount would mean that, for a skilled player, a full-price rebuy or add-on is never really full price, and is thus a great value, with some exceptions that I cover in The Poker Tournament Formula. In fact, the only exception for a skilled player would be when the skilled player is so short-stacked that the extra chip purchase would not provide enough chips to sufficiently utilize his skill advantage. Again, when skill cannot be used, the implied discount doesn’t exist.

Unfortunately Sklansky, in his rebuy advice, fails to take into account both the extra value of chips to a skilled player and the limits on how skill can be deployed when chips are too few. He says, “I think a decent rule of thumb would be to add-on if you have less than the average number of chips at that point, and not otherwise.”

In effect, Sklansky is saying it’s fine for a skilled player to pay $30,000 for $10,000 in chips, but not to pay $20,000 for $20,000 in chips (in a $10,000 rebuy event). Sklansky has failed to realize that extra chips add to the amount of skill a player can use, and the amount of action he can generate with an edge.

More Implications for Rebuy Strategy: When Opponents Rebuy

Which brings us back to Mason Malmuth’s error in using his evaluation of chips based on stack size to generate a rebuy strategy. As I have just shown above, the fewer the chips a skilled player has, the more likely it is that he will not be able to acquire enough chips to fully utilize his skill, and the less he should be inclined to rebuy or add-on. (Again, this is exactly the opposite of Sklansky’s and Malmuth’s advice.)

On p. 196 of Malmuth’s Gambling Theory rebuy chapter, he says: “If you are leading in a tournament and someone rebuys, the pot is not being ‘sweetened’ for you… Discouraging your opponents from rebuying when they are broke should be an important part of your overall tournament strategy.”

Malmuth bases this advice on the math in a model in which all players have equal skill, making the model inappropriate for poker tournaments, and making his advice absolutely terrible for skilled players. In any real-world tournament, assuming that any players at your table actually give a damn what you recommend regarding their rebuy decisions, you should encourage the poor (read “conservative”) players to rebuy and discourage the more skillful (read “skillfully fast and aggressive”) players from rebuying. If you believe you’ve got an edge on the whole table, encourage them all to rebuy and add-on as much as they possibly can. If they don’t know how to use chips when they have them, believe me, they will indeed ultimately be “sweetening the pot” for you.

Again, this is because the skilled player’s chips come at an implied discount. Whenever players of lesser skill are in effect paying more for their chips than you, as a function of your implied discount, you will profit. They are, in effect, buying chips for you.

This is not something that mathematical analyses based on players of equal skill will reveal.

In this section, we have dealt with the implications of the implied discount for optimal rebuy strategy. Now let’s look more closely at the implications of the implied discount for the validity of various authors’ overall poker tournament strategies.

Broomcorn’s Uncle

In SuperSystem, Doyle Brunson described a South Texas player, Broomcorn, whose uncle occasionally joined the game but never played a hand. He simply sat there until his chips were dwindled away by the antes. Whenever Doyle’s fellow poker players encountered a tight player in a poker game, they would needle him by saying, “You’re gonna go like Broomcorn’s uncle.”

To this day, poker tournament pros use this saying to make fun of tight players, and for good reason.

Mason Malmuth may be the foremost advocate of a tight style of play in poker tournaments. On page 210 of Gambling Theory and Other Topics, Malmuth advises players that it is incorrect for a player who is short-stacked to raise with a “marginal hand” or push all-in with a “calling hand” because, since “the less chips you have, the more (relatively speaking) each individual chip is worth… This means that going out with a bang is wrong. You should try to go out with a whimper. That is, try to make those few remaining chips last as long as possible.”

And on page 204 of Gambling Theory and Other Topics, Malmuth provides an example of two players who enter a small buy-in tournament where each receives $100 in chips. Player A plays very conservatively, so that he always has exactly $100 in chips at the end of the first hour. Player B, on the other hand, plays a very aggressive style such that he busts out in the first hour three out of four times, but one out of four times he finishes the first hour with $400 in chips. Malmuth asks the question, “Who is better off?”

He answers his question on the next page: “…because of the mathematics that govern percentage-payback tournaments, we know that the less chips a player has, the more each individual chip is worth, and the more chips a player has, the less each individual chip is worth. This means that it is better to have $100 in chips all the time than to have $400 in chips one-fourth of the time and zero three-fourths of the time. Consequently, A’s approach of following survival tactics is clearly superior.”

Malmuth tells us little about Player B’s strategy other than that it is “aggressive and reckless.” I have described in detail in The Poker Tournament Formula how fast strategy earns more chips than conservative strategy while actually tending to lead to fewer confrontations and less risk of bouncing out of a tournament early. So there is no reason to equate the higher earnings of fast strategy with recklessness and increased bust-outs. (Despite this, even if Player B was a poker neophyte who had very little understanding of the game, but just liked to get his chips into action and gamble, I’d put my money on Player B’s overall earning power in tournaments before I’d bet a nickel on Player A. That’s how strong the value of aggression is in tournaments, as opposed to conservatism.)
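
Note, too, what the arithmetic of Malmuth's example actually shows: the two players' expected chip counts at the end of the hour are identical, so his "clearly superior" verdict rests entirely on the reverse chip value theory rather than on the chip counts themselves. A two-line sketch makes this plain:

```python
ev_chips_A = 1.00 * 100             # A always finishes the hour with $100
ev_chips_B = 0.25 * 400 + 0.75 * 0  # B finishes with $400 one time in four
print(ev_chips_A, ev_chips_B)       # 100.0 100.0
```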

But there is reason to equate Malmuth’s approach of “following survival tactics” with players being worse off, not better off as Malmuth claims. Although we can see that a situation can exist at a final table in which individual chips have less value in a big stack than a small stack, assuming all players have equal skill, Malmuth’s advice assumes that this situation exists throughout a tournament, when it does not.

The reason it does not exist for skilled players is that, as I have shown, chips have increased earning power when they are put into action with an edge more frequently, thus creating an implied discount on the chips. Malmuth’s tight “survival tactics”, by failing to use chips for their full earning power, are in effect leading to a higher chip cost for his tight player.

Again, until the point in a tournament when the remaining players get into the money, at which time, in some but not all tournaments, equal-skill models may start to make sense, the real truth of tournaments is that the value of chips is based on what you can do with them. Malmuth’s theory about individual chip values is dead wrong through most of a tournament. Throughout 90+% of a tournament, any individual chips in a short stack aren’t worth squat. They simply represent a last desperate shot at survival for players who will almost certainly not make it into the money.

The fast strategies prescribed in The Poker Tournament Formula are essentially a blueprint for earning more chips by getting more chips into action with an edge than you can get into action with a conservative strategy.

In the PTF’s fast strategy, for example, there are three positions from which a player would raise if first in with any two cards. On the button, players are advised to call any number of limpers and even to call any standard raise with any two cards.

Postflop, players are advised to always bet if they were the pre-flop aggressor, even when out of position, even when the flop does not hit them, even when the flop looks dangerous, and even when their preflop raise was just a position or chip shot with a trash hand. Likewise, if a player has position on an opponent postflop, and he checks, the player is advised to bet—regardless of his hand.

Other more complex and dangerous plays—like pretending to slow-play a hand in order to steal even more chips from an opponent when you have nothing yourself—are also described. But the standard positional preflop and postflop bets and raises with any two cards are presented as a “basic strategy” that a player in a fast tournament should almost always follow. (There are discussions in PTF on proper violations of the basic strategy based on the type of opponent you are facing, your chip position, your opponent’s chip position, poker tournament structure and other factors, but the book’s general approach to tournament strategy is overall much looser and more aggressive than the conservative approaches that have been written about in the past.)

The fast play strategy in The Poker Tournament Formula will consistently out-earn conservative play because it keeps your money in action while your conservative competitors are sitting there waiting for stronger cards, and hoping to make back, with trapping hands, what you have taken from them. But, in fast tournaments, a trapping hand is unlikely to arrive frequently enough to help you, and in all tournaments, trapping hands are unlikely to get paid off by skilled players even if they do arrive.

The fast play strategy in The Poker Tournament Formula is specifically designed to earn you an implied chip discount. It is designed to take chips from conservative opponents and penalize them for their weaker strategy.

The Weakness of Using Equal-Skill Models for Devising Overall Tournament Strategy

Although I use coin-flip examples in the rebuy chapter of The Poker Tournament Formula to explain specific points of tournament logic, it would be incorrect for any player to think that coin-flipping contests (or Malmuth’s “equal-skill tournaments”) are analogous to poker tournaments. It is always dangerous to use a simple example to solve a complex problem, because it may tempt a less astute researcher to use a seemingly similar analogy to answer a question for which that analogy does not apply.

For example, in The Theory of Blackjack, Peter Griffin created a game he called “Woolworth Blackjack,” which consisted of a blackjack game where the only cards in the deck were fives and tens. He used this hypothetical and vastly over-simplified version of the game to test how well a statistical estimate of expectation based on approximations might correspond to a player’s actual expectation. For the problem he was attempting to solve, his analogy works well.

Some years back I received a letter from a blackjack player who had used Griffin’s Woolworth Blackjack deck to devise a unique betting strategy for a real-world casino blackjack game. His strategy was terribly misguided, although it would work fine if the player ever found a casino that would deal actual Woolworth Blackjack. Casino blackjack is not Woolworth Blackjack. There are many complexities to the game as dealt in the casino that do not apply in the five-and-ten version. Griffin’s analogy worked well for the point of logic he was testing, but he never meant to imply that all blackjack problems could be solved by Woolworth.

On p. 196-197 of Gambling Theory and Other Topics, Malmuth demonstrates that in an “equal-skill” tournament with a percentage payout structure (as opposed to winner-take-all), there will be situations in which it would be mathematically incorrect to make a rebuy. (In fact, this particular example is incorrectly used by Malmuth as “proof” that players should not rebuy when they have “a lot of chips”, meaning as many as or more chips than their opponents.)

In The Poker Tournament Formula, I too show that, in a coin flip tournament where every player is of equal skill, there is no mathematical advantage in making a rebuy. I have no argument with the mathematics presented by Malmuth for this specific “equal skill” situation. The math is the math. My problem is with his assumption that an equal-skill event provides an adequate model upon which to devise a valid strategy for rebuying. Although his math in this example is internally correct, it is irrelevant to rebuy decisions in real-life tournaments.
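
That equal-skill result is easy to verify with the coin-flip arithmetic used earlier. In the winner-take-all version of the model, a fair-game player's chance of winning equals his share of the chips, so a full-price rebuy raises his win probability in exact proportion to its cost and his expectation stays at zero. A minimal sketch:

```python
# Equal-skill, winner-take-all coin-flip tournament: in a fair game,
# P(win) = own chips / total chips, so a full-price rebuy is EV-neutral.
def ev_equal_skill(my_cost, opp_cost):
    pool = my_cost + opp_cost
    p_win = my_cost / pool           # chips bought at $1 per $1 of chips
    return p_win * pool - my_cost    # always 0.0

print(ev_equal_skill(100, 100))  # 0.0 without the rebuy
print(ev_equal_skill(200, 100))  # 0.0 after a $100 rebuy
```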

In an excellent post on the Poker Board at this Web site, Pikachu provides much better models in a post titled “A Closer Look at the MTT Format and Playing With an Edge”. To summarize, he finds “A player [with] an edge of 50% should… take the rebuy if his stack is less than 4.5 times greater than his initial stack. A 100% edge player should rebuy with a stack less than 7.5 times his initial stack. For very skilled players rebuys should still be made except in extreme cases… The more skill a player has, the more often he should rebuy.”

Pikachu also shows that players with small advantages should rebuy much less often. Pikachu’s findings, by the way, validate the rebuy advice I give in The Poker Tournament Formula. (In Appendix A, I advise players that it is futile to enter multi-table poker tournaments without a very big edge, and say that an edge as small as 10% is not enough to justify entering multi-table poker tournaments at all.)

Pikachu’s models are superior to Malmuth’s because they take into account the fact that poker tournaments are not “equal skill” events. They are especially useful because they compare a wide range of realistic player skill levels (or edges) in poker tournaments.

In Closing

The Poker Tournament Formula specifically addresses winning strategies for fast tournaments. Many of the PTF concepts and strategies, however, will also have value in slow tournaments. These strategies will require adjustments for the speed of the tournament, but the concept of the advantage of fast-play over conservative strategy is still valid. Specifically, the PTF strategies will have to be adjusted to take advantage of the increased opportunities for profitable fast play action pre- and postflop, created by the slower blind structures and bigger starting chip stacks. Optimal slow tournament strategies will be very different from the conservative strategies recommended in many of the popular books today, as many of these books were written from the same erroneous perspectives on chip values as Malmuth’s book.

A number of the best professional poker tournament players believe that tight conservative play is already nearing obsolescence even in slow tournaments. Daniel Negreanu, interviewed by Peter Thomas Fornatale in the just-published Winning Secrets of Poker, says: “Basically, the math is behind an aggressive style of play. Books that have been written in the past simply didn’t have it all right as far as what hands to be playing and what hands to be folding… So without playing that [aggressive] style, you’re basically depending on getting really good cards. And when you’re depending on getting really good cards on a regular basis, you’re going to be disappointed because they don’t come often enough.”

Many of the most successful tournament pros do not follow tight, conservative strategies, even in the slowest major events. Malmuth remarks in the “Afterthoughts” on his tournament section in Gambling Theory and Other Topics, “…some people with what appears to be excellent tournament records have very little idea of what is correct… In addition, if it were possible to estimate the standard deviation for tournaments and a good tournament record was compared to a poor one, I suspect that a seemingly large difference would often not be significant. This means that some of the current superstars are probably not very good, just fortunate.”

I agree that there is a large standard deviation in poker tournaments. However, the combined win records of the aggressive fast players have moved well beyond the realm of normal fluctuations. I suspect that the fast-playing pros who keep making it into the money, keep making final tables, and keep racking up wins, know a lot about “correct” strategy.

Part II of this article deals with correcting the idea that chip value is based on stack size at a final table or non-skill tournament. See Chip Value in Poker Tournaments, With Implications for Winning Strategy.  ♠


Chip Value in Poker Tournaments

Correct Chip Value Theory and Implications for Winning (and Losing) Poker Tournament Strategy

by Arnold Snyder
(From Blackjack Forum, Fall 2006)
© Arnold Snyder 2006

In my recent article “The Implied Discount,” I addressed a theory (first proposed by Mason Malmuth in his 1987 Gambling Theory and Other Topics) that linked the dollar value of chips in a tournament to the size of a player’s chip stack. Specifically, the theory states that individual chips in a small stack have greater value than individual chips in a larger stack. For convenience, I am going to refer to this theory as the “reverse chip value theory” throughout this article.

The reverse chip value theory, which has also been embraced by David Sklansky in his 2002 Tournament Poker for Advanced Players, is important because, as I will show in this article, it has led to tournament strategies that are potentially very costly to players. In “The Implied Discount,” I contested this theory based on chip utility value during the portion of tournaments where skill determines results. In this article, I will show why the reverse chip value theory is invalid even later in a tournament, including at a final table, and even when skill and chip utility are not a factor in results.

At the time I wrote “The Implied Discount,” my intuition told me that this reverse chip value theory was incorrect, even in a tournament where skill is not a factor, but I didn’t pin down the source of the error in this theory until shortly after that article had been posted. In this article, I’ll explain the errors behind the theory, show the correct way to calculate the value of chips in a tournament, and discuss the implications of correct chip value theory for winning poker tournament strategy.

The Erroneous Chip Value Theory

To refresh your memories on the reverse chip value theory itself, let’s return briefly to an example I used in “The Implied Discount” to explain it:

One of the most widely held and erroneous concepts of poker tournament logic is the concept that the fewer chips you have, the more each of your chips are worth, and the more chips you have the less each of your chips are worth…

Let’s say a small buy-in tournament is down to the last two players, who are heads up at the final table. The remaining prizes are $3,500 for the winner, and $1,800 for second place.

The player who is the chip leader at this point has $70,000 in chips, while the player in second place has only $10,000. Clearly, the value of each of the chips of the second place player is much greater than the value of each of the chips of the chip leader.

Assuming the blinds are so high at this point that both players are simply all-in before every flop regardless of cards (making this a coin-flip situation for all intents and purposes), the player in second place has an 87.5% chance of taking the second place prize, and a 12.5% chance of doubling up enough times to take first place. This makes his $10,000 in chips worth more than $2,000 in prize money. So, the short-stacked player’s chips are worth more than 20 cents each.

The chip leader, however, will find that his $70,000 in chips have a prize value of less than $3,300. The chip leader’s chips are worth less than 5 cents each. And ultimately, the more chips the second place player loses to the first place player, the greater the value of each of his individual chips. In fact, if he gets down to a single chip, that chip will have a value of more than $1,800, while the chip leader’s chips ain’t worth even a nickel each.

The example above does appear to illustrate that the chips in a larger stack have less dollar value than the chips in a smaller stack, assuming that two players are at a final table where no skill advantage is possible for either player. But this logic is based on a false assumption about how chips are awarded in percentage-payout tournaments.

The false logic can be found on pages 220 and 221 of Gambling Theory and Other Topics, in an essay written by Mark Weitzman. In this essay, Weitzman provides a method for players to chop up a prize pool, based on their respective chip stacks, in a percentage payout tournament where the players decide to quit the tournament before the finish. In his example, one player has 70% of the chips while his heads-up opponent has 30% of the chips. Essentially, Weitzman assigns 70% of the first place prize, plus 30% of the second place prize, to the player with 70% of the chips.

Weitzman’s method of chopping the prize pool does provide a fair distribution based on each player’s expected value with his respective chip stack. Unfortunately, Weitzman then leaps to a very wrong conclusion about tournament chip values. He says: “This example also illustrates a general point: If you are behind in a percentage payback tournament, your chips are worth more than their face value, and if you are ahead, your chips are worth less than their face value. For similar reasons, Mason Malmuth has pointed out that the winner of a percentage-payback tournament has in a sense suffered a ‘bad beat’—he has just won all of the chips, but he gets to keep only a percentage of his winnings.”

If you look at the chip distributions Weitzman comes up with for the players in his example, it is obvious that he is saying that you can determine the value of individual chips by dividing a player’s expected win by the number of chips in his stack (which is exactly the method I used in my example above).

But it is not true that you can determine the value of individual chips in this way. In a percentage payout tournament, there are always one or more payouts that players receive for having zero chips, that is, for having busted out after outlasting enough opponents but before winning first place (the prize that goes to the player who wins all the chips). In other words, because there is a specific dollar value to having zero chips, this value must be taken into account when calculating tournament chip value. Specifically, to correctly calculate the value of the chips in your stack, you must calculate their value based on what they add to the value of having zero chips.

What Weitzman is doing wrong is assuming that all payout value is derived from your chip stack. He is failing to take into account the portion of each payout’s value that is due to having zero chips. To return to the example above, if a player expects a payout of $3500 for having all the chips (or taking first place), and $1800 for zero chips (or going out in second place), Weitzman would be incorrectly dividing the full $3500 of the first place prize by the number of chips in the first place player’s stack in order to come up with the value of an individual chip. What he should be doing to come up with the value of a chip is dividing $1700 (the amount added to the $1800 payout for zero chips) by the number of chips in the first place player’s stack.
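
To put numbers on it: the two players are really fighting over only $3500 – $1800 = $1700, and $1700 divided by the $80,000 in chips comes to $0.02125 per $1 in chips, or $2.125 per $100 chip, the figure that appears throughout Chart I below.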

I can see how Weitzman made this error, and why Malmuth and Sklansky accepted the error. I accepted the error myself in my initial analysis in “The Implied Discount”. Unfortunately, Weitzman’s mistake causes the individual chips in a big stack to appear to be worth less than the individual chips in a small stack. In reality, as I will show, all chips have the same value at any given point in a tournament, no matter what the size of the stack they are in.

The Correct Way to Analyze Tournament Chip Value

Properly understood, all prizes in a percentage payout tournament, other than first prize, are awarded for running out of chips, or having zero chips (after having survived longer than a specific number of opponents).

Let’s return to the example above, in which two players remain heads up at the final table. Again, the total prize pool remaining consists of a $3500 prize for first place, and an $1800 prize for second place. We are also going to assume that neither player has any skill advantage, and that the blinds at this point are so high that both players must go all in on every hand. In other words, the tournament results might as well be based on coin-flips. There is a total of $80,000 in chips on the table.

Let’s look at what each player can expect to win based on the size of his chip stack (this expected win amount is called “Stack Value” in the chart below), as well as the dollar value of each $100 chip in his stack at this point. (The math for figuring out the payout value based on stack size is explained below the chart.)

Chart I
Snyder’s Chip Value Analysis
TOTAL STACK    STACK VALUE    VALUE ADDED BY $10K CHIPS    $100 CHIP VALUE
$80,000        $3500.00       N/A                          $2.125
$70,000        $3287.50       $212.50                      $2.125
$60,000        $3075.00       $212.50                      $2.125
$50,000        $2862.50       $212.50                      $2.125
$40,000        $2650.00       $212.50                      $2.125
$30,000        $2437.50       $212.50                      $2.125
$20,000        $2225.00       $212.50                      $2.125
$10,000        $2012.50       $212.50                      $2.125
ZERO           $1800.00       $212.50                      N/A

Explanation of Chart

1. In this example, and in other examples throughout this article, the house fee is ignored.

2. The first column is the player’s stack size. There are $80,000 total chips in play. It is assumed that if the player has a $50,000 stack, his opponent will have a $30,000 stack.

3. To calculate the “Stack Value” for an equal-skill event, you simply analyze it as if the players were flipping a coin to determine the winner. If one player has 90% of the chips in play, he will win first prize 90% of the time and second prize 10% of the time. For example, a total stack of $70,000, as above, represents 7/8 or 87.5% of the $80,000 total chips in play. So, we take 7/8 of the first place payout, add 1/8 of the second place payout, and we have the EV, or expected value, in dollars, of that size stack when heads up in a coin-flip against the player with the remaining chips.

4. The “Value Added by $10K Chips” shows the exact dollar value added to the stack if $10K in chips are added to it, by having won those chips from the opponent. Note that in every case, adding $10K in chips adds the same amount of dollar value ($212.50) to the player’s chip stack, regardless of the stack size.

5. The “$100 Chip Value” shows the dollar value of each $100 chip in the stack. It is obtained by dividing the value added by $10K in chips ($212.50) by 100, the number of $100 chips in $10K.

The chart shows a number of interesting things. If the theory that chips in a bigger stack are worth less than chips in a smaller stack were correct, then we would expect the dollar value of adding $10K in chips to a $60K stack to be less than the dollar value of adding $10K in chips to a $20K stack. But this does not occur—every $10K added is worth $212.50, regardless of the size of the stack to which it is added. Nor does subtracting $10K in chips from a stack ever diminish a stack’s dollar value by anything other than that same $212.50. Chips a player wins have the exact same dollar value as chips a player loses, and stack size is irrelevant.

For example, the dollar value of a $70,000 stack is $3287.50, while the dollar value of a $60,000 stack is $3075.00, a difference of $212.50. And this same difference can be found between any two stacks of any size that are exactly $10,000 apart.

The percentage payout tournament structure I used above is not an isolated example of individual chips always having the same value in a non-skill tournament, regardless of stack size. If you are good with spreadsheets, you can set one up to do the simple math, and you’ll find that regardless of the number of chips in play, and regardless of the percentage of the chips you assign to a player as his chip stack, adding or subtracting chips to or from his stack always adds or subtracts the same dollar value per chip, regardless of his stack size.
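
For readers who prefer code to a spreadsheet, here is a minimal sketch in Python of the calculation just described. The payouts and chip count are this section’s example ($3500/$1800, $80,000 in play), the names are my own, and the last column previews the erroneous stack-EV-divided-by-chips method discussed next:

FIRST, SECOND = 3500.0, 1800.0   # remaining payouts
TOTAL_CHIPS = 80000.0            # total chips on the table

def stack_value(stack):
    # EV of a stack in a heads-up coin-flip contest:
    # P(win) * first prize + P(lose) * second prize.
    p_win = stack / TOTAL_CHIPS
    return p_win * FIRST + (1.0 - p_win) * SECOND

for stack in range(70000, 0, -10000):
    added = stack_value(stack + 10000) - stack_value(stack)
    per_100 = added / 100.0                      # marginal value of one $100 chip
    naive = stack_value(stack) / stack * 100.0   # stack EV divided by chips
    print(f"${stack:,}: EV ${stack_value(stack):,.2f}, $10K adds ${added:.2f}, "
          f"$100 chip ${per_100:.4f} (naive: ${naive:.4f})")

Every row of output shows the same $212.50 added per $10K and the same $2.125 per $100 chip, while the naive division drifts exactly as Chart II shows.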

By contrast, if I use Weitzman’s erroneous method to calculate the value of a $100 chip in the sample tournament I described above, here is what the chip values would look like:

Chart II
Chip Value Based on Erroneous Theory
TOTAL STACK    STACK VALUE    VALUE ADDED BY $10K CHIPS    $100 CHIP VALUE
$80,000        $3500.00       N/A                          $4.375
$70,000        $3287.50       $212.50                      $4.696
$60,000        $3075.00       $212.50                      $5.125
$50,000        $2862.50       $212.50                      $5.725
$40,000        $2650.00       $212.50                      $6.625
$30,000        $2437.50       $212.50                      $8.125
$20,000        $2225.00       $212.50                      $11.125
$10,000        $2012.50       $212.50                      $20.125
ZERO           $1800.00       $212.50                      N/A

[To come up with the value of a $100 chip using Weitzman’s invalid assumption, we simply divide the stack value by the total number of chips in the stack, and multiply by 100.]

The value of a single $100 chip, based on this incorrect method of analyzing chip value in a percentage payout tournament, rises steadily from well under five bucks (with an $80,000 stack) to more than twenty bucks (with a $10,000 stack). But the data in the Stack Value column contradicts the data in the $100 Chip Value column! We can see in the Stack Value column that every addition or subtraction of $10,000 in chips is worth exactly $212.50, regardless of stack size.

So how can the individual chips within the stack differ in value when $10K in chips always has the same value? The answer is: They can’t. And, unfortunately, this incorrect chip valuation has led to many errors in poker tournament strategy advice.

It is important to note, for example, that if you are heads up with the last remaining opponent, and you are down to your last $100 chip, it is a mistake to think that your last chip is worth $1800+. That chip is worth just a little more than two bucks, like every other $100 chip on the table at this point in the tournament (all but one of which are in your opponent’s stack).

It is zero chips that are worth $1800 at this point in the tournament, and when you bust out, you will be paid solely because you outlasted so many of your opponents. The only reason your last remaining chip has even a two-buck value is because you still have a hair of a chance to come back from the brink of death and win.

The important point, however, is that individual chips in a small stack are not worth more than individual chips in a big stack. In fact, as we’ll see, the shorter the stack, the closer to worthless the chips in that stack become. (In other words, in real-world tournaments that are almost never pure coin-flips, it’s highly unlikely that your last chip is worth even two bucks at this point.)

More Implications for Poker Tournament Strategy

The erroneous concept that individual chips in a short stack are worth more than individual chips in a large stack pervades the thinking and advice on overall tournament strategy in many popular books. However, it is mainly in the works of Mason Malmuth and David Sklansky that chip value theory is cited as the basis of the strategy advice. To a large degree, this is because of the care Malmuth and Sklansky take to try to justify their advice with underlying poker math and logic.

Malmuth’s advice on how this concept relates to overall tournament strategy is somewhat contradictory. For example, on page 198 of Gambling Theory and Other Topics, he says: “It needs to be pointed out that this force becomes significant only late in a tournament. Early in a tournament, it is not that crucial.”

But, in this same chapter, two pages later, Malmuth uses the reverse chip value theory to advise that progressive rebuys may not be correct. He says: “Progressive payback mathematics clearly shows that the more chips you buy, the less profitable the rebuy is.” And on page 196, Malmuth again bases rebuy advice on reverse chip value theory where he writes: “Don’t rebuy when you have a lot of chips.” Since rebuy decisions are made in the early portion of tournaments, Malmuth is clearly basing an important early tournament strategy decision on the reverse chip value theory.

Sklansky’s Gap Concept: A Mistake in Poker Tournaments

Sklansky too uses reverse chip value theory when he advises on page 94 of Tournament Poker for Advanced Players that players ought not to rebuy unless they have less than the average amount of chips. More importantly, Sklansky uses reverse chip value theory in his advice on how to adjust the “gap” concept for poker tournaments versus cash games.

The gap concept essentially says that it takes a stronger hand to call a raise than it does to make a raise. In The Poker Tournament Formula I and II I challenge the gap concept in poker tournaments, stating that because of the increased value of position in tournaments, it does not take a stronger hand to call a raise, but only a stronger position. (Position beats cards.)

Sklansky, by contrast, believing that individual chips are worth less in a big stack, comes to the opposite conclusion. He writes that tournaments, because they “…increase the ‘Gap’, …force you to give up on many small edges, and frequently make overall play tighter.”

And in his chapter titled “The Gap Concept”, Sklansky writes that the success of the top tournament players against players who do well in cash games is a result of the top players’ widening their “Gap” requirements. He finishes by telling us: “As important as the Gap concept is anytime in a tournament, it becomes even more important still with the Gap usually widening even more during the last stages of the tourney. The reason is related to the way prizes are paid.”

To look further at the degree to which reverse chip value theory has influenced poker tournament strategies, let’s consider another concept of Sklansky’s—the idea that a chip you lose is worth more than a chip you win (see Tournament Poker for Advanced Players, page 45). Such advice necessarily leads to overly conservative play because a player would need a bigger advantage in a hand, or better odds, to risk a chip for a chip that is worth less.

I could produce many more examples of the flawed and costly strategies to which this erroneous reverse chip value theory has led.

Again, in an “equal skill” tournament (assuming such an event could exist), every chip you add to your stack adds the exact same dollar value that it adds to any other sized stack. And every chip you lose, likewise, subtracts the same value.

As I mentioned in “The Implied Discount,” I wrote The Poker Tournament Formula to address conservative poker tournament strategies that I consider deeply flawed, especially in fast tournaments. And perhaps the most important flaws in these strategies, for all tournaments, are due to this erroneous and longstanding reverse chip value theory.

Poker Tournament Chip Value If More Than Two Players Remain

Some players may be wondering about the validity of my own chip value theory in calculating the value of chips when more than two players remain in a tournament. To address that question, let’s consider a ten-player sit’n’go tournament (SNG) with two cash prizes—$3500 for first place and $1800 for second place. Will the value of each chip be the same at the start of the tournament when all ten players are at the table as when only two players remain?

This is of interest because although we know that zero chips pay $1800 to second place in this tournament, zero chips pay zero dollars to third through tenth place. Does this fact affect chip value, and if so, how?

To answer this question, let’s start with a simpler example. Let’s say that this SNG is down to the last three players, so that two players will finish in the money, but the third place finisher will get paid nothing. Does this bubble player change the dollar value of the chips from what they are with only two players remaining?

Here’s an easy way to think about this question. Let’s assume that one of these three players wins all of the chips from his two competitors when all three are all-in on the same hand. His first place payout will be $3,500. One of the other two players (the player with the bigger chip stack going into the hand) will get $1800, but the other gets nothing. Because we are making an assumption that these players have “equal skill,” then the average payout for having zero chips is $900, which is the $1800 second place payout divided by the two players who have an equal shot at it.

If the average value of zero chips has gone from $1800 with only one player in contention for second place, to $900 with two players in contention, then the value of the $80,000 in chips is now $3500 minus $900, or $2600. Or, at least, that’s what logic tells me. But since logical errors are easy to make when trying to solve gambling problems, I need to test this theory.

It appears that with three players remaining, the value of the chips on the table has gone up from what it was with only two players remaining. If $80,000 in chips is worth $2600 when three players remain, then each $100 chip is worth $3.25, instead of the $2.125 each $100 chip was worth when there were only two players.

How Can We Test This Theory?

It’s easy to test if this is true by setting up an example where we first distribute chips to each of the three players, then calculate what the chips are worth. An easy example, which requires no difficult math, is to divide the $80,000 in chips exactly evenly among the three players.

If we divide $80,000 by three we’ll see that each player has approximately $26,666.67 in chips. (Okay, so they now have penny chips in play. Not very practical, but it works fine in a spreadsheet for our analysis.) We also know that since we are assuming all players have equal skill in addition to equal chip stacks, each player has a one-in-three chance of winning first place, a one-in-three chance of winning second place, and a one-in-three chance of finishing on the bubble with no payout. This gives each player’s chip stack a payout value of $1766.67, or exactly one third of the total $5300 prize pool.

Now, let’s say that all three of these remaining players go all-in on the first round and Player A wins all of the chips of Players B and C. Since Players B and C had exactly equal chip stacks before this hand, they will split the $1800 second prize, earning $900 each for their remaining zero chips. Player A wins all of their chips, adding about $53,333.33 to his chip stack, giving him the full $80,000 in chips on the table.

Since we know that Players B and C had chip stacks with a payout value of $1766.67 before this final hand, and now they have zero chips, how much dollar value did each of them lose when they lost their chips? Well, at the end of the hand, their zero chips were worth exactly $900 each. And since $1766.67 minus $900.00 equals $866.67, it follows that with this chip distribution $26,666.67 in chips are worth $866.67.

If we divide the $866.67 payout value of the chips by the $26,666.67 in chips each player lost, we get $.0325 per $1 in chips, or exactly $3.25 for each $100 chip, the exact same dollar value that we had when all $80,000 in chips were in a single stack.

So, with three players at the table, assuming equal skill and the same payout structure we have been using, all chips have the same dollar value, in this case $3.25 per $100 chip, regardless of the size of a chip stack.
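
As a quick check on the arithmetic above, here is a short sketch under the same assumptions (equal skill, equal stacks, $3500/$1800 payouts, $80,000 in chips):

FIRST, SECOND, THIRD = 3500.0, 1800.0, 0.0
TOTAL_CHIPS = 80000.0
PLAYERS = 3

stack = TOTAL_CHIPS / PLAYERS                 # $26,666.67 in chips each
ev_each = (FIRST + SECOND + THIRD) / PLAYERS  # $1766.67: a one-in-three shot at each finish
zero_chip_value = SECOND / (PLAYERS - 1)      # $900: two non-winners share the 2nd-place value
chip_value = ev_each - zero_chip_value        # $866.67: what the chips themselves add

print(chip_value / stack * 100.0)             # 3.25 per $100 chip, matching $2600 / $80,000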

We do find something interesting in this three-player example, however. The dollar value of the chips goes up as the number of players contending for the prize pool goes up. This is because the relative value of zero chips is smaller earlier in the tournament, due to the fact that busting out now will not lead to a payout. Implied actual-chip value is higher early in a tournament, and implied zero-chip value is lower.

But it would be wrong to assume that the higher dollar value of chips should lead a skilled player to more conservative play early in a tournament. That’s because the dollar value of your chips comes primarily from being able to use them to fight your way to the higher payouts.

Chip Value at the Start of a Tournament

If chips are worth more in a ten-player SNG when three players remain than when only two players remain, it follows that at the start of this ten-player SNG, the chips will have even greater value. Let’s assume that the ten initial players buy in for $530 each, creating the $5300 prize pool—a $3500 first place payout, and an $1800 second place payout. Eight of the ten players in this SNG will get no payout. Each of the ten entrants, however, is provided with $8,000 in tournament chips to start, creating the $80,000 total chips in play. (In a real-world tournament, an extra $30 each player pays to buy in would go to the house fee. But to keep the math simple, since we’re primarily interested in the logic, we’re ignoring the house fee.)

So, how do we calculate the dollar value of the chips at the start of this SNG? We must first subtract from the prize pool all payouts that will be paid to players who have zero chips.

In this case, we must subtract the second place prize of $1800 from the $5300 prize pool (because $1800 will be paid to a player—the second place finisher—who has zero chips), and we get $3500. Then we divide the second place prize—which is paid to a player with zero chips—by the number of players who will share it. (They won’t really share it, as we know that one player will get it all, but they will share its value.) In this case, we divide the $1800 by the nine players who will not get the first place prize, and we get a $200 average value per player.

We then subtract $200 from the $3500 first prize, and we get $3300. The $80,000 in chips at the start of this equal skill tournament have a total dollar value of $3300. And as long as there are ten players still remaining at the table, the value of the individual chips in each player’s stack—regardless of stack size—is identical. In this case, with ten players, the value comes to $4.125 per $100 chip.
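
The same zero-chip accounting can be written as a general rule. The helper below is my own sketch, not a formula from Weitzman or Malmuth, and it assumes the equal-skill model used throughout this article:

def total_chip_value(first_prize, zero_chip_payouts, players_remaining):
    # Value of all chips in play: the first-place prize minus the average
    # zero-chip value among the remaining non-winners.
    avg_zero = sum(zero_chip_payouts) / (players_remaining - 1)
    return first_prize - avg_zero

v = total_chip_value(3500, [1800], 10)
print(v, v / 80000 * 100)                  # 3300.0 4.125 (start of the ten-player SNG)
print(total_chip_value(3500, [1800], 3))   # 2600.0 (three players left)
print(total_chip_value(3500, [1800], 2))   # 1700.0 (heads up)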

So, Are Poker Tournaments Bad Investments?

Here’s an interesting thought: When you initially buy into the above SNG for $530, the chips you receive are only worth $330, because of the average value of zero chips to each of the ten starting players. So, why would any intelligent poker player pay $530 for only $330 worth of chips? In a cash game, his $530 would purchase $530 worth of chips. Doesn’t this make a tournament a bad investment?

Before we answer this question, let’s consider what the chips are worth in a major tournament, rather than a hypothetical ten-player SNG. What are the $10,000 in chips worth that you purchase when you enter the main event of the WSOP? Let’s use the recent 2006 event as a model, but we’ll round off the numbers for convenience, and again, ignore the house fee.

In the 2006 WSOP main event, there were 8700 entrants who paid a total of $87 million to purchase chips. Of the $87 million in chips purchased, $12 million went to the winner. That means $75 million was paid out to players who finished with zero chips!

The average dollar value of a zero-chip finish at the start of the tournament was therefore $75,000,000 divided by the 8699 non-first-place finishers, or about $8,600. Which is to say, assuming this was entirely a non-skill event (yeah, right!), the chips any individual player purchased for $10,000 had a dollar value of less than $1400, because if he ultimately managed to collect all $87 million in chips in play, he would be paid less than 14 cents on the dollar of the chips’ initial purchase cost.
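
Running the rounded 2006 numbers through the same zero-chip accounting:

$75,000,000 ÷ 8699 ≈ $8,622 (average value of a zero-chip finish)
$12,000,000 – $8,622 ≈ $11,991,378 (dollar value of all $87 million in chips)
$11,991,378 ÷ 8700 ≈ $1,378 (value of each player’s $10,000 in chips)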

Now, let’s ask again: why would any intelligent poker player pay $10,000 for only $1400 worth of chips? Wouldn’t this purchase be a bad investment?

Not at all. Tournaments are not cash games. In a cash game, you can never be paid for losing all of your chips. If you lose your $10,000 buy-in in a cash game, you go home broke. In a tournament, if you have the skill to outlast enough of your competitors, you can bust out—literally leave the table with zero chips—and cash out for substantially more than you bought in for.

Tournaments with percentage payout structures are designed to pay players for exhibiting exceptional skill, even if they do not beat all competitors. The chips’ dollar value is a minor factor. Let me repeat:

Poker Tournaments Are Not Cash Games

If you read the history of the WSOP, you’ll find that when it started in 1970, and for the first eight years, the main event was played with a winner-take-all format. During those early tournaments, every chip purchased really did have an exact dollar value based on its cost (or it would have had an equal value if the tournament players all had equal skill, which they probably did not).

In the 1970 event, there were only six players, so no one even considered paying second place. By the fifth year (1974), the number of entrants had grown to 16, and by the eighth year (1977), there were 34 players who paid their $10,000 entry fee, creating a $340,000 prize for the winner. That was the last year that the winner-take-all format was used.

In 1978, because the field of players had grown so large—there were 42 players that year, a humongous field!—it was decided to pay the top five finishers based on their finishing positions, with first place getting only 50% of the prize money instead of the whole enchilada. The WSOP main event has remained a percentage payout tournament ever since.

In a tournament like the sample SNG that we looked at earlier, you are willing to pay $530 for $330 worth of chips because—just as in the WSOP main event—you are betting that even if you do not win the whole thing, you will play with sufficient skill to outlast enough of your opponents that you will be paid well for your finishing position, even with zero chips.

Generally speaking, the bigger the event, the more the prize pool is spread out, and the smaller the percentage of the prize pool that goes to the winner. But professional players are willing to pay $10,000 for $1400 worth of chips because they see value in their skill at using those chips. Also, the dollar value of the chips without skill is negligible, and the pros know this. Without skill, chips purchased are dead money. As I pointed out in “The Implied Discount”, most of the value of tournament chips is in their value as ammunition in the battle for survival and dominance.

But many less knowledgeable poker tournament players have a tendency to confuse tournaments with cash games. This, in fact, has been true in all types of gambling tournaments. When casinos first started offering blackjack tournaments about twenty years ago, the first blackjack tournaments—like the first poker tournaments—were “real money” events. Players who entered purchased chips that they could cash out (if they had any left) when the tournament ended, and in some tournaments, they could keep any winnings they acquired in the course of the tournament.

(Unlike poker tournaments, blackjack tournaments, for reasons based on the game structure, cannot be played until one player has all the chips. They are generally played in rounds with a specified number of hands, with an elimination format that removes players based on their chip counts in relation to their opponents.)

“Play money” blackjack tournaments soon started appearing, however, in which the chips purchased for playing the tournament had no cash-out value, and any winnings the players accumulated during the tournament likewise had no value.

Which kind of blackjack tournaments do you think the pros preferred? Real money? Or play money? The answer is real money. Why? Because many of the players who entered the real money blackjack tournaments thought of the tournament chips they purchased as real money, which led them to play their chips too conservatively, and tended to give the pros a substantial advantage over them. Players in these tournaments who thought of their chips as real money were more afraid to bet aggressively, and they often could not bring themselves to violate the basic strategy blackjack plays that would be correct in cash games, but that were incorrect in many tournament situations.

Blackjack tournament pros know (as poker tournament pros know today) that the dollar value of chips in a tournament is negligible because players must beat their competitors within a game structure that makes normal “smart” cash game betting and playing decisions wrong. In fact, in the first book ever written that described winning blackjack tournament strategies, Tournament Blackjack by Stanford Wong (Pi Yee Press, 1987), Wong explained in no uncertain terms that players could obtain a huge advantage over their more conservative opponents by betting with extreme aggression (even when the blackjack game itself offered no advantage from, say, a high proportion of tens and aces in the remaining shoe), and by being willing to violate many of the basic strategy plays that would be correct in cash games.

When Jamie Gold won $12 million in the 2006 WSOP main event, some players pointed to this result as evidence that the tournament was a “poor value,” since the winner only got paid for a fraction of the chips he won. Or, as Mason Malmuth put it in Gambling Theory, the winner suffered a “bad beat.”

But it is short-sighted to believe that Jamie Gold was in any sense “short-changed” by the payout structure. Is there a player anywhere who believes that anyone could enter a cash game with a $10,000 buy-in, and emerge some sixty hours of play later with $12 million? To make these numbers a bit more real to those who play poker at smaller levels, it would be like buying in for $100 and leaving with $120,000. Has any cash game player in history ever done this?

The fact is, no cash game player can play poker that skillfully, or find opponents who play that poorly. No opportunity to win this kind of money in such a short time with such a small buy-in will ever exist in any real-world cash game.

Poker Tournament Chip Values Are Really Based on Player Skill

It remains my primary contention that tournaments are never “equal skill” events, even at a final table. Even if a tournament has become a coin-flip-with-a-prayer for the shortest stack(s) at the table, players with bigger chip stacks can almost always still play their hands more selectively, giving them a skill advantage. And tournament strategies based on the dollar value of the chips in play, as opposed to their value as ammunition, can only lead to poor results.

If you make your strategy decisions based on the idea that the fewer chips in your stack, the more each chip is worth, you will be too inclined to use low-risk, conservative playing strategies designed to “protect” your precious few chips. If I believed one of my short-stack chips was worth more than one of your big-stack chips, I would too often see it as foolish to risk one of my precious chips against one of your lower-value chips.

I would also fail to see the huge value of building a big chip stack early—or at any point in a tournament—since I would believe these extra chips to have lower value than the chips in the smaller stacks around me. Again, the reverse chip value theory is not only untrue from the start to the finish of a tournament, it represents the exact opposite of the true value of tournament chips, most of which is based on their utility.

Chip Utility Value

What I am hoping to do with this article is to help struggling tournament players who have been playing according to ideas derived from an incorrect reverse chip value theory, and who can’t figure out why they’re not winning.

An equal-skill tournament analysis would lead us to believe that if only two players remained at a final table, with one player holding 90% of the chips and the other holding 10%, the player with 10% would still have a 10% chance of winning the event. In fact, if these players were flipping coins to determine the winner, that would be true. But it’s not true if the player with the bigger chip stack is a skilled tournament player who understands how to use his chips. In this case, the player with only 10% of the chips has almost no chance of winning, even if that player with 10% of the chips matches the skill level of his opponent.

We saw a perfect example of this in a televised final table a few years back, when Paul Phillips and Dewey Tomko were heads up at the end, with Phillips holding most of the chips and Tomko extremely short-stacked. Hand after hand, Dewey Tomko pushed all-in, doing his best to give himself at least a coin-flip chance of winning.

Unfortunately for him, Paul Phillips had so many chips that he didn’t have to flip coins. He just let Dewey take the blinds until he found a premium starting hand, then he called Dewey’s all-in bet. It’s not a coin-flip situation if one player is betting everything on any two random cards and the other player is playing selective strong hands.

Dewey had almost no chance of winning this tournament against Phillips’ strategy. And I’m not criticizing Tomko’s all-in-on-every-hand strategy. In fact, with his desperately short chip stack, this was his most intelligent strategy. He knew he had second-place prize money locked up if he ended up with zero chips, and he knew his short stack chips weren’t worth squat. But he had no intention of waiting for premium cards—a strategy that would almost invariably lead him to going out with a whimper.

He could not afford to give up the blinds, nor could he afford to play poker. His chips had no utility value at all, and the poker skills he possessed were crippled without the chips to play.

So, What Are Your Poker Tournament Chips Worth?

My analyses of chip values in this article, based on coin-flip/equal-skill assumptions, are not meant to imply that these chip values ever exist in real world tournaments. They don’t. I only present the analyses in this article to correct misinformation about chip values, based on faulty logic, that has been accepted as correct among many tournament players for decades.

Here are the two chip-value formulas that should guide your strategy decisions from the start to the finish of every tournament:

1. The more skill you have, the more your chips are worth.

2. The more chips you have, the more your skill is worth.

In Conclusion: A New Aggressive Attitude toward Poker Tournament Strategy

If you want to make money in poker tournaments, the first thing you have to do is stop thinking of your chips as money. Your chips are ammunition, and no gunfighter ever came out on top by hoarding his ammo.

You must think in terms of building your chip stack right from the start of the tournament. Nothing is more valuable to a skilled player than a chip lead on his opponents, and this is true right from the start of a tournament to the finish. You must always keep in mind not only the cash value of chips, but their more important utility value and intimidation value.

See “The Implied Discount” for a further discussion of chip utility and the value of chips in poker tournaments when combined with skill.  ♠

Addendum: Questions and Answers from the Web Discussion About This Article

Player Question: First, the conclusions you draw from the heads-up situation aren’t really based on a percentage payback format. When the tournament becomes heads up, in effect $1800 is paid to each remaining player, and they are now playing for the remaining $1700. It is effectively now winner takes all. You alluded to this in your “Response to Mason Malmuth …” article.

Response from Arnold Snyder: The situation I describe is very much a percentage payback situation, identical to the percentage payback situation described in Gambling Theory. All percentage payback tournaments have a prize that is awarded to the first place finisher for winning all the chips, and other prizes awarded for finishing with zero chips in positions other than first, provided you outlast enough of your opponents.

The percentage payback example that Malmuth and Weitzman used in Gambling Theory was just as bad as the example that Malmuth used in his 2+2 posts where he tried to defend his position on why add-ons were bad for players who had big stacks. (For more information, see my article, The Implied Discount.)

What I show in my chart is that the two remaining players are only fighting over $1700. There was a “base prize” of $1800 already guaranteed to both. The example in my article shows how, once the remaining players are already guaranteed a base prize for finishing with zero chips, the percentage payback structure can lead to the erroneous conclusion that chips in a big stack are worth less than chips in a small stack, if you fail to account for this base prize.

If you read Weitzman’s example in Gambling Theory on how to chop up a prize pool, you’ll find that he uses the exact same method that I use in my article to determine a player’s dollar EV based on chip stack. In fact, instead of the two-player example I used in my article, I could have just used the example that Mark Weitzman used. It is a mathematically identical situation with slightly different numbers. He specifies that there are only two players remaining. He has the chip leader with 70% of the chips and his opponent with 30%. The prize pool awards $800 to first place and $200 to second place. He then assigns 70% of $800 and 30% of $200 to the chip leader and the rest to the player with fewer chips.

There are actually two ways to do the math on this, and Weitzman shows both methods. One way is exactly as described above. The other way is to award each player the base prize first (in Weitzman’s example $200), then give the player with 70% of the chips 70% of the $600 that is being fought over, and the player with 30% of the chips the other 30%. If you work it out both ways, you’ll discover that both methods come to the same answer. In my article, I explained the first method, but I could have used either and I would have come to the same conclusion.
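
To see the equivalence with Weitzman’s numbers: the first method gives the chip leader (.70 x $800) + (.30 x $200) = $560 + $60 = $620, and the second method gives him $200 + (.70 x $600) = $200 + $420 = $620. His opponent gets the remaining $380 either way.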

Weitzman correctly includes the base payout for zero chips in his calculation of EV for the prize pool chop, because both players have already earned this base payout, even if they finish with zero chips. But for calculating chip value, his statement implies that he erroneously assumed that this base payout was part of what the chips were worth rather than what zero chips were worth.

But, again, the base payout of $200 has nothing to do with the chips in players’ stacks at this point. Either player would get that $200 for finishing with zero chips. If Weitzman had done the math correctly on determining the value of each player’s chips–that is, realizing that the players were only fighting over the $600 that was not guaranteed–he would have seen that the individual chips in both players’ stacks had the exact same dollar value.

But instead, Weitzman erroneously concludes from his analysis that “This example illustrates a general point: If you are behind in a percentage-payback tournament, your chips are worth more than their face value, and if you are ahead, your chips are worth less than their face value.”

Malmuth and Sklansky then took Weitzman’s erroneous notion of “what your chips are worth” (based on the size of your chip stack) to mean that individual chips in a big stack are worth less than individual chips in a small stack, which they clearly are not. They also concluded that a chip you win is worth less than a chip you lose, which would only be true if Weitzman’s conclusion about chip values were true, but it’s not. The actual dollar value that any chip adds or subtracts is always identical, regardless of the size of the chip stack.

To clarify this player’s question for others who may not understand, he is asking if I am making the same mistake in my percentage payback analysis that Malmuth made in his analysis in his 2+2 posts, where he based bad rebuy advice on a model that gave all players an immediate rebate on their buy-in. Essentially, this rebate may be considered a payout for zero chips.

My answer to this player is that yes, both Malmuth’s model and mine award prizes for zero chips. But Malmuth failed to subtract the zero-chip payouts from his buy-in amounts, which made his rebuys cost five times more than a buy-in and made his evaluation of the profit potential of rebuys mathematically incorrect. And, in Gambling Theory, Malmuth makes a very similar mistake in failing to subtract the value of zero-chip payouts before making calculations of chip value.

My examples, by contrast, correctly subtract the payouts for zero chips.

Player Question: Second, the way you describe the value of chips seems strange to me. At the beginning of a tournament, if you were to sum up the value of all the chips you will have a total far less than the prize pool. That is because each player will also have some additional EV based on their chance of finishing in the money, but ending with 0 chips. This seems very awkward.

The additional value is tied to the chip stacks, so can’t that value be attributed to the chips? It seems to me that the player’s entire EV should be the same as his total chip value.

Response from Arnold Snyder: I agree with you that the correct method of calculating “chip value” will always come up with a value for all the chips in play that is less than the total prize pool in a percentage payback tournament. And the reason for this is exactly as you state: There is a large value to the zero-chip finishes. The only real importance of the chip value number is in refuting the false notion that the value of individual chips goes down as stack size goes up.

I personally think it’s a waste of time for real-life players to calculate chip value in this way. In my opinion, if the prize pool is $5300, then you are shooting for as much of that prize pool as you can get, and your skill determines the value of your chips, while your chips determine the value of your skill.

But in order to determine chip value after the players have arrived at the portion of an “equal-skill” tournament where there is a base payout guaranteed for those who remain, we must isolate the zero-chip value from the first-place finish value. Again, since I don’t believe that equal-skill tournaments even exist, I primarily see this as an exercise in futility. I’m only doing these analyses to refute erroneous notions of chip value that are leading to very bad strategy advice.

My example of the chip value at the start of the WSOP was included primarily to show players the absurdity of valuing chips as money. There can be a huge value to a zero-chip finish, and how much of that value you can extract will be a function of how well you can use your chips to outlast your opponents.

Players need to know that chips in a larger stack are not worth less than chips in a smaller stack. They must know that a chip you win is not worth less than a chip you lose. The analyses from which these theories came are deeply flawed.


Best Online Blackjack Strategy

Optimal Blackjack Strategy with a Wagering Requirement

by Arnold Snyder
(From How to Beat Internet Casinos and Poker Rooms, Cardoza Publishing, 2005)
© 2005 Arnold Snyder

When you are playing blackjack online to meet the wagering requirement for a bonus in an Internet Casino, or in any other situation where you have a wagering requirement, the best basic strategy for blackjack changes slightly.

For those who already know blackjack basic strategy and are surprised to learn that correct strategy with a wagering requirement would be different, here is the logic as it applies to a double down decision:

If I want to know whether I should double down on a total of 9 against a dealer deuce, I have to weigh the return on getting double the money on the table with this strong total against giving up the option to hit the hand again if I catch a 2 or 3 on it. To double down on 9 v. 2 and catch a deuce is a truly miserable result. Here I am with a total of 11 that I cannot take another card on, and with double my bet on the hand!

As it turns out, this is one of those borderline decisions that changes according to the number of decks in play. In single- and double-deck games, the basic strategy is to double down on 9 v. 2, because having that one deuce taken out of play (the dealer’s upcard) removes a significant enough percentage of the remaining deuces to make the double down the optimal play.

In fact, with three decks, it’s correct to double down on 9 v. 2 if my total of 9 is composed of a 7 and a 2, since this means two deuces have been removed from the remaining cards. But as soon as we get to 4 or more decks, the basic strategy for 9 v. 2 is to hit, not double. That’s how the logic works.

But let’s look at how much of a borderline decision this is. In a shoe game (the number will be slightly different with 4, 6, or 8 decks), if I am dealt a total of 9 v. 2, I have approximately a 7.85% advantage over the house if I just hit. In dollars and cents, this means that with a $100 bet, my average return on this hand with basic strategy (hit) is $7.85.

How much money do I lose if I double down? Well, not really that much. If I double down on this hand in a 4-deck game, my win expectation is about $7.45. Card counters who table hop and play only plus counts just about always double down on 9 v. 2 because with most balanced count systems the index number for doubling down on this hand is 0. If you have just the slightest positive count, doubling down becomes the correct play.

In any case, since a return of $7.45 is less than a return of $7.85, basic strategy with 9 v. 2 is to hit in all games with more than three decks, not double down.

But, consider an Internet 4-deck game where I have a wagering requirement to fulfill. Let’s say I have a total wagering requirement of $2000, and I’ve already played through $1800 in action. In other words, I have exactly $200 of action left to meet my wagering requirement. The casino allows a $100 max bet. I place a $100 bet, and I am dealt a 9 v. 2. How should I play it?

Consider:

If I hit, I have an expected return on this hand of $7.85. I then must play one additional $100 hand, and I must assume that this random hand will be played at the house edge of 0.50%. This second $100 hand that I must play to meet my wagering requirement has a negative return of -$0.50. So, for these two hands, my total return is $7.85 – $0.50 = $7.35.

If, however, I violate standard basic strategy and double down on my 9 v. 2, my total return on the $200 in action will be 10 cents higher, $7.45. So, when there is a wagering requirement, basic strategy for the 4-deck game changes.

But, with 6 decks, if I double down on 9 v. 2, it will cost me about 21 more cents than hitting and playing a second hand against the house edge, so with 6 or more decks, it is best to follow the standard multiple-deck basic strategy for 9 v. 2, and just hit.

The logic here does not require that you be down to the last two bets of a wagering requirement. As long as you are playing to meet a wagering requirement, every extra bet (double or split) that you decline on a hand where you have the option means one more bet on a random hand at the house edge. So the value of doubling down or splitting must always include the value of eliminating a random hand that would otherwise be played at the house advantage.
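
Here is a sketch of that comparison in Python, using the 4-deck figures quoted above ($7.85 to hit, $7.45 to double, per $100 bet). The EV numbers are inputs taken from the text, not computed here, and the function name is my own:

def best_play(ev_hit, ev_double, bet=100.0, house_edge=0.005):
    # With a wagering requirement, doubling puts an extra bet's worth of
    # action on a known hand, saving one filler hand played at the house edge.
    hit_total = ev_hit - bet * house_edge   # hit, then one more random hand
    double_total = ev_double                # double: no filler hand needed
    return ("double", double_total) if double_total > hit_total else ("hit", hit_total)

print(best_play(7.85, 7.45))                # ('double', 7.45): 7.45 > 7.85 - 0.50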

In any case, the value of following a Wagering Requirement Basic Strategy as opposed to a standard blackjack basic strategy where no wagering requirement is imposed is negligible. But it does exist, and smart players may want to know about it. For those who are out there playing on bonuses with wagering requirements in Internet casinos, here are the changes:

9 v. 2 = double (4 decks or fewer)
A7 v. 2 = double
A6 v. 2 = double
8 v. 6 = double down in a 2-deck game with a 5-3, but not with a 6-2
11 v. A = double down in a 2-deck game

Normal basic strategy with 9 v. 2 is to double down in 1 and 2-deck games only. With a wagering requirement, we should also double down in 4-deck games. In a 6-deck game with a wagering requirement, however, this double down would cost us an extra 21 cents on a $100 bet, so we only make the altered double down in a 4-deck game, unless we’re looking for a cheap camo play in 6-deck.

Normal basic strategy with a total of 8 v. 6 is to double down in single-deck only. With a wagering requirement, we are correct to double down on 8 v. 6 in 2-deck games if our cards are 5-3, but not 6-2. With more than 2 decks, it is not correct to double down on 8 v. 6 with a wagering requirement.

Double down on 11 v. A in a 2-deck game. Normal basic strategy is to double down on 11 v. A in single-deck only, or in multi-deck if the dealer hits soft 17. In The Theory of Blackjack, Griffin provided refinements to this rule, namely that if the player’s 11 is a 6-5 or 7-4 (but not a 9-2 or 8-3), it is also correct to double down in 2-deck. With a wagering requirement, it is optimal to double down on 11 v. A with 6-5, 7-4, and 8-3, but 9-2 is still on the other side of the borderline. With more than two decks, however, do not double down on this hand.

Comprehensive Wagering Requirement Online Blackjack Strategy for Any Number of Decks

STAND

Stand    2    3    4    5    6    7    8    9    X    A
17       S    S    S    S    S    S    S    S    S    S
16       S    S    S    S    S    H    H    H    H1   H
15       S    S    S    S    S    H    H    H    H    H
14       S    S    S    S    S    H    H    H    H    H
13       S    S    S    S    S    H    H    H    H    H
12       H    H    S    S    S    H    H    H    H    H
A7       S    S    S    S    S    S    S    H    H    S2

DOUBLE DOWN, HARD TOTALS

Double    2     3    4    5     6     7    8    9    X     A
11        D     D    D    D     D     D    D    D    D3    D12
10        D     D    D    D     D     D    D    D    H     H
9         D11   D    D    D     D     H    H    H    H     H
8         H     H    H    D5    D13   H    H    H    H     H

DOUBLE DOWN, SOFT TOTALS

Soft Totals    2    3    4     5    6     7    8    9    T    A
(A,9)          S    S    S     S    S     S    S    S    S    S
(A,8)          S    S    S     S    D5    S    S    S    S    S
(A,7)          Ds   Ds   Ds    Ds   Ds    S    S    H    H    S2
(A,6)          D    D    D     D    D     H    H    H    H    H
(A,5)          H    H    D     D    D     H    H    H    H    H
(A,4)          H    H    D     D    D     H    H    H    H    H
(A,3)          H    H    D5    D    D     H    H    H    H    H
(A,2)          H    H    D5    D    D     H    H    H    H    H

SURRENDER (LATE)

Late Surrender    2   3   4   5   6   7   8   9      X       A
17                -   -   -   -   -   -   -   -      -       SUR6
16                -   -   -   -   -   -   -   SUR7   SUR     SUR8
8-8               -   -   -   -   -   -   -   -      -       SUR9
15                -   -   -   -   -   -   -   -      SUR10   SUR6
7-7               -   -   -   -   -   -   -   -      SUR5    SUR9

S = Stand, H = Hit, D = Double Down (if doubling not available, then hit), Ds = Double Down (if doubling not available, then stand), SUR = Surrender, - = Do Not Surrender

1 = Stand with 3 or More Cards
2 = Hit in Multi-Deck, or if Dealer Hits S17
3 = European No-Hole Hit
4 = S17 Multi-Deck or European No-Hole Hit
5 = Single-Deck Only
6 = With Hit Soft 17 Only
7 = Single Deck Hit
8 = Single Deck, X-6 Only
9 = With Hit Soft 17 in Multi-Deck
10 = Excluding 8,7
11 = 4 decks or fewer only
12 = Always double in H17 games. In S17 games, double in single and 2-deck games only.
13 = Double in single deck games. In 2-deck games, double on 5-3 but not 6-2. With more than 2 decks, do not double.

PAIR SPLITS
WITH DOUBLE AFTER SPLITS

Pairs    2    3     4     5    6    7     8     9    T     A
(A,A)    Y    Y     Y     Y    Y    Y     Y     Y    Y     Y1
(T,T)    N    N     N     N    N    N     N     N    N     N
(9,9)    Y    Y     Y     Y    Y    N     Y     Y    N     N
(8,8)    Y    Y     Y     Y    Y    Y     Y     Y    Y1    Y1
(7,7)    Y    Y     Y     Y    Y    Y     Y2    N    N     N
(6,6)    Y    Y     Y     Y    Y    Y2    N     N    N     N
(5,5)    N    N     N     N    N    N     N     N    N     N
(4,4)    N    N     Y2    Y    Y    N     N     N    N     N
(3,3)    Y    Y     Y     Y    Y    Y     Y2    N    N     N
(2,2)    Y    Y     Y     Y    Y    Y     N     N    N     N

PAIR SPLITS
NO DOUBLE AFTER SPLITS

Pairs    2     3     4    5    6    7    8    9    T     A
(A,A)    Y     Y     Y    Y    Y    Y    Y    Y    Y     Y1
(T,T)    N     N     N    N    N    N    N    N    N     N
(9,9)    Y     Y     Y    Y    Y    N    Y    Y    N     Y
(8,8)    Y     Y     Y    Y    Y    Y    Y    Y    Y1    Y1
(7,7)    Y     Y     Y    Y    Y    Y    N    N    N     N
(6,6)    Y2    Y     Y    Y    Y    N    N    N    N     N
(5,5)    N     N     N    N    N    N    N    N    N     N
(4,4)    N     N     N    N    N    N    N    N    N     N
(3,3)    N     N     Y    Y    Y    Y    N    N    N     N
(2,2)    N     Y2    Y    Y    Y    Y    N    N    N     N

INSURANCE: NO


There may be a few other violations of standard blackjack basic strategy that would bring you an extremely small extra return in particular games, based on the exact number of decks in play and the precise rule set, when you are playing to meet a wagering requirement, but they will not be worth much to players in dollars and cents.

Also, some players have questioned whether correct online blackjack basic strategy would change again in a situation where you have a win target, such as when you are playing on a sticky bonus. It turns out that the online blackjack strategy for win target situations with a wagering requirement is the same as the regular Wagering Requirement Online Blackjack Strategy. I will explain why in a separate Blackjack Forum article.

Again, the value of following a Wagering Requirement Online Blackjack Basic Strategy as opposed to a standard blackjack basic strategy where no wagering requirement is imposed is negligible. But it does exist, and smart players should know it. ♠


Red 7 vs Hi-Lo

Six Deck Unbalanced Red 7 Running Count Conversion to Equivalent Hi-Lo Balanced True Count and Sensitivity of True Count to Errors in Estimating Decks Remaining

by Conrad Membrino
(From Blackjack Forum Vol. XVII #4, Winter 1997)
© Blackjack Forum 1997

rc.u = 23456p + (7p/2) – TAp

(Notation: 23456p = number of 2s through 6s played, 7p = number of 7s played, TAp = number of tens and aces played.)

The Red 7 count is almost equivalent to the hi-lo count plus the 7s counted as 1/2 each.

rc.u = unbalanced running count = 23456p + (7p/2) – TAp
tc.b = balanced true count
n = number of decks
dp = decks played
dr = decks remaining
rc.u(tc.b) = unbalanced running count corresponding to a balanced true count of tc.b
rc.hl = hi-lo running count = 23456p – TAp
rc.u = rc.hl + (7p)/2

If the hi-lo has a true count of t, then rc.hl = t*dr. Since each deck contains four 7s, the expected value of 7p after dp decks is 4*dp, so:

rc.u = rc.hl + ExpVal(7p)/2 = t*dr + 2*dp = t*dr + 2*(n – dr) = (t – 2)*dr + 2*n

rc.u = 2*n + (tc.b – 2) * dr
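
As a quick check, take n = 6, a hi-lo true count of t = 3, and two decks played (so dr = 4): rc.u = t*dr + 2*dp = 12 + 4 = 16, and (t – 2)*dr + 2*n = 4 + 12 = 16. This matches the entry for tc.bal = 3 at two decks played in the chart below.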

Number of Decks = 6

Red-7 Running Counts Corresponding to Various True Counts for a Six Deck Game

Six Deck Game: rc.unbal = 23456p + (7p/2) – TAp

                          decks played
tc.bal   rc.unbal      1     2     3     4     5
0        12 – 2*dr     2     4     6     8     10
1        12 – dr       7     8     9     10    11
2        12            12    12    12    12    12
3        12 + dr       17    16    15    14    13
4        12 + 2*dr     22    20    18    16    14
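
A few lines of Python (my own sketch) regenerate the chart above from the conversion formula:

n = 6                                        # decks in play
for tc in range(5):                          # balanced true counts 0 through 4
    drs = [n - dp for dp in range(1, 6)]     # decks remaining after 1-5 decks played
    print(tc, [2 * n + (tc - 2) * dr for dr in drs])
# 0 [2, 4, 6, 8, 10]
# 1 [7, 8, 9, 10, 11]
# 2 [12, 12, 12, 12, 12]
# 3 [17, 16, 15, 14, 13]
# 4 [22, 20, 18, 16, 14]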

Sensitivity of True Count to Errors in Estimating Decks Remaining

Estimation of True Count Using the Red 7:

rc.r7 = red 7 running count
n = number of decks
tc = true count
dr = decks remaining
rc.r7 = 2*n + (tc – 2) * dr

Number of Decks = 8

Red-7 Running Counts Corresponding to Various True Counts for an Eight Deck Game

Eight Deck Game: rc.r7 = 23456p + (7p/2) – TAp

                          decks played
tc    rc.r7           3     4     5     6     7
2     16             16    16    16    16    16
3     16 + dr        21    20    19    18    17
4     16 + 2*dr      26    24    22    20    18

Estimation of true count with the Red 7 in an Eight Deck Game:

  1. Estimate decks remaining.
  2. Compare the Red 7 running count with 16, 16 + dr, or 16 + 2*dr for true counts of 2, 3, or 4.
  3. Use the calculated true count with High-Low strategy indices.*

(*Ed. Note: Membrino is suggesting here that you may use this true count method not only to estimate your advantage, but also to alter your strategy with all Hi-Lo strategy indices. This is not the way I have developed the Red 7 in the new Blackbelt, but if you used a Starting Count of 0, then you could use this true count methodology with any standard set of Hi-Lo count-per-deck indices. –Arnold Snyder)

Sensitivity of True Count to Errors in Estimating Decks Remaining:

  1. The closer to the pivot point, the less sensitive the true count is to errors in estimating the decks remaining.
  2. At the pivot point, the true count is independent of the decks remaining.
  3. Pivot Point of the Red 7: True Count = 2
  4. Pivot Point of Hi-Lo: True Count = 0
  5. At True Counts ≥ 2:
    (a) Red 7 is closer to its pivot point (tc=2) than the Hi-Lo is to its pivot point (tc=0).
    (b) Red 7 is less sensitive to errors in estimating decks remaining when calculating true count.
    (c) Red 7 gives more accurate true counts than Hi-Lo.

Example:

A = Actual; E = Estimated
dr:a = actual dr; dr:e = estimated dr
tc:a = actual tc; tc:e = estimated tc

Eight Decks

r7 = red 7; hl = hi-lo
tc.r7 = 2 + (rc.r7 – 16) / dr
tc.hl = rc.hl / dr

Eight Decks
dr:a = 4 and tc:a = 3

               Red 7                          Hi-Lo
        estimated      error          estimated      error
dr:e   rc.r7   tc:e   (tc:e – tc:a)   rc.hl   tc:e   (tc:e – tc:a)
6       20     2.7       -0.3          12     2.0       -1.0
5       20     2.8       -0.2          12     2.4       -0.6
4       20     3.0        0.0          12     3.0        0.0
3       20     3.3        0.3          12     4.0        1.0
2       20     4.0        1.0          12     6.0        3.0
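
The same comparison is easy to script. Here is a minimal Python sketch (my own illustration) that reproduces the table above: with dr:a = 4 and tc:a = 3 in an eight-deck game, the Red 7 running count is fixed at 20 and the Hi-Lo running count at 12, and only the estimate of decks remaining varies:

# A minimal sketch reproducing the sensitivity table above.
rc_r7, rc_hl = 20, 12        # running counts fixed by dr:a = 4 and tc:a = 3
tc_actual = 3

for dr_est in (6, 5, 4, 3, 2):               # estimated decks remaining
    tc_r7 = 2 + (rc_r7 - 16) / dr_est        # Red 7 true count estimate
    tc_hl = rc_hl / dr_est                   # Hi-Lo true count estimate
    print(dr_est,
          round(tc_r7, 1), round(tc_r7 - tc_actual, 1),
          round(tc_hl, 1), round(tc_hl - tc_actual, 1))

# A two-deck error (dr:e = 6) costs the Red 7 only 0.3 true counts,
# but costs the Hi-Lo a full 1.0, exactly as in the table.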

McDowell’s Blackjack Ace Prediction

Fundamental Mistakes in Math and Methodology in David McDowell’s Blackjack Ace Prediction

by Arnold Snyder
(From Blackjack Forum Spring 2005)
© Blackjack Forum 2005

David McDowell’s Blackjack Ace Prediction is not a book I recommend for any blackjack player who wants to learn to track aces for profit. Despite the author’s claims, you cannot learn to track aces profitably from the information in this book. The author provides a modicum of the theory of ace location or prediction, but his understanding of casino shuffles, tracking methodology, and ace prediction in the real world is seriously flawed, and his blackjack math is replete with errors.

I usually ignore blackjack books that are worthless. I see little point in trashing some unknown author whose lack of credentials will ensure him a place in obscurity.

But I cannot ignore this book. I believe McDowell attempted to invent a valid ace-location method that could be used by serious blackjack players. I believe McDowell attempted to run his ideas by a number of notable blackjack experts to get their input on his methods. I do not believe he was trying to pull a scam on players and sell a phony ace prediction system. I think his heart was mostly in the right place.

Unfortunately, this is not just some unknown nobody that I can send a polite personal note to and tell him his system is all wet. McDowell's book has been published with endorsements on the back cover from half a dozen notable blackjack authorities, and my own name is invoked throughout McDowell's text in such a way that my own writings seem to be lending credibility to his false conclusions.

I have been getting emails from players telling me that they are actually considering using McDowell's ace prediction techniques in casinos. One email is particularly disturbing to me because it is from a very knowledgeable card counter whom I have known for many years, and who I know has recently lost a substantial portion of his bankroll due to miserable negative fluctuation. He is hoping that the 4% edge McDowell has calculated for his ace location techniques will restore his bankroll a lot quicker than the 1% count game he is more accustomed to.

So, rather than concern myself with the feelings of a well-intentioned but misguided author, I will be blunt in my remarks on this book. I will not make any attempt myself to provide a comprehensive critique of this book. McDowell’s conclusions and methods have so many flaws that I could write a book myself just on the mistakes in his book! Instead, I will point out one of the key errors in the math. I will also publish a review later by experienced trackers that addresses some of the book’s worst tracking methodology flaws, as well as any further response that seems to be needed.

You may be wondering, as another player put it in an email to me, if in fact McDowell has at last told the big secrets of the ace trackers, and whether I, in fact, might not be just trying to “put the lid back on” the pros’ secrets in order to protect them. “How could all those other experts who endorsed McDowell’s book be so wrong?” this player asked me.

The fact is that the other blackjack experts who endorsed the book either didn’t actually read it or didn’t do the math. So, here’s my suggestion if you believe I’m just trying to cover up the professional gamblers’ secrets.

I am going to provide a simple mathematical analysis in this article. McDowell claims roughly a 4% advantage for his methods on what he describes as a two-riffle R&R shuffle. Snyder claims McDowell’s advantage is roughly 0%. This isn’t a judgment call based on opinions. It’s math. Either do the math yourself, or take it to another expert for help.

A Simple Tell that McDowell Doesn’t Know What He’s Doing in Blackjack Ace Prediction

This example of one of the blackjack math errors in this book, the main one I am going to address, can be found on page 114, where the author describes how he estimates his advantage from tracking aces. I’m using this example because he explains that he used “Snyder’s rule of thumb” to develop the formula, so I fear that readers might conclude that McDowell’s findings would reflect my own.

This rule of thumb, as McDowell describes it, says that if the player is playing heads up, and he bets on only one hand when a key card predicts an ace, then over the long run any keyed aces that appear will be split 50/50 with the dealer.

Let me take a moment to point out that this is in no way my “rule of thumb” or overall recommendation for the best way to approach ace-sequencing. I specified in my Blackjack Shuffle Tracker’s Cookbook that my coverage of ace-sequencing was a cursory treatment that covered only a few of the basic theories, and that I considered ace location via a single key card, and playing a single hand, advisable only in a specific type of game primarily as a “side” strategy to be used in conjunction with other tracking/counting techniques.

But since the player McDowell describes is using no technique to “steer” the aces (that is, he is not playing multiple hands as necessary in an attempt to keep any keyed aces away from the dealer), I will go along with his assumption that what he terms my “rule of thumb” would fit this situation. Since the key card on the prior round that signals an impending ace would have an equal likelihood of being dealt as any card in that round, then I would agree that the ace the key card signals would as likely go to the dealer hand as the player hand on the next round where the player is betting on the ace.

Using McDowell’s single-key/single-bet method, in the shuffle he describes, he estimates that for every 100 times he bets on the ace because he saw his key card, he will actually be dealt an ace on that hand 13 times. He estimates that this is about 6 extra aces per 100 bets. He then assumes, via “Snyder’s rule of thumb,” that the dealer will also get 6 extra aces “by accident” (his words). And he points out again, as per Snyder, that the value of the ace to the player is much greater than the value of the ace to the dealer (51% versus 34%, to use his numbers), so that even if the player is splitting the extra aces with the dealer, the player will enjoy a substantial win rate. Using all of these assumptions, McDowell calculates his win rate on these hands as 4.2%.

I have known a number of successful ace trackers who have used various methods to locate aces, none of which are described by McDowell, and most have told me they estimate their overall advantage over the house at about 2% to 4%. So, McDowell’s estimate of the potential advantage from ace-location is not in itself inordinate.

What I could not fathom, however, was how he came up with this advantage if he was only hitting the ace on 13 out of every 100 times he bet on its coming. Most of the ace trackers I have known tell me they expect to hit the ace 40% to 60% of the times that they bet on it, depending on the shuffle, the number of hands they are betting, the number of keys they are using, etc.

(The higher percentages of hit rates assume the player is betting on multiple hands to catch the ace and—especially—to act as a "buffer" against the dealer getting the ace.) But McDowell comes up with this 4.2% win rate when he is failing to get an ace on his bet 87% of the time!

So, without stopping right now to show all of the places where his math and methodology went wrong, I will first show how we figure out what the player’s edge would really be using McDowell’s hit rate. To keep things simple, I will also use the same numbers he used with regards to the value of an ace: 51% advantage if it hits the player’s hand, and -34% if it hits the dealer’s hand.

To estimate the value of this hit rate, the first thing to do is figure out how many times per 100 hands the player would be dealt an ace as the first card at random. Since there is one ace per 13 cards, this is a simple calculation. If a blindfolded monkey were placing these bets, he would hit a first-card ace 7.7 times per 100 hands. (McDowell estimates this number as an even 7 times, but I prefer to use the exact number, 7.7.) Therefore, if the player is hitting 13 first-card aces per 100 times that he bets on hitting one, he is hitting 5.3 extra aces per 100 times he bets on one coming.

To keep things simple, I will also use McDowell’s assumption that the player is betting only one spot, and that the keyed aces that appear are being split with the dealer. So, the dealer is also getting an extra 5.3 aces per 100 hands (not 6, as per McDowell).

To calculate the value of this hit rate, I first figure out the value of the extra aces when they land on the player’s hand (assuming for simplicity $1 bets each time an ace is predicted):

5.3 x 0.51 = $2.70 per 100 bets on the ace.

I then calculate the cost to the player when the dealer gets the extra aces:

5.3 x -0.34 = -$1.80 per 100 bets on the ace.

I then calculate the player disadvantage on all of the remaining hands on which the player placed a $1 bet for the ace, assuming the player is facing a standard house (dis)advantage of -0.50%. This would occur on 89.4 hands (subtracting from 100 the total number of hands, 10.6, where either the player or dealer got the keyed ace).

89.4 x -0.005 = -$0.45 per 100 bets on the ace.

So, the player’s dollar win per 100 ace bets is:

$2.70 – $1.80 – $0.45 = $0.45

Which is 45 cents profit per $100 bet ($1 x 100), or a percentage win rate of 0.45%.
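
For anyone who wants to verify the arithmetic, here is a minimal Python sketch (my own, using the same rounded figures as above) of the whole calculation:

# A minimal sketch of the corrected win-rate arithmetic, using the
# rounded figures from the text (7.7 random aces per 100 hands,
# 51%/-34% ace values, -0.5% house edge on the remaining hands).

hit_rate    = 13                       # first-card aces per 100 ace bets
random_aces = 7.7                      # aces a blindfolded monkey would hit
extra_aces  = hit_rate - random_aces   # 5.3 extra aces from tracking

player_gain = extra_aces * 0.51        # keyed ace lands on the player's hand
dealer_cost = extra_aces * -0.34       # keyed ace lands on the dealer's hand
other_hands = 100 - 2 * extra_aces     # 89.4 hands played at the house edge
house_drag  = other_hands * -0.005

print(round(player_gain + dealer_cost + house_drag, 2))   # 0.45, i.e. 0.45%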

Obviously, a win rate of only 0.45% is quite a bit different from the 4.2% win rate provided by McDowell. In fact, McDowell’s number is off by almost a factor of ten!

And there is another very important point that must be clarified here: This 0.45% win rate is the player’s win rate only when betting on a predicted ace. This is not his overall predicted win rate on the game if we assume he has any “waiting bets” while he is playing hands and watching for his key cards. Instead, this is what his expectation would be if he was betting only on the keyed aces, and he (and the dealer) were each getting an extra 5.3 aces per 100 hands.

And note that this is the player’s percentage win rate on these bets. (It makes no difference if the player is betting $1 or $1000 on these bets, his percentage win rate would be 0.45%.) I will address the cost of the waiting bets, and the effect of the player actually raising his bet when the ace is predicted, below.

For now, let’s return to McDowell’s calculation of his advantage to find out why there is such a huge discrepancy in our results. It turns out there is a rather gross error in his math. He assumes that the player gets a total of 13 first-card aces, but that the dealer gets a total of only 6 first-card aces. In other words, he gives the player both the random aces (7.7) and the extra aces from tracking (5.3), but he assumes that the dealer only gets a total of 6 aces, fewer than even the random number expected per 100 hands, despite specifying a playing style where the dealer will be getting the same number of extra aces as the player, due to “Snyder’s rule of thumb.”

There are various ways of doing the math for this problem. But you cannot simply fold the player's expected random aces into the estimate of advantage. Or, at least, not the way McDowell has done it. Let me explain why…

We know that in a completely random game the player and dealer will each get 7.7 first-card aces per 100 hands. Since the expected value of these aces to the player is 51% on the aces dealt to the player and -34% on the aces dealt to the dealer, there is a very strong player advantage on these 15.4 hands per 100—in fact, an average advantage per hand of about 8.5%.

If we then estimate that, on the remaining 84.6 hands per hundred when neither the player nor the dealer is dealt an ace, there is a standard house advantage of -0.50%, we would have to conclude that a blindfolded monkey could beat blackjack simply by betting big on every hand.
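
In code (a sketch of my own, using the figures just given), the contradiction is a two-liner:

# Pricing 15.4 ace hands at +8.5% and the other 84.6 hands at -0.5%
# implies a flat-betting monkey beats the game -- which is impossible.
ace_ev  = 7.7 * 0.51 + 7.7 * -0.34     # +1.31 units on the 15.4 ace hands
rest_ev = 84.6 * -0.005                # -0.42 units at the alleged -0.5% edge
print(round(ace_ev + rest_ev, 2))      # +0.89 per 100 flat $1 bets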

So, where is the error in this thinking? The error, and it is a serious one, is in believing that the house has only a 0.50% advantage on the 84.6 hands when no first-card aces are occurring.

That 0.50% house edge assumes that we are off the top of a full 6-deck shoe, and that an ace will be dealt as the first card on one hand out of every 13 for both the player and the dealer. McDowell’s formula removed all 13 of the player’s first-card aces to calculate the player’s ace-hit advantage, but only used the 6 “extra” aces the dealer was dealt to calculate the player’s disadvantage on these 6 hands, leaving the dealer’s 7 “random” aces in the 81 remaining hands, where he then estimates the house edge at 0.50%.
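
In fact, McDowell's 4.2% falls right out of this mismatched bookkeeping. Here is a sketch of the reconstruction (my own arithmetic, built from the numbers just described, so treat it as a reading of his method rather than a quote from his book):

# McDowell's figure: the player's full 13 first-card aces, only 6
# dealer aces, and 81 leftover hands priced at the -0.5% edge.
mcdowell = 13 * 0.51 + 6 * -0.34 + 81 * -0.005
print(round(mcdowell, 1))    # 4.2 -- essentially the win rate he claims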

If our assumption, however, is that the player will be dealt no first-card aces in these 81 hands (because we have already accounted for the player’s share of both random and keyed aces), whereas the dealer will get his 7 “random” first-card aces in these 81 hands (which is how McDowell does the math), then the house edge on these 81 hands is roughly 4.5%, not 0.50%.

You may be tempted to correct McDowell’s error by simply taking the player’s 13 first-card aces x 51%, and the dealer’s 13 first-card aces x -34%, and assuming that the remaining 74 hands in which neither the player nor dealer are dealt an ace as first card are played with a -0.50% house edge. Wrong. If we assume that no aces are dealt to either the player or dealer as a first card on their hands—as we must because both have now received their full share of both keyed and random first-card aces—but that all other cards are dealt in their expected proportions, then the house edge against the player on these 74 hands is almost 1.5%.

Note that we are still discussing only the hands on which the ace tracker has bet on an impending ace, as predicted by his key card. If the ace tracker using McDowell’s method is able to bet on 3 to 4 hands per shoe, then on 3 to 4 hands per shoe he will be betting with an advantage over the house of about 0.45%. Now let’s return to the cost of the waiting bets. Subject to the rules, on the other 40 or so hands per shoe when no ace is predicted, the ace tracker will be playing against a house advantage of about 0.50%. So, if he flat bets $100 when no ace is predicted, then bets $1000 when his key card predicts an ace is coming, he will almost—but not quite—be playing a break-even game.

In other words, the actual overall advantage of his system is not 0.45%, but quite a bit lower. You need a pretty big spread to break even, and you might get an edge of about 0.20% with a 1-to-20 spread; but can anyone afford to play with such a small edge over the house?
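
Here is a rough Python sketch of that overall-edge arithmetic. The bet sizes and hands per shoe are assumptions pulled loosely from the text (about 4 ace bets and 40 waiting bets per shoe), so the exact output shifts with those assumptions, but it lands in the same narrow band described above:

# A rough sketch: overall edge with waiting bets included, assuming
# 0.45% on the predicted-ace bets and -0.50% on the waiting bets.

def overall_edge(ace_bet, wait_bet, ace_hands=4.0, wait_hands=40.0):
    profit = ace_hands * ace_bet * 0.0045 + wait_hands * wait_bet * -0.005
    action = ace_hands * ace_bet + wait_hands * wait_bet
    return profit / action

print(f"{overall_edge(1000, 100):.3%}")   # 1-to-10 spread: just under break-even
print(f"{overall_edge(2000, 100):.3%}")   # 1-to-20 spread: a small positive edge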

You cannot (in practical terms) beat a blackjack game via ace tracking if you are only successful at hitting the ace on a total of 13 out of 100 bets. You must use a method that will get you closer to at least a 40% hit rate, and 50+% would be much preferable.

To save myself from having to answer a million posts from fellow math geeks, let me say clearly that I know that this methodology I’m proposing is an oversimplified way of addressing McDowell’s win rate calculation. The actual win rate of an ace tracker is complicated by numerous other factors. For example, if we are successfully locating 4 aces per shoe via key cards, then the house edge on a hand where there is no key card predicting an ace should reflect the fact that we are playing in a six-deck shoe game minus four aces, since the number of random aces available to us has been diminished by 4.

That would make our waiting bets more expensive than 0.50%. Some of this is explained in my Shuffle Tracker’s Cookbook, but I’m not going to go into it all here. I’m simply trying to point out a glaring error in McDowell’s methodology, not provide a comprehensive guide for ace trackers.

I have already said that I do not believe the author was attempting to pull a scam with this book, and that, based on the endorsements on the cover, it appears he made an effort to send the book to noted authorities for an opinion. Unfortunately, he did not send the book to any actual ace trackers.

It is even more unfortunate that within the text he seems to represent that he himself has used these methods with great success. I suspect that some (or all) of the authorities he sent his manuscript to believed that he had used these methods with success and was writing from personal experience. So, they didn’t question, or even look at, the math.

In fact, McDowell tells me he used other ace location methods on much simpler shuffles many years ago, but has never attempted to use the methods he proposes in this book in the shuffles he describes. And, unfortunately, the methods in this book will not succeed.

Another Doozy in McDowell’s Blackjack Ace Prediction

There are many other examples of bad math in the book. I’m not going to waste my time going into all of them. But here’s another quick doozy:

On page 100, the author describes a player hand of hard 15 versus a dealer 4 upcard, and the hand is sandwiched between two hands that contain aces that would wind up being adjacent to each other in the discard tray if that hard 15 were not on the table. McDowell states that “an Ace tracker may deliberately hit the hand until it busts” so that the two aces on the hands on either side of it will be adjacent to each other in the discard tray.

Well, maybe. But then, he says that the cost of this play is “relatively small, about 0.20 of the bet.” Hmmm… Since when does deliberately busting a hard 15 cost only 20% of the bet? According to my copy of Griffin’s The Theory of Blackjack, the player’s expectation when standing on a hard 15 versus 4 is -21% of the bet. Deliberately busting any hand provides an expectation of -100% of the bet, or an additional -79% in this case.

This, of course, is silly. Maybe McDowell didn’t mean to say that the tracker would “deliberately hit the hand until it busts.” It’s kind of hard to imagine the looks he’d get from the dealer and other pit staff if he drew a 6 on his 15 for a total of 21, then insisted on hitting it again!

Dealer: But, sir, you have 21.

McDowell: I know how to add, damn it! Now hit that hand!

The problem, I'm sure, is that McDowell never actually did this stuff, so he didn't think it through. He simply looked up the cost of violating basic strategy on 15 versus 4, which is about 20%, and used this cost as the cost of "deliberately busting" the hand. This is the kind of problem that always occurs when someone is thinking theoretically instead of realistically, because the person never actually did what they are proposing.

Anyway, I’m sitting here looking at the endorsements on the book, and I’m thinking, “Steve! Ed! Don! I know you guys have never tracked aces, but couldn’t you at least have taken out a calculator and spent ten minutes going over some of the math before jumping on this bandwagon? Does Snyder always have to be the bad guy delivering the bad news?”

There are at least a dozen more examples of bad math in Blackjack Ace Prediction, but this is all the time I’m going to spend on it. Anyone who understands gambling math can go over it and find the errors fairly easily. The problem is that if you correct the errors, there just isn’t much of a book left. As for the tracking methodology, I do not at all mean to imply that, because I’ve addressed a math problem, the other stuff is okay. The system this book touts doesn’t work, but those problems will be addressed in subsequent reviews.

Maybe someday Tommy Hyland or Al Francesco or another of the real-life ace trackers out there will write a book on this subject and really tell you how to do it. The top ace trackers are hitting the ace on 40% to 70% of their bets, not 13%. If you want to track aces and actually make a profit from the endeavor, David McDowell’s book is not what you’re looking for.

And if anyone cares to argue about “Snyder’s rule of thumb” on this website, please post your arguments in the Fight Club where I can invoke “Snyder’s rule of finger.” ♠


The Blackjack Shuffle-Tracker’s Cookbook: How Players Win (and Why They Lose) with Shuffle Tracking

Comments on the Blackjack Shuffle Tracker’s Cookbook

by Arnold Snyder

If you think nothing new has happened in the world of winning blackjack strategies in the past couple of decades, read The Blackjack Shuffle Tracker’s Cookbook.

If you think you already know how to track shuffles, I’ll bet you don’t. Read the Cookbook.

Although the full 3-part Blackjack Forum Shuffle Tracking Series is contained in The Blackjack Shuffle Tracker’s Cookbook, the Cookbook also contains much more. I guarantee you will learn more about shuffle tracking from the never-before-published Parts IV and V of the Series than you ever dreamed possible.

This is some of the secret stuff I’ve been keeping out of print for years. To my knowledge, the only players who know some of this stuff are a handful of trackers that I trained myself. I’ve never even seen these concepts discussed by other shuffle trackers, not in print, not on the Internet, not anywhere. From what professional shuffle trackers have said to me through the years about tracking, I know they don’t know these concepts.

This is not rehashed crap about how to draw maps and size your bets. This is not just a bunch of boring theory and analysis that’s already been discussed to death on the blackjack Web sites.

This is the stuff that none of the other shuffle-tracking experts ever even thought about. This is a guide to making money by tracking shuffles. This is primarily a guide for professional gamblers who want to get two to six times the edge over the house at blackjack that they can get from traditional card counting.

If you want to beat the complex multi-plug, multi-pass, stepladder/R&R combo shuffles that most of the major casinos are using today, and if you want to know why these are the most profitable shuffles available for trackers today, read the Cookbook. The never-before-published Part IV and Part V of the Shuffle Tracking Series will open your eyes to a world of blackjack opportunity you never even knew existed.


More information on The Blackjack Shuffle Tracker’s Cookbook

(by Arnold Snyder, From Blackjack Forum Vol. XXIII #3, Fall 2003)

Heresy Today, Gone Tomorrow

This is not so much a Sermon as a blatant advertisement for my new book. As a man of the cloth, it is not only my prerogative, but also my obligation as your spiritual advisor, to use this pulpit for your enlightenment. I know you always read this column first, looking for my pithy, and often brilliant, analogies between pit bosses and various of the knuckle-dragging species; but this month, there is a deeper and more pressing topic. The Bishop has something to sell.

I have just republished my complete Blackjack Forum “Shuffle Tracking Series,” along with a lot of new, revolutionary, and never-before-published information about shuffle tracking, in a new report titled: The Blackjack Shuffle Tracker’s Cookbook.

“Arnold, why would you want to do this?”

“I’m a heretic.”

“You’re not a heretic. You’re an imbecile.”

Well, I guess that’s debatable.

This Sermon is to let you know that the Cookbook is to shuffle tracking what the Blackjack Formula (my first book) was to card counting when it was published in 1980. Many counters who were around at that time considered the Blackjack Formula to be something of an oracular revelation, as it explained for the first time ever how to judge the real value of a game.

Up until that book was published, card counting experts put a lot more weight on the system being used than they did on table conditions. The game factor considered most important at that time was the house edge off the top. A good set of rules was every serious counter's prime concern. Counters who aspired to professional level play were advised to use multi-level systems (such as Uston's level-3 APC, Wong's level-3 Halves, Canfield's level-2 Master Count (later reborn as Carlson's Advanced Omega II), the level-2 Hi-Opt II with multiparameter tables, etc.).

All of these professional-level counting systems included charts for adjusting play with a side-count of aces, and they included 150+ strategy indices. The multi-parameter approach was carried even further in many of the high-end professional-level systems. Pros had strategy charts available which allowed them to use as many as half a dozen side-counts with Hi-Opt I, Hi-Opt II, and the DHM Professional system.

It was widely believed among experts at that time that as the games got tougher (primarily, as more decks were added), the counting systems had to get more complex to beat the games. No attention whatsoever was paid to the importance of deck penetration, nor did counters have any idea of exactly how much of a betting spread they would need to beat any specific game conditions.

Only two authors at that time had workable approaches to beating shoe games. Stanford Wong, in his groundbreaking Professional Blackjack, advised players to table-hop shoes in order to avoid playing in negative counts. And, because Wong was not playing in negative shoes, he also provided the first intelligently abridged set of indices, as he tossed out most of the strategy changes that occurred at negative counts.

Ken Uston, in The Big Player and all of his books, discussed Al Francesco’s “big player” team concept for shoe games, which also kept players from betting in negative expectation situations in shoe games. Both Francesco’s and Wong’s approaches were adopted out of the necessity to camouflage card counting strategies, as just about all casinos had learned by the 1970s to identify card counters by their betting spreads.

But, camo or no camo, the approach of most pros was to play the game with the best set of rules, using the most complicated advanced system they could handle, and every index number they could squeeze into their heads.

So, in 1980, I began my career as blackjack’s official heretic. Over a period of three years, in three books, a couple technical reports, and within the early pages of this very quarterly, I proposed a lot of hare-brained ideas.

I told players that finding deep penetration was more important than keeping a side count of aces.

I said that most of the 150+ index numbers players used were virtually worthless.

I stated that a simple, level-one, unbalanced counting system could perform by running count with nearly the same power as a “professional level” true-count system in most shoe games.

And I got a lot of flak from many of the game's experts until independent computer simulations bore out my claims.

Most players today, however, don’t think of me as a heretic. They weren’t around back then. I’ve become mainstream, stodgy, just another stick in the blackjack mud. So, simply to add a little more fun to my life, it’s time to hit the heresy trail again.

Shuffle trackers today are in the same boat as the pre-1980 card counters. Trackers look at all the wrong factors, and devise strategies based on their general misunderstanding of the opportunities. The approaches to tracking today are eerily similar to the old days of card counting, when teams of players were struggling to get an edge in shoe games with 65% pen, using 150 strategy indices and side-counting aces, when the game across the street, with a slightly worse rule set, had 85% pen, and could have been murdered with the simple Hi-Lo count and a comparative handful of indices.

There was something truly weird going on back then. The casinos with the less-attractive rules felt that they had protected themselves from card counters, oblivious to the fact that their deep penetration actually made them sitting ducks for any counters who understood the value of penetration. But since counters didn’t know the value of penetration, the less-attractive-rules countermeasure worked! The casinos with the truly best games were protected because card counters simply didn’t play there!

The old-time card-counting experts were not, of course, incorrect that the multi-level, multi-parameter, mega-index systems were the most powerful systems ever devised by man. But they were so enamored of accurate play (even when the game itself sucked!), and so satisfied with each other’s convictions, that they never looked for the strongest ways that a player could use the count in order to get the most money from the casinos.

Tracking experts have blundered just as badly. They have devised all of these incredibly complex methods for tracking casino-style shuffles, with no idea that there is a stronger way to get more money faster. Just as with card counting, they worked out the math on their old ideas to the nth degree, without ever seeing the strongest profit opportunities.

And, ironically, the casinos have responded in kind. Just as the casinos used to foil counters by putting in less attractive sets of rules, today’s casinos have put in shuffles designed to foil the types of tracking strategies that today’s tracking experts advise. In fact, the casinos do not know what constitutes a beatable shuffle! They simply know what the trackers are out there looking for, and they foil the trackers by offering something different. Lucky for the casinos! Like the counters of 25 years ago, trackers today are in the dark ages and they ignore the juiciest opportunities. The ignorance about shuffle tracking pervades both sides of the table.

Shuffle trackers today believe that the most profitable shuffles are the simplest shuffles. They believe they will find their best opportunities in the few remaining one-pass, riffle-and-restack shuffles, preferably with big grabs so the slugs are easy to follow and do not get broken up.

So, most of the big casinos today employ multi-pass shuffles with multiple plugs, small grabs, multiple piles, and usually at least one stepladder (dilution) pass. These complicated shuffles annoy the tracking experts no end, because they believe that the most profitable approach to shuffle tracking is to track the shuffles. In fact, the most profitable approach to shuffle tracking is not to track the shuffles, but to track the slugs. These are two entirely different approaches.

A shuffle tracker who looks for opportunities by looking for the “easiest” shuffles to track is like a card counter who looks for opportunities by looking for the lowest house edge off the top. The smart counter does pay attention to the house edge off the top, but he chooses playing opportunities by looking for the game which offers the most frequent, and strongest, player advantages. (This usually equates to deep penetration, and not necessarily a good rule set.)

Similarly, the “easy” shuffles, as a general rule, offer weak slugs. The more complex shuffles, on the other hand, offer strong slugs, and the most frequent, and strongest, player advantages.

In fact, the complex shuffles do protect the casinos from shuffle trackers, not because the shuffles lack tracking opportunities but because the tracking experts have analyzed these shuffles as “weak” and trackers avoid them. So, here’s a bit of heresy for you to sink your teeth into: These complex shuffles offer trackers the greatest slug tracking profit opportunities available in shoe games today!

Why should I publish this heresy at this time? (And believe me, the tracking experts will be as incredulous at this notion as Peter Griffin was in 1983 when I said the Red Seven Count would perform in shoe games nearly as well by running count as the full-blown Hi-Lo. I devised the Red Seven Count almost entirely from the data I found in Griffin’s book, yet he thought my idea of using an unbalanced point count system was a huge mistake.) Won’t the publication of these secrets hurt the shuffle trackers who already know this stuff and are keeping it to themselves? Well, I don’t claim to know every shuffle tracker on the planet, but my personal assessment of the situation is that there aren’t any trackers out there using this stuff. The only trackers who know this stuff, to my knowledge, are the handful of players I’ve personally trained.

The question of whether or not I should reveal this information at this time comes down to a question of whether or not the revelation will take money out of my own pocket. The trackers I've trained are playing for me. Will I regret this decision because it will ultimately hit me in the wallet? I'll take my chances.

Ah, the quandaries of life, made even more perplexing by my position as your religious leader, the man you trust to guide you on your path to wealth and spiritual fulfillment.

Should I have published the Blackjack Formula in 1980? Should I have told players at that time that deep penetration was the single most important factor in assessing a game’s value to card counters? Card counting had been around for almost 20 years at that time, but this was not known by players – pros or otherwise. Ken Uston did not know this. Lawrence Revere never knew this. Ed Thorp did not know this. Stanford Wong did not know this. Lance Humble did not know this. Ian Andersen did not know this. Peter Griffin had devised charts which showed this to be true, but he had never done any analysis to discover the practical applications of his findings.

Mathematicians almost never understand the meaning of their own work. In fact, I'll let you in on a secret: my methods for analyzing shuffles have been derived, almost entirely, from information in Peter Griffin's Theory of Blackjack. Yet, Griffin never even mentions shuffle tracking in his book, and the one time I tried to discuss tracking with him back in the mid-1990s, he told me he didn't know that much about it. And here he had the key to unlocking shuffles in black and white in a book he'd written 15 years earlier!

I have no regrets today about telling card counters back in the 1980s that deep penetration was the key to value, that simplified sets of indices can be powerful, that side counts are not necessary, etc. I enjoy making the casinos scramble, because they scramble so slowly and incompetently.

Smart players who apply themselves should have years to reap the benefits from the Cookbook. Twenty-three years after the Blackjack Formula, blackjack games with deep penetration are still out there, and pros are still making money from my heresy. The casinos are still in a jam, trying to protect their games while competing with each other and catering to their customers’ likes and dislikes.

The number of shuffle trackers, in fact, who will make money from learning how to track slugs, instead of how to track shuffles, is small. It’s even smaller than the number of card counters who currently make money from knowing the value of deep penetration. Most players just do not do their homework. A handful of serious pros will reap the rewards.

The casinos are already doing everything they can to convert their games to machine shuffles, but they can only do this as quickly as their customers accept the change. In Las Vegas, they keep trying machines, then going back to hand shuffles. In new locales, machines are often brought in as the norm from the start. The machine salesmen are fear-mongers, hyping their wares to a gullible industry.

The Cookbook, on the other hand, honestly describes the difficulties of tracking shuffles for profit. Intelligent game protection personnel who read this report will realize that they are, for the most part, wasting huge sums of money reacting to a phantom when they buy these auto-shufflers. The casinos make twice as much money from incompetent players who attempt to track shuffles as they lose to competent trackers. Tables with hand shuffles, attractive rule sets, and deep penetration will always make more money than tables with blackjack games that players see as unbeatable.

In any case, my faithful flock, if you think nothing new has happened in the world of blackjack strategies in the past couple of decades, read the Cookbook. If you read the BJF Shuffle Tracking Series back in 1994, but found the concepts too difficult to apply in the casinos, read the Cookbook. If you think you already know how to track shuffles, I’ll bet you don’t. Read the Cookbook.

Although the full 3-part BJF Shuffle Tracking Series is contained in The Blackjack Shuffle Tracker’s Cookbook, the Cookbook also contains much more. You will learn more about shuffle tracking from the never-before-published Parts IV and V of the Series than you ever dreamed possible. This is the stuff I’ve been keeping out of print for years, concepts I’ve never seen discussed in print, not on the Internet, nor anywhere else, not even spoken of in whispers over champagne at Max Rubin’s Blackjack Ball. If there are other trackers who know this stuff, they’ve been keeping a lid on it.

This is not rehashed crap about how to draw maps and size your bets. This is not just a bunch of boring theory and analysis. This is a guide to making money by tracking slugs. This is a guide for professional players who want to get two to three times the edge over the house at blackjack that they can get from traditional card counting, and who want to be invisible while doing it.

If you want to beat the complex, multi-plug, multi-pass, stepladder/R&R combo shuffles that most of the major casinos are using today, and if you want to know why these are the most profitable shuffles available for slug trackers today, read the Cookbook. The Cookbook will open your eyes to a world of blackjack opportunities you never knew existed.

And that’s heresy. ♠

Get The Blackjack Shuffle Tracker’s Cookbook. If you are new to shuffle tracking, there is an introduction to this professional gambling technique in Arnold Snyder’s Blackbelt in Blackjack.