For my forthcoming book *Math Games with Bad Drawings*, I considered including a classic dice game called “Drop Dead.” It narrowly missed the cut. Strike one: the lousy name. Strike two: it’s a game of pure chance, with no room for decision-making. And strike three: well, that name again.

Still, I want to share the game here, because it teaches a useful lesson about the mathematics of risk.

On your turn, **you roll five dice, and score their sum**, with one big exception: **2’s** and **5’s** are fatal. They immediately **drop dead** and are removed from play. Not only that, but **whenever any 2’s or 5’s appear, the other dice are worthless**; you score no points for the roll. Then, whether you scored points or not, you **roll all remaining dice again**, and continue repeating this process until all five dice have dropped dead. Play for a set number of turns per player (say four), after which the highest total score wins.

Here’s a sample turn that lasted for six rolls, scoring a total of 15 points.

Notice that on my first roll, I didn’t score any points. Die #4 (by coming up with a two) negated the collective efforts of Die #1, Die #2, Die #3, and Die #5.

That kind of failure is common. You’ll score on your opening roll just 13% of the time. A single 2 or 5 suffices to spoil the party, and with five potential party spoilers, few parties remain unspoiled. You thus wind up scoring most of your points with just one or two dice remaining, because smaller “parties” are more likely to go off successfully.

This leads to our larger theme, and the lesson that interests me: **Starting with extra dice barely helps.** In fact, there’s little benefit past your eighth die, and almost none past your twelfth.

It seems weird. An extra die can’t hurt, can it? Best-case scenario, it adds to your score, and worst-case scenario, it comes up 2 or 5, at which point you discard it, and wind up right back where you started.

Well, sure, it can’t hurt. But past a certain point, it doesn’t much help, either. Each die has a 1-in-3 risk of dropping dead. Compounded many times, that becomes a virtual guarantee: *somebody* is going to spoil the party. With just twenty dice, the probability of entirely avoiding 2’s and 5’s is just 0.03%, roughly your lifetime chance of being struck by lightning.
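To see where these percentages come from (a quick sketch of my own, not from the post): each die survives a roll with probability 4/6 = 2/3, and the dice are independent, so all n dice survive with probability (2/3)^n.

```python
# Chance that a roll of n dice avoids every 2 and 5: each die survives
# with probability 4/6 = 2/3, independently, so the answer is (2/3)**n.
for n in (1, 2, 5, 20):
    print(f"{n:>2} dice: {(2/3)**n:.4%}")  # 5 dice -> ~13.17%, 20 dice -> ~0.03%
```

The five-dice figure matches the 13% quoted above, and twenty dice gives the lightning-strike 0.03%.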

Let’s say you begin with 5 quadrillion dice, enough to blanket the state of West Virginia. Seems like you should score tons of points, right? Nope. Roll after roll, about 1/3 of your dice will spoil the party. This will repeat a hundred times in succession, your score stuck on zero, until finally, with just a few dice remaining, you begin to score points. (About 17 points, on average.)

5,000,000,000,000,000 dice. 17 points.
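Quadrillions of dice are beyond brute force, but a short Monte Carlo (my own sketch of the rules above) shows the average score leveling off already for modest dice counts:

```python
import random

def drop_dead_turn(n_dice, rng):
    """Play one full turn of Drop Dead starting with n_dice; return the total score."""
    score, dice = 0, n_dice
    while dice > 0:
        roll = [rng.randint(1, 6) for _ in range(dice)]
        fatal = sum(1 for d in roll if d in (2, 5))
        if fatal == 0:
            score += sum(roll)  # no 2s or 5s: the whole roll counts
        dice -= fatal           # fatal dice are removed; the rest roll again
    return score

rng = random.Random(2024)
for n in (5, 20, 200):
    trials = 5_000
    avg = sum(drop_dead_turn(n, rng) for _ in range(trials)) / trials
    print(f"{n:>4} dice: average {avg:.1f}")
```

Whether you start with 20 dice or 200, the averages come out within a point or so of each other.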

The moral: **Don’t design systems where everything needs to go right.** If your machine is doomed by one broken part; if your party is spoiled by one late guest; if your game plan crumbles when one player strays out of position; then you’ve got yourself a problem. A lot of problems, actually: one per component. Crowds are good for some things, but achieving unanimity is not one of them.

By the way, if you want to turn this into an actual game, Joe Kisenwether has a good idea: **You may start with as many dice as you want, but your turn ends immediately after your 5th roll.** Thus, you want to pick enough dice that you don’t run out (1 or 2 is probably too few) but not so many that you waste early rolls on scoring zero (so 20 is too many).

Puzzle: in this version, what’s the optimal number of dice?

What’s your expected score if you start with infinity dice? (i.e. the limit of your expected score as the number of dice goes to infinity.)

Is it true that for all n, starting with n+1 dice is at least as good as starting with n?

If you *truly* have infinite dice, then I believe the expected value is impossible to compute, since the game never ends. (Each roll, infinite dice drop dead, but infinite other dice still remain; so you never score, but you also never have to stop playing.)

But asymptotically, as the number of dice goes to infinity, your score seems to level off around 17.3. I don’t have a proof but I’m pretty confident (since as n grows, the chance of scoring decays exponentially, while the potential score grows linearly, and the number of extra chances to score grows even slower than that).

I’m also almost certain that for all n, the expected score with n+1 dice is at least as high as the expected score with n dice, but I haven’t proved it to my satisfaction.

How did you determine the “average score per round” numbers?

As I recall, I did something like this:

1. Define f(n) as the average score with n dice.

2. Calculate f(1) directly from the fact that f(1) = (2/3)*[3.5 + f(1)], which gives f(1) = 7.

3. Calculate f(2) similarly, by writing f(2) in terms of f(1) and f(2).

4. Continue iterating.
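Carried out with exact arithmetic, that recipe looks something like this (my own reconstruction in Python, not the commenter’s original program). With n dice: if all n survive, you bank 3.5n points and roll all n again; if only k < n survive, you bank nothing and continue with k dice. So f(n) = P(all survive)·(3.5n + f(n)) + Σₖ P(exactly k survive)·f(k), which can be solved for f(n):

```python
from fractions import Fraction
from math import comb

def expected_score(n_max):
    """f[n] = exact expected total score of a turn starting with n dice."""
    f = {0: Fraction(0)}
    for n in range(1, n_max + 1):
        # P(exactly k of the n dice avoid 2s and 5s): Binomial(n, 2/3)
        def p(k):
            return Fraction(comb(n, k)) * Fraction(2, 3)**k * Fraction(1, 3)**(n - k)
        numer = p(n) * Fraction(7, 2) * n + sum(p(k) * f[k] for k in range(1, n))
        f[n] = numer / (1 - p(n))  # solve f(n) = p(n)*(3.5n + f(n)) + ...
    return f

f = expected_score(12)
print(float(f[1]), float(f[2]), float(f[5]))  # 7.0  11.2  ~16.06
```

This reproduces f(1) = 7 and f(2) = 11.2 from the comment, and f(5) ≈ 16.06.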

This is why probability and I are always at odds. I spent a good amount of time today playing with this, using CoCalc for calculations: thinking about maybe using Markov chains, creating a probability matrix and looking at its powers, then more or less starting over and finding the expected value of 1 die via a series of nested geometric series (I got 7), then moving up to 2 dice and finding its expected value through another series of nested geometric series (I got 11.2).

Now I see this comment, try it out this way, and sure enough, in 3 minutes total, I got f(1)=7 and f(2)=11.2.

I admit, some of my earlier explorations were necessary for understanding how to write the iterated formula, what probabilities go with what average values, etc.

But this is as it ever was with me and probability. There’s usually a better way of looking at the problem, but it’s not the way I begin to approach it.

I think I’ve got my little average value program running correctly. But just to check, the average value for 5 dice is actually 16.06, not 16.6, right? Everything else checks out, including average value for 12 on up being 17.24 or smaller, so I think I’m doing it correctly.

It’s a neat little game/question!

I believe the answer is six dice.

The probability that you actually get to score in any round is (2/3)^N, where N is the number of dice you have in that round. If you score, each surviving die gives you an expected score of 3.5, since 0.25 × (1+3+4+6) = 3.5.

So, for example, if you start round 1 with 6 dice, then there is an 8.78% chance that you actually score points in the first round. If you score points, your expected score will be 6 × 3.5 = 21. Therefore, your unconditional expected score in the first round is 0.0878 × 21 ≈ 1.84.

Now, let’s think about round 2. You only actually make it to round 2 if at least one die in round 1 is not a 2 or a 5. Said another way, you don’t make it to round 2 if all of the dice are 2s or 5s; i.e., you fail to reach round 2 with probability (1/3)^N (where N is the number of dice). Therefore you do make it to round 2 with probability 1 − (1/3)^N. If you do make it to round 2, the expected number of dice that you will have is two-thirds of the number you had in round 1 (since, on average, one-third of them die).

E.g., if you started round 1 with six dice, then you’d expect two of them to die, leaving you with four expected dice in round 2. Now just repeat the calculations described above:

There’s a 99.86% chance that at least one die survives to round 2. If you get to round 2, you expect to have 4 dice. You expect to actually score points in round 2 about (2/3)^4 ≈ 19.75% of the time. If you do score, each of the 4 dice is expected to give you 3.5 points. So your overall expected score in round 2 is 0.9986 × 0.1975 × 4 × 3.5 ≈ 2.76.

Rinse and repeat!

You can then model this out for five rounds and add the expected score in each round. I believe that you’ll find the maximum expected score occurs when you start with six dice.
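The expected-dice shortcut above is an approximation (survival chances and dice counts interact from round to round). For anyone who wants an exact check, here’s a sketch of a recursion for the capped game, under my reading of the rules (turn ends after the 5th roll, scoring as in the base game):

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def g(n, rolls_left):
    """Exact expected score with n dice and rolls_left rolls remaining."""
    if n == 0 or rolls_left == 0:
        return Fraction(0)
    total = Fraction(0)
    for k in range(1, n + 1):  # k = number of dice that avoid 2s and 5s
        p = Fraction(comb(n, k)) * Fraction(2, 3)**k * Fraction(1, 3)**(n - k)
        gain = Fraction(7, 2) * n if k == n else 0  # you score only if no die dies
        total += p * (gain + g(k, rolls_left - 1))
    return total

for n in range(1, 11):
    print(n, round(float(g(n, 5)), 3))
```

Printing the table lets you read off the optimal starting count directly, rather than relying on expected dice counts.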

I gave up on the Markov chains and just ran a simulation. I found that 5 dice had the highest expected value, and also higher winning percentages head-to-head against both 4 and 6 dice, even though that percentage is less than 50% in both cases, because of ties.

I want to preface this comment by clarifying that I am a “language person”, not a mathematician. I love reading your blog though, because arithmetic is fun, and mathematics fascinating. Your writing is clear, and you typically find some enlightening topics and have great teaching stories. I even bought your first book in hardcover.

That said, this game looks neat. If two-thirds of the reason you’re not including it really is that you don’t like the name, know that you can change the name. If you are publishing a book about math games, it is your prerogative to give this game a name you think is better. Since it will then be published under that name, that becomes the name, and it may carry into common usage.

I propose: “Eighty-six 2s and 5s”.

Definition of the slang term 86 follows, from Merriam Webster:

“Eighty-six is slang meaning ‘to throw out,’ ‘to get rid of,’ or ‘to refuse service to.’ It comes from 1930s soda-counter slang meaning that an item was sold out. There is varying anecdotal evidence about why the term eighty-six was used, but the most common theory is that it is rhyming slang for nix.”

By the way, you have an uncharacteristic language error in your post. You say, “Here’s a sample turns that lasted for six rolls”. “A sample” is singular, but you pluralized “turn”. I can imagine how it occurred, since dice and rolls are both plural, but it threw me for a moment. ☺️

Keep up the great work in both mathematics teaching and blog writing!

Hi Millie, thanks for the thoughtful comment!

Good call on the language error, though you’re far too generous to call it “uncharacteristic”! I’m constantly fiddling with sentences, so most likely I pluralized (or singular-ized) one part of the sentence and neglected to follow through.

I like your new name for Drop Dead! I did rename a few games for the book (e.g., “Taxman” became “Tax Collector”) although in general I tried to remain loyal to existing names (e.g., I don’t like the name “Amazons,” but the game is well-known under that title, so I stuck with it). Anyway, the real issue was that every other game in the book gives players meaningful decisions, so this purely random game wouldn’t quite have fit.

Anyway, thanks again for your kind reading!

An extension of Joe Kisenwether’s what-is-the-optimal-number-of-dice question: does the answer change if, instead of playing “Eighty-six 2s and 5s,” you play “Eighty-six 1s and 4s,” or any of the other possible versions of the game? (There are 15 versions in all, one for each pair of fatal faces.)

Ooh, interesting! Need to think through how changing the expected value per successful roll would affect this analysis…
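One numeric sketch toward an answer (mine, not from the thread): whichever pair of faces a and b is fatal, the survival probabilities are unchanged; only the mean of a surviving die shifts from 3.5 to (21 − a − b)/4. Every scoring term then scales by that mean, so expected scores for every starting count scale by the same factor, which suggests the optimal number of dice shouldn’t change:

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

def capped_score(n_start, mean, max_rolls=5):
    """Expected score when the turn ends after max_rolls rolls;
    mean = expected value of a single surviving (non-fatal) die."""
    @lru_cache(maxsize=None)
    def g(n, r):
        if n == 0 or r == 0:
            return Fraction(0)
        total = Fraction(0)
        for k in range(1, n + 1):  # k dice avoid the two fatal faces
            p = Fraction(comb(n, k)) * Fraction(2, 3)**k * Fraction(1, 3)**(n - k)
            total += p * ((mean * n if k == n else 0) + g(k, r - 1))
        return total
    return g(n_start, max_rolls)

m25 = Fraction(1 + 3 + 4 + 6, 4)  # fatal faces 2 and 5 -> mean 7/2
m14 = Fraction(2 + 3 + 5 + 6, 4)  # fatal faces 1 and 4 -> mean 4
for n in (4, 6, 8):
    s25, s14 = capped_score(n, m25), capped_score(n, m14)
    print(n, float(s25), float(s14), s14 / s25 == m14 / m25)  # ratio is always 8/7
```

The constant ratio means the whole expected-score curve is just rescaled, so its maximum sits at the same number of dice in every version.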