
Curing the Compulsive Gambler: Challenging Probability Problem


Mr. Brown always bets a dollar on the number 13 at roulette against the advice of Kind Friend. To help cure Mr. Brown of playing roulette, Kind Friend always bets Brown $20 at even money that Brown will be behind at the end of 36 plays. How is the cure working?

(Most American roulette wheels have 38 equally likely numbers. If the player's number comes up, he is paid 35 times his stake and gets his original stake back; otherwise, he loses his stake.)


Solution:

If Mr. Brown wins just once in the 36 turns, the $35 payoff covers his 35 losing bets and he breaks exactly even; he ends up behind only when he loses every play. His probability of losing all 36 times is \displaystyle \left( \frac{37}{38} \right)^{36} \approx 0.383 . In a single turn, his expectation is

35\left( \frac{1}{38} \right)-1\left( \frac{37}{38} \right) = - \frac{2}{38}\ \text{dollars}

and in 36 turns

-\frac{2}{38}\left( 36 \right) = -1.89 \ \text{dollars}

Against Kind Friend, Mr. Brown has an expectation of

+20\left( 0.617 \right)-20\left( 0.383 \right)\approx +4.68 \ \text{dollars}

And so, all told, Mr. Brown gains 4.68 - 1.89 = +2.79 dollars per 36 trials; he is finally making money at roulette. Possibly Kind Friend will be cured first. Of course, when Brown loses all 36, he is out $56, which may jolt him a bit.
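As a numerical check, here is a short Python sketch (my own, not part of the original solution) that reproduces all of these figures:

```python
P_WIN = 1 / 38   # probability that 13 comes up on one spin
SPINS = 36

# Brown's expected roulette result over 36 one-dollar bets on a single number.
roulette_ev = SPINS * (35 * P_WIN - (1 - P_WIN))     # -72/38, about -1.89

# Brown is behind only if he loses every spin; a single win (+35) exactly
# covers the other 35 one-dollar losses.
p_behind = (1 - P_WIN) ** SPINS                      # about 0.383

# Side bet from Brown's side: he receives $20 if he is not behind,
# and pays Kind Friend $20 otherwise.
side_bet_ev = 20 * (1 - p_behind) - 20 * p_behind    # about +4.68

print(f"roulette EV over 36 spins: {roulette_ev:+.2f}")
print(f"P(behind after 36 spins):  {p_behind:.3f}")
print(f"side-bet EV:               {side_bet_ev:+.2f}")
print(f"combined EV:               {roulette_ev + side_bet_ev:+.2f}")
```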



Chuck-a-Luck: Challenging Probability Problem


Chuck-a-Luck is a gambling game often played at carnivals and gambling houses. A player may bet on any one of the numbers 1, 2, 3, 4, 5, 6. Three dice are rolled. If the player's number appears on one, two, or three of the dice, he receives respectively one, two, or three times his original stake plus his own money back; otherwise, he loses his stake. What is the player's expected loss per unit stake? (Actually, the player may distribute stakes on several numbers, but each such stake can be regarded as a separate bet.)


Solution:

Let us compute the losses incurred (a) when the numbers on the three dice are different, (b) when exactly two are alike, and (c) when all three are alike. An easy attack is to suppose that you place a unit stake on each of the six numbers, thus betting six units in all. Suppose the roll produces three different numbers, say 1, 2, 3. Then the house takes the three unit stakes on the losing numbers 4, 5, 6 and pays off the three winning numbers 1, 2, 3. The house won nothing, and you won nothing. That result would be the same for any roll of three different numbers.

Next suppose the roll of the dice results in two of one number and one of a second, say 1, 1, 2. Then the house can use the stakes on numbers 3 and 4 to pay off the stake on number 1, and the stake on number 5 to pay off that on number 2. This leaves the stake on number 6 for the house. The house won one unit, you lost one unit, or per unit stake you lost 1/6.

Suppose the three dice roll the same number, for example, 1, 1, 1. Then the house can pay the triple odds from the stakes placed on 2, 3, 4 leaving those on 5 and 6 as house winnings. The loss per unit stake then is 2/6. Note that when a roll produces a multiple payoff the players are losing the most on the average.

To find the expected loss per unit stake in the whole game, we need to weight the three kinds of outcomes by their probabilities. If we regard the three dice as distinguishable, say red, green, and blue, there are 6 \times 6 \times 6 = 216 ways for them to fall.

In how many ways do we get three different numbers? If we take them in order, 6 possibilities for the red, then for each of these, 5 for the green since it must not match the red, and for each red-green pair, 4 ways for the blue since it must not match either of the others, we get 6 \times 5 \times 4 = 120 ways.

For a moment skip the case where exactly two dice are alike and go on to three alike. There are just 6 ways because there are 6 ways for the red to fall and only 1 way for each of the others since they must match the red.

This means that there are 216 - 120 - 6 = 90 ways for them to fall two alike and one different. Let us check that directly. There are three main patterns that give two alike: red-green alike, red-blue alike, or green-blue alike. Count the number of ways for one of these, say red-green alike, and then multiply by three. The red can be thrown 6 ways, then the green 1 way to match, and the blue 5 ways to fail to match, or 30 ways. All told then we have 3 \times 30 = 90 ways, checking the result we got by subtraction.

We get the expected loss by weighting each loss by its probability and summing as follows:

\underbrace{\frac{120}{216}\times 0}_\text{none alike} + \underbrace{\frac{90}{216}\times \frac{1}{6}}_\text{2 alike}+\underbrace{\frac{6}{216}\times \frac{2}{6}}_\text{3 alike} = \frac{17}{216} \approx 0.079

Thus, you lose about 8% per play. Considering that a play might take half a minute and that government bonds pay you less than 4% interest for a year, the attrition can be regarded as fierce.
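The 17/216 figure is easy to confirm by brute force. The Python sketch below (my own check, not part of the original solution) enumerates all 216 equally likely ordered rolls for a unit stake on a single number:

```python
from fractions import Fraction
from itertools import product

BET = 1  # the number the bettor backs; by symmetry any of 1..6 gives the same result

total = Fraction(0)
for roll in product(range(1, 7), repeat=3):    # all 216 equally likely ordered rolls
    matches = sum(die == BET for die in roll)
    payoff = matches if matches > 0 else -1    # win `matches` units, or lose the stake
    total += Fraction(payoff, 216)

print(total, float(total))   # -17/216, about -0.079 per unit stake
```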

This calculation is for regular dice. Sometimes a spinning wheel with a pointer is used, with sets of three numbers painted in segments around the edge of the wheel. The sets do not correspond perfectly to the frequencies given by the dice. In such wheels I have observed that the multiple payoffs are more frequent than for the dice, and therefore the expected loss to the bettor is greater.



Coin in Square: Challenging Probability Problem


In a common carnival game, a player tosses a penny from a distance of about 5 feet onto the surface of a table ruled in 1-inch squares. If the penny (3/4 inch in diameter) falls entirely inside a square, the player receives 5 cents but does not get his penny back; otherwise, he loses his penny. If the penny lands on the table, what is his chance to win?


Solution:

When we toss the coin onto the table, some positions for the center of the coin are more likely than others, but over a very small square we can regard the probability distribution as uniform. This means that the probability that the center falls into any region of a square is proportional to the area of the region; indeed, it is the area of the region divided by the area of the square. Since the coin is 3/8 inch in radius, its center must not land within 3/8 inch of any edge if the player is to win. This restriction leaves a square of side 1/4 inch within which the center of the coin must lie for the coin to fall entirely inside a square. Since the probabilities are proportional to areas, the probability of winning is \displaystyle \left( \frac{1}{4} \right)^2 = \frac{1}{16}. Of course, since there is a chance that the coin falls off the table altogether, the total probability of winning is smaller still. Also, the squares can be made effectively smaller by merely thickening the lines. If the lines are 1/16 inch wide, the winning region within each square shrinks to a square of side 3/16 inch, reducing the probability to \displaystyle \left( \frac{3}{16} \right)^{2} = \frac{9}{256}, or less than \displaystyle \frac{1}{28}.
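A quick Monte Carlo sketch in Python (my own, assuming a uniform landing position for the center within one square, just as the argument does) agrees with the 1/16 figure:

```python
import random

RADIUS = 3 / 8        # penny radius in inches
TRIALS = 1_000_000

wins = 0
for _ in range(TRIALS):
    x, y = random.random(), random.random()   # center lands uniformly in a unit square
    # Win only if the center is at least one radius from every edge of the square.
    if RADIUS <= x <= 1 - RADIUS and RADIUS <= y <= 1 - RADIUS:
        wins += 1

print(wins / TRIALS)   # about 0.0625, i.e. 1/16
```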



Trials until First Success: Challenging Probability Problem


On the average, how many times must a die be thrown until one gets a 6?


Solution:

Let p be the probability of a 6 on a given trial. Then the probabilities of success for the first time on each trial are (let q = 1 - p):

Trial   Probability of success on trial
1       p
2       pq
3       pq^2
...     ...

The sum of the probabilities is

\begin{align*}
p+pq+pq^2+\ldots & = p\left( 1+q+q^2+\ldots \right) \\ \\
 & = \frac{p}{1-q} \\ \\
 & = \frac{p}{p} \\ \\
 & = 1
\end{align*}

The mean number of trials, m, is by definition,

m = p + 2pq + 3pq^2 + 4pq^3+ \ldots

Note that our usual trick for summing a geometric series works:

qm = pq + 2pq^2+3pq^3 + \ldots

Subtracting the second expression from the first gives

m-qm=p+pq+pq^2+\ldots

The right-hand side is the sum of all the probabilities, which we showed above to be 1, so

m\left( 1-q \right) = 1

Consequently,

mp=1

and

m=1/p

We see that p=1/6, and so m=6.

On the average, a die must be thrown 6 times until one gets a 6.
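As a sanity check, the following short Python simulation (my own sketch; the helper name is made up) repeats the experiment many times and averages the counts:

```python
import random

def throws_until_six():
    """Throw a fair die until a 6 appears; return the number of throws."""
    count = 1
    while random.randint(1, 6) != 6:
        count += 1
    return count

TRIALS = 100_000
mean = sum(throws_until_six() for _ in range(TRIALS)) / TRIALS
print(mean)   # about 6, matching m = 1/p with p = 1/6
```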



The Flippant Juror: Challenging Probability Problem


A three-man jury has two members each of whom independently has probability p of making the correct decision and a third member who flips a coin for each decision (majority rules). A one-man jury has probability p of making the correct decision. Which jury has the better probability of making the correct decision?


Solution:

The two juries have the same chance of a correct decision. In the three-man jury, the two serious jurors agree on the correct decision in the fraction p \times p = p^2 of the cases, and for these cases the vote of the joker with the coin does not matter. In the other correct decisions by the three-man jury, the serious jurors vote oppositely, and the joker votes with the "correct" juror. The chance that the serious jurors split is p\left( 1-p \right)+\left( 1-p \right)p or 2p\left( 1-p \right). Halve this because the coin favors the correct side half the time. Finally, the total probability of a correct decision by the three-man jury is p^{2}+p\left( 1-p \right) =p^{2}+p-p^{2}=p, which is identical with the probability given for the one-man jury.

The two options have equal probability of making the correct decision.
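If the algebra feels too slick, a simulation confirms it. This Python sketch (my own check; the function name is made up) plays out many decisions for several values of p:

```python
import random

def jury_trial(p):
    """One decision by two serious jurors (each correct with probability p)
    and one coin-flipper; return True if the majority vote is correct."""
    votes = sum(random.random() < p for _ in range(2))   # the serious jurors
    votes += random.random() < 0.5                       # the coin flipper
    return votes >= 2

TRIALS = 200_000
for p in (0.6, 0.75, 0.9):
    rate = sum(jury_trial(p) for _ in range(TRIALS)) / TRIALS
    print(p, round(rate, 3))   # hovers around p in each case
```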




Successive Wins: Challenging Problem in Probability


To encourage Elmer’s promising tennis career, his father offers him a prize if he wins (at least) two tennis sets in a row in a three-set series to be played with his father and the club champion alternately: father-champion-father or champion-father-champion, according to Elmer’s choice. The champion is a better player than Elmer’s father. Which series should Elmer choose?


Solution:

Since the champion plays better than the father, it seems reasonable that fewer sets should be played with the champion. On the other hand, the middle set is the key one, because Elmer cannot have two wins in a row without winning the middle one. Let C stand for the champion, F for the father, and W and L for a win and a loss by Elmer. Let f be the probability of Elmer's winning any set from his father, and c the corresponding probability of winning from the champion. The tables below show the only possible prize-winning sequences, together with their probabilities (the sets being independent), for the two choices.

Father first (FCF):

Sets     Probability
WWW      fcf
WWL      fc(1-f)
LWW      (1-f)cf
Total    fc(2-f)

Champion first (CFC):

Sets     Probability
WWW      cfc
WWL      cf(1-c)
LWW      (1-c)fc
Total    fc(2-c)

Since Elmer is more likely to beat his father than to beat the champion, f is larger than c, so 2-f is smaller than 2-c, and Elmer should choose CFC. For example, for f=0.8 and c=0.4, the chance of winning the prize with FCF is 0.384, while that for CFC is 0.512. Thus, the importance of winning the middle set outweighs the disadvantage of playing the champion twice.

Many of us have a tendency to suppose that the higher the expected number of successes, the higher the probability of winning a prize, and often this supposition is useful. But occasionally a problem has special conditions that destroy this reasoning by analogy. In our problem, the expected number of wins under CFC is 2c+f, which is less than the expected number of wins for FCF, 2f+c. In our example with f=0.8 and c=0.4, these means are 1.6 and 2.0 in that order. This opposition of answers gives the problem flavor.
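A few lines of Python (my own sketch; prize_prob is a made-up helper) reproduce the table totals and the worked example:

```python
def prize_prob(first, middle, last):
    """P(at least two consecutive wins) in a three-set series with the given
    independent set-win probabilities: the sequences WWW, WWL, LWW."""
    return (first * middle * last
            + first * middle * (1 - last)
            + (1 - first) * middle * last)

f, c = 0.8, 0.4   # the worked example's win probabilities against father and champion
print("FCF:", prize_prob(f, c, f))                    # 0.384
print("CFC:", prize_prob(c, f, c))                    # 0.512
print("expected wins:", 2 * f + c, "vs", f + 2 * c)   # 2.0 (FCF) vs 1.6 (CFC)
```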



The Sock Drawer: Challenging Problem in Probability


A drawer contains red socks and black socks. When two socks are drawn at random, the probability that both are red is 1/2.

a) How small can the number of socks in the drawer be?

b) How small if the number of black socks is even?


Solution:

Just to set the pattern, let us do a numerical example first. Suppose there were 5 red and 2 black socks; then the probability of the first sock’s being red would be 5/(5+2). If the first were red, the probability of the second’s being red would be 4/(4+2), because one red sock has already been removed. The product of these two numbers is the probability that both socks are red:

\frac{5}{5+2}\times \frac{4}{4+2}=\frac{5\left( 4 \right)}{7\left( 6 \right)}=\frac{10}{21}

This result is close to 1/2, but we need exactly 1/2. Now let us go at the problem algebraically.

Let there be r red and b black socks. The probability of the first sock’s being red is \frac{r}{r+b}; and if the first sock is red, the probability of the second’s being red now that a red has been removed is \frac{r-1}{r+b-1}. Then we require the probability that both are red to be \frac{1}{2}, or

\frac{r}{r+b}\times \frac{\:r-1}{r+b-1}=\frac{1}{2}

One could just start with b=1 and try successive values of r, then go to b=2 and try again, and so on. That would get the answer quickly. Or we could play along with a little more mathematics. Notice that

\frac{r}{r+b}>\frac{\:r-1}{r+b-1}

Therefore, we can create the inequalities

\left(\frac{r}{r+b}\right)^2>\frac{1}{2}>\left(\frac{\:r-1}{r+b-1}\right)^2

Taking the square roots, we have, for r>1,

\frac{r}{r+b}>\frac{1}{\sqrt{2}}>\frac{\:r-1}{r+b-1}

From the first inequality we get

r>\frac{1}{\sqrt{2}}\left( r+b \right)

or

\begin{align*}
r & >\frac{1}{\sqrt{2}-1}b \\ \\
r & > \left( \sqrt{2}+1 \right)b
\end{align*}

From the second we get

\left( \sqrt{2}+1 \right)b>r-1

or all told

\left(\sqrt{2}+1\right)b+1>r>\left(\sqrt{2}+1\right)b

For b=1, r must be greater than 2.414 and less than 3.414, and so the candidate is r=3. For r=3, \ b=1, we get

P\left(2\:\text{red socks}\right)=\frac{3}{4}\cdot \frac{2}{3}=\frac{1}{2}

And so, the smallest number of socks is 4.

Beyond this, we investigate even values of b.

b    r is between      eligible r    P\left(2 \ \text{red socks}\right)
2    4.8 and 5.8       5             \frac{5\left( 4 \right)}{7\left( 6 \right)} \neq \frac{1}{2}
4    9.7 and 10.7      10            \frac{10\left( 9 \right)}{14\left( 13 \right)} \neq \frac{1}{2}
6    14.5 and 15.5     15            \frac{15\left( 14 \right)}{21\left( 20 \right)} = \frac{1}{2}

And so, 21 is the smallest number of socks when b is even. If we were to go on and ask for further values of r and b so that the probability of two red socks is 1/2, we would be wise to appreciate that this is a problem in the theory of numbers. It happens to lead to a famous result in Diophantine Analysis obtained from Pell’s equation. Try r = 85, b = 35.
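For the curious, a brute-force Python scan (my own sketch) finds every small drawer with P(both red) = 1/2, including the Pell-equation solution mentioned above:

```python
from fractions import Fraction

def both_red(r, b):
    """Probability that two socks drawn without replacement are both red."""
    return Fraction(r, r + b) * Fraction(r - 1, r + b - 1)

for total in range(2, 150):      # scan all small drawers
    for b in range(1, total):
        r = total - b
        if both_red(r, b) == Fraction(1, 2):
            print(f"r = {r}, b = {b}, total = {total}")
# Prints r=3, b=1 (total 4); r=15, b=6 (total 21); r=85, b=35 (total 120).
```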

