# Story

I started off by reading this book on . Fairly quickly into the book, they present some code plotting Bayesian posterior probability estimates for a coin toss. For a book in this style, that is just the obligatory example. I have also come across a set of stories designed to teach probability; you can find these . One of the exercises in that series has the teacher lie to the student about having a fair coin and then demonstrate a series of heads. The question at the end of the story is: how many heads in a row do you have to see before you suspect foul play? ^{1} Now, that's an interesting question, and while the text gives some answers, they are just a set of heuristics. I, on the other hand, wanted to see some specific graphs based on the bias. So I picked up the code and modified it. Here's the base code for an unbiased coin, written inside an IPython shell.

```python
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt

plt.figure(figsize=(11, 9))

dist = stats.beta
n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
data = stats.bernoulli.rvs(0.5, size=n_trials[-1])
x = np.linspace(0, 1, 100)

for k, N in enumerate(n_trials):
    sx = plt.subplot(len(n_trials) // 2, 2, k + 1)
    plt.xlabel("$p$, probability of heads") \
        if k in [0, len(n_trials) - 1] else None
    plt.setp(sx.get_yticklabels(), visible=False)
    heads = data[:N].sum()
    # Beta(1 + heads, 1 + tails) is the posterior after N tosses
    # under a uniform Beta(1, 1) prior
    y = dist.pdf(x, 1 + heads, 1 + N - heads)
    plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads))
    plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4)
    plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)
    leg = plt.legend()
    leg.get_frame().set_alpha(0.4)
    plt.autoscale(tight=True)

plt.suptitle("Bayesian updating of posterior probabilities",
             y=1.02, fontsize=14)
plt.tight_layout()
```

An unbiased coin means P(H) = P(T) = 0.5. And here is the plot.

Now, let's go and change that distribution. The code draws samples from a Bernoulli distribution in the scipy.stats package. See the line `data = stats.bernoulli.rvs(0.5, size=n_trials[-1])`? All we have to do is change that 0.5 to 0.6.
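In context, the change really is a single line; everything else in the plotting loop stays the same:

```python
import scipy.stats as stats

n_trials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
# p = 0.6: a coin biased toward heads instead of the fair 0.5
data = stats.bernoulli.rvs(0.6, size=n_trials[-1])
```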

Here’s the plot:

Notice how the distribution starts shifting very visibly right after 4 tosses. I would say that's about the first sign of trouble. Unfortunately, in a real scenario you won't be able to visualize the distribution, but you can have a heuristic. Let's assume you start off playing with a $1000 ^{2} bet for each coin toss. About the 4th toss is when you say: I am suspicious, I should sum up the past results and reconsider. If they are all 4 heads/tails (whichever you lose on), you should either stop or bet less.^{3} By the time you're at the 8th toss, if you have seen even 6 heads, you can just quit and call the other player a liar.
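That eyeballing heuristic can be turned into a number. With the uniform Beta(1, 1) prior used above, the posterior after `heads` heads in `tosses` tosses is Beta(1 + heads, 1 + tosses - heads), so the posterior probability that the coin favors heads is just the Beta mass above p = 0.5. A minimal sketch (the function name and any decision threshold you'd apply to it are my own illustration, not from the text):

```python
import scipy.stats as stats

def prob_biased_towards_heads(heads, tosses):
    """Posterior P(p > 0.5) under a uniform Beta(1, 1) prior."""
    posterior = stats.beta(1 + heads, 1 + tosses - heads)
    return posterior.sf(0.5)  # survival function: posterior mass above 0.5

# 4 heads in 4 tosses: suspicious, but not yet damning
print(prob_biased_towards_heads(4, 4))   # ~0.97
# 6 heads in 8 tosses, the "call them a liar" point above
print(prob_biased_towards_heads(6, 8))   # ~0.91
```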

If my understanding of casinos is right, they usually don't design any payoffs worse than this. So the rest of the pictures may be just academic. But I had plotted them, so here they are.

Now, let’s try with 0.7.

If you look closely enough, you'll see that all of the first 5 tosses resulted in heads.

Now, let’s try 0.8

Interesting graphs. See, even after 15 tosses there are 11 heads, while with 0.7 the 15 tosses resulted in 13 heads. The power of randomness. This means that for those of you with a high risk appetite, if the reward is big enough, you can afford to wait for 15 tosses before you review your decision.
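For comparison, it's worth asking how surprising those counts would be if the coin really were fair; the binomial tail probability answers that directly. A small sketch:

```python
import scipy.stats as stats

# P(at least h heads in 15 tosses of a FAIR coin), via the survival function
tail = {h: stats.binom.sf(h - 1, 15, 0.5) for h in (11, 12, 13)}
for h, p in tail.items():
    print("P(>= %d heads in 15 fair tosses) = %.4f" % (h, p))
```

Roughly 6% for 11 heads, under 2% for 12, and under 0.4% for 13: by 15 tosses, a fair coin almost never looks like these biased ones.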

Now, let’s try 0.9

Once again, even 15 tosses display only 12 heads. If you're risk-averse, ditch it right away after 3-4 tosses.

Now, how is any of this useful or meaningful in real life?

Well, if you think of stocks, with a company's quarterly profits as coin tosses, it's an interesting idea on which to build an investment strategy. But never forget, this is a laboratory experiment. The model assumes the probability of heads/tails is static over time. Not only do company profits vary, but the probability of a company making a profit also varies based on a whole lot of other factors.