Probability – teaching, Bayesians vs. frequentists, etc.

I see this kind of reasoning at the core of critiques of standard null-hypothesis testing in financial models, as this blog argues.

The core error seems to be the same, i.e., trying to derive inferences from probability calculations that either ignore conditional probabilities or treat them as no different from unconditional ones.
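To make that distinction concrete, here's a minimal sketch with made-up toy numbers (my own, not from any actual financial model) of how far apart P(signal | crash) and P(crash | signal) can be:

```python
# Toy numbers (my own, purely illustrative).
p_crash = 0.01                 # prior probability of a crash
p_signal_given_crash = 0.9     # the indicator fires in most crashes
p_signal_given_no_crash = 0.1  # but it also fires in calm markets

# Total probability of seeing the signal at all:
p_signal = (p_signal_given_crash * p_crash
            + p_signal_given_no_crash * (1 - p_crash))

# Bayes' theorem: P(crash | signal)
p_crash_given_signal = p_signal_given_crash * p_crash / p_signal
print(round(p_crash_given_signal, 3))  # 0.083 -- nowhere near 0.9
```

Treating the 0.9 and the 0.083 as interchangeable is exactly the kind of error being complained about.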

Now, I have specifically tried to stay out of the finance sector as a field of employment. I never really thought about or questioned the whys of it, but I am beginning to understand. I actually like money and am a reasonable saver, and I like mathematics, so the sector has been, and perhaps still is, a perennial attraction; it does pay a hell of a lot more.
But I am beginning to realize why I have instinctively flinched from it. The most available jobs are in accounting and customer relations; I don't have much stomach for the routine of accounting and am no good at customer relations. Beyond those, the openings are myriad, at higher and higher levels of abstraction:
1. Quantitative trading
2. Derivatives trading
3. Risk analysis
4. Portfolio management


In fact, I think the same problem shows up in organizations doing normalizations of ratings and whatnot. My problem is not with expecting all their employee ratings to roughly fit a normal curve; it is with tweaking them to fit the normal curve exactly at each reporting level. That's just a stupid, mechanical application of standards and rules.

Also, despite having a master's degree and a bachelor's in engineering, having read a lot of science publications, and definitely having studied for exams, I never really understood the significance of p-values. I don't remember studying them very well, and I don't think they were covered properly at any level of the statistics courses I took; I must look it up some other time.
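For future reference, here's a minimal, stdlib-only sketch (my own toy example, not from any course) of what a p-value is in the coin-toss setting: the probability of seeing data at least this extreme if the null hypothesis (a fair coin) were true.

```python
from math import comb

def p_value_heads(k, n):
    """One-sided p-value: chance of >= k heads in n tosses of a fair coin."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

# 8 or more heads in 10 fair tosses:
print(round(p_value_heads(8, 10), 4))  # 0.0547 -- just above the usual 0.05 cutoff
```

Note that this says nothing about the probability the coin is fair; that is the conditional-probability confusion from earlier all over again.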

(Obliquely related)
Probability by stories:

I came across this story-based form of teaching probability theory.
See here.

As I was reading along, my first thought on the initial read of the story was that it's awfully Bayesian-biased.
I soon realized I never studied probability formally, and definitely never beyond the dice/coin-toss examples.
I have read here and there (LW, NNT, EY, and other blogs) and knew there were three different interpretations,
but was never sure what those three were.

Anyway, the blog defines the 'classical' view as chalkboard situations, where we naively assume all outcomes are equally likely.
Now, that's a category NNT would have called dangerously academic. (I am somehow skeptical of this definition.)

The 'empirical' view relies on real-world frequencies.
(Based on the examples, it's more like projecting empirical observations from the past onto the future.)
Again, that sounds dangerously naive, simply because it's extrapolation with static/linear implicit assumptions.

The 'subjective' view aims to express the uncertainty in our minds, and is therefore harder to define.

I am now finding all of these views rather useless.
At this point I am not sure what the point of these theoretical differences is,
as they don't seem to have a single effect on practice (i.e., reasoning with probabilities).

After reading the rest of the series, I get why people are so divided on these interpretations.
But overall, I think these should be personal preferences, ultimately irrelevant to making a tight argument (which should be based on the theorems).
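To see how little the interpretations change the mechanics, here's a toy sketch (my own example, with an arbitrary seed) of the 'classical' and 'empirical' estimates of the same probability landing in roughly the same place:

```python
import random
random.seed(42)  # arbitrary seed, for reproducibility

# Classical view: assume six equally likely faces on a die.
p_classical = 1 / 6

# Empirical view: estimate the same probability from observed rolls.
rolls = [random.randint(1, 6) for _ in range(10_000)]
p_empirical = rolls.count(6) / len(rolls)

print(p_classical, p_empirical)  # the two agree to within sampling noise
```

The theorems (law of large numbers, Bayes, etc.) operate the same way regardless of which interpretation you prefer; only the story you tell about the numbers changes.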


Bollywood's new trend of movies/plots has a strong derived-from-the-U.S. feel.
There are zombie movies (Go Goa Gone) and teen flicks (Yeh Jawaani Hai Deewani).
I haven't come across any coming-of-age movies or teen spy movies, but I suspect they too are in order fairly soon.
There's a mass, mindless cultural import from the U.S. (probably of the '90s?).
Anyway, one good thing it might bring about is some change in the taboo around sex.
I am not sure exactly how it's going to play out, but I expect teen pregnancies making headlines in five years' time.
(Expect a Juno remake in ten years' time.)
I am very, very sure there's a part of this generation going the liberal route, and the drugs route*.
The perhaps-good side of all this is that if it plays out exactly along the U.S. path, we will see some trade booms,
specifically in luxury trips, higher-priced trekking gear, etc.
The downside is that a lot of this exploits the naive optimism of the coming early-adult generation.

Of course, as one can expect, love, meaning, and the search for these (rather haphazardly defined) concepts
are at the core of so many plots.

All of these are signs of a generation looking for external validation and sources of inspiration and motivation.

The irony being, the modern world is conspicuously short on all of these.

Typically, this movie (Yeh Jawaani Hai Deewani) has a love story and an almost-marriage; duh, so very stereotypical.
Hopefully not a happy ending with a lived-happily-ever-after story, but all signs suggest it's going to end like that.

For the record, I watch these for the guitar.

* — Go Goa Gone comes with a #drugsarebad moral. Banal, I must add. :-)

Python and Church numerals

This is a cool piece of Python code showing off Python's first-class functions, written by Vivek Haldar.

Perhaps not very useful (well, maybe in some long-integer handling, though not efficient there); nevertheless it's a cool piece of code.
It's cool because it illustrates two things:
1. Python's first-class treatment of functions and its lambda facility let you stack up a bunch of function calls while evaluating them only at the end.
2. Natural numbers can be represented as a set plus operators that add elements to it, or as functions composed repeatedly and applied to an initial value.

Here’s the code:

# Copied from Vivek Haldar's post
zero = lambda f: lambda x: x

succ = lambda n: lambda f: lambda x: f(n(f)(x))

one = succ(zero)

# note: the parenthesization matters -- m(f)(x) seeds the x for n's applications
add = lambda m: lambda n: lambda f: lambda x: n(f)(m(f)(x))

mult = lambda m: lambda n: lambda f: lambda x: n(m(f))(x)

exp = lambda m: lambda n: n(m)

plus1 = lambda x: x + 1

church2int = lambda n: n(plus1)(0)

def int2church(i):
    if i == 0:
        return zero
    return succ(int2church(i - 1))

def peval(s):
    print(s, '=', eval(s))


peval('church2int(add(succ(one))(succ(one)))')

c232 = int2church(232)
c421 = int2church(421)

peval('church2int(mult(c232)(c421))')
print("232*421 =", 232 * 421)

c2 = int2church(2)
c10 = int2church(10)

peval('church2int(exp(c2)(c10))')
print('2**10 =', 2 ** 10)

Now the coolest part is how he builds up expressions using lambdas (anonymous functions).
This is also the reason I tend to frown when someone calls Python an object-oriented language.

I mean, it's as much a function-oriented as an object-oriented language.
The two don't necessarily need to be exclusive, but calling a language object-oriented implies, or at least entails,
that design debates/arguments (the ones that don't have clear data points to swing them either way) tend to be decided/settled by appealing to that orientation.

I have used Python for about 5 years and hung around passively on the python-ideas, python-users, and python-core-mentorship mailing lists,
and I can't find that strong a bias in any of those places.
I can perhaps claim Python is more English-language-oriented than it is oriented to any other paradigm.
It's an issue I have gotten into arguments over with some interviewers in the past,
except very few even try to explain why they call it object-oriented.
(A sign they don't know what they are talking about?)

I know the Python homepage calls it an object-oriented language, but that doesn't mean the language is object-oriented.
Besides, I am not really worried too much about the orientation of a language, person, or cat.
I think the point really comes down to this: What properties are you implying/inferring when you say object-oriented?
What properties does your application need that depend on the choice of language?
Once you can answer some parts of those questions, you have an idea of what language to choose.
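To make the "as much function-oriented as object-oriented" claim concrete, here's a toy sketch (my own example) of the same counter written both ways; Python is equally comfortable with either:

```python
# Object-oriented: state lives in an instance attribute.
class Counter:
    def __init__(self):
        self.n = 0

    def bump(self):
        self.n += 1
        return self.n

# Function-oriented: state lives in a closure instead.
def make_counter():
    n = 0
    def bump():
        nonlocal n
        n += 1
        return n
    return bump

c = Counter()
f = make_counter()
print(c.bump(), c.bump())  # 1 2
print(f(), f())            # 1 2
```

Neither style is privileged by the language; which one a team reaches for is convention, not something Python itself settles.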

Transcendental Numbers


Transcendental numbers are defined as those numbers which are not algebraic; in other words, numbers that are not the root of any non-zero polynomial equation with rational coefficients.

Corollary 1: Since all rational numbers are algebraic, all real transcendental numbers are irrational.

Proven transcendental numbers:

1. e^a – where a is algebraic and non-zero
2. π
3. e^π
4. a^b – where a is algebraic but not 0 or 1, and b is irrational algebraic
5. sin(a), cos(a), tan(a) and their multiplicative inverses, for any non-zero algebraic number a
6. ln(a) – where a is algebraic and not equal to 0 or 1, for any branch of the logarithm function
7. W(a) – where a is algebraic and non-zero, for any branch of the Lambert W function
8. ∑_{n=1}^∞ β^(2^n) – where 0 < |β| < 1 and β is algebraic
9. ∑_{n=1}^∞ β^(n!) – where 0 < |β| < 1 and β is algebraic
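Item 9 with β = 1/10 gives the classic explicit example, Liouville's constant. Here's a small sketch (my own, using exact fractions) of its partial sums; the runs of zeros between the 1s grow factorially fast, which is what makes it too well-approximable by rationals to be algebraic:

```python
from fractions import Fraction

def liouville_partial(terms):
    """Partial sum of sum_{n=1}^inf (1/10)**(n!), i.e. item 9 with beta = 1/10."""
    total = Fraction(0)
    fact = 1
    for n in range(1, terms + 1):
        fact *= n  # running n!
        total += Fraction(1, 10 ** fact)
    return total

# Digits: 1s in decimal places 1, 2, 6, 24, ...; zeros everywhere else.
print(float(liouville_partial(4)))  # ~ 0.110001...
```

With exact `Fraction` arithmetic the pattern of 1s at the n!-th decimal places is visible directly, without floating-point noise.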

Engineering – rants

It seems there's the dawn of a new term called Financial Risk Engineering.
I am beginning to be annoyed by this habit of appending "engineering" to fields
that are too new to actually do any predictable engineering.

Ex: Software Engineering, Financial Engineering, Financial Risk Engineering, etc.
Talk to me about Software Engineering when the field has grown enough to
publish a 300-page engineering data book with values for specific design cases.

Values that have evolved as a result of empirical experience and tinkering.
Otherwise, you are just tinkering or hacking.
I prefer "tinkering", since "hacking" has changed connotations in current usage.

I mean, talk about safety factors: an idea of which safety factor
to pick in which use-case,
and where exactly to apply them within the subsets of a design case.

Only then can you talk about Engineering. Otherwise, you're just exaggerating
for the sake of, I don't know, marketing/selling/cheating, etc.

Engineering at its core is about reliability and robustness under uncertain conditions.
To paraphrase, it's about working around uncertainty (or entropy, if you prefer that framing).

The underlying point being: figuring out how to make things stable even when there's extra load.
It's about designing a crane that doesn't break and drop the steel girder
because some worker placed a bucket of wet concrete on the girder and forgot to remove it.

If you can't imagine that, google construction-site accidents, or
just ask your friendly neighbourhood construction worker for stories of injured friends.
But don't fucking form opinions/judgements before doing any of this.

It's about designing a CNC program that doesn't move the drill bit into the worker's eye socket
because the materials manufacturer added extra carbon and the steel is harder than it should be.

Don't append "Engineering" to a field just because that's what you want to be doing.
Try to actually do it; try to create the data books, etc.

And, oh, by the way, I was only assuming the design of one sub-part of the machines in those previous cases,
saying nothing about the reliability of the machine with all its parts together.
Last I checked (admittedly at least 10 years ago), that was still a greenhorn research field,
without a lot of conclusive evidence/heuristics.

(Disclaimer: I don't have any real-life experience in the design, manufacture, use, feedback, re-design cycle for manufacturing.
The closest I have is writing software programs. I am a sellout in the eyes of quite a few people.)

Ironically enough, in the traditional fields there seems to be a forgetting of these facts,
or at least a movement towards "let's patch stuff together, sell it, and then think about it."

Also, an interesting point arose while I was in an interview. Apparently MySQL (even with InnoDB),
though it provides ACID transactions for DML (CRUD on tables), does not make DDL atomic. Damn, that's painfully problematic.
The Oracle/MySQL speaker at PyCon 2012, Bangalore, conveniently omitted that while talking about MySQL 5.5/6's new features.
Caveat emptor indeed.

I bought a bicycle with gears, the hybrid model, and the designers wanted a shock absorber.

So they basically added a spring load to the core frame; the spring sits at the junction of the seat
and the pedal frame. This means that any force I put on the pedal is also offset by my weight on the spring,
making me put extra effort into pedalling overall. Now, one could say: just get a road bike.
True, but my point is that the designers did not bother testing the new frame design or its effects on pedal torque.

Welcome to the modern world where, as Daniel H. Pink puts it, "to sell is human," and therefore everyone sells.

To be fair, his book doesn't say everyone should sell, or that one should compromise other things for selling, but it does
observe that in the modern world a lot of jobs involve some form of selling.

The problem being: the designer(s) of that bike model would have an easier time selling if they put up charts
projecting added revenues from the presence of a shock absorber, rather than trying to display the revenue losses from increased pedal torque.

In this specific case, those revenue losses come via my recommendations (or lack thereof), and even negative reviews.

Moral of the story: via negativa proofs are harder to explain, and harder tools with which to convince people to act*
(as compared to inductive/extrapolative projections under naive assumptions).
Therefore, the simple act of trying to convince people can cause trouble in judging engineering quality.
So, if you want good engineering, try to keep the responsibility for selling engineering decisions separate from that of making them.
I am not sure about the best way to go about it, and am not qualified (in terms of experience, or the lack of it) to say.
A heuristic I can think of is:
make sure to ask what I am selling more often than whom I am selling to.

P.S.: If you found yourself agreeing eagerly with this rather meandering post, read this link.

* — I can speculate that the reasons might lie in some cognitive biases, but that's beside the point here.

ELF files – tentative symbols

Tentative symbols:
I was messing around with an .o object file, exploring it.
I was having trouble modifying the redis source, and ended up
doing an objdump -t on the object .o.
I noticed an entry 'bloom' with its section classified as *COM*.
So what exactly does the *COM* section mean in an ELF object file?
After googling and traipsing through a list of links and
Stack Overflow questions, I finally found this meaningful explanation.

Unfortunately, it seems to be documentation for Solaris, or at least written by the Solaris team.
In any case, it's meaningful. The basic reasoning: if the compiler finds a variable at file scope
that is neither initialized nor declared extern, it assumes the variable may be defined in another source file,
and therefore, while creating the ELF object file, it marks that variable as a COMMON block.
As it turns out, the name COMMON blocks originated in the linking of Fortran files;
it used to be a common practice in the days of Fortran-based program compiling and linking.

Probability of a biased coin


I started off reading this book on Bayesian methods. Fairly quickly, the book starts off with some code plotting posterior Bayesian probability estimates for a coin toss. For this style of book, it's just the obligatory example. I have also come across a set of stories designed to teach probability; you can find these here.

One of the exercises in that series has the teacher lying to the student about having a fair coin and demonstrating a series of heads. A question at the end of the story is: how many heads in a row do you have to see before you suspect some foul play?[1] Now, that's an interesting question, and while the text gives some answers, it's just a set of heuristics. I, on the other hand, wanted to see some specific graphs based on the bias. So I picked up the code and modified it. Here's the base code for an unbiased coin, written inside an IPython shell.

import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt

dist = stats.beta
ntrials = [0, 1, 2, 3, 4, 5, 8, 15, 50, 500]
data = stats.bernoulli.rvs(0.5, size=ntrials[-1])
x = np.linspace(0, 1, 100)

plt.figure(figsize=(11, 9))
for k, N in enumerate(ntrials):
    sx = plt.subplot(len(ntrials) // 2, 2, k + 1)
    if k in [0, len(ntrials) - 1]:
        plt.xlabel("p, probability of heads")
    plt.setp(sx.get_yticklabels(), visible=False)
    heads = data[:N].sum()
    y = dist.pdf(x, 1 + heads, 1 + N - heads)
    plt.plot(x, y, label="observe %d tosses,\n %d heads" % (N, heads))
    plt.fill_between(x, 0, y, color="#348ABD", alpha=0.4)
    plt.vlines(0.5, 0, 4, color="k", linestyles="--", lw=1)
    plt.legend()
    plt.autoscale(tight=True)

plt.suptitle("Bayesian updating of posterior probabilities", y=1.02, fontsize=14)
plt.tight_layout()

An unbiased coin means P(H) = P(T) = 0.5. And here is the plot (figure_1).

Now, let's go and change that distribution. The code draws Bernoulli samples via the scipy.stats package. See the line data = stats.bernoulli.rvs(0.5, size=ntrials[-1])? All we have to do is change that 0.5 to 0.6.
Here's the plot (figure_2_0.6).
Notice how the distribution starts shifting very visibly right after 4 tosses. Now, I would say that's about the first sign of trouble. Unfortunately, in a real scenario you won't be able to visualize the distribution, but you can have a heuristic. Let's assume you start off playing with a $1000[2] bet on each coin toss; around the 4th toss is when you say: I am suspicious, I should sum up the past results and reconsider. If all 4 are heads/tails (whichever way you lose), you should either stop or bet less.[3] By the 8th toss, if you have seen even 6 heads, you can just quit and call the other player a liar.
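That heuristic can be checked with a quick stdlib-only calculation (my own sketch) of how unlikely those runs would be under a genuinely fair coin:

```python
from math import comb

def chance_if_fair(heads, tosses):
    """Probability of seeing at least `heads` heads in `tosses`
    tosses of a genuinely fair coin."""
    return sum(comb(tosses, k) for k in range(heads, tosses + 1)) / 2 ** tosses

# The heuristics above, in numbers:
print(round(chance_if_fair(4, 4), 4))  # 0.0625 -- 4 straight heads
print(round(chance_if_fair(6, 8), 4))  # 0.1445 -- at least 6 of the first 8
```

So 4 straight heads happens about 1 time in 16 even with a fair coin, which is why "get suspicious" is the right reaction, and "call them a liar" still needs a few more tosses.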

If my understanding of casinos is right, they usually don't design any payoffs worse than this, so the rest of the pictures may be just academic. But I had plotted them, so here goes.

Now, let's try with 0.7.
If you look closely, you'll see that the first 5 tosses all resulted in heads.

Now, let’s try 0.8


Interesting graphs. See, even after 15 tosses there are 11 heads, while with 0.7, 15 tosses resulted in 13 heads. The power of randomness. This means that, for those of you with a high risk appetite, if the reward is big enough, you can afford to wait for 15 tosses before you review your decision.

Now, let's try 0.9 (figure_5_0.9).

Once again, even 15 tosses display 12 heads. If you are risk-averse, ditch it right away after 3-4 tosses.

Now, how is all of this useful or meaningful in real life?

Well, if you think of a company's quarterly profits as coin tosses, it's an interesting idea for forming an investment strategy. But never forget, this is a laboratory experiment: the distribution assumes the probability of heads/tails is static over time. Not only do company profits vary, but the probability of a company making a profit also varies, based on a whole lot of other factors.

[1] A biased coin, some hidden manipulation of the toss, false reporting of the result, etc.

[2] I know the amount sounds crazy and unrealistic.

[3] I don't know how much money you have. :)

Attention Economy/Attention metaphors


Ever since I read this post by Venkatesh Rao,
I have been thinking of the metaphors we use for our attention.
We live in an attention economy.[1]
Steve Jobs and his persona were a good example of it. Good as in someone who
exemplified the power of attention and what it can be put to use for. While I must
say I haven't used many Apple products, I don't think all of their products
deserve to be worshipped. I agree they have been the leaders with the iPad, and it's
cool technology all right (along with the iPod, of course). But they have always made
design decisions that alienate a nerd/power user like me.

Anyway, the point I am trying to make is that with the mouse, the Mac captured the
attention of a range of audience a lot bigger than the small programming and/or
nerd community. You can extend that to explain almost all of their products.
They were all designed to capture the majority of the total (Σ, as in summation) attention
of the consumer population. That was Steve Jobs's vision, and the Reality
Distortion Field** was his ability to capture whomever he was talking to or selling his
ideas to.

It struck me that the majority of us use the clock metaphor when it comes to our attention,
i.e., treating it as a replenishable resource.

1 – A phrase, if I remember right, coined about 4-5 years ago in some pop bestseller
(research required on which one it was).

** – I have no personal experience of meeting him, so my only idea of the
RDF is from reading and from watching "Pirates of Silicon Valley."

*** – My emotional core/moral value system definitely looks down on manipulating attention to profit or make money, but the rational side of me sees it as okay and normal in the case of what Steve Jobs did, just as we have magicians. I assume that if any of it had led to federal policy changes that in turn caused loss of lives, even the rational side would be furious.

PG’s post on startup trends.

Paul Graham is an optimist and believes total economic activity (productive output?) will only grow in the future. Hmm.
I am reading his post on trends in startups, and I still think his optimism is closer to the truth.
The reasoning he gives for his conclusion is via negativa, but in any case, the fact that I like optimism is most likely why I agree.
I also realize why I like his posts so much: he knows the basic math of statistics and calculus,
and, better, can express his opinions and ideas about trends in reasonably simple words. Though
I think his avoidance of equations makes his posts a little verbose, that is perhaps justified (nay, pragmatic), given the math hate/fear that prevails and the topics he writes on.

Like a lot of bad things, this didn’t happen intentionally.

He's definitely an optimist. :-P
I would have interpreted Series A investors wanting to invest for a minimal share of the company's stock as definitely malicious planning, but then I have no real experience.

You can’t fight market forces forever.

That's a great quote for any article on the economy.