Statistics — Tests of independence

Tests of independence:

The basic principle is the same as the ${\chi}^2$ goodness of fit test.

${\chi}^2$ tests:

  • Between categorical variables: the standard approach is to compute expected counts and look at the distribution of the sum of squared differences between the observed and expected counts, normalized by the expected counts.
  • Between numerical variables
  • Between a categorical and a numerical variable?

Null Hypothesis:

  • The two variables are independent.
  • Always a right-tail test
  • Test statistic/measure has a ${\chi}^2$ distribution, if assumptions are met:
  • Data are obtained from a random sample
  • Expected frequency of each category must be at least 5

Properties of the test:

  • The data are the observed frequencies.
  • The data is arranged into a contingency table.
  • The degrees of freedom are (number of rows − 1) × (number of columns − 1), i.e. the degrees of freedom of the row variable times the degrees of freedom of the column variable. It is not one less than the sample size; it is the product of those two degrees of freedom.
  • It is always a right tail test.
  • It has a chi-square distribution.
  • The expected count for a cell is the row total times the column total divided by the grand total (see the sketch after this list).
  • The value of the test statistic doesn’t change if the order of the rows or columns is switched.
  • The value of the test statistic doesn’t change if the rows and columns are interchanged (transpose of the table).
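
A minimal sketch of those last few properties, assuming scipy is available and using a made-up 2×3 contingency table (all counts below are hypothetical):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical observed counts: 2 row categories x 3 column categories
observed = np.array([[30, 10, 20],
                     [20, 25, 15]])

chi2_stat, p_value, dof, expected = chi2_contingency(observed)

# expected[i, j] = row_total[i] * column_total[j] / grand_total
print(expected)
print("dof =", dof)                         # (2 - 1) * (3 - 1) = 2
print("chi2 =", chi2_stat, "p =", p_value)
```

Transposing `observed` or reordering its rows/columns leaves `chi2_stat` unchanged, matching the last two properties above.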

Regularization

Based on a small post found here.

One of the standard problems in ML with meta-modelling algorithms (algorithms that run multiple statistical models over the given data and identify the best-fitting one, e.g. random forest or the rarely practical genetic algorithm) is that they might favour overly complex models that overfit the given training data but perform poorly on live/test data.

The way these meta-modelling algorithms work is that they have an objective function (usually the RMS of the error of the stats/sub-model on the data) and they pick the model that yields the lowest value of that objective. So we can just add a complexity penalty (one obvious idea is the degree of the polynomial the model uses to fit, but how does that work when comparing against exponential functions?), and the objective function becomes RMS(error) + complexity_penalty(model). A toy sketch of this is below.
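
Here is a toy sketch of that penalized objective on polynomial fits of increasing degree. The penalty weight `lam` and the use of "number of parameters" as the complexity measure are my own illustrative choices, not anything prescribed by the post:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.2, size=x.size)  # noisy toy data

def penalized_objective(degree, lam=0.05):
    """RMS error of a degree-`degree` polynomial fit, plus a complexity penalty."""
    coeffs = np.polyfit(x, y, degree)
    rms = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
    return rms + lam * (degree + 1)   # penalty grows with the number of parameters

for d in range(1, 10):
    print(d, round(penalized_objective(d), 3))
# The raw RMS keeps shrinking as the degree grows, but the penalized objective
# bottoms out at a moderate degree instead of at the most complex fit.
```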

 

Now, depending on the right choice of error function and complexity penalty, this can find models that perform slightly worse than more complex models on the training data but perform better in the live scenario.

The idea of a complexity penalty itself is not new. I don’t dare say ML borrowed it from scientific experimentation methods or anything, but the idea that a more complex theory or model should be penalized relative to a simpler one is very old. Here’s a better-written post on it.

Related Post: https://softwaremechanic.wordpress.com/2016/08/12/bayesians-vs-frequentistsaka-sampling-theorists/

 

Statistical moments —

Inspired by this blog from the PayPal team. Moment is a physics concept (or at least I encountered it first in physics, but it looks like it has been generalized in math to apply to other fields).

If you followed that wikipedia math link above, you’ll know the formula for a moment is

$\mu_n = \int_{-\infty}^{+\infty} (x - c)^n f(x)\, dx$

where
x — the value of the variable
n — the order of the moment (aka the nth moment, we’ll get to that shortly)
c — the center, or the value around which the moment is calculated
f(x) — the probability density function of the variable
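
For a discrete sample the integral becomes a sum over the data points (the summation-notation variant mentioned below). A quick sketch, with a made-up sample:

```python
import numpy as np

def moment(x, n, c=0.0):
    """n-th moment of the sample x about the point c: the mean of (x - c)**n."""
    x = np.asarray(x, dtype=float)
    return np.mean((x - c) ** n)

sample = np.random.default_rng(1).normal(loc=5.0, scale=2.0, size=10_000)

print(moment(sample, 1))                    # ~5: first moment about 0 is the mean
print(moment(sample, 2, c=sample.mean()))   # ~4: second central moment is the variance
print(moment(sample, 3, c=sample.mean()))   # ~0: third central moment (unnormalized skew)
```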

However, if you look at a few other pages and links, they ignore that part c, and of course use the summation symbol.**

The reason they don’t put up ‘c’ there is that they assume the moment is taken around the value 0. As we’ll see below, this is fine in some cases, but not always.

The other part, n, the order of the moment, is an interesting concept. It’s just raising the value to the nth power. To begin with, if n is even, the negative sign caused by the differences goes away, so every term is non-negative and $(x - c)^n$ grows monotonically with the distance from c.

I would usually argue that ‘c’ should be a measure of central tendency like the mean/median/mode, and that a sign of a fat-tailed (as opposed to thin-tailed) distribution is that the moments change wildly if you choose a different c.

The statsblog I linked above mentions something different.

Higher-order terms(above the 4th) are difficult to estimate and equally difficult to describe in layman’s terms. You’re unlikely to come across any of them in elementary stats. For example, the 5th order is a measure of the relative importance of tails versus center (mode, shoulders) in causing skew. For example, a high 5th means there is a heavy tail with little mode movement and a low 5th means there is more change in the shoulders.

Hmm, I wonder how or why? I can’t figure out how it can be an indication of fat tails (referred to by the phrase “importance of tails” in the quote above) with the formula they are using, i.e. when the formula doesn’t mention anything about ‘c’.

** — That would be the notation for discrete variables as opposed to continuous ones, but given that most real-world applications of statistics use sampling at discrete intervals, it’s understandable to use this notation instead of the integral sign.

Chi-Square — goodness of fit Test

Pre-Script: This was inspired/triggered by this post.

For a long time I have taken a “religiously blind”™ stance in the frequentists vs Bayesians debate, automatically (as evident in this post, for example). For the most part it was justified in the sense that I didn’t understand the magic tables, the comparisons, and how the conclusions were made. But I was also over-zealous and assumed the Bayesian methods were better by default. After realizing it, I wrote a blog post (around the resources I found on the topic). This process convinced me that while the standard objection, that frequentist statistical methods are used in blind faith by most scientists, may be true, they provide enough power in many situations where the Bayesian method would become computationally unwieldy, i.e. cases where a sampling-theory approach would still let me draw conclusions using rigorous methods based on uncertainty estimates, where Bayesian methods would fail.

So without further ado, here’s a summary of my attempt at understanding the Chi-Square test. Okay, first cut: Wikipedia. Ah, ok, abort mission, that route’s a no-go. Clearly the Wikipedia definition:

A chi-squared test, also referred to as a ${\chi}^2$ test (or chi-square test), is any statistical hypothesis test wherein the sampling distribution of the test statistic is a chi-square distribution when the null hypothesis is true. Chi-squared tests are often constructed from a sum of squared errors, or through the sample variance. Test statistics that follow a chi-squared distribution arise from an assumption of independent normally distributed data, which is valid in many cases due to the central limit theorem. A chi-squared test can be used to attempt rejection of the null hypothesis that the data are independent.

That has too many assumptions. It’s time to go back and read what the Chi-squared distribution is first, and maybe not just on Wikipedia, but also in that statistics textbook that I’ve been ignoring for some time now.

Ok, the definition of the Chi-squared distribution looks straightforward, except for the “independent standard normal” part. I know what independent means, but I have only a vague idea of standard and normal variables. More down the rabbit hole.
Ok, that wikipedia link points here. So it basically assumes the k variables are:

  • a) independent of each other,
  • b) drawn from a population that follows the standard normal distribution.

That sounds fairly rare in practice, but such variables can be created by choosing and combining variables wisely (aka feature engineering, in ML jargon). So ok, let’s go beyond that.

The distribution definition is simple, a sum of squares, according to wikipedia, but my textbook says it’s something like $\left(\frac{X - \mu(X)}{\sigma(X)}\right)^2$, summed over the k variables.
Hmm..
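
A tiny simulation of that definition, i.e. the sum of squares of k independent standard normal draws checked against the ${\chi}^2_k$ mean and variance (k and the sample size here are arbitrary choices of mine):

```python
import numpy as np
from scipy.stats import chi2

k = 4
rng = np.random.default_rng(3)
z = rng.standard_normal((100_000, k))   # k independent standard normal draws per row
samples = (z ** 2).sum(axis=1)          # sum of squares -> chi-squared with k dof

print(samples.mean(), chi2.mean(k))     # both ~k = 4
print(samples.var(), chi2.var(k))       # both ~2k = 8
```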

The textbook talks about Karl Pearson’s Chi-Square test, so I’ll pick that one to delve deeper into.
According to the textbook, Karl Pearson proved* that the sum over categories of $\frac{(\mathrm{Observed} - \mathrm{Expected})^2}{\mathrm{Expected}}$ follows a Chi-Squared distribution.

The default null hypothesis, or H0, in a Chi-Square test is that there is no real difference between the observed and the theoretical/expected (according to your theory) values.
So those magic comparison values are really just the critical values of the ideal Chi-square distribution at some significance level p, and the test is just checking whether your calculated value is less or more than that.
The conclusion comes from whether the calculated value is less. If it’s less, the differences between observed and expected are consistent with chance at the given significance level, i.e. we don’t reject H0. Or, to write it loosely in Bayesian terms, P(Observations | H0) == P(chance/random coincidence).**
If it’s more, here’s what I think it means: P(Observations | H0) != P(chance/random coincidence), and we are p%*** confident about this assertion. A minimal numeric sketch follows.
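
A small sketch of those mechanics with scipy, on made-up die-roll counts (the observed frequencies below are hypothetical):

```python
from scipy.stats import chisquare, chi2

observed = [18, 22, 16, 25, 19, 20]   # hypothetical counts from 120 die rolls
expected = [120 / 6] * 6              # fair-die expectation: 20 per face

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
critical = chi2.ppf(0.95, len(observed) - 1)   # the "magic comparison value" at 5%

print("statistic =", stat, "p-value =", p_value)
print("reject H0 at 5%?", stat > critical)
```

Comparing the statistic against the critical value and comparing the p-value against the significance level are the same decision, just read off the ${\chi}^2$ distribution from two different directions.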

P.S.1: At this point the textbook goes into the conditions under which a Chi-Squared test is meaningful; I’ll save that for later.
P.S.2: Also, that number k is called the degrees of freedom, and I really need to figure out what it means in this context. I know what it means in the field of complexity theory and dynamical systems, but in this context I’ll have to look at the proof, or at least the areas of math the proof draws upon, to find out. #TODO for some time, another post.

* — According to the book, the Chi-Squared test does not assume anything about the distribution of the observed and expected values and is therefore called a non-parametric or distribution-free test. I have a difficult time imagining an approach to a proof that broad, but then I’m not much of a mathematician; for now I’ll take this at face value.

** — I almost put 0.5 here before realizing that’s only for a coin toss with a fair coin.

*** — The interpretation of what this p-value actually means seems to be a thorny issue, so I’ll reserve it for a different post.

Bayesians vs Frequentists(aka sampling theorists)

This is a long-standing debate/argument and, like most polarized arguments, both sides have some valid and good reasons for their stand. (There goes the punchline/TL;DR.) I’ll try to go a few levels deeper and explain the reasons why I think this is something of a fake argument. (Disclaimer: I am just a math enthusiast and a (willing-to-speculate) novice. Do your own research; if this post helps as a starting point for that, I’d have done my job.)

  • As EY writes in this post, Bayes’ theorem is a law that is more general and should be observed over whatever frequentist tools we have developed.
  • If you read the original post carefully, he doesn’t mention the original/underlying distribution, guesses about it, or confidence intervals (see the calibration game).
  • He points to a chapter (in the addendum) here.
  • Most of the post otherwise is about using tools vs using a general theory, and how the general theory is more powerful and saves a lot of time.
  • My first reaction to the post was: but obviously there’s a reason those two cases should be treated differently. They both have the same number of samples, but different ways of taking the samples. One sampling method (the one that samples until 60% success) is a biased way of gathering data (see the sketch after this list).
  • As a different blog and some comments point out, if we’re dealing with robots (deterministic, algorithmic data-collectors) that take data in a rigorous, deterministic, algorithmic manner, the Bayesian priors are the same.
  • However, in real life it’s going to be humans, who’ll have a lot more decisions to make about whether to consider a data point or not (like, for example, what stage a patient should be at before being considered a candidate for the experimental drug).
  • The point I am interested in making, however, is related to the known-unknowns vs unknown-unknowns debate.
  • My point being: even if you have a robot collecting the data, if the underlying nature of the distribution is an unknown-unknown (or depends on an unknown-unknown factor, say location, as some diseases are more widespread in some areas), two collectors can gather the same results even though they were seeing different local distributions.
  • A related point is that determining the right sample size, so as to be confident about the representativeness of the sample, is a harder problem in a lot of cases.
  • To be fair, EY is not ignorant of the problem described above. He even refers to it a bit in his “0 and 1 are not probabilities” post here. So the original post might have over-simplified for the sake of rhetoric, or simply because he hadn’t read The Red Queen.
  • The Red Queen details a bunch of evolutionary theories, eventually arguing that the constant race between parasites and the host immune system is why we have sex as a reproductive mechanism and why we have two genders/sexes.
  • The medicine/biology example is a much more complex system than it seems, so this mistake is easy to make.
  • Yes, in all of the cases above the Bayesian method (which is simpler to use and understand) will work, if the factors (priors) are known before doing the analysis.
  • But my point is that we don’t know all the factors (priors) and may not even be able to list all of them, let alone screen them and find the prior probability of each.
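
Here is the sketch of the stopping-rule point referenced in the list above. Suppose two experimenters both end up reporting 9 successes and 3 failures; one fixed the number of trials at 12 in advance, while the other kept sampling until the 3rd failure. These counts are the classic textbook illustration, not numbers taken from EY’s post. The likelihoods are proportional (both are $\theta^9(1-\theta)^3$ up to a constant), so a Bayesian analysis with the same prior gives the same posterior either way, yet the frequentist p-values against H0: success probability = 0.5 differ:

```python
from scipy.stats import binom, nbinom

successes, failures = 9, 3
n = successes + failures

# Experimenter A fixed n = 12 trials in advance.
p_fixed_n = binom.sf(successes - 1, n, 0.5)        # P(>= 9 successes in 12 trials)

# Experimenter B kept sampling until the 3rd failure.
# scipy's nbinom(n, p) counts "failures before the n-th success"; relabel the
# stopping event (a failure) as scipy's "success", so the count is our successes.
p_stop_rule = nbinom.sf(successes - 1, failures, 0.5)  # P(>= 9 successes before 3rd failure)

print(p_fixed_n)    # ~0.073 -> not significant at the 5% level
print(p_stop_rule)  # ~0.033 -> "significant" at 5%, for the same observed data
```

Whether those differing p-values are a defect or exactly the right behaviour is, of course, what the two camps disagree about.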

With that I’ll wind this post down, but I’ll leave you with a couple more posts I found around the topic that seem to dig into more detail (here and here).

 

P.S: Here are some funny Chuck Norris-style facts about Eliezer Yudkowsky (which I happened upon when trying to find the original post and was not aware of before composing this post in my head). And here’s an xkcd comic about frequentists vs Bayesians.

UPDATE-1 (5-6 hrs after original post conception): I realized my disclaimer doesn’t really give you the Bayesian prior needed to judge my post. So here’s my history with statistics: I’ve had trouble understanding the logic/reasoning/proofs behind standard (frequentist?) statistical tests, and was never a fan of rote application of the steps. So I am still trying to understand the logic behind those tests, but today, if I were to bet, I’d rather bet on results from the Bayesian method than from any conventional methods.**

UPDATE-2 (5-6 hrs after original post conception): A good example might be the counter-example, i.e. given the same data (in this case just the frequencies of a distribution, nothing else: no mean, variance, kurtosis or skewness), show that the Bayesian method gives different results based on how the data was collected while the frequentist method doesn’t. I’m not sure that’s possible, though, given the number of methods frequentist/standard statistics uses.

UPDATE-3 (a few weeks after original writing): Here’s another post about the difference in approaches between the two.

UPDATE-4 (A month or so after): I came across this post, which mentions more than two buckets, but obviously they are not all disjoint sets (buckets).

UPDATE-5(Further a couple of months after): There’s a slightly different approach to splitting the two cultures from a different perspective here.

UPDATE-6: A discussion in my favourite community can be found here.
** — I might tweak the amount I’d bet based on the results from it.

Central Tendency — measures

The 3 common measures of central tendency used in statistics are :

  • 1. Mean
  • 2. Median
  • 3. Mode

There are of course other measures, as the Wikipedia page attests. However, the inspiration for this post was yet another of J. D. Cook’s blog posts.

Note that all three of these, and the other measures, do obey the basic rules of measure theory.

The point being: what you choose to describe your central tendency is key, and should be decided based on what you want to do with it. Or, more precisely, what exactly you want to optimize your process/setup/workflow for; based on that, you’ll have to choose the right measure. If you read the post above, you’ll understand that:

Note that even within “mean” there are multiple types of mean. For simplicity I’ll assume mean means arithmetic mean (within the context of this post).

  • Mean — The mean is a good choice when you want to minimize the variance (aka the squared distance, or second statistical moment about the central tendency measure). That’s to say, your optimization function is dominated by squared-distance-from-the-measure terms. Think of lowering mean squared error, and how it’s used in straight-line fitting.
  • Median — The median is more useful if your optimization function has distance terms, but not squared ones. So this is the choice when you want to minimize the (absolute) distance from the central tendency.
  • Midrange — The midrange is useful when your function looks like max(distance from the central measure). A numerical sketch of all three follows below.
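
A quick numerical check of those three claims on a made-up sample, by brute-force searching for the point that minimizes each loss:

```python
import numpy as np

data = np.array([1.0, 2.0, 2.5, 3.0, 10.0])   # hypothetical measurements

candidates = np.linspace(data.min(), data.max(), 10001)
sq_loss  = [(np.mean((data - c) ** 2), c) for c in candidates]
abs_loss = [(np.mean(np.abs(data - c)), c) for c in candidates]
max_loss = [(np.max(np.abs(data - c)), c) for c in candidates]

print("argmin squared loss :", min(sq_loss)[1],  "vs mean     =", data.mean())
print("argmin absolute loss:", min(abs_loss)[1], "vs median   =", np.median(data))
print("argmin max loss     :", min(max_loss)[1], "vs midrange =", (data.min() + data.max()) / 2)
```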

If most of that sounded too abstract, here’s a practical application I can think of right away. Imagine you’re doing performance testing and optimization of a small API you’ve built. I don’t want to go into what kind of API or technology is behind it, so let’s just assume you want to run it multiple times, calculate a measure of central tendency of the run times, and then try to improve the code’s performance (with profiling + different libraries/data structures, whatever…). Which measure of central tendency should you pick?

  • Mean — Most engineers would pick the mean, and in a lot of cases that’s enough, but think about it: it optimizes the variance of the run/execution time. That is important and useful to optimize in most cases, but in some cases it may not be what matters.
  • Midrange — An example is if your system is a small component of, say, a high-frequency trading platform, and the consumer of it has a timeout and fails if your API times out (aka your API is mission-critical, it simply cannot fail). Then you want to make sure that even in the slowest case your program completes; if the worst-case runtime is what you want to lower, the midrange (which corresponds to minimizing the maximum deviation, as above) is the measure to watch. (Note this is still a trade-off against not lowering the average/mean case, just like a hard choice.)
  • Median — This is very similar to the mean, except it doesn’t really care about variance. If you’re picking the median, your optimized program will tend to have the best performance in the typical run/case/dataset.
  • Mode — Well, this is an interesting case. Think about it: even in the previous timeout example this could be useful. Here it goes: suppose your API is not mission-critical (i.e. if it fails, the overall algorithm will just throw out that data item and proceed with the other data sources), and you want to maximize the number of times your program finishes within the timeout, i.e. you’re purely measuring the number of times you return a value within the timeout period and you don’t care about the worst-case scenario. Then pushing the most common run time (the mode) below the timeout is what matters.

There are other measures as well, such as the harmonic mean discussed below.

Additionally, you can take the mean of functions (non-negative ones too). See J. D. Cook’s blog again.

Harmonic Mean

This is a follow-up post to the geometric mean post.

What exactly is the harmonic mean?
Well, to summarize the wikipedia link, it is basically a way to average rates.

Continuing with the laptop example, let’s see how to compare the laptops in terms of best bang for the buck.

Once again we have three attributes, and we divide each attribute value by the cost of the laptop. This gives us (rather approximately) how much GB/Rupee* we get.

Then we apply the formula for the harmonic mean, i.e. $\frac{3}{1/x_1 + 1/x_2 + 1/x_3}$. A small sketch of the computation is below.
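
A minimal sketch of that ranking, with made-up specs and prices (these numbers are hypothetical, not the ones from the original spreadsheet), using Python’s built-in statistics.harmonic_mean:

```python
from statistics import harmonic_mean

# Hypothetical machines: (CPU in MHz, disk in GB, RAM in GB, price in rupees)
machines = {
    "laptop_a": (2400, 512, 8, 60000),
    "laptop_b": (2800, 1024, 16, 90000),
    "raspberry_pi_2_plus_sd_card": (900, 32, 1, 5000),
}

def bang_for_buck(cpu, disk, ram, price):
    # Divide each attribute by the price so every value is a rate ("per rupee"),
    # which is the kind of quantity the harmonic mean is meant to average.
    rates = [cpu / price, disk / price, ram / price]
    return harmonic_mean(rates)

for name, spec in sorted(machines.items(), key=lambda kv: -bang_for_buck(*kv[1])):
    print(name, round(bang_for_buck(*spec), 6))
```

With these made-up numbers the Raspberry Pi still ranks first, largely because the harmonic mean is dominated by the smallest rate (here RAM per rupee), where the Pi still edges out the laptops.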

Just for the fun of argumentation, I threw in a Raspberry Pi 2 plus the cost of a 32 GB SD card.
And of course** the Raspberry Pi 2 comes out on top of the harmonic mean (most bang for the buck) ranking.

Note how I divided the attributes by cost. I did that because the harmonic mean doesn’t make sense to apply to values that are not rates (aka, for the engineers, the units have to have a denominator).

Also note that the Raspberry Pi 2 is lower on both the arithmetic and geometric means of the raw attributes (CPU speed, disk space, RAM), but higher when it comes to value per price. That’s one reason to use the harmonic mean of rates (per price, per time, etc.) when comparing similar purchases with multiple attributes/values to evaluate.

Now, so far these are all individual attributes that don’t account for other factors.

Like, for example, Apple’s Retina display technology. Or, for that matter, CPU cache, AMD vs Intel processors, multithreading support, number of cores, etc.

All of these could be weighted, if you know how to weight them. Weighting them right would require some technical knowledge, and reading up on reviews of products with those features in AnandTech’s review/comparison blog posts.

* — If you look closely at the Excel sheet, I multiplied the GHz by 1000 to get MHz, so the numbers are on a comparable scale.

** — Of course, because it doesn’t come with a monitor, keyboard, or mouse. It is simply a bare board (PCB).