10,000 hr practice study.

Once again, as I was traipsing around the internet, I came across yet another reference to the 10,000 hours of deliberate practice paper, quoted yet again. (Thankfully, it was arguing more for the deliberate practice part than for the 10,000 hours.) Nevertheless, I got sick of reading and hearing about it, and decided once and for all to read the actual damn paper and make up my own mind. The following are the notes I made for the purpose, cleaned up a little.

Here’s a summary of what I found by reading the copy found here: http://www.mockingbirdeducation.net/uploads/5/4/0/7/5407628/ericsson_1993.pdf

I’ll skip most of the preface and survey parts, as they were too much and too hard to actually form an opinion on, without getting into a rabbit-hole search of references and spending many days trying to read and digest them.

Will just observe that this was a paper from 1993.
There were two studies. This is the first one:
Study I:
Experimental Method:
Subjects:
Student musicians (violinists): from a university in Germany.
1. Professors recommended 14 of them as potential professionals. 10 of those were studied (4 were excluded for poor German, and pregnancy).
(Tagged as best violinists)
2. The control group was another 10 students from a different university with lower admission standards, again selected based on professors recommending them as good violinists. (Tagged as music teachers)
They matched sex and age across these two groups.
3. Another control group of 10 middle-aged violin professionals

Note: This seems to fall under what can be called stratified random sampling.

Data Collection:
1. 3 sessions of interviews with the subjects.
a, autobiographical questions + a question to estimate the hours of practice per year since the subject started playing.
A pilot study had helped form a taxonomy of 12 music-related and non-related activities. This taxonomy was presented and explained. Then the subjects were asked to rate each of these activities/categories (some were non-violin players in the control group) on 3 dimensions:
1. How relevant it was in terms of performance improvement.
2. How much effort was required to perform it?
3. How enjoyable it was regardless of the result.

b, Questions about practice and concentration, + the previous day’s activities and subjective estimates.
Study II:

Results:
What I interpret from the graphs: between the best violinists and the teachers there is quite a bit of difference in the number of hours of practice; almost no difference between the best violinists and the good violinists (professional violinists).
When solo playing time/practice is plotted against age since they started playing, the teachers fall behind everyone else very clearly, but there is not much difference between the others.

Sense of agency vs Hierarchical control

Levels of Hierarchy:
Two levels are studied in this experiment.
1. Perceptual-motor level
2. Goal level

* – Sense of Agency
** – Expected from previous trials and practice trials.

Inferences from previous Research:
Measures of agency:
All of these should be validated/verified against measure theory principles.
2 Types:
Implicit — intentional binding is most associated (aka correlated??) with measures like
efference, sensory feedback, causal feedback and intentionality (??).
Explicit — explicit rating of authorship

1. Intentional binding —
a, self-reported temporal closeness between an action (in this case self-generated: shooting at some target)
and its feedback (in this case a circle flashed at the aimed location).
b, It has been shown that movement induced by motor cortex stimulation doesn’t affect intentional binding.
(It might be affected in long-term meditators, perhaps??)
c, Stronger when the subject believes that he has control over the environment.
(People managers/executives etc., wonder about the implications? :-))

Experiment Design:
Base Paradigm:
Subject task (explicit goal):
1. Aim and shoot at a target in a noisy set of visual stimuli.
2.
Variables:
1. Amount of noise
2. Interval between the trigger press and target stimulus appearance.
3. Estimation of the interval between trigger press and target appearance.
4. Self-reported control
5.

2. SOA* — manipulating/changing the time lapse between the subject/user action and the expected** response

Basic hypothesis/argument:
The concept of control in the perceptual-motor (action) event loop provides a basic framework to understand explicit and implicit sense of agency.

Papers to read:
1. Event-control framework for sense of agency (Jordan, 2003)

Statistical Results:
I’m not qualified to judge whether their choice of ANOVA is right or not. Similarly, I won’t try to make sense of the statistical results and interpret them, as I never learnt anything beyond first-order statistics. Am working on it though, so later.
Conclusion:
Overall, I learnt a lot from the background, theory, and summaries of some of the references.
They were all new and interesting to me. But I was left hoping I had picked a paper which had concrete (negative or positive) results on a specific (tight?) hypothesis.

A case for DVORAK layout

Peter Norvig recently published some word and letter frequency data.

See original here:

I did a quick overview, and on looking at the letter frequencies,
I immediately interpreted them as favourable for Dvorak.
Anyway, here’s the table for the top 50% of letters (13 of 26), ordered by frequency of occurrence.

Letter Count Percent
E 445.2 12.49%
T 330.5 9.28%
A 286.5 8.04%
O 272.3 7.64%
I 269.7 7.57%
N 257.8 7.23%
S 232.1 6.51%
R 223.8 6.28%
H 180.1 5.05%
L 145.0 4.07%
D 136.0 3.82%
C 119.2 3.34%
U 97.3 2.73%

Now these 13 letters account for a total frequency of 84.05%.
That’s a 0.84 probability the letter you type is one of these.

Let’s see how many of these are in the home row.
For QWERTY:
A, S, H, L, D
8.04, 6.51, 5.05, 4.07, 3.82
Total = 27.49%

Or there’s a probability of 0.2749 that you’ll hit the key
without moving from the home row.

For Dvorak (US right-handed layout):
E, T, A, O, I, N, S, H, D, U
12.49, 9.28, 8.04, 7.64, 7.57, 7.23, 6.51, 5.05, 3.82, 2.73
Total = 70.36%
That’s a probability of 0.7036 that you’ll hit the key
without moving from the home row.
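
Here’s a quick Python sketch of the same calculation, using the frequencies from the table above (the home-row sets are just the letter keys on each layout’s home row):

```python
# Letter frequencies (percent) for the top 13 letters, from the table above.
freqs = {
    'E': 12.49, 'T': 9.28, 'A': 8.04, 'O': 7.64, 'I': 7.57, 'N': 7.23,
    'S': 6.51, 'R': 6.28, 'H': 5.05, 'L': 4.07, 'D': 3.82, 'C': 3.34, 'U': 2.73,
}

# Home-row letter keys of each layout (QWERTY: ASDFGHJKL, Dvorak: AOEUIDHTNS).
home_rows = {
    'QWERTY': set('ASDFGHJKL'),
    'Dvorak': set('AOEUIDHTNS'),
}

print(f"Top-13 total: {sum(freqs.values()):.2f}%")            # 84.05%
for layout, keys in home_rows.items():
    coverage = sum(p for letter, p in freqs.items() if letter in keys)
    print(f"{layout} home-row coverage: {coverage:.2f}%")      # ~27.49% vs ~70.36%
```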

Now, this is just a peek.
In fact, one can argue that key sequences have a bigger impact on typing speed, stamina, and accuracy.
For those, we can take a look at bigram, trigram, .. N-gram sequences and frequencies.
Only in those cases does the analysis get a little more complicated,
i.e.: is it better to have 2-letter sequences on one hand or across two hands?
Is it better to have contiguous sequences (2, 3, 4, …) across rows or in the same row?
etc…

Intuitively, some of these questions seem trivially answerable,
like: 2-letter sequences/bigrams are better if they are distributed across hands,
and the fewer rows an N-gram spans, the better, etc.

But a little dose of skepticism is in order. Anyway, will take a shot at that more advanced analysis later.
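
As a rough starting point, here’s a sketch of how one could measure hand alternation for bigrams under each layout. The bigram counts below are made-up placeholders (not Norvig’s data); the hand assignments are the usual left/right split of each layout’s letter keys.

```python
# Sketch: what fraction of bigram occurrences alternate hands under each layout?
# NOTE: the bigram counts below are made-up placeholders for illustration only.

LEFT_HAND = {
    'QWERTY': set('QWERTASDFGZXCVB'),   # left-hand letter keys on QWERTY
    'Dvorak': set('PYAOEUIQJKX'),       # left-hand letter keys on Dvorak
}

def alternation_rate(bigram_counts, left_hand_letters):
    """Fraction of bigram occurrences whose two letters are typed by different hands."""
    total = sum(bigram_counts.values())
    alternating = sum(
        count for (a, b), count in bigram_counts.items()
        if (a in left_hand_letters) != (b in left_hand_letters)
    )
    return alternating / total

# Placeholder bigram counts (hypothetical numbers, just to exercise the function).
sample_bigrams = {('T', 'H'): 100, ('H', 'E'): 90, ('I', 'N'): 60,
                  ('E', 'R'): 55, ('A', 'N'): 50}

for layout, left in LEFT_HAND.items():
    print(layout, round(alternation_rate(sample_bigrams, left), 3))
```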

Another relevant link can be seen here.
It takes as input any text you care to copy-paste (I recommend your most typical typing session)
and plots a heat map of the keys for different layouts.

Power vs motor neuronal mirroring

Power Changes How the Brain Responds to Others, by Jeremy Hogeveen, Michael Inzlicht, and Sukhvinder S. Obhi

The paper can be found here.

Disclaimer: This is a very quick, cheap attempt at science journalism. All conclusions are very, very suspect.

These people used a TMS (transcranial magnetic stimulation) device and used priming stimuli to prime for high-, low-, and neutral-power conditions.

They used the TMS to cancel out neuronal activity in the vertex and the left hand area of the left primary motor cortex.
They also used a couple of surface electrodes (left hand area) to measure MEPs (motor evoked potentials: electromyography measurements triggered by the TMS stimulation).

They delivered the TMS at the point where the participants were watching a video of a hand squeezing a ball, at the maximum squeeze.

They observed that the high-power-primed individuals showed lower MEP measures.

The priming was done thus:
    The participants were asked to write about a previous experience they had, where either they had power over someone or someone had power over them.
    The neutral power group participants were asked to write about their previous day.

So what does all this mean? Well, I’ll leave it for you to decide. But I will just say that if their stat tests check out (right test for this method, enough sample size, etc.; I am not qualified to judge that stuff), this means that the mental state associated with having power is also associated with less motor neuron mirroring activity.

UPDATE:
There’s an interesting experiment that all those bosses can try, namely mentally visualizing themselves doing the exact job they are delegating. I don’t know whether it will help or cause problems, but I would bet that bosses who have high expertise in the job they are delegating will get good, positive results from doing this exercise before delegating. Unfortunately, testing it might be highly troublesome, because good blinding will need bosses to explicitly do it in some cases and not in others. I suspect that, for all their intelligence, 95% (of bosses/humans) might lack the executive control to do these experiments deliberately. But that might be the only thing that can avoid a Douglas Adams’ world style future.

information content — Wordnet

I was trying to rewrite (badly needed) my M.Sc. thesis, and had to go traipsing through the paper trails involved in information content calculations.
It turns out it is calculated as -log(P(Concept)) (as per this paper).
Here P(x) ==> probability of x.

And in the case of the nltk.wordnet package, the probability is calculated directly with reference to a given corpus, using the straightforward ratio n(Concept) / N(Concepts in corpus).
n(Concept) — number of occurrences of the Concept in the corpus
N(Concepts in corpus) — size of the corpus in terms of Concepts/synsets.
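
As a minimal sketch of that formula (just the ratio and the negative log, not the actual nltk implementation; the counts are hypothetical):

```python
import math

def information_content(concept_count, total_concepts):
    """IC(c) = -log(P(c)), with P(c) = n(Concept) / N(Concepts in corpus)."""
    p = concept_count / total_concepts
    return -math.log(p, 2)  # log base 2, so the result comes out in bits

# Hypothetical counts: a rare concept vs a common one, in a corpus of 10,000 concept occurrences.
print(information_content(50, 10_000))     # ~7.64 bits (rare   -> high information content)
print(information_content(5_000, 10_000))  # 1.0 bit    (common -> low information content)
```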

Now, my innately (insanely?) curious brain goes off on why those specific choices, i.e.: the negative log and the probability calculation.

1. The -ve is mostly convenience. Since all probabilities are less than 1, if we use log to base 2 (which I presume is true in this case), the log results will always be negative, so a -ve sign makes sense.

2. Log: now here’s the interesting part. A log is essentially the inverse of an exponential function, and exponential functions blow up/magnify relative differences (aka first-order differences), which means a log will reduce them. So in effect, if two points on the probability distribution are closer (compared to another two), they will move even closer; see the quick numeric check after this list.

3. P(x) — this one’s rather straightforward: as it is a simple ratio, it gives a good idea of which concepts are most often used in a given corpus.
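
Here’s that quick numeric check for point 2 (the two probabilities are made-up; the point is only how their gap shrinks in log space):

```python
import math

# Two nearby probabilities, about 10% apart in relative terms.
p1, p2 = 0.010, 0.011

rel_diff_raw = (p2 - p1) / p1                 # 0.10  -> 10% apart as probabilities
ic1, ic2 = -math.log2(p1), -math.log2(p2)
rel_diff_log = abs(ic1 - ic2) / ic1           # ~0.02 -> ~2% apart as information content

print(f"relative difference in P:  {rel_diff_raw:.1%}")
print(f"relative difference in IC: {rel_diff_log:.1%}")
```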

Introspection

Clearly the computing substrate I have grown up with uses algorithms with this priority:
1. Recursion
2. Pattern matching (probably a side-effect of strong memory)
3. Deductive reasoning
4. Inductive reasoning

Note inductive reasoning coming last?? That may be the reason I haven’t developed many products so far.. I should include probabilistic reasoning, but am not sure where that comes in…