Technical Debt in ML models

Technical debt shows up in ML systems too:

Complex models erode boundaries

* -- Entanglement of features and feature distributions
* -- Correction cascades creating chains of dependent models and dependency hell
* -- Undeclared consumers of the model's predictions

Data dependencies are costlier than code dependencies

* -- Unstable Data Dependencies: unstable input data, signals, or predictions from a previous model (for ex: in [speech to text](https://github.com/kaldi-asr/kaldi), the syllable sequence is both a prediction and a signal/input to the word-level language model)
* -- Underutilized Data Dependencies (these creep in via Legacy Features, Bundled Features, Correlated Features, etc.)
* -- Static analysis of data dependencies can help mitigate these issues to some extent (see the sketch after this list)
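
The static-analysis point is easiest to see with a toy example. The sketch below is my own illustration (the registry, feature names, and helper functions are all hypothetical, not from any real tool): keeping an explicit record of which model consumes which features lets you flag unused features and undeclared consumers mechanically.

```python
# Hypothetical sketch of "static analysis" of data dependencies: nothing here
# is a real library, it just shows what such a check could look like.

DECLARED_DEPENDENCIES = {
    "acoustic_model": ["audio_frames", "speaker_id"],
    "word_language_model": ["syllable_predictions"],
}

AVAILABLE_FEATURES = {
    "audio_frames", "speaker_id", "syllable_predictions", "legacy_pitch_feature",
}


def underutilized_features():
    """Features produced somewhere but consumed by no declared model."""
    used = {f for feats in DECLARED_DEPENDENCIES.values() for f in feats}
    return AVAILABLE_FEATURES - used


def undeclared_consumers(observed_reads):
    """Consumers observed reading features at runtime but missing from the registry."""
    return {c: feats for c, feats in observed_reads.items()
            if c not in DECLARED_DEPENDENCIES}


print(underutilized_features())                                    # {'legacy_pitch_feature'}
print(undeclared_consumers({"dashboard_v2": ["syllable_predictions"]}))
```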

Feedback loops

* -- Direct Feedback loops (in speech to text, these can come from changes in language and pronunciation)
* -- Hidden Feedback loops (these can come from not understanding the business use-case as explicitly as possible, or from changes in the nature of the use-case and the demand itself; for ex: user expectations changing after they get used to the tool)

ML-system anti-patterns:

* -- Glue code -- in general, things like data-cleaning code, code connecting model predictions to the business use-case, etc.
* -- Pipeline Jungles -- a huge mess of pre-processing of audio files: different formats, different language and accent detection (which can itself be cascaded models), etc.

* -- Dead code in experimental codepaths: probably left over from a bunch of experimental models, different NN architectures, different custom models, etc.

* -- Abstraction Debt: no clear standard abstraction for ML models (the way an RDBMS is for databases)

Common Smells:

* -- Plain-old-data type smells: the code assumes certain data types, but the input stream keeps changing
* -- Multiple-language smell: using multiple programming languages in a project causes problems/issues at the interfaces between them
* -- Prototype smell: the prototype makes invalid assumptions, and whatever validation was done for it does not hold outside the small audience it was tested on

Configuration Debt:

* -- Wide range of configurable options: input data stream segregation/categorization, model size and dependencies tuned to the latency/throughput of the predictions, model choice, input features, data summarization methods, verification methods, etc.
* -- Without configuration management, the system can become a black box that is impossible to debug and therefore to improve. While these issues are similar to those in common software applications, they are doubly problematic in ML, since many models are treated as black boxes by default and are already hard to reason about even without configuration issues (a rough sketch follows this list).
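
To make that concrete, here is a minimal sketch (my own, with made-up field names, not a real framework): keep a pipeline's knobs in one explicit, validated, serializable object that can be versioned and diffed alongside the model.

```python
# Minimal sketch: one explicit, validated config object instead of options
# scattered across scripts and environment variables.
from dataclasses import dataclass, asdict
import json


@dataclass(frozen=True)
class PipelineConfig:
    model_name: str = "speech_gmm_baseline"    # model choice (hypothetical name)
    n_components: int = 8                      # model size
    window_ticks: int = 100                    # data summarization window
    max_latency_ms: int = 50                   # latency/throughput budget the model is tuned for
    input_features: tuple = ("mfcc", "pitch")  # input features

    def validate(self):
        assert self.n_components > 0, "need at least one component"
        assert self.window_ticks > 0, "window must be positive"
        assert self.max_latency_ms > 0, "latency budget must be positive"


cfg = PipelineConfig()
cfg.validate()
# Serialize next to every trained model / prediction run so it can be audited later.
print(json.dumps(asdict(cfg), indent=2))
```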

Dealing with changes in the external world

* -- Fixed thresholds in dynamic systems: manually chosen decision thresholds can go stale as the input distribution shifts and need to be re-validated
* -- Monitoring and testing for the model's failure limits (for ex: in case of a data outlier). Things to monitor (a rough sketch follows this list):
  * -- Prediction bias
  * -- Action limits (say, a trading algo relying on a model should have hard limits)
  * -- Up-stream producers (aka the data pre-processing pipelines; for ex: a moving window of 100 ticks/events may not be right for a different (higher) velocity of input data)
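
As a rough illustration of the prediction-bias and action-limit checks (my own sketch; the thresholds and function names are made up):

```python
# Sketch of two monitoring ideas from the list above: prediction bias and action limits.
import numpy as np


def prediction_bias(predicted_probs, observed_labels, tolerance=0.05):
    """Alert when the mean predicted rate drifts away from the observed rate."""
    gap = abs(np.mean(predicted_probs) - np.mean(observed_labels))
    return gap, gap > tolerance


def apply_action_limit(model_signal, max_position=1000.0):
    """Clip a trading-style action so one bad prediction cannot exceed a hard limit."""
    return float(np.clip(model_signal, -max_position, max_position))


preds = np.random.uniform(0, 1, size=500)      # stand-in for model scores
labels = np.random.binomial(1, 0.3, size=500)  # stand-in for observed outcomes
print(prediction_bias(preds, labels))          # a large gap would fire an alert
print(apply_action_limit(5000.0))              # capped at 1000.0
```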

Others:

* -- Data testing Debt
* -- Reproducibility Debt
* -- Process Management Debt
* -- Cultural Debt

Laptop purchase decision

So, in the past, I’ve ranted about the “confusion marketing” in the laptop market (see here).

So this time around, after more than 5 years, when I had to buy a new laptop, I decided to apply some of the analytical ideas I’ve picked up over this time working.

So I created this sheet, which helped me out.

Since I had written a series of posts in the past about different types of means, I created this blog post.
So I ended up buying this.

Yentl Syndrome: A Deadly Data Bias Against Women

Longreads

Caroline Criado Perez | An excerpt adapted from Invisible Women: Data Bias in a World Designed for Men | Harry N. Abrams | 22 minutes (5,929 words)

In the 1983 film Yentl, Barbra Streisand plays a young Jewish woman in Poland who pretends to be a man in order to receive an education. The film’s premise has made its way into medical lore as “Yentl syndrome,” which describes the phenomenon whereby women are misdiagnosed and poorly treated unless their symptoms or diseases conform to that of men. Sometimes, Yentl syndrome can prove fatal.

If I were to ask you to picture someone in the throes of a heart attack, you most likely would think of a man in his late middle age, possibly overweight, clutching at his heart in agony. That’s certainly what a Google image search offers up. You’re unlikely to think of a woman: heart disease is…


Aruvi (2017) — review

Movie: Aruvi

Brilliance of screenplay irony:

  • — The scene at the shooting set after the shooting, the fight with that cheating, exploitative guy. The irony arises from the numbness/indifference of the actress, and the way she makes puppets out of the shooting crew, who for a change have to face real life-or-death drama rather than the made-up things they shoot. Not to mention the background bass.

  • — The idealist wannabe director’s impractical dream story to direct.

  • — The role reversal of the prima donna actress serving tea.

  • — Of course, the mock TV show with the actress.

Some things about the movie that left me cold though:
* — The manner in which the protagonist gets the HIV infection is a rather low-probability, rare route (it had to go from the live blood of the coconut seller to bleeding gums in the protagonist’s mouth or some other bleeding wound in the digestive tract), and the director had to reach for it to avoid the cultural taboo around pre-marital sex. I personally think that is a meaningless taboo to stick to. (Not that I suggest we let US-style marketing and companies use dating and sex as a lure. Ironically, sticking to cultural excuses would drive some youth to that approach anyway.)

“I’m not giving up!” I raised my voice, angry, surprised at myself for being angry. I took a breath, forced myself to return to a normal volume, “I’m saying there’s probably no fucking way I’ll understand why she did what she did. So why waste my time and energy dwelling on it? Fuck her, she doesn’t deserve the amount of attention I’ve been paying her. I’m… reprioritizing.”

“She’s a bully,” I said. “At the end of the day, she only wants to fight opponents she knows she can beat.”

“I’ve fought two Endbringers,” Shadow Stalker said, stabbing a finger in my direction.  “I know what you’re trying to do.  Fucking manipulating me, getting me into a dangerous situation where you’ll get me killed.  Fuck you.”

Gaussian Mixture Models (GMMs)

Gaussian Mixture Models

  • A probabilistic model
  • Assumes all data points are generated from a mixture of a finite number of Gaussian distributions
  • The parameters of the Gaussian distributions are unknown.
  • It is a way of generalizing k-means (or k-medoids or k-modes, for that matter) clustering to use the covariance structure/stats as well as the mean/central-tendency measures of the latent Gaussians (see the sketch after this list).
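
A quick sketch of that last point, using scikit-learn's GaussianMixture on synthetic data: the fitted model exposes both the component means (the k-means-like centroids) and the per-component covariances (the extra structure k-means ignores).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(200, 2)),  # tight cluster
    rng.normal(loc=[5, 5], scale=1.5, size=(200, 2)),  # wider cluster
])

gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0).fit(X)
print(gmm.means_)          # component means, analogous to k-means centroids
print(gmm.covariances_)    # per-component covariance matrices
print(gmm.predict(X[:5]))  # hard cluster assignments for the first few points
```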

scikit-learn

Pros:

  • Fastest for learning mixture models
  • No bias of the means towards zero, and no bias of cluster sizes towards specific structures

Cons:

  • When there aren’t enough points per mixture component, estimating the covariance matrices becomes difficult
  • Number of components: the algorithm will always use all the components it has access to, so it needs held-out/test-reserved data or an information criterion to decide how many to use

  • The no. of components can be chosen based on the BIC criterion (a sketch follows this list)

  • Variational Bayesian Gaussian mixtures avoid having to specify the number of components up front
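
Here's a small sketch of the BIC-based choice mentioned above (my own code, on synthetic data): fit a GaussianMixture for a range of component counts and pick the one with the lowest BIC.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal([0, 0], 0.5, (200, 2)), rng.normal([5, 5], 1.5, (200, 2))])

bic_by_k = {}
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bic_by_k[k] = gm.bic(X)   # lower BIC is better

best_k = min(bic_by_k, key=bic_by_k.get)
print(bic_by_k)
print("chosen number of components:", best_k)
```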

Variational Bayesian Gaussian Mixture
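
A sketch of the variational version (again synthetic data, my own code): give BayesianGaussianMixture a deliberately generous upper bound on the number of components and let the inferred weights switch off the ones that aren't needed. The weight_concentration_prior value here is just an illustrative choice.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.RandomState(0)
X = np.vstack([rng.normal([0, 0], 0.5, (200, 2)), rng.normal([5, 5], 1.5, (200, 2))])

bgmm = BayesianGaussianMixture(
    n_components=10,                  # deliberately too many
    weight_concentration_prior=1e-2,  # small prior nudges unused components towards zero weight
    max_iter=500,
    random_state=0,
).fit(X)

# Most weights end up near zero; roughly two components carry almost all the mass.
print(np.round(bgmm.weights_, 3))
```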

Fitting a Gaussian model to data
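
For the simplest case, a single Gaussian, the maximum-likelihood fit is just the sample mean and (essentially) the sample covariance. A small sketch with synthetic data:

```python
import numpy as np

rng = np.random.RandomState(0)
data = rng.multivariate_normal(mean=[1.0, -2.0], cov=[[1.0, 0.3], [0.3, 2.0]], size=1000)

mu_hat = data.mean(axis=0)              # MLE of the mean
sigma_hat = np.cov(data, rowvar=False)  # sample covariance (MLE up to the ddof=1 factor)

print(mu_hat)     # close to [1.0, -2.0]
print(sigma_hat)  # close to the true covariance
```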