Technical Debt in ML models

Technical debt exists in ML systems too; the notes below follow the categories from Google's "Hidden Technical Debt in Machine Learning Systems" paper:

Complex models erode boundaries

* -- Entanglement: no input feature is ever truly independent, so changing any feature or its distribution changes everything (a toy sketch follows this list)
* -- Correction cascades: models trained to correct other models' outputs, creating chains of dependent models and dependency hell
* -- Undeclared consumers of the model's predictions, who are silently coupled to the model and break when it changes
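
A minimal sketch of the entanglement point (my own toy example with sklearn, not from the paper): the label truly depends on one signal, but a correlated proxy feature is also fed in. Change only the proxy's distribution upstream and both learned weights move, the CACE effect ("Changing Anything Changes Everything").

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# Two correlated input features; the label truly depends only on `signal`.
signal = rng.normal(size=n)
proxy = signal + rng.normal(scale=0.3, size=n)   # a noisy copy of the signal
y = (signal + rng.normal(scale=0.5, size=n) > 0).astype(int)

before = LogisticRegression().fit(np.column_stack([signal, proxy]), y).coef_[0]

# Upstream change: only the proxy's distribution changes (it gets noisier).
# The labels and `signal` are untouched, yet BOTH weights move.
proxy_noisier = signal + rng.normal(scale=1.5, size=n)
after = LogisticRegression().fit(np.column_stack([signal, proxy_noisier]), y).coef_[0]

print("weights before:", np.round(before, 2))
print("weights after: ", np.round(after, 2))
```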

Data dependencies are costlier than code dependencies

* -- Unstable data dependencies: input data or signals that change over time, including predictions from an upstream model (for example, in [speech to text](https://github.com/kaldi-asr/kaldi), predicted syllables are both an output of one model and an input signal to the word-level language model)
* -- Underutilized data dependencies: these creep in via legacy features, bundled
  features, correlated features, etc.
* -- Static analysis of data dependencies can help mitigate these issues to some extent (a toy sketch follows this list)
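
What such static analysis can look like, as a sketch (the registry and feature names are hypothetical): each feature declares its upstream source, so "what breaks if this signal changes?" can be answered without running the system.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    source: str   # the upstream signal or model that produces this feature

REGISTRY = {
    "syllables":   Feature("syllables",   source="acoustic_model"),
    "word_probs":  Feature("word_probs",  source="syllables"),
    "user_locale": Feature("user_locale", source="client_metadata"),
}

def impacted_by(source: str) -> list:
    """Transitively list every feature that depends on `source`."""
    hit, frontier = set(), {source}
    while frontier:
        frontier = {f.name for f in REGISTRY.values()
                    if f.source in frontier and f.name not in hit}
        hit |= frontier
    return sorted(hit)

print(impacted_by("acoustic_model"))   # ['syllables', 'word_probs']
```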

Feedback loops

* -- Direct feedback loops (in speech to text, these can come from changes in
  language use and pronunciation; a toy simulation of the mechanism follows this list)
* -- Hidden feedback loops (these can come from not understanding the business use
  case explicitly enough, or from shifts in the nature of the use case and demand
  itself; for example, user expectations changing once they get used to the tool)
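
A toy simulation of a direct feedback loop, using a recommender example of my own rather than the speech setting: the model's output decides which data it collects, so early luck gets locked in.

```python
import numpy as np

rng = np.random.default_rng(0)
true_quality = np.full(5, 0.5)   # five items, all genuinely identical
clicks = np.ones(5)              # smoothed click counts
shows = np.ones(5)               # smoothed impression counts

for _ in range(2000):
    item = int(np.argmax(clicks / shows))   # always serve the current "best" item
    shows[item] += 1
    clicks[item] += rng.random() < true_quality[item]

print("impressions:   ", shows.astype(int))
print("estimated CTRs:", np.round(clicks / shows, 2))
# Impressions end up wildly uneven: items that got unlucky early are starved,
# so their pessimistic estimates are never corrected. The model's own output
# decided which data it would later learn from.
```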

ML-system anti-patterns:

* -- Glue code: in general, things like data-cleaning code and the code connecting
  model predictions to the business use case
* -- Pipeline jungles: a huge mess of pre-processing for the audio files (different
  formats, language and accent detection, which can itself be a cascade of models,
  etc.); one antidote is sketched after this list
* -- Dead experimental codepaths: typically left over from a pile of experimental
  models, different NN architectures, custom models, etc.
* -- Abstraction debt: no clear, standard abstraction for ML models (the way the
  relational model/RDBMS is for databases)
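
One way to push back on glue code and pipeline jungles, sketched with sklearn's `Pipeline` (the post names no framework; this is just one option): declare the pre-processing and model as a single object that trains, predicts, and gets tested as one unit.

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),  # each step is declared...
    ("scale", StandardScaler()),                   # ...not a one-off script
    ("model", LogisticRegression()),
])

# Toy data standing in for extracted audio features.
X = np.array([[1.0, 2.0], [np.nan, 3.0], [4.0, np.nan], [5.0, 6.0]])
y = np.array([0, 0, 1, 1])

pipeline.fit(X, y)          # the whole chain trains as one unit
print(pipeline.predict(X))  # ...and predicts as one unit
```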

Common Smells:

* -- Plain-old-data type smells: values are passed around as plain floats/ints under
  assumptions about what they mean, but the input stream keeps changing (one fix is
  sketched after this list)
* -- Multiple-language smell: using several programming languages in one project
  causes problems at the interfaces between them
* -- Prototype smell: the prototype bakes in invalid assumptions, and whatever
  validation was done for it does not hold beyond the small audience it was
  tested on
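
A sketch of one fix for the plain-old-data smell (the type names are illustrative): wrap raw values in small types that state what they mean and check their own invariants at the boundary, so a changed input stream fails loudly instead of silently skewing everything downstream.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Probability:
    value: float
    def __post_init__(self):
        if not 0.0 <= self.value <= 1.0:
            raise ValueError(f"not a probability: {self.value}")

@dataclass(frozen=True)
class Decibels:
    value: float   # a signal level; NOT interchangeable with a probability

print(Probability(0.87))    # fine
try:
    Probability(3.2)        # a changed input stream now fails loudly...
except ValueError as err:
    print("rejected:", err) # ...instead of quietly flowing into downstream math
```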

Configuration Debt:

* -- A wide range of configurable options: how the input data stream is
  segregated/categorized, model size and dependencies tuned to the
  latency/throughput of predictions, model choice, input features, data
  summarization methods, verification methods, etc.
* -- Without configuration management, the system can become a black box that is
  impossible to debug and therefore to improve. Similar issues exist in ordinary
  software, but they are doubly problematic in ML: many models are considered
  black boxes by default and are hard to reason about even before configuration
  issues pile on (a sketch of validated configuration follows)
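
A sketch of configuration-as-code with validation (all field names are illustrative): every knob lives in one typed, inspectable object, and impossible combinations fail at load time rather than deep inside a training run.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrainConfig:
    model: str = "small_rnn"
    window_ticks: int = 100              # input-stream summarization window
    max_latency_ms: float = 50.0         # serving budget the model size is tuned to
    features: tuple = ("mfcc", "pitch")  # declared input features

    def __post_init__(self):
        # Impossible combinations fail here, not deep inside a run.
        if self.window_ticks <= 0:
            raise ValueError("window_ticks must be positive")
        if self.max_latency_ms <= 0:
            raise ValueError("max_latency_ms must be positive")
        if not self.features:
            raise ValueError("at least one input feature is required")

cfg = TrainConfig(window_ticks=250)
print(cfg)   # the full configuration is one inspectable, diffable object
```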

Dealing with changes in the external world:

* -- Fixed thresholds in dynamic systems: a decision threshold tuned by hand goes
  stale as the input data shifts, so thresholds need to be re-learned or
  re-validated on held-out data
* -- Monitoring and testing against the model's failure limits (for example, on a
  data outlier). Things to monitor (a sketch of the first two follows):
  * -- Prediction bias: in a healthy system, the distribution of predicted labels
    should match the distribution of labels observed in training
  * -- Action limits: say, a trading algo relying on a model should have hard
    limits on the actions it can take
  * -- Up-stream producers (aka the data pre-processing pipelines): for example, a
    moving window of 100 ticks/events may not be right when the input data arrives
    at a higher velocity
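
A sketch of the first two monitors (the thresholds and names are made up): compare the live positive-prediction rate against the rate measured at training time, and enforce an action limit that trips regardless of what the model says.

```python
import numpy as np

TRAINING_POSITIVE_RATE = 0.12   # measured once, offline, on the training labels
MAX_ORDERS_PER_MINUTE = 20      # hard action limit for, e.g., a trading system

def alert(msg):
    """Stand-in for a real paging/alerting hook."""
    print("ALERT:", msg)

def check_prediction_bias(live_preds, tolerance=0.05):
    """The live positive-prediction rate should track the training-time rate."""
    live_rate = float(np.mean(live_preds))
    if abs(live_rate - TRAINING_POSITIVE_RATE) > tolerance:
        alert(f"prediction bias: live rate {live_rate:.3f} "
              f"vs training rate {TRAINING_POSITIVE_RATE:.3f}")

def guard_actions(orders_this_minute):
    """Trip no matter what the model says; a model bug must not act unboundedly."""
    if orders_this_minute > MAX_ORDERS_PER_MINUTE:
        raise RuntimeError("action limit exceeded; halting automated orders")

# An outlier-heavy batch of live predictions trips the bias monitor.
check_prediction_bias(np.random.default_rng(0).random(1000) < 0.4)
```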

Others:

* -- Data testing Debt
* -- Reproducibility Debt
* -- Process Management Debt
* -- Cultural Debt
