Consciousness — some dreamy hand-wavy theorizing

I am not a professional scientist or an expert in any of the areas I refer to in the following article. I have picked up a couple of unrelated degrees in my life so far, but don’t consider myself an expert in any of them. I consider the rest of the article to be something I would have written if I had been interviewed for that (pop-sci) book “What We Believe but Cannot Prove”.

This is an attempt to summarize a bunch of my thoughts and ideas on consciousness.
This is definitely a biased summary: abhay asked me to write a post with a set deadline, so my brain was scampering around and recalled the most recent thoughts.
A few basic assumptions* I tend to make are:
1. Consciousness is an emergent phenomenon that arises out of an essentially materialistic universe.
2. The only known/agreed-upon fundamentals of the universe are the ones dictated by physics: Space, Time, Energy, and Mass.
3. While our neuronal activity may not be sufficient to explain all the phenomenology of consciousness, it is definitely a necessary condition for enabling consciousness.**
4. The space-time continuum, and its extension to a Space-Time-Energy-Mass continuum, is true and will be proven sometime in the future.

A couple of basic hypotheses I would like to add are:
1. Attention is a core, integral, fundamental part of our universe, on par with Space, Time, Energy, and Mass.
2. Attention is part of a Space-Time-Energy-Mass-Attention continuum.
3. The Big Bang is the event when this continuum evolved/broke apart into these distinct components.

For the rest of this piece, I will use the terms consciousness and attention rather interchangeably, because that’s how I consider them.
Do note that I haven’t defined attention, because despite the standard definition, I am not sure it fits with what I am building here.
Instead, I’ll try to provide as many specific examples as I can think of and let you infer or construct a definition that fits.

Some evidence*** I would like to sketch out:
1. Heisenberg’s uncertainty principle:
i.e.: It is impossible to know both the position and momentum of a particle to arbitrary precision, no matter how capable the equipment.
This has been interpreted in many ways, and discussed rather extensively in some philosophical circles and popular science books.
The main point I wish to make is that once you think of the measuring equipment as exhibiting some characteristics of attention, it’s easy to interpret this as evidence for attention being a part of the universe and being converted into other forms.
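Setting my attention framing aside, the quantitative content of the principle itself is easy to play with: the bound Δx·Δp ≥ ħ/2 gives a floor on momentum uncertainty for any position uncertainty. A minimal Python sketch, with CODATA constants hardcoded (the 0.1 nm example scale is my choice, roughly an atom’s width):

```python
# Heisenberg lower bound: dx * dp >= hbar / 2
# Minimal sketch with hardcoded SI constants (CODATA values).

HBAR = 1.054571817e-34        # reduced Planck constant, J*s
M_ELECTRON = 9.1093837015e-31  # electron mass, kg

def min_momentum_uncertainty(dx_meters: float) -> float:
    """Smallest momentum uncertainty (kg*m/s) compatible with position uncertainty dx."""
    return HBAR / (2.0 * dx_meters)

# Example: an electron confined to roughly an atom's width (0.1 nm).
dx = 1e-10
dp = min_momentum_uncertainty(dx)
dv = dp / M_ELECTRON  # corresponding velocity uncertainty
print(f"dp >= {dp:.3e} kg*m/s, dv >= {dv:.3e} m/s")
```

Note the bound is fundamental: no improvement in equipment tightens it, which is exactly what makes the “equipment as attention” reading tempting in the first place.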

One corollary of this is the principle that all human beings/living beings are part of a whole.

I am picking just the terms I am familiar with, but this principle has been proposed in quite a few mythologies and stories spread around the world. Now, it could be argued that this is really just a bias that led to these independently originated, similar myths, but I am at the other end. I’ll just list a few variants I’ve come across here: “Treat unto others”, the “Gaia theory”.
3. Causality and probabilistic graphical models:
Judea Pearl’s work on causality is fairly well known, but I only have the basic ideas, from reading the easy texts and avoiding the core math. Based on the few sketches I have read, I would say quite a few of our currently used models of causality are suspect, which I believe is the biggest argument for a peer-review process, rather than a made-up gold-standard template for scientific experimentation and conclusions.
But I digress. My core argument is that if and once you agree that highly engineered technical equipment (like the LHC) possesses some level of attention/consciousness, then your probability graph immediately grows in complexity. To put it mildly, the possible number of agents goes up so dramatically that trying to assign a strong probability to any event being the cause of another becomes an unrealistic computational problem.
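To make that blow-up concrete: even before admitting any equipment as agents, the number of candidate causal structures (directed acyclic graphs) over n variables grows super-exponentially. A small Python sketch using Robinson’s standard recurrence for counting labeled DAGs (the counting is textbook; the “extra agents” framing above is mine):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def count_dags(n: int) -> int:
    """Number of labeled DAGs on n nodes, via Robinson's recurrence (OEIS A003024)."""
    if n == 0:
        return 1
    # Inclusion-exclusion over the k nodes with no incoming edges.
    return sum(
        (-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * count_dags(n - k)
        for k in range(1, n + 1)
    )

for n in range(1, 7):
    print(n, count_dags(n))
```

Even 6 variables already allow 3,781,503 distinct causal structures; every additional node multiplies this further, which is why structure learning over large agent sets gets computationally hopeless fast.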

4. The speed-of-light constant:
That mysterious rule that the speed of light is a maximum limit that cannot be exceeded. Also its corollary, that information cannot be exchanged faster than light.

Some predictions I would like to make are:
1. Sometime in the distant future, physicists will start acknowledging and forming theories around the agency of their equipment, and designing experiments that account for it while still adding value/evidence to the original hypothesis.
2. Neuroscience, technology, and our understanding of ethics will progress to levels that enable us to experiment on consciousness more actively, by letting us set/change/modify the variables involved in affecting it.

A few corollaries, supposing these hypotheses are true:
1. Consciousness/Attention has been increasing in the universe since the Big Bang.
2. It’s possible to convert consciousness to mass/energy/space/time. (This would mean some superpowers like those of Vista, Clockblocker, etc. in the Worm web serial.)
3. The amount of Consciousness contained in the form of mass/space/time is humongous. (Just think how much energy is released by converting mass into energy: a factor of c², about 9×10^16 in SI units.)
4. Our current measures of Consciousness/Attention are too crude, and they underestimate it by incredible amounts.
5. Viewing life as an optimization problem, we are currently woefully stuck at local optima. We are simply ignoring what is at least one extra dimension by not acknowledging it. (Not to imply we are utilizing the dimensions we do acknowledge optimally.)
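For a feel of the conversion factor behind corollary 3’s analogy, here is a back-of-the-envelope Python sketch of E = mc². The TNT comparison uses the standard 4.184×10^12 J-per-kiloton convention; the 1 kg example mass is arbitrary:

```python
C = 299_792_458           # speed of light, m/s (exact by definition)
KILOTON_TNT_J = 4.184e12  # joules per kiloton of TNT equivalent (standard convention)

mass_kg = 1.0
energy_j = mass_kg * C ** 2  # E = m * c^2
print(f"1 kg of mass ~ {energy_j:.3e} J ~ {energy_j / KILOTON_TNT_J:.0f} kt of TNT")
```

One kilogram of rest mass corresponds to roughly 9×10^16 J, on the order of twenty thousand kilotons of TNT, which is the scale of “humongous” the corollary is gesturing at.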

* — Some of these are just plain and simple biases I’ve accrued over the 30 years of my life so far, and not necessarily proven scientific facts.
** — It follows that I don’t consider bacteria, viruses, DNA/RNA, etc. as conscious. I remember Harvard declared that animals are conscious (though I’m not sure whether they included microbes), and I agree with that. I am open to being convinced otherwise, though. They do have a few basic reactions to environmental changes, so, to take a weaker position, I would say bacteria and viruses have a very tiny/micro/minute level of consciousness.
*** — evidence in the sense suggested by courtroom scenes in movies and serials, rather than evidence from the hard sciences. I could probably argue there’s strong evidence based on probabilistic graphical models, but I don’t have the time or energy to put together a good one. This includes narrative, story-based evidence.

Dear interviewers — rants

Dear interviewers,
Please think about interviews first. And don’t use information asymmetry to hide things from or manipulate the candidates (good ones hate those interviews). Try to think of it as an honest conversation to figure out whether you can work together, instead of playing a cat-and-mouse game.

And seriously, rethink the old habit of calling people in, asking them to write code on paper, and then criticizing the syntax.

And don’t just sit back and shift the “burden of proof” to the candidate altogether.

I tend to think of interviews as a two-way sales process. Not a one-way one,
where the candidate has to sell their skillset, with or without knowledge of the job role.
If you are going to sit there and question my skillset, and ask about my interest in the job,
without divulging anything more than a vague “pure python framework”, then, well,
we’re not likely to work together at all. Sorry, see ya.

Because:
1. You are employing information asymmetry.
2. You are shifting the “burden of proof”, all the while absolving yourself of any such thing.
3. You are refusing to provide enough information for me to judge my own interest.

When I say a two-way sales process, I mean:
1. Both parties are trying to exchange something that’s of value to the other.
2. Both parties have to convince the other that what they have is worth it.
3. Both parties have to be able to trust that the other won’t cheat them out of what they’re agreeing to.

Establishing all of these takes deliberate attention to these ideas first, acceptance next, and then open communication.

Unfortunately, real life is filled with imperfect people half-assing things (some deliberately, some ignorantly, some simply without the time/attention, but always to their own advantage), and we end up with the messy abstractions we have.

And if you find it difficult to switch from an admittance-gate mode to a two-way sales-communication mode, I recommend giving at least one read to Daniel H. Pink’s “To Sell Is Human”. I highly recommend a read-pause-write/record-read cycle, though.

Quantum of work

He (Vivek Haldar) talks about the quantum of work here.
He mentions that “knowledge work” is measured in minutes. I hate going by words rather than some solid, measurable definition,
but if I have to roll with it, I would say he has a point, though he seems to be biased by the academic researcher’s workstyle.
I gather that most of his work (as a researcher) is limited to reading a set of papers, thinking about the ideas,
writing out the flaws/gaps/criticisms/related implications (in a mail to the author?), and then moving on to other papers and ideas.

He defines it as:

“A quantum of work is the theoretical longest amount of time you can work purely on your own without needing to break out into looking up something on the web or your mail or needing input from another person.”

He may be right about it being measured in a matter of minutes, but there are implicit assumptions.
He uses a fiction writer as an example (he says that unless you have all your references and notes in hand, you’ll break out).
I would contend that this presumes the fiction writer is looking up references to learn how to use them,
rather than looking them up to patch in details. Or, as some writers like to claim, they imagine most of what they write themselves.

When I am writing these blog posts (obviously not the same as fiction), I start off putting together the elements of the post in my head.
In fact, at this stage, I usually am not even at the computer. Then I write out the text, leaving placeholders for links and anything I need to look up.

Now, I am not a particularly experienced, voluminous, or good writer, but I’ll refer to what VGR says here.
Most of his posts, even the ones that are 20K words long, could be, and sometimes do get, written in a condensed form first.
In this post he says that he usually uses previous ideas and definitions he has established,
and can actually write down the condensed form on a napkin.

Note: this kind of work relies heavily on backlinking to one’s own previous work, and tends to ignore others’ work in the area.

VH also says:

“The trajectory from linear long form work to fragmented quantized work had been quite steady and there are absolutely no signs of it reversing.”

Now, for people like me, that’s a scarier thought.

My sense of work best done / most work-satisfaction tends to come from having focused on some piece of work for a long time (going over the trade-offs repetitively and obsessively, till I am sure I can’t improve the decisions).
I don’t mean continuously incremental work over that time, but not doing anything else while I am on it.
I might just be pondering/reflecting over it. Unfortunately, the modern-day workplace is the antithesis of this.
And that seems to be seen not only as normal, but also as a requisite to fit into the “Work Culture”.
I suspect this is one of the reasons modern workplaces are “fragile”.
This is one of those things that makes the reliance on big corporations scary.

I also wonder if these two work styles are related to the ghost vs vampire approach to life.

I tend to think of the second work-style (measured in minutes) as vampiric, in that it is made of experience-seeking rather than meaning-seeking. VGR claims people fall into either the extreme experience-seeking or the extreme meaning-seeking category. I am not so sure about that, though.

I have also noticed that when I write code for a reasonably complex feature, I tend to alternate between writing code and thinking, in sprints. The two reach a natural rhythm of variation that helps the feeling of flow. I have also observed that sitting at my desk is perhaps the easiest way to get stuck in my thinking flow/mode.

But the most important factor of all is carrying on a conversation with someone, or clarifying a doubt, or something like that. These things almost always have an immediate impact on the ability to write code. A lot of programmers have written about this, so I will just note the observations/thoughts that came to my mind.

It seems the actual act of conversing with someone about code (either the code you’re writing, or they’re writing, or either of you is planning) is perhaps the fastest way to lose track of your thoughts/ideas/dilemmas/trade-offs. It’s not that you are not conversing with yourself; it’s just that the conversation with yourself happens at a level more primitive (??), or more natural, than words can capture. You’re usually forming theses, testing them, and collecting data. Forming theses is the part that involves language the most; the rest need language only when you’re trying to convince someone else of your conclusions. Otherwise they are left to the perceptive and pattern-matching parts of the mind.

As always, this is all theory, and reality has a way of finding richer variations and surprising people (more specifically, theorists) :)