Review of "Living Together: Mind and Machine Intelligence" by Neil Lawrence

In the early twentieth century, when the equations of quantum physics were born, physicists found themselves in a difficult position. They needed to interpret what the quantum equations meant in terms of their real-world consequences, and yet they were faced with paradoxes such as wave-particle duality and "spooky action at a distance". They turned to philosophy and developed new metaphysics of their own. Thought-experiments such as Schrödinger's cat, originally intended to highlight the absurdity of the standard "Copenhagen interpretation", became standard teaching examples.

In the twenty-first century, researchers in artificial intelligence (AI) and machine learning (ML) find themselves in a roughly analogous position. There has been a sudden step-change in the abilities of machine learning systems, and the dream of AI (which had been put on ice after the initial enthusiasm of the 1960s turned out to be premature) has been reinvigorated - while at the same time, the deep and widespread industrial application of ML means that whatever advances are made, their effects will be felt. There's a new urgency to long-standing philosophical questions about minds, machines and society.

So I was glad to see that Neil Lawrence, an accomplished research leader in ML, has published an arXiv article on these social implications: "Living Together: Mind and Machine Intelligence". In it, Lawrence makes a noble attempt to provide an objective basis for considering the differences between human and machine intelligences, and what those differences imply for the future place of machine intelligence in society.

In case you're not familiar with the arXiv website, I should point out that articles there are un-refereed: they haven't been through the peer-review process that guards the gate of standard scientific journals. And let me cut to the chase: I'm not sure which journal he was targeting with this paper, but if I were a reviewer I wouldn't have recommended acceptance. Lawrence's computer science is excellent, but his philosophical arguments here are disappointing. Here's my review:

Embodiment? No: containment

A key difference between humans and machines, notes Lawrence, is that we humans - considered for the moment as abstract computational agents - have high computational capacity but a very limited bandwidth with which to communicate. We speak (or type) our thoughts, but really we're sharing the tiniest shard of the information we have computed, whereas modern computers can calculate quite a lot (not as much as a brain does) but can communicate with such high bandwidth that the results are essentially not "trapped" in the computer. For Lawrence this is a key difference, making the boundaries between machine intelligences much less pertinent than the boundaries between natural intelligences, and suggesting that future AI might act not as a set of separate "agents" but as a unified subconscious.

Lawrence quantifies this difference as the numerical ratio between computational capacity and communicative bandwidth. Embarrassingly, he then names this ratio the "embodiment factor". The embodiment of cognition is an important idea in much modern thought-about-thought: essentially, "embodiment" is the rejection of the idea that my cognition can really be considered as an abstract computational process separate from my body. There are many ways we can see this: my cognition is non-trivially affected by whether or not I have hay-fever symptoms today; it's affected by the limited amount of energy I have, and the fact I must find food and shelter to keep that energy topped up; it's affected by whether I've written the letter "g" on my hand (or is it a "9"? oh well); it's affected by whether I have an abacus to hand; it's affected by whether or not I can fly, and thus whether in my experience it's useful to think about geography as two-dimensional or three-dimensional. (For a recent perspective on extended cognition in animals see the thoughts of a spiderweb.) I don't claim to be an expert on embodied cognition. But given the rich cognitive affordances that embodiment clearly offers, it's terribly embarrassing and a little revealing that Lawrence chooses to reduce it to the mere notion of being "locked in" (his phrase) with constraints on our ability to communicate.

Lawrence's ratio could perhaps be useful, so to defuse the unfortunate trivial reduction of embodiment, I would like to rename it "containment factor". He uses it to argue that while humans can be considered as individual intelligent agents, for computer intelligences the boundaries dissolve and they can be considered more as a single mass. But it's clear that containment is far from sufficient in itself: natural intelligences are not the only things whose computation is not matched by their communication. Otherwise we would have to consider an air-gapped laptop as an intelligent agent, but not an ordinary laptop.
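
To make the ratio concrete, here is a toy calculation in Python. The orders of magnitude are my own illustrative guesses, not figures from the paper:

    # Toy illustration of the compute-to-communication ratio discussed
    # above. All numbers are rough orders of magnitude of my own choosing,
    # purely for illustration - they are not taken from Lawrence's paper.

    def containment_factor(compute_ops_per_sec, comm_bits_per_sec):
        """Ratio of computational capacity to communication bandwidth."""
        return compute_ops_per_sec / comm_bits_per_sec

    agents = {
        # A brain computes a great deal, but speech carries perhaps
        # tens of bits per second.
        "human":             containment_factor(1e16, 1e2),
        # A laptop on gigabit ethernet: decent compute, huge bandwidth.
        "networked laptop":  containment_factor(1e12, 1e9),
        # The same laptop air-gapped, its only output a USB stick carried
        # across the room occasionally (~1 bit/s effective).
        "air-gapped laptop": containment_factor(1e12, 1e0),
    }

    for name, factor in agents.items():
        print(f"{name:>18}: {factor:.0e}")

The air-gapped laptop comes out with a "human-like" high ratio, which is exactly why containment alone cannot be what makes something an intelligent agent.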

Agents have their own goals and ambitions

The argument that the boundaries between AI agents dissolve also rests on another problem. In discussing communication, Lawrence focusses too heavily on 'altruistic' or 'honest' communication: transparent communication between agents that are collaborating to mutually improve their picture of the world. This focus leads him to neglect the fact that communicating entities often have differing goals, and often have reason to be biased or even deceitful in the information they share.

The tension between communication and individual aims has been analysed in a long line of thought in evolutionary biology under the name of signalling theory, which considers, for example, the conditions under which "honest signalling" is beneficial to the signaller. It's important to remember that the different agents each have their own contexts and their own internal states/traits (maybe one is low on energy reserves, and another is not) which affect communicative goals even if the overall avowed aim is common.
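
To illustrate, here is a minimal toy of my own construction (not an example from the signalling-theory literature): a sender that benefits from a receiver's help whatever its true state, so that the "honest" signal is no longer its best choice.

    # A toy sender-receiver game showing why honest signalling can't be
    # taken for granted when goals differ. My own minimal construction.

    STATES  = ("needy", "fine")   # the sender's private internal state
    SIGNALS = ("needy", "fine")   # what the sender may claim

    def receiver_gives_help(signal):
        # A credulous receiver takes the signal at face value.
        return signal == "needy"

    def sender_payoff(state, got_help):
        # This sender benefits from help *whatever* its true state -
        # its goals are not aligned with the receiver's.
        return 1.0 if got_help else 0.0

    for state in STATES:
        best = max(SIGNALS,
                   key=lambda sig: sender_payoff(state, receiver_gives_help(sig)))
        print(f"true state = {state:>5} -> best signal to send: {best}")

Both states are best served by signalling "needy", so the signal carries no information and a rational receiver would learn to ignore it; honest signalling survives only under extra conditions such as signal costs or aligned interests. This is exactly the internal-state-dependence that Lawrence's picture of transparent sharing leaves out.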

In Lawrence's description the focus on honest communication leads him to claim that "if an entity's ability to communicate is high [...] then that entity is arguably no longer distinct from those which it is sharing with" (p3). This is a direct consequence of Lawrence's elision: it can only be "no longer distinct" if it has no distinct internal traits, states, or goals. The elision of this aspect recurs throughout, e.g. "communication reduces to a reconciliation of plot lines among us" (p5).

Unfortunately the implausible unification of AI into a single morass is a key plank of the ontology that Lawrence wants to develop, and also key to the societal consequences he draws.

There is no System Zero

Lawrence considers some notions of human cognition, including the idea of "System 1 and System 2" thinking, and proposes that the mass of machine intelligence potentially forms a new "System Zero" whose essentially unconscious reasoning forms a new stratum of our cognition. The argument goes that this stratum has a strong influence on our thought and behaviour, and that the implications of this for society could be dramatic. The concept has a neat appeal, but it falls down too easily. There is no System Zero, and Lawrence's conceptual starting-point in communication bandwidth shows us why:

  • Firstly, the morass of machine intelligence has no high-bandwidth connection to System 1 or to System 2. The reason we talk of "System 1 and System 2" coexisting in the same agent is that they're deeply and richly connected in our cognition. (BTW I don't attribute any special status to "System 1 and System 2", they're just heuristics for thinking about thinking - that doesn't really matter here.) Lawrence's own argument about the poverty of communication channels such as speech also goes for our reception of information. However intelligent, unified or indeed devious AI becomes, it communicates with humans through narrow channels such as adverts, notifications on your smartphone, or selecting items to show to you. The "wall" between ourselves as agents and AI will be significant for a long time.
    • Direct brain-computer interfacing is a potential counterargument here: if that technology were to develop significantly, then it is true that our cognition could gain a high-bandwidth interface. I remain sceptical that such potential will be non-trivially realised in my lifetime. And if such interfaces do come to pass, they would dissolve human-human bottlenecks as much as human-computer bottlenecks, so in either case Lawrence's ontology does not stand.
  • Secondly, AI/ML technologies are not unified. There's no one entity connecting them all together, endowing them with the same objective. Do you really think that Google and Facebook, Europe and China, will pool their machine intelligences, allowing unbounded and unguarded communication? No. And so irrespective of how high the bandwidth is within and between these silos, they each act as corporate agents - with some degree of collusion and mutual inference, sure - but they do not unify into an underlying substrate of our intelligence. This disunification highlights the true ontology: these agents sit relative to us as agents - powerful, information-rich and potentially dangerous agents, but then so are some humans.

Disturbingly, Lawrence claims "System Zero is already aligned with our goals". This starts from a useful observation: many commercial processes such as personalised advertising work because they attempt to align with our subconscious desires and biases. But again it elides too much. In reality, such processes are aligned not with our goals but with the goals of powerful social elites, large companies and the like; and if they happen to align with our "System 1" goals, then that is a contingent matter.

Importantly, the control of these processes is largely not democratic: it is exercised commercially, or via might-makes-right. So even if AI/ML does align with some people's desires, it will preferentially align with the desires of those with cash to spend.

We need models; machines might not

On a positive note: Lawrence argues that our limited communication bandwidth shapes our intelligence in a particular way: it makes it crucial for us to maintain "models" of others, so that we can infer their internal state (as well as our own) from their behaviour and their signalling. He argues that, conversely, many ML systems do not need such structured models - given enough data to crunch, they can predict our behaviour pretty well. This distinction seems to me to mark a genuine difference between natural intelligence and AI, at least given the current state of the art in ML.
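
As a deliberately crude illustration of that model-free regime (my own toy, not an example from the paper), here is a predictor that does well on an agent's behaviour purely from the statistics of its past actions, with no representation of the hidden state driving them:

    # A simulated agent whose behaviour is driven by a hidden "energy"
    # state, and a predictor that never models that state - it just
    # counts which action tends to follow which. My own toy example.
    import random
    from collections import Counter

    random.seed(0)

    def agent_step(energy):
        # Mostly forages when energy is low, with a little noise.
        if random.random() < 0.1:
            action = random.choice(("forage", "rest"))
        else:
            action = "forage" if energy < 0.3 else "rest"
        energy += 0.3 if action == "forage" else -0.3
        return action, max(0.0, min(1.0, energy))

    # Generate a behaviour log.
    energy, log = 0.4, []
    for _ in range(10000):
        action, energy = agent_step(energy)
        log.append(action)

    # "Model-free" predictor: count which action usually follows which.
    follow = {"forage": Counter(), "rest": Counter()}
    for prev, nxt in zip(log, log[1:]):
        follow[prev][nxt] += 1

    # Predict each next action as the most common follower (evaluated on
    # the same log - good enough for a toy).
    hits = sum(follow[prev].most_common(1)[0][0] == nxt
               for prev, nxt in zip(log, log[1:]))
    print(f"accuracy from raw action statistics: {hits / (len(log) - 1):.1%}")

The predictor knows nothing about energy levels, yet it captures the agent's habits far better than a coin flip; scaled up, that is the "just crunch the data" regime Lawrence describes.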

He does go a little too far in this as well, though. He argues that our reliance on a "model" of our own behaviour implies that we need to believe that our modelled self is in control - in Freudian terms, we could say he is arguing that the existence of the ego necessitates its own illusion that it controls the id. The argument goes that if the self-model knew it was not in control,

"when asked to suggest how events might pan out, the self model would always answer with "I don't know, it would depend on the precise circumstances"."

This argument is shockingly shallow coming from a scientist with a rich history of probabilistic machine learning, who knows perfectly well how machines and natural agents can make informed predictions in uncertain circumstances!
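
For the avoidance of doubt, here is the standard move, in a minimal sketch of my own (with invented numbers): an agent that doesn't know the precise circumstances simply marginalises over its uncertainty and still produces a definite forecast.

    # A self-model that is uncertain about tomorrow's circumstances can
    # still make a definite prediction by averaging over that uncertainty.
    # All numbers are invented for illustration.

    # Beliefs about an unknown circumstance: tomorrow's "mood".
    p_mood = {"energetic": 0.2, "average": 0.5, "tired": 0.3}
    # How likely I am to go for a run, in each mood.
    p_run_given_mood = {"energetic": 0.9, "average": 0.5, "tired": 0.1}

    # Marginalise: P(run) = sum over moods of P(mood) * P(run | mood)
    p_run = sum(p_mood[m] * p_run_given_mood[m] for m in p_mood)
    print(f"P(run tomorrow) = {p_run:.2f}")   # -> 0.46, not "I don't know"

A probabilistic self-model answers with a calibrated probability, not a refusal.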

I also find unsatisfactory the eagerness with which various dualisms are mapped onto one another. The most awkward is the mapping of "self-model vs self" onto Cartesian dualism (mind vs body); this is a strong claim and needs to be argued for rather than asserted. It would also need to account for why such mind-body dualism is not universal, either across history or across cultures.

However, Lawrence is correct to argue that "sentience" of AI/ML is not the overriding concern in its role in our society; rather, its alignment or otherwise with our personal and collective goals, and its potential to undermine human democratic agency, are the prime issues of concern. This is a philosophical and a political issue, and one on which our discussion should continue.
