Elon Musk’s Intellectual Laziness on AI

Elon is at it again. In a tweet on August 11, he reiterated his thoughts on artificial intelligence by claiming that “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea”. He also called for AI to be regulated, just like “everything (cars, planes, food, drugs, etc.) that’s a danger to the public”.

This was the latest airing of his sensationalist thoughts on the topic, which, if you are reading this, you are probably familiar with by now.

He’s not the only one, of course: Bill Gates and Stephen Hawking have also raised concerns about unbridled AI. But none of them has been more vocal, or less substantiated, than Musk, who recently went as far as saying that “we are summoning the demon” by pursuing super-intelligent AI.

Even his most detailed arguments are astonishingly crude.

For example, this one:

“The biggest threat about AI is not that it will develop a will of its own, but rather that it will follow the will of people who will establish its utility function, its optimization function. And that optimization function, if it is not well thought out, even if its intent is benign, it could have quite a bad outcome. For example, if you were a hedge fund manager or a portfolio manager and you said, ‘all I want the AI to do is maximize the value of my portfolio’, the AI could decide that the best way to do that would be to short consumer stocks, long defense stocks, and start a war. And that would obviously be quite bad”.

… and this one:

“If there’s a digital superintelligence, and its optimization or utility function is something that is detrimental to humanity, then it will have a very bad effect. It could be about getting rid of spam or something: the quickest way to get rid of spam is to get rid of humans”.

As Elon knows, it would be far more complex to “get rid of humans” than to maximize a stock portfolio or get rid of spam, and no real AI application would opt for a high-complexity, low-probability scenario over a low-complexity, high-probability one. We have, in fact, largely gotten rid of spam (when was the last time a Viagra ad appeared in your inbox?), and there are plenty of portfolio-maximizing AI applications out there.
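To make the “optimization function” jargon concrete, here is a toy sketch in Python. Every name and number in it is invented for illustration; it is not how any real trading system works. The point it makes is simple: an optimizer maximizes the objective its builders wrote down, over the action space its builders gave it.

```python
# Toy illustration of a bounded "utility function" optimizer.
# All actions, returns and names are hypothetical.
from itertools import product

ACTIONS = ["buy_tech", "sell_tech", "buy_bonds", "hold"]   # the entire action space
RETURNS = {"buy_tech": 1.04, "sell_tech": 0.99, "buy_bonds": 1.02, "hold": 1.00}

def portfolio_value(start, plan):
    """The 'utility function': expected portfolio value after a sequence of actions."""
    value = start
    for action in plan:
        value *= RETURNS[action]
    return value

def best_plan(start, horizon=3):
    # Exhaustive search over every allowed 3-step plan. "Start a war" is not
    # in ACTIONS, so it can never be chosen, whatever its hypothetical payoff.
    return max(product(ACTIONS, repeat=horizon),
               key=lambda plan: portfolio_value(start, plan))

print(best_plan(1_000_000.0))   # ('buy_tech', 'buy_tech', 'buy_tech')
```

Misuse is of course possible, but both the objective and the action space are things that people explicitly write down, review, and constrain.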

Let me be clear: Elon Musk is by far the greatest entrepreneur, and one of the greatest minds, of our time. But I have yet to see evidence that Elon isn’t out of his depth on this topic, which makes his indictment of Mark Zuckerberg’s expertise a bit ironic.

First off, “AI” as a blanket term is a fuzzy concept. There are many levels of machine intelligence, depending on how broad their domain of application is. Right now, we classify AI applications into two categories:

NARROW AI: 99.9% of today’s AI applications (including self-driving cars, DeepMind’s AlphaGo, and pretty much everything else you have seen or read about) fall into the category of what we call “narrow AI” (or weak AI). These are AI applications that operate at or above human-level intelligence, but in a narrow domain. A self-driving car application can’t be ported over to drive a motorcycle, for example, let alone trade stocks. Even DeepMind’s AlphaGo was trained to play Go on a 19x19 board, meaning that if the board had been bigger, smaller or even triangular, the application would have failed (see the toy sketch after these two definitions).

ARTIFICIAL GENERAL INTELLIGENCE: this is what Elon (and the others) are referring to: AI applications that, contrary to narrow AI, operate at or above human-level intelligence and possess the ability to generalize across domains. The same application or architecture can be used to drive a car, trade stocks, or learn a language, just as our human brains are one (extremely complex) paradigm of intelligence, capable of doing all of these things, often even concurrently.
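Here is the toy sketch promised above, in Python with NumPy. It is a hypothetical stand-in of my own, not DeepMind’s architecture: a “policy network” whose weights are hard-wired to a 19x19 input cannot even look at a 21x21 board, let alone drive a motorcycle.

```python
# Hypothetical sketch of how literal the narrowness of "narrow AI" is:
# the model's weights are tied to a 19x19 board, so any other input is rejected.
import numpy as np

BOARD_SIZE = 19
WEIGHTS = np.random.randn(BOARD_SIZE * BOARD_SIZE, BOARD_SIZE * BOARD_SIZE)

def policy(board: np.ndarray) -> np.ndarray:
    """Score every point on a 19x19 Go board."""
    if board.shape != (BOARD_SIZE, BOARD_SIZE):
        raise ValueError(f"this model only understands {BOARD_SIZE}x{BOARD_SIZE} boards")
    scores = WEIGHTS @ board.flatten()
    return scores.reshape(BOARD_SIZE, BOARD_SIZE)

policy(np.zeros((19, 19)))           # works
try:
    policy(np.zeros((21, 21)))       # a slightly bigger board...
except ValueError as err:
    print("narrow AI in one line:", err)   # ...and the model is helpless
```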

So what Elon is warning about is AGI. And frankly, he’s not entirely wrong about the potential of AGI and its theoretical dangers. But he is definitely out of his depth, and perhaps even intellectually lazy, about the reality, architecture, and inherent boundaries of Artificial General Intelligence. And his arguments betray a cognitive bias present throughout the tech industry: the tendency to oversimplify extraordinarily complex systems, which so often leads to startup failure.

He brushes aside the incredible difficulty of AGI. Don’t take my word for it; go straight to the only mathematical and engineering blueprint for AGI: Ben Goertzel, Cassio Pennachin and Nil Geisweiller’s “Engineering General Intelligence” (https://www.amazon.com/Engineering-General-Intelligence-Part-Cognitive/dp/9462390266).

This is perhaps due to the Silicon Valley view of AGI as an interconnected pyramid of neural nets, which is the track his team at OpenAI seems to have doubled down on. Many experts, including the people I work with at Novamente and Corto (the very people who created the OpenCog project, and in fact coined the term AGI), are confident this is not the way to achieve AGI. Neural nets are too limited to deal with uncertainty, and cannot in and of themselves create machines that can think, or even reason from sparse and ambiguous data.

He assumes AGI applications will have unbounded rationality, which is rather strange considering that any and all AI applications rely on input data, and even in the far-off future the amount of data available about reality, and the universe in general, will always be considerably limited. Even if AGI applications don’t have an intelligence or computation problem, they’ll definitely have a vision problem and a data problem, because our observable reality is only a fraction of the universe as a whole. Without a unifying view of that reality, AI applications will represent it in very diverse ways, which will most likely prevent them from collaborating “flexibly and scalably enough” (to borrow from the excellent Yuval Noah Harari) to fully dominate us.

He also conveniently steps over the main scenario of the human-AI relationship: the boundaries between biological and synthetic intelligence are getting blurrier and blurrier. How can synthetic intelligence turn against biological intelligence when there is no difference between the two? After all, biological intelligence is most likely just a transition towards synthetic intelligence. Is that what he means? Does he mean we’ll lose certain foundational aspects of our human nature, such as emotions? That will most likely happen, but we will end up exploring deeper emotions, more connected to the nature of the reality we live in. That Musk disregards this is puzzling, since he just started a brain-computer interface company called Neuralink.

Most importantly, he seems to ignore that all intelligence, especially machine intelligence, operates within a normative framework (motivated by goals inside of larger goals inside of larger goals) that is not, and never will be, finite, simply because nobody will ever have enough data to measure and validate the ultimate questions of “why?” and “what for?”. These questions have infinite answers, and machines can’t and won’t be able to see enough of the infinite to bind it or even invent it. Humans will. As we develop truly thinking machines, we will have many opportunities to create this ultimate framework of “why?” and “what for?”, perhaps even by instituting ourselves as the recipients of an ultimate normative framework that can neither be measured nor observed, and thereby cannot be questioned by machines (many of us have been calling that framework “God”).

The main, most immediate, and most punishing consequence of developing human-level machine intelligence is that it will destroy millions of jobs. Let’s talk more about that, please.

 