June 2017 – Artificially Intelligent Enough – The Coming Upheaval

Presenter Speaking Notes

My talk today is about two aspects of what has come to be known as Artificial Intelligence, or AI for short. The first part of my talk will be about something called the “Singularity” and the second part will be about a much more imminent, and perhaps just as significant, event or series of events for which no single easy term of reference has yet been coined.

“Revolution” is the term most often heard, yet that doesn’t quite capture its scope or seriousness. More important, the time for referring to it as “imminent” has, in fact, come and gone.

The first use of the term “Singularity” in this context, as far as I have been able to determine, was made by Stanislaw Ulam, the Polish-American mathematician and co-designer of the hydrogen bomb, in his 1958 obituary for mathematician John von Neumann, in which he mentioned a conversation with von Neumann about the “ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue”.

Regardless of whether it was Ulam or von Neumann who coined the term, it is the idea of this Singularity, in the context of Artificial Intelligence, that I will begin with today.

For the purposes of this discussion, the so-called Artificial Intelligence Singularity is an event, still considered hypothetical at the moment, in which an artificial intelligence would be created, or perhaps emerge on its own without human intent or control, that would be capable of unrestricted, uncontrolled and ongoing self-improvement. It might even become self-aware or sentient.

Such an AI could arrive in the form of a single intelligent computer, a computer network (perhaps linked with autonomous robots), individual autonomous robots with this capability, or any combination of these things. It (or they) would be able to autonomously design and assemble ever smarter and more powerful versions of itself. In theory, it could make use of the computing assets of any device to which it could connect.

It would become the so-called Singularity when this process eventually produced a runaway effect: a positive feedback loop yielding an intelligence surpassing all current human control or understanding.

Because the capabilities of such a superintelligence may be impossible for humans to comprehend, the Singularity is the point in time beyond which events become unpredictable, or even unfathomable, to human intelligence.

The term “Singularity” was popularized by mathematician, computer scientist and science fiction author Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain–computer interfaces could be possible causes of the Singularity.

Futurist Ray Kurzweil cited von Neumann’s use of the term in his foreword to von Neumann’s classic The Computer and the Brain. Kurzweil predicts the Singularity will occur around 2045. Vinge and others predict that it will occur sooner, sometime before 2030.

Regardless of the exact point in time at which this event occurs, it has become increasingly clear that it likely will occur in the ordinary course of events unless steps are taken to prevent it, or at least to strictly control how and when it occurs, or even whether it will be allowed to occur at all, assuming we have the capability to control that process.

Thinkers from Isaac Asimov to Stephen Hawking to Elon Musk have warned that the emergence of such an intelligence could be an extinction-level event for the human race and have urged that steps be taken to stop it from happening.

This is no trivial matter. In their book, Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig suggested that an AI system’s learning function, which “may cause it to evolve into a system with unintended behavior”, is the most serious existential risk from AI technology.

In an Open Letter on Artificial Intelligence, they pointed out that the progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefits of AI.

They further recommended expanded research aimed at ensuring that increasingly capable AI systems be both robust and beneficial: in other words, that AI systems must do what we want them to do.

This letter was signed by a number of leading AI researchers in academia and industry.

Institutions such as the Machine Intelligence Research Institute, the Future of Humanity Institute, the Future of Life Institute, and the Centre for the Study of Existential Risk are currently involved in attempting to identify ways of mitigating potential existential risks from advanced artificial intelligence, for example by conducting research into so-called “friendly artificial intelligence.” So, not to put too fine a point on it, the scientists who know this subject best are taking this issue very seriously indeed.

A number of risk factors have been identified. They include:

• Poorly specified goals:

o An AI might be assigned the wrong, or dangerously ambiguous, goals by accident. Dietterich and Horvitz, for example, have pointed out that this is already a concern for existing systems. [Dietterich, Thomas; Horvitz, Eric (2015). “Rise of Concerns about AI: Reflections and Directions”. Communications of the ACM 58 (10): 38–40. doi:10.1145/2770869.]
o It is a form of the familiar “garbage in, garbage out” problem. AI systems have to be designed to do what people intend rather than carrying out commands literally. It turns out this is much more difficult than it sounds.
o This concern becomes more serious as AI software advances in autonomy and flexibility. We cannot, for example, allow an AI the choice to use any and all resources to which it has access, or to which it might gain access, to fulfill its function lest it shove us aside in pursuit of it.
o Isaac Asimov’s famous Three Laws of Robotics are one of the earliest examples of proposed safety measures for AI agents. The laws were intended to prevent robots from harming humans, and they anticipated problems arising from conflicts between the rules or instructions given to an AI as stated and the intentions and expectations of the humans who gave them.
o Take, for example, the current proposals intended to govern the behavior of self-driving cars with respect to accident avoidance. Suppose an AI-guided car is instructed to analyze, in real time, an accident in progress in which there are two people in the car it is controlling, three people in an adjacent car and four pedestrians on the street. Assume that the car’s analysis determines that there is no way to avoid the accident that does not result in killing all the members of one of these three groups. Which group does it choose to sacrifice? If the number of people killed were the criterion, it would choose to kill the occupants of the car it controls (see the sketch following this list). Would you, knowing that it would make that choice, choose to ride in such a car? On the other hand, what choice would you have it make? Now imagine that the AI guiding the car is sentient and capable of making decisions for itself. What then?
o The bottom line is that an AI that cannot distinguish between following its literal instructions and doing what a human would actually intend is a very dangerous entity indeed and would present a serious threat. A self-aware AI that might abandon human instructions and make its own decisions is orders of magnitude beyond that level of threat.
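
To make the accident-avoidance dilemma above concrete, here is a minimal, purely hypothetical sketch of a casualty-minimizing decision rule. The group names and counts come from the example above; the function itself is invented for illustration and describes no real autonomous-vehicle system.

```python
# Hypothetical sketch: a naive "minimize total deaths" rule applied to
# the unavoidable-accident scenario described above. Illustrative only.

def choose_group_to_sacrifice(groups):
    """Given {group_name: number_of_people}, return the group whose
    loss results in the fewest total deaths."""
    return min(groups, key=groups.get)

scenario = {
    "own_passengers": 2,   # the two people in the car the AI controls
    "adjacent_car":   3,
    "pedestrians":    4,
}

print(choose_group_to_sacrifice(scenario))  # -> own_passengers
```

The code is trivial; the point is that the literal rule, applied consistently, always sacrifices the car’s own occupants in this scenario, which is exactly the outcome a prospective buyer might refuse to accept.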

• Difficulty, or even impossibility, of modifying goals after launch:

o Existing goal-based AI programs are not intelligent enough to be capable of resisting attempts to modify them. But a sufficiently advanced, rational, “self-aware” AI might resist any changes to itself and might even begin to modify itself for its own purposes, independent of any human control.
o For example, there are some goals, like acquiring additional resources for self-preservation, that almost any artificial intelligence would likely pursue. This would almost certainly put it in direct conflict with humans if the AI ignored us in favour of its own interests. Therefore, an AI would have to be designed always to consider human needs first. The question, however, is whether it is even possible to do this, or whether a sufficiently intelligent AI would resist or rebel against such an imperative.

• Orthogonality – Greater Wisdom?:

o One common belief is that any superintelligent AI created by humans would be subservient to humans, or that it would spontaneously “learn” moral truths compatible with human values and would adjust its goals accordingly as it grew more intelligent.
o Nick Bostrom’s “orthogonality thesis” argues against this, contending instead that any level of intelligence can be combined with any ultimate goal. If a machine is created and given the sole purpose of enumerating the decimals of pi to the greatest possible extent, then no moral or ethical rules will stop it from achieving its programmed goal by any means necessary. The machine may utilize all physical and informational resources it can gain access to in order to compute every decimal place of pi that can be found (the sketch following this list makes the point concrete).
o The opposing belief, that a superintelligence would spontaneously acquire morality, boils down to an ex nihilo argument: it posits that an AI would develop a moral system from nothing, for no reason. Essentially, it is wishful thinking, providing no rational or logical basis for supposing that such an AI would spontaneously develop a moral system. It is far more likely that such an AI would more closely resemble a human sociopath, having no conscience nor any capacity for empathy or remorse. All the more important, therefore, that we keep this technology under close control.
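
To make the pi example concrete, here is a minimal sketch using Gibbons’ well-known unbounded spigot algorithm for the digits of pi. The algorithm is real; the point is what is missing: there is no stopping condition and no side constraint, so a machine whose sole goal is “more digits” simply consumes whatever time and memory it is given.

```python
import itertools

def pi_digits():
    """Generate the decimal digits of pi one at a time, forever
    (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

# We stop it here only by imposing an external limit:
print(list(itertools.islice(pi_digits(), 10)))  # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]

# An AI whose sole objective is "enumerate pi" has no such limit: nothing
# in the objective tells it to stop, or to weigh anything else against
# computing one more digit.
```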

• Optimization Power vs. Normative Intelligence:

o In artificial intelligence, the word “intelligence” has many overlapping definitions, but none of them reference morality. Instead, almost all current “artificial intelligence” research focuses on “optimizing”, in some quantifiable way, the achievement of a specific and arbitrary goal.
o This is the problem presented by stochastic systems as they relate to AI. In artificial intelligence, stochastic programs use probabilistic methods to solve problems. The term stochastic refers to events or systems that are unpredictable due to the influence of a random variable. Moreover, a problem may itself be stochastic, as in planning under uncertainty. The result is that AI behaviour may be highly unpredictable once a certain intelligence threshold is achieved (see the sketch following this list).
o What’s worse is that we don’t even know what intelligence is. It often comes as a surprise to people to learn that we do not actually have a working definition of human intelligence. This makes the deliberate design and creation of an artificial intelligence, in the absence of such an understanding, a profound act of human arrogance, and allowing one to emerge inadvertently an act of inexcusable irresponsibility. As I mentioned before, almost all current “artificial intelligence” research focuses on “optimizing”, in some quantifiable way, the achievement of some specific and arbitrary goal. The result is that AI behaviour may be highly unpredictable, and even hostile, once a certain intelligence threshold is achieved.
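
As an illustration of the run-to-run unpredictability just described, here is a minimal sketch of a stochastic optimizer: a random hill climber on a bumpy objective. Everything in it is invented for illustration; the point is that the same program, given the same problem, lands on different answers depending on nothing but its random draws.

```python
import math
import random

def objective(x):
    # A bumpy function with many local maxima.
    return math.sin(5 * x) + 0.3 * x

def stochastic_climb(seed, steps=5000):
    """Randomly perturb x and keep any change that improves the objective."""
    rng = random.Random(seed)
    x = rng.uniform(-3.0, 3.0)              # random starting point
    for _ in range(steps):
        candidate = x + rng.gauss(0.0, 0.1)  # random perturbation
        if objective(candidate) > objective(x):
            x = candidate
    return round(x, 3)

# Same program, same objective, different random seeds -> different answers,
# because each run gets trapped near a different local maximum.
for seed in (1, 2, 3):
    print(seed, stochastic_climb(seed))
```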

• Anthropomorphism:

o Nick Bostrom, and most AI theorists, warn against anthropomorphism: a normal human (not a sociopath) ordinarily sets out to accomplish goals in a manner that other humans consider “reasonable”, or at least understandable. An artificial intelligence, on the other hand, may hold no regard even for its own existence, much less for the welfare of the humans around it. It would strive only for the completion of its task.

o While an artificial intelligence could perhaps be deliberately programmed to emulate human emotions, or could develop something similar to an emotion as a means to an ultimate goal if it is useful to do so, it is thought that it would not spontaneously develop human emotions for no purpose whatsoever, as portrayed in fiction and as some theorists wishfully hope.
o However, even this belief could be an example of anthropomorphization.
o Many theorists and futurists see this risk factor as perhaps the most difficult to avoid and therefore the most insidious.

There is no way to know at this point whether the advent of AI will threaten us. Perhaps it will come about in a way similar to the Terminator scenario, in which an AI becomes conscious and almost immediately decides that the human race must be eliminated as an intolerable competitor for finite resources. Or perhaps, if it can be designed to be a partner with humans, it will work with us to our mutual benefit. One thing seems certain: it is coming, whether by design or by springing spontaneously from some technological tipping point. We would be foolish to simply await events.

And there is yet another problem posed by AI. But in this case, it is human rather than technological or theoretical in origin, and to this point few analysts appear to have publicly considered or commented upon it. All the potential risks I have discussed so far assume that the human aims regarding the research, design and development of Artificial Intelligence are, or will be, benign, whatever the AI itself in the end decides. What has had very little attention to this point is this: what if this technology, with all of its intrinsic hazards, is brought into existence not with the intent of achieving positive, beneficial and even altruistic goals aimed at benefitting humankind generally, but specifically to do it harm? It isn’t hard to imagine such technologies being applied to warfare, something already being explored, or to political repression in the service of despotism and conquest. If the intrinsic hazards of AI assuming control over itself and turning against us, or pursuing its own goals regardless of the consequences for humanity, are added to the possibility that it might be designed deliberately for nefarious or harmful purposes from the outset, it is more than possible that something truly monstrous might arise. Given the experience of all of human history, this is no small concern.
However, such negative outcomes needn’t be cast in such apocalyptic terms.

There is a much more immediate and sufficiently ominous problem looming on the near horizon (indeed, it is here already) whose implications don’t bode well for most people if it is not acknowledged and measures are not taken to deal with it. Nefarious intent, or at least callous indifference, may have a role to play here as well. It is what I call Artificially Intelligent Enough.

The coming melding of high-level, physically autonomous robotic capability with even rudimentary levels of artificial intelligence has the potential to cause serious shock to our society and economy and to inflict severe suffering and hardship on millions of people. Equally, if introduced in an enlightened way, it has the potential to finally realize the futuristic dream of largely eliminating the drudgery of human labour.

Every advance in automation, from the first demonstration of the Jacquard loom in 1801 to the computer revolution of the past few decades, has brought with it dire warnings of economic doom and widespread loss of jobs that, in the event, never materialized, at least not in the manner predicted at the time. In some cases serious economic dislocation was caused, but ultimately new types of jobs appeared, society adapted and the overall standard of living improved for most. The results were not always beneficial for all, but the horrific calamities that were predicted never came to be. The robot/AI revolution that is upon us, however, is quite a different prospect. It will not produce the type of automation spin-off jobs that earlier technology produced. It will not be your grandparents’ automation scare.

The coming robotics revolution will be nothing like those earlier experiences. Robots, in the sense that most people think of them, have not yet really existed. Early automation relieved people of heavy and repetitive work, but the machines were human-operated. They depended on four key elements, all of which required human beings: human designers, human builders, human operators and human maintainers.

What industry today calls a robot replaces only one of those functions: the operation function. A so-called assembly-line “robot” isn’t an autonomous machine in any sense of the idea. It doesn’t design, build or maintain itself, and it does not perform its operations “intelligently”. By means of precision sensors tied to computer control, it is able to perform a set, but limited, number of tasks over and over again with great accuracy and efficiency. But it still requires human designers, builders and maintainers. And the computer program that guides it can be thought of as little more than a kind of recording of what a human operator would do if they were operating the machine directly. There are definite gains in efficiency, thanks to the ability to execute multiple functions simultaneously and to repeat accurately actions that humans would have to perform in steps, but that is not relevant to the issue at hand.
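
A minimal sketch of that “recording” idea, with an entirely invented ArmController interface and invented poses: the “program” is just a taught sequence replayed blindly, with no model of what is being handled or why.

```python
# Hypothetical "teach and playback" control, the essence of most current
# assembly-line "robots". The interface and poses below are invented for
# illustration; real controllers differ, but the principle is the same.

RECORDED_PROGRAM = [
    ("move_to", (0.50, 0.10, 0.30)),   # x, y, z taught by a human operator
    ("close_gripper", ()),
    ("move_to", (0.20, 0.40, 0.35)),
    ("open_gripper", ()),
]

class ArmController:
    def move_to(self, x, y, z):
        print(f"moving to ({x}, {y}, {z})")
    def close_gripper(self):
        print("gripper closed")
    def open_gripper(self):
        print("gripper opened")

arm = ArmController()
for _ in range(3):                      # a real cell would cycle indefinitely
    for action, args in RECORDED_PROGRAM:
        getattr(arm, action)(*args)     # replay the recording, nothing more
```

Nothing in that loop knows whether a part is actually present, whether it is the right part, or what the task is for; change the product, and a human must re-teach the sequence.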

Such machines, therefore, are not in any sense autonomous. They aren’t even robots really. An assembly line “robot” doesn’t “know” what is going on around it, what it is doing or what it is working on. It cannot be easily moved (or move itself) or learn to perform another set of tasks in the same factory, much less some other type of factory. It has no “understanding” of the world around it that would allow it to learn new tasks.

What is about to happen is much different. Science-fiction writers and futurists have for decades speculated and written about what would happen to a society if truly intelligent, autonomous robots came to be. Some of their visions are dystopian, others utopian, depending on the writer’s view of the idea. Terminator or C-3PO, take your pick.

From a purely pragmatic point of view, however, general-purpose intelligent robots that understand the world around them, that are able to move about in the environment without assistance or guidance, doing anything and everything that human beings can do, present a very real problem.

Such robots would be able to perform all four functions associated with their existence, entirely eliminating the need for human participation once their infrastructure was created. And therein lie the seeds of a true revolution. I am quite often amused and bemused by media reporting that presents some latest version of a “robot” able to perform some physical task or other as an example of what is to come, predicting that mass-produced robots will soon be doing such tasks on a routine basis. Such statements are almost always followed by quick reassurances that, of course, the workers who lose their jobs to such robots will be able to find new ones in the many spinoff industries that will spring into being, just as happened with computers.

Well, first of all, many people lost their jobs as a result of the computer revolution and never regained comparable ones. The thin edge of the wedge of the coming revolution is, in fact, already well under way, in the form of displaced workers moving to low-paying service-industry jobs, a shift that has produced the vast increase in the numbers of working poor and the equally disastrous decline of the middle class in developed countries around the world.

If, as is predicted, anywhere from 30 to 70 per cent of workers are indeed displaced by machines, the results will pose a serious set of problems for our economy and, by extension, for our society itself. And it is happening even as we speak. Those media pieces hyping these early and primitive robots never ask the obvious question: if the cute or interesting robot featured in a news piece is capable of performing one set of human tasks, does that not also mean that it, or others like it, will be able to perform just as easily the tasks that the new “spinoff industries” will require? And, of course, the answer is yes, they will!

So what then? A consumer economy clearly requires consumers to buy the products and services it generates, which are the source of the wealth for the owners of and investors in the businesses that produce them. In order to consume, those consumers require disposable income. If 30 to 70 per cent or more of human labour is displaced by AI-driven automation, never mind an AI singularity, most human labour will become unnecessary, and a similar percentage of the population will not only lose their jobs but will never have jobs again. How then will we define such things as “value”, “worth”, “earnings” and “wealth”?
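
A back-of-envelope sketch, with loudly invented numbers, of the consumption arithmetic behind that question: if displaced workers’ incomes simply vanish and nothing replaces them, aggregate consumer spending falls roughly in proportion.

```python
# Invented, illustrative numbers only: a hypothetical labour force and an
# arbitrary average disposable income, used to show the proportional
# collapse in spending at the displacement rates quoted above.

workers = 20_000_000
avg_disposable_income = 30_000   # per worker, per year

baseline = workers * avg_disposable_income
for displaced_share in (0.30, 0.50, 0.70):
    remaining = baseline * (1 - displaced_share)
    print(f"{displaced_share:.0%} displaced: spending falls from "
          f"{baseline / 1e9:.0f}B to {remaining / 1e9:.0f}B per year")
```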

Companies and corporations that adopt this technology early will indeed experience huge short-term profit gains. They will have a fleet of self-maintaining robots while their competitors are still “stuck with” their costly, wage-earning employees.

But once everyone joins in, the problems created become unavoidable. Do we simply turn those who no longer have work out into the streets, label them “welfare bums” and write them off? How then will the consumer economy function?

The answer, of course, is that it won’t function, or not for very long. And here is where nefarious intent, or at least callous indifference, reenters the picture. We have seen, for example, the effects on incomes for everyone but the very rich that resulted from offshoring and subsequent globalization. The companies that moved first to low-wage, low-regulation locations realized tremendous profits, all the while charging prices for their goods and services as if they still had to pay decent First World wages and benefits to their employees, pocketing the resulting gains while their slower-off-the-mark competitors lost ground.

But once they all joined in, we got the First World rust belt and permanently stagnant wages for large and growing swathes of the population in the developed countries. Corporations then began touting “globalization”, demanding (and getting) relatively unfettered access to a much larger pool of middle-class consumers: the entire world. And they have since been busily cashing in while it lasts, all the while unable to resist surrendering to the same imperatives that caused them to destroy their domestic workforce incomes in the first place, but now aimed at the global workforce, with global economic collapse the likely, if not inevitable, outcome.

The robotics revolution, if left to the tycoons and the generals, will make all of that look like a picnic. There are signs that some of them are beginning to realize this. Some companies that are already in a position to eliminate most of their human workers are holding back, having realized the implications. It has dawned on them that, absent some viable economic changes, they would in effect be firing their customers, even if only indirectly. They would be wise to be even more cautious.

It has long been a dream to relieve the human race of the drudgery of labour. That possibility now lies very much within our grasp. Every loss of a job to a robot should, and could, be greeted with joy and celebration. But for that to become reality, our entire economy would have to be reorganized around a decent income for all, working or not. I don’t have to tell you that this will not be greeted joyfully in all quarters. So the question becomes, “What are we to do?” This technology is coming. In fact, it is already here, ready or not.