Sentireal® Blog

Artificial Intelligence – To be feared or encouraged?

01/05/2015
sentireal

The technical field of “artificial intelligence” (AI) has, over the course of several decades, undergone several cycles of interest and expectation followed by ambivalence or disillusionment in the public eye. Of late, a number of high-profile statements have been published by luminaries such as Stephen Hawking and Elon Musk, warning of the potentially harmful effects on mankind of more sophisticated artificial intelligence. Phrases such as “the development of full artificial intelligence could spell the end of the human race” and “artificial intelligence is our biggest existential threat” paint threatening images of future thinking machines that could easily have stepped out of the movies 2001, Terminator or Avengers: Age of Ultron. Are these leading (human) thinkers exaggerating the dangers of AI making decisions to the detriment of mankind, or should we be as wary of developments in AI as we are of developments in, say, medical research areas such as genetics or stem-cell research that have the potential to totally rewrite the terms of our existence?

It’s probably worth making the point that nearly all technological breakthroughs have a degree of moral ambiguity. From fire and gunpowder to the printing press, the internal combustion engine, nuclear energy and space technologies, history has shown mankind’s ability to direct the same technologies to both beneficial and malevolent ends. However, many of the fears about AI stem more from an accidental “loss of control” over the technology which, rather than unleashing the elemental destructive forces associated with a nuclear or space accident, could result in coldly calculating thinking machines making a well-considered decision that humans are more trouble than they’re worth. Whilst it’s well worth having regular ethical debate about the potential effects of any technology, is this doomsday scenario particularly likely?

To discuss that question, let’s rewind to consider a few fundamental points. Firstly, what do we mean by “artificial intelligence”? Or “intelligence”, for that matter? Oxford Dictionaries defines intelligence as “the ability to acquire and apply knowledge and skills”, indicating both the ability to learn and the ability to apply that learning to performing something useful. By that measure, artificial intelligence describes a non-human entity (a machine) capable of learning, improving skills and applying the results of that learning and skills improvement to the performance of some task or duty. Although the concepts are not formally captured in this definition, the process of learning typically involves the assimilation of new knowledge gained from practical experience and feedback, whilst the performance of the task or duty typically involves some level of autonomous decision-making or strategizing. Note that this definition does not insist that the machine thinks or acts exactly like a human, but it does capture the need for the machine to think and act rationally. Other definitions of AI instead focus on the ability of a machine to think and/or act indistinguishably from a human, which will often amount to the same thing as thinking or acting rationally, but will differ under some conditions. Humans are not perfectly rational, after all!
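
To make that “learn and apply” definition concrete, the following is a minimal sketch (not from the original post) of a machine that acquires knowledge purely from reward feedback and applies it through autonomous choices: an epsilon-greedy “bandit” learner written in Python. All names and the toy payoff probabilities are illustrative assumptions.

```python
import random

class BanditAgent:
    """Learns which of several actions pays off best, purely from feedback."""

    def __init__(self, n_actions, epsilon=0.1):
        self.epsilon = epsilon            # how often to explore at random
        self.counts = [0] * n_actions     # times each action has been tried
        self.values = [0.0] * n_actions   # running mean reward per action

    def choose(self):
        # Autonomous decision-making: mostly exploit the best-known action,
        # occasionally explore so that learning can continue.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda a: self.values[a])

    def learn(self, action, reward):
        # Assimilate new knowledge: update the running mean reward.
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

# Toy environment (hidden from the agent): action 2 pays off most often.
payoff = [0.2, 0.5, 0.8]
agent = BanditAgent(n_actions=3)
for _ in range(1000):
    a = agent.choose()
    r = 1.0 if random.random() < payoff[a] else 0.0
    agent.learn(a, r)

print("Learned action values:", [round(v, 2) for v in agent.values])
```

Nothing here resembles human thought; the agent simply becomes rational with respect to the feedback it receives, which is the sense of “intelligence” used above.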

If we accept a definition of AI as perfect mimicry of a human, then mankind is in no greater danger from excellent AI than it is from the worst possible human placed in a position of greatest destructive influence. In that sense, the required safeguards against destructive AI are no different from those required against destructive humans! If we instead accept a definition of AI as perfect rationality with respect to a learned set of principles about what it means to perform a task or duty “well”, then the required safeguards against destructive AI are essentially the same as those used in deploying safety-critical technologies in areas such as automobiles, aircraft or nuclear power facilities. The question for the designer is: are the supplied operating principles sufficiently well-formed to allow the machine to learn and make rational adaptations and decisions whilst remaining within human ethical bounds and comfort factors that the new machine may not naturally comprehend? This is far from a trivial problem, but neither is it a new one, nor is it significantly different for AI compared to many other technology types. Essentially, for either major definition of AI, the balanced approach seems to be to continue to advance the technology for the good of mankind whilst maintaining the stringent safety and ethical checks and balances reserved for similarly disruptive technologies in other fields such as medicine or power generation.
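
The “checks and balances” point can also be sketched in code. The fragment below is a hypothetical illustration, not a real safety system: a learned, adaptive policy is wrapped by a fixed, human-specified interlock that vetoes any proposed action outside the supplied operating principles, in the spirit of the interlocks used in automotive or nuclear control. GuardedAgent, SPEED_LIMIT and the placeholder policy are all invented for the example.

```python
class GuardedAgent:
    """Wraps a learned policy with a hard, human-specified safety constraint.

    The policy may learn and adapt freely; the guard is a fixed interlock
    that vetoes any proposed action violating the operating principles.
    """

    def __init__(self, policy, is_permitted, fallback_action):
        self.policy = policy              # adaptive, learned component
        self.is_permitted = is_permitted  # fixed predicate: (state, action) -> bool
        self.fallback_action = fallback_action

    def act(self, state):
        proposed = self.policy(state)
        if self.is_permitted(state, proposed):
            return proposed
        # Constraint violated: fall back to a known-safe action instead.
        return self.fallback_action

# Hypothetical use: a speed controller that must never exceed a hard limit.
SPEED_LIMIT = 30.0

def learned_policy(state):
    # Placeholder for a learned controller proposing a new speed.
    return state["current_speed"] + 10.0

def within_limit(state, proposed_speed):
    return proposed_speed <= SPEED_LIMIT

agent = GuardedAgent(learned_policy, within_limit, fallback_action=SPEED_LIMIT)
print(agent.act({"current_speed": 25.0}))  # prints 30.0: the 35.0 proposal is vetoed
```

The design choice mirrors safety-critical engineering practice: the adaptive part is free to improve, but it can never act outside bounds that a human has set and can audit.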

As a final point, it should be noted that we have sufficient time to see that this balanced approach is implemented and conducted properly. Current evidence indicates that the state of the art in AI is still fairly limited, and machine intelligence that rivals or exceeds human intelligence is still a long way off, regardless of what the media or popular culture might say. Whilst existing AI technologies take some inspiration from the operation of the human brain, to the extent that we understand that operation, the detailed structures and levels of performance of these machines are quite different from, and often inferior to, those of the human brain. Of course the techniques used in AI are improving, in some cases quite rapidly, but not so rapidly that the usual safety and ethical considerations are in inherent danger of being overlooked due to the speed of technical advancement.

Picture by Sean Davis and reproduced unaltered under licence (CC BY-ND 2.0)