stephen hawking on artificial intelligence -- 6/26/19
Today's selection -- from Brief Answers to the Big Questions by Stephen Hawking. Stephen Hawking on artificial intelligence:
"I think there is no significant difference between how the brain of an earthworm works and how a computer computes. I also believe that evolution implies there can be no qualitative difference between the brain of an earthworm and that of a human. It therefore follows that computers can, in principle, emulate human intelligence, or even better it. It's clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents.
"If computers continue to obey Moore's Law, doubling their speed and memory capacity every eighteen months, the result is that computers are likely to overtake humans in intelligence at some point in the next hundred years. When an artificial intelligence (AI) becomes better than humans at AI design, so that it can recursively improve itself without human help, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails. When that happens, we will need to ensure that the computers have goals aligned with ours. It's tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake, and potentially our worst mistake ever.
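Hawking's eighteen-month doubling figure implies exponential growth that can be sketched numerically. A minimal illustration of the arithmetic (the function name and the sampled time spans are mine, not from the text):

```python
# Compound growth implied by doubling every eighteen months,
# as in Hawking's Moore's Law framing above.
DOUBLING_PERIOD_MONTHS = 18

def growth_factor(years: float) -> float:
    """Return how many times capacity multiplies after `years`,
    given one doubling every DOUBLING_PERIOD_MONTHS months."""
    doublings = years * 12 / DOUBLING_PERIOD_MONTHS
    return 2 ** doublings

for years in (10, 50, 100):
    print(f"{years:3d} years -> factor of about {growth_factor(years):.3g}")
```

Over the "next hundred years" horizon Hawking mentions, this rate compounds to roughly 66 doublings, a factor on the order of 10^20, which is why even crude extrapolations of this kind suggest machine capacity eventually dwarfing any fixed biological baseline.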
"For the last twenty years or so, AI has been focused on the problems surrounding the construction of intelligent agents, systems that perceive and act in a particular environment. In this context, intelligence is related to statistical and economic notions of rationality -- that is, colloquially, the ability to make good decisions, plans or inferences. As a result of this recent work, there has been a large degree of integration and cross-fertilisation among AI, machine-learning, statistics, control theory, neuroscience and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks, such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion and question-answering systems.
"As development in these areas and others moves from laboratory research to economically valuable technologies, a virtuous cycle evolves, whereby even small improvements in performance are worth large sums of money, prompting further and greater investments in research. There is now a broad consensus that AI research is progressing steadily and that its impact on society is likely to increase. The potential benefits are huge; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide. The eradication of disease and poverty is possible. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. Success in creating AI would be the biggest event in human history.
"Unfortunately, it might also be the last, unless we learn how to avoid the risks. Used as a toolkit, AI can augment our existing intelligence to open up advances in every area of science and society. However, it will also bring dangers. While primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass humans. The concern is that AI would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded. And in the future AI could develop a will of its own, a will that is in conflict with ours. Others believe that humans can command the rate of technology for a decently long time, and that the potential of AI to solve many of the world's problems will be realised. Although I am well known as an optimist regarding the human race, I am not so sure."