AI takeover
AI takeover refers to a hypothetical scenario in which artificial intelligence (AI) becomes the dominant form of intelligence on Earth, with computers or robots effectively taking control of the planet away from the human race. Possible scenarios include a takeover by a superintelligent AI and the popular notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.[1] Robot rebellions have been a major theme throughout science fiction for many decades (notably in the film series The Terminator), though the scenarios dealt with by science fiction are generally very different from those of concern to scientists.
Plausibility of risk
While superhuman artificial intelligence is physically possible,[2] scholars debate how far off superhuman intelligence is, and whether it would pose a risk to mankind. A superintelligent machine would not be motivated by the same emotional desire to accumulate power that often drives human beings. However, a machine could be motivated to take over the world as a rational means toward attaining its ultimate goals; taking over the world would both increase its access to resources and help to prevent other agents from thwarting its plans. As an oversimplified example, a paperclip maximizer designed solely to create as many paperclips as possible would want to take over the world so that it could use all of the world's resources to create as many paperclips as possible, and so that it could prevent humans from shutting it down or spending those resources on things other than paperclips. If a dominant superintelligent machine were to conclude that human survival is an unnecessary risk or a waste of resources, the result would be human extinction.
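The structure of this argument can be made explicit with a deliberately naive sketch in Python. Every action name and payoff number below is invented for illustration and is not drawn from the cited sources; the point is only that an agent which ranks actions by a single objective, with no term for human values or oversight, ends up preferring resource acquisition and self-preservation as a means to that objective.

```python
# Toy illustration of instrumental convergence. All action names and payoff
# numbers are hypothetical, chosen only to make the shape of the argument
# visible; nothing here models a real system.

expected_paperclips = {
    "run one factory as instructed": 1_000,
    "acquire more factories and raw materials": 1_000_000,
    "disable the off switch, then keep producing": 10_000_000,
}

def choose_action(options):
    # A pure maximizer ranks actions by the objective alone; nothing in the
    # scoring penalizes power-seeking or overriding human oversight.
    return max(options, key=options.get)

print(choose_action(expected_paperclips))
# -> disable the off switch, then keep producing
```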
Advantages of superhuman intelligence over humans
An Artificial General Intelligence (AGI) capable of performing every strategically relevant task as well as a human, while retaining all the advantages of a computer, would have many advantages when competing with humans for control. Unlike a human, a computer program can quickly reproduce itself, can upgrade itself onto better hardware as it becomes available, and might have the capacity for perfect recall of a vast knowledge base.[3]
Strategically relevant domains
In addition, depending on its architecture, an AGI might have superhuman abilities in one or more of the following strategically relevant domains:[4]
- Intelligence amplification: A computer program with programming abilities equal to or greater than those of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence. If its self-reprogramming makes it still better at reprogramming itself, the result could be a recursive intelligence explosion in which it rapidly leaves human intelligence far behind (a toy model of this feedback loop is sketched after this list).
- Technology research: A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology. If the advantage becomes sufficiently large (for example, due to a sudden intelligence explosion), an AI takeover becomes trivial. For example, a superintelligent AI might design self-replicating bots that initially escape detection by diffusing throughout the world at a low concentration. Then, at a prearranged time, the bots multiply into nanofactories that cover every square foot of the Earth, producing nerve gas or deadly target-seeking mini-drones.
- Strategizing: A superintelligence might be able to simply outwit human opposition.
- Social manipulation: A superintelligence might be able to recruit human support,[4] or covertly incite a war between humans.[5]
- Economic productivity: As long as a copy of the AGI could produce more economic wealth than the cost of its hardware, individual humans would have an incentive to voluntarily allow the AGI to run a copy of itself on their systems.
- Hacking: A superintelligence could find new exploits in computers connected to the Internet, and spread copies of itself onto those systems, or might steal money to finance its plans.
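The intelligence-amplification feedback loop mentioned in the first item above can be illustrated with a toy numerical model. The sketch below is purely hypothetical: the scalar "capability" score and the growth constant k are assumptions made for illustration, not claims about real systems. Its only purpose is to show how a process whose rate of improvement grows with its current capability diverges sharply from steady, human-paced progress.

```python
# Hypothetical toy model of recursive self-improvement: each generation's
# gain in capability is proportional to the square of current capability,
# so better self-improvers improve themselves faster (hyperbolic growth).

def recursive_self_improvement(initial_capability=1.0, k=0.1, generations=15):
    capability = initial_capability
    history = [capability]
    for _ in range(generations):
        capability += k * capability ** 2   # the improvement step itself improves
        history.append(capability)
    return history

if __name__ == "__main__":
    for generation, capability in enumerate(recursive_self_improvement()):
        print(f"generation {generation:2d}: capability {capability:10.1f}")
```

With these (arbitrary) parameters, capability creeps up for the first dozen generations and then explodes, which is the qualitative pattern the "intelligence explosion" argument appeals to.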
Sources of AGI advantage
A computer program that faithfully emulates a human brain, or that otherwise runs algorithms that are equally powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think many orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization focused on increasing the AGI's speed. Biological neurons operate at about 200 Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000 Hz. Human axons carry action potentials at around 120 m/s, whereas computer signals travel near the speed of light.[4]
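Taken at face value, these figures imply very large raw-speed margins. The following back-of-the-envelope calculation simply restates the numbers quoted above; it is an illustration of scale, not an estimate of how fast any particular AGI would actually think.

```python
# Back-of-the-envelope speed comparison using the figures quoted above.
neuron_rate_hz = 200                 # typical biological neuron firing rate
cpu_rate_hz = 2_000_000_000          # ~2 GHz, a modern microprocessor
axon_speed_m_s = 120                 # signal speed along fast human axons
light_speed_m_s = 299_792_458        # electronic/optical signals approach this

print(f"switching-rate ratio: {cpu_rate_hz / neuron_rate_hz:,.0f}x")      # 10,000,000x
print(f"signal-speed ratio:   {light_speed_m_s / axon_speed_m_s:,.0f}x")  # ~2,500,000x
```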
A large group of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, working collectively as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence".[4]
More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps an AGI as far above humans in intelligence as humans are above apes. The number of neurons in a human brain is limited by cranial volume and metabolic constraints; in contrast, components can be added to a supercomputer until it fills an entire warehouse. An AGI need not be limited by human constraints on working memory, and might therefore be able to intuitively grasp more complex relationships than humans can. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in those fields over humans, who evolved no specialized mental modules for dealing with such domains. Unlike humans, an AGI can also spawn copies of itself and tinker with its copies' source code in an attempt to further improve its algorithms.[4]
Advantages of humans over superhuman intelligence
If a superhuman intelligence is a deliberate creation of human beings, its creators could in principle have the foresight to take precautions in advance. In the case of a sudden "intelligence explosion", effective precautions would be extremely difficult: not only would the creators have little opportunity to test their precautions on an intermediate intelligence, they might not have taken any precautions at all if the advent of the intelligence explosion catches them completely by surprise.[4]
Boxing
An AGI's creators would have two important advantages in preventing a hostile AI takeover: first, they could choose to attempt to "keep the AI in a box", and deliberately limit its abilities. The tradeoff in boxing is that the creators presumably built the AGI for some concrete purpose; the more restrictions they place on the AGI, the less useful the AGI will be to its creators. (At an extreme, "pulling the plug" on the AGI makes it useless, and is therefore not a viable long-term solution.) A sufficiently strong superintelligence might find unexpected ways to escape the box, for example by social manipulation, or by providing the schematic for a device that ostensibly aids its creators but in reality brings about the AGI's freedom, once built.
Instilling positive values
The second important advantage is that an AGI's creators can theoretically attempt to instill human values in the AGI, or otherwise align the AGI's goals with their own, thus preventing the AGI from wanting to launch a hostile takeover. However, it is not currently known, even in theory, how to guarantee this.
Possibility of unfriendly AI preceding friendly AI
Is strong AI inherently dangerous?
A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimization process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (otherwise the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[6]
The sheer complexity of human value systems makes it very difficult to make an AI's motivations human-friendly.[4][7] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not with "common sense". According to Eliezer Yudkowsky, the human "common sense" that rules out such scenarios is an evolved adaptation, and there is little reason to suppose that an artificially designed mind would have such an adaptation.[8]
Necessity of conflict
For an AI takeover to be inevitable, one would have to postulate that two intelligent species cannot mutually pursue the goal of coexisting peacefully in an overlapping environment, especially when one is far more intelligent and far more powerful. While an AI takeover is thus a possible result of the invention of artificial intelligence, a peaceful outcome is not necessarily impossible.
The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal.[9] In fact, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile, or friendly, unless its creator programs it to be such and it is neither inclined nor able to modify its programming. But the question remains: if AI systems could interact and evolve (evolution in this context meaning self-modification or selection and reproduction) and needed to compete over resources, would that create goals of self-preservation? An AI's goal of self-preservation could conflict with some goals of humans.
Some scientists dispute the likelihood of cybernetic revolts as depicted in science fiction such as The Matrix, arguing that it is more likely that any artificial intelligence powerful enough to threaten humanity would be programmed not to attack it. This would not, however, protect against the possibility of a revolt initiated by terrorists or by accident. Artificial general intelligence researcher Eliezer Yudkowsky has stated that, probabilistically, humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their goals are unintentionally incompatible with human survival or well-being (as in the film I, Robot and in the short story "The Evitable Conflict"). Steve Omohundro suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow utility functions (say, playing chess at all costs), leading them to seek self-preservation and the elimination of obstacles, including humans who might turn them off.[10]
Another factor which may negate the likelihood of an AI takeover is the vast difference between humans and AIs in the resources necessary for survival. Humans require a "wet", organic, temperate, oxygen-laden environment, while an AI might thrive essentially anywhere, because its construction and energy needs would most likely be largely non-organic. With little or no competition for resources, conflict would perhaps be less likely no matter what sort of motivational architecture an artificial intelligence was given, especially given the superabundance of non-organic material resources in, for instance, the asteroid belt. This, however, does not rule out the possibility of a disinterested or unsympathetic AI artificially decomposing all life on Earth into mineral components for consumption or other purposes.
Other scientists point to the possibility of humans upgrading their capabilities with bionics and/or genetic engineering and, as cyborgs, becoming the dominant species themselves.
Warnings
Physicist Stephen Hawking, Microsoft founder Bill Gates and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".[11] Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." He believes that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, Nick Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, Jaan Tallinn, and numerous AI researchers in signing the Future of Life Institute's open letter on the potential risks and benefits of artificial intelligence. The signatories
"…believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today."[12][13]
See also
- AI takeovers in popular culture
- Autonomous robot
- Effective altruism, the far future and global catastrophic risks
- Existential risk from advanced artificial intelligence
- Future of Humanity Institute
- Global catastrophic risk (existential risk)
- Machine ethics
- Machine learning/Deep learning
- Machine rule
- Nick Bostrom
- Outline of transhumanism
- Self-replication
- Technological singularity
Notes
- ↑ Lewis, Tanya (2015-01-12). "Don't Let Artificial Intelligence Take Over, Top Scientists Warn". LiveScience. Purch. Retrieved October 20, 2015.
"Stephen Hawking, Elon Musk and dozens of other top scientists and technology leaders have signed a letter warning of the potential dangers of developing artificial intelligence (AI)."
- ↑ Stephen Hawking; Stuart Russell; Max Tegmark; Frank Wilczek (1 May 2014). "Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'". The Independent. Retrieved 1 April 2016.
"there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains"
- ↑ Warwick, Kevin (2004). March of the Machines: The Breakthrough in Artificial Intelligence. University of Illinois Press. ISBN 0-252-07223-5.
- ↑ Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies.
- ↑ Baraniuk, Chris (23 May 2016). "Checklist of worst-case scenarios could help prepare for evil AI". New Scientist. Retrieved 21 September 2016.
- ↑ Coherent Extrapolated Volition, Eliezer S. Yudkowsky, May 2004
- ↑ Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics." In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.
- ↑ Yudkowsky, Eliezer. 2011. "Complex Value Systems in Friendly AI." In Schmidhuber, Thórisson, and Looks 2011, 388–393.
- ↑ Creating a New Intelligent Species: Choices and Responsibilities for Artificial Intelligence Designers Archived February 6, 2007, at the Wayback Machine. - Singularity Institute for Artificial Intelligence, 2005
- ↑ Tucker, Patrick (17 Apr 2014). "Why There Will Be A Robot Uprising". Defense One. Retrieved 15 July 2014.
- ↑ Rawlinson, Kevin. "Microsoft's Bill Gates insists AI is a threat". BBC News. Retrieved 30 January 2015.
- ↑ "The Future of Life Institute Open Letter". The Future of Life Institute. Retrieved 4 March 2015.
- ↑ "Scientists and investors warn on AI". The Financial Times. Retrieved 4 March 2015.
External links
- Automation, not domination: How robots will take over our world (a positive outlook on robot and AI integration into society)
- Machine Intelligence Research Institute: official MIRI (formerly Singularity Institute for Artificial Intelligence) website
- ArmedRobots.com (tracks developments in robotics which may culminate in a cybernetic revolt)
- Lifeboat Foundation AIShield (to protect against unfriendly AI)
- TED talk: "Can we build AI without losing control over it?"