Technological singularity
In futures studies, a technological singularity is an "event horizon" in the predictability of human technological development: a point, following the creation of strong artificial intelligence or the amplification of human intelligence, past which present models of the future cease to give reliable or accurate answers. Futurists predict that after the Singularity, humans as they exist today will no longer be the dominant force in scientific and technological progress, having been replaced by posthumans, strong AI, or both, and that all models of change based on past trends in human behavior will therefore be obsolete.
The concept, put forth primarily by mathematician Vernor Vinge and by inventor and futurist Ray Kurzweil, predicts a drastic increase in the rate of technological change once intelligence is freed from the confines of human biology, allowing it both to scale past the computational capacity of the human brain and to interact directly with computer networks. Kurzweil considers this sharp increase part of an overall exponential trend in human technological development, seen originally in Moore's Law and extrapolated to technology in general in his own Law of Accelerating Returns. A culture that experienced such a change would be fundamentally altered, and would be neither comprehensible nor predictable to its pre-Singularity counterpart.
While some regard the Singularity as a positive event and work to hasten its arrival, others view it as dangerous, undesirable, or unlikely to occur. The most practical means of initiating the Singularity are debated, as is the question of how (or whether) it can be influenced or avoided if it proves dangerous.
History and definitions
Though often thought to have originated in the last two decades of the 20th century, the idea of a technological singularity actually dates back to the 1950s:
- "One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." —Stanislaw Ulam, May 1958, referring to a conversation with John von Neumann
This quote is sometimes taken out of context and attributed to von Neumann himself, probably owing to von Neumann's renown.
In 1965, statistician I. J. Good described a scenario closer to the modern notion of the Singularity, in that it emphasized the effects of superhuman intelligence:
- "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make."
In his book "Mindsteps to the Cosmos" (Harpercollins, August 1983), Gerald S. Hawkins elucidated his notion of 'mindsteps', dramatic and irreversible changes to paradigms or world views. He identified five distinct mindsteps in human history, and the technology that accompanied these "new world views": the invention of imagery, writing, mathematics, printing, the telescope, rocket, computer, radio, TV... "Each one takes the collective mind closer to reality, one stage further along in its understanding of the relation of humans to the cosmos." He noted: "The waiting period between the mindsteps is getting shorter. One can't help noticing the acceleration." Hawkins' empirical 'mindstep equation' quantified this, and gave dates for future mindsteps. The date of next mindstep (5; the series begins at 0) is given as 2021, with two more successively closer mindsteps, until the limit of the series in 2053. His speculations ventured beyond the technological:
- "The mindsteps... appear to have certain things in common - a new and unfolding human perspective, related inventions in the area of memes and communications, and a long formulative waiting period before the next mindstep comes along. None of the mindsteps can be said to have been truly anticipated, and most were resisted at the early stages. In looking to the future we may equally be caught unawares. We may have to grapple with the presently inconceivable, with mind-stretching discoveries and concepts."
The Singularity was greatly popularized by mathematician and novelist Vernor Vinge. Vinge began speaking on the Singularity in the 1980s and first addressed the topic in print in the January 1983 issue of Omni Magazine. He later collected his thoughts in the 1993 essay "The Coming Technological Singularity", which contains the oft-quoted statement "Within thirty years, we will have the technological means to create superhuman intelligence. Shortly thereafter, the human era will be ended."
Vinge argues that superhuman intelligences, however created, will be able to enhance their own minds faster than the humans who created them could. "When greater-than-human intelligence drives progress," he writes, "that progress will be much more rapid." This feedback loop of self-improving intelligence, he predicts, will compress enormous technological progress into a short period of time.
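Vinge's feedback-loop argument can be caricatured with a toy model; the sketch below illustrates the structure of the argument, not any published formula, and all of its parameters are arbitrary assumptions. If each round of self-improvement multiplies capability by a fixed factor while taking time inversely proportional to the capability doing the work, the improvement dates pile up against a finite horizon: a "hard takeoff".

```python
# Toy model of a self-improvement feedback loop (illustrative only).
# Each generation doubles capability; the time a generation takes is
# inversely proportional to the capability doing the work, so total
# elapsed time is a convergent geometric series -- a finite horizon.

def takeoff(generations: int, first_step_years: float = 30.0) -> None:
    capability, year = 1.0, 0.0
    for g in range(1, generations + 1):
        year += first_step_years / capability  # smarter minds work faster
        capability *= 2.0                      # each generation doubles ability
        print(f"gen {g}: year {year:6.2f}, capability {capability:g}")

takeoff(10)
# Elapsed time converges toward 2 * first_step_years = 60 years while
# capability grows without bound. With sub-proportional speedups instead,
# the same loop yields a gradual "soft takeoff".
```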
Creating superhuman intelligence
Most proposed methods for creating smarter-than-human or transhuman minds fall into one of two categories: intelligence amplification of human brains and artificial intelligence.
The means speculated to produce intelligence augmentation are numerous, and include bio- and genetic engineering, nootropic drugs, AI assistants, direct brain-computer interfaces, and mind transfer. Radical life extension techniques, cryonics, and molecular nanotechnology are often advocated by transhumanists as means to live long enough to benefit from future medical techniques, thus allowing for open-ended lifespans and participant evolution.
Despite the numerous speculated means for amplifying human intelligence, non-human artificial intelligence (specifically seed AI) is the most popular option for organizations trying to directly initiate the Singularity, a choice the Singularity Institute addresses in its publication "Why Artificial Intelligence?" (2005).
George Dyson speculates in Darwin Among the Machines that a sufficiently complex computer network may produce "swarm intelligence", and that improved future computing resources may allow AI researchers to create artificial neural networks so large and powerful they become generally intelligent. Mind uploading is a proposed alternative means of creating artificial intelligence—instead of programming a new intelligence, one copies an existing human intelligence into a digital form.
Kurzweil's law of accelerating returns
- Main article: Law of Accelerating Returns
Ray Kurzweil justifies his belief in an imminent singularity by an analysis of history from which he concludes that technological progress follows a pattern of exponential growth. He calls this conclusion The Law of Accelerating Returns. He generalizes Moore's law, which describes exponential growth in integrated semiconductor complexity, to include technologies from far before the integrated circuit.
Whenever technology approaches a barrier, he writes, new technologies will cross it. He predicts paradigm shifts will become increasingly common, leading to "technological change so rapid and profound it represents a rupture in the fabric of human history" (Kurzweil 2001). Kurzweil believes the Singularity will occur before the end of the 21st century, setting the date at 2045 (Kurzweil 2005). His predictions differ from Vinge's in that he foresees a gradual ascent to the Singularity rather than a rapidly self-improving superhuman intelligence; the distinction is often drawn with the terms soft takeoff and hard takeoff.
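The style of extrapolation behind such date-setting can be shown with a back-of-the-envelope calculation. The figures below are placeholder assumptions, not Kurzweil's published numbers: under steady exponential growth, the year a capability crosses a threshold follows from a single logarithm.

```python
import math

# Back-of-the-envelope exponential extrapolation in the style of Moore's
# law. Every parameter here is an illustrative placeholder.

BASE_YEAR = 2005        # assumed reference year
BASE_OPS = 1e12         # assumed operations/sec available in BASE_YEAR
DOUBLING_YEARS = 1.5    # assumed doubling period
TARGET_OPS = 1e16       # assumed threshold, e.g. a brain-scale estimate

# Growth model: ops(t) = BASE_OPS * 2 ** ((t - BASE_YEAR) / DOUBLING_YEARS).
# Setting ops(t) = TARGET_OPS and solving for t:
crossing = BASE_YEAR + DOUBLING_YEARS * math.log2(TARGET_OPS / BASE_OPS)
print(f"threshold crossed around {crossing:.0f}")  # ~2025 with these inputs
```

Kurzweil's own argument layers further exponentials (accelerating paradigm-shift rates, improving price-performance) on top of this simple single-trend form.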
Before Kurzweil proposed his Law, many sociologists and anthropologists had developed theories of sociocultural evolution. Some, like Lewis H. Morgan, Leslie White, and Gerhard Lenski, hold technological progress to be the primary factor driving the development of human civilization. Morgan's three major stages of social evolution can be delineated by technological milestones. Rather than specific inventions, White took the measure by which to judge the evolution of a culture to be its control of energy, which he describes as "the primary function of culture." His model eventually led to the creation of the Kardashev scale. Lenski takes a more modern approach, holding that the more information a given society has, the more advanced it is.
Since the late 1970s, others, like Alvin Toffler (author of Future Shock), Daniel Bell, and John Naisbitt, have advanced theories of postindustrial society that resemble visions of near- and post-Singularity societies. They argue the industrial era is coming to an end, with services and information supplanting industry and goods. Some more extreme visions of the postindustrial society, especially in fiction, envision the elimination of economic scarcity.
Theodore Modis and Jonathan Huebner have argued, from different perspectives, that the rate of technological innovation has not only ceased to rise, but is actually now declining. John Smart has criticized their conclusions. Others criticize Kurzweil's choices of specific past events to support his theory.
Desirability and safety of the Singularity
Some speculate superhuman intelligences may have goals inconsistent with human survival and prosperity. AI researcher Hugo de Garis suggests AIs may simply eliminate the human race, and humans would be powerless to stop them. Other oft-cited dangers include molecular nanotechnology and genetic engineering. These threats are major issues for both Singularity advocates and critics, and were the subject of a Wired Magazine article by Bill Joy, Why the future doesn't need us (2000). Oxford philosopher Nick Bostrom summarizes the potential threats of the Singularity to human survival in his essay Existential Risks (2002).
Many Singularitarians consider nanotechnology to be one of the greatest dangers facing humanity. For this reason, they often believe seed AI should precede nanotechnology. Others, such as the Foresight Institute, advocate efforts to create molecular nanotechnology, claiming nanotechnology can be made safe for pre-Singularity use or can expedite the arrival of a beneficial Singularity.
Advocates of Friendly artificial intelligence acknowledge the Singularity is potentially very dangerous and work to make it safer by creating AI that will act benevolently towards humans and eliminate existential risks. This idea is also embodied in Isaac Asimov's Three Laws of Robotics, intended to prevent artificially intelligent robots from harming humans, though the crux of Asimov's stories is often how the laws fail.
Neo-Luddite views
Some argue that advanced technologies are simply too dangerous for humanity to morally permit their invention, and advocate efforts to stop it. Theodore Kaczynski, the Unabomber, writes that technology may enable the upper classes of society to "simply decide to exterminate the mass of humanity." Alternatively, if AI is not created, Kaczynski argues that humans "will have been reduced to the status of domestic animals" after sufficient technological progress. Portions of Kaczynski's writings have been included in both Bill Joy's article and a recent book by Ray Kurzweil. Kaczynski, however, opposes not only the Singularity but also present-day technology, in the neo-Luddite tradition; many others oppose the Singularity without sharing that broader opposition.
Along with Kaczynski, many other anti-civilization theorists, such as John Zerzan and Derrick Jensen, represent the school of anarcho-primitivism or eco-anarchism, which sees the rise of the technological singularity as an orgy of machine control and the loss of a feral, wild, and uncompromisingly free existence outside the factory of domestication (civilization). Environmental groups such as the Earth Liberation Front and Earth First! likewise see the singularity as a force to be resisted at all costs. Author and social change strategist James John Bell has written articles for Earth First! as well as for mainstream science and technology publications such as The Futurist, providing a cautionary environmentalist perspective on the singularity in essays including Exploring The "Singularity" and Technotopia and the Death of Nature: Clones, Supercomputers, and Robots. The publication Green Anarchy, to which Kaczynski and Zerzan are regular contributors, has also published articles on resistance to the technological singularity, e.g. A Singular Rapture, written under the byline MOSH (a reference to Kurzweil's acronym M.O.S.H., for "Mostly Original Substrate Human").
Responses
Just as the Luddites opposed the artifacts of the industrial revolution out of concern for their effects on employment, some opponents of the Singularity are concerned about future employment opportunities. Although Luddite fears about jobs were not realized, given the growth in jobs after the industrial revolution, there was an effect on involuntary employment: a dramatic decrease in child labor and in labor by the elderly. It can be argued that only a drop in voluntary employment should be of concern, not a reduced level of absolute employment (a position held by Henry Hazlitt). Economically, a post-Singularity society would likely have more wealth than a pre-Singularity society, via increased knowledge of how to manipulate matter and energy to meet human needs. One possible post-Singularity future, therefore, is one in which per capita wealth increases dramatically while per capita employment decreases.
Fictional depictions
Fictional depictions of the Singularity usually fall into one of four categories:
- AIs and technologically augmented humans (often still inferior to the AIs): Charles Stross, Jacek Dukaj, The Culture of Iain M. Banks, the Deus Ex computer games, the Halo video game series.
- AIs and baseline humans (sometimes referred to as a local Singularity): Colossus: The Forbin Project, The Matrix, Terminator (Skynet), Golem XIV, and the video game TimeSplitters: Future Perfect.
- Biologically evolved humans, ascending or ascended: the Ancients of Stargate SG-1/Atlantis, Sid Meier's Alpha Centauri, the Shapers of Bruce Sterling's Shaper/Mechanist setting.
- Technologically augmented humans, ascending or ascended: The Gentle Seduction by Marc Stiegler, the Mechanists of Bruce Sterling's Shaper/Mechanist setting.
In addition to the Vernor Vinge stories that pioneered Singularity ideas, several other science fiction authors have written stories with the Singularity as a central theme, including William Gibson, Charles Stross, Karl Schroeder, Greg Egan, Greg Bear, Iain M. Banks, Neal Stephenson, Bruce Sterling, Damien Broderick, and Jacek Dukaj. Ken MacLeod describes the Singularity as "the Rapture for nerds" in his 1998 novel The Cassini Division. Singularity themes are common in cyberpunk novels; one famous example is the recursively self-improving AI Neuromancer in William Gibson's novel of the same name. Earlier works such as Arthur C. Clarke's Childhood's End, Isaac Asimov's The Last Question, and John W. Campbell's The Last Evolution also feature technological singularities. The Metamorphosis of Prime Intellect, a 1994 novel later published online on Kuro5hin, depicts life after an AI-initiated Singularity, and Harlan Ellison's classic short story I Have No Mouth, and I Must Scream offers a more dystopian vision.
The online science fiction world-building project Orion's Arm also features the Singularity, as do several video games: TimeSplitters: Future Perfect, in which the player battles alongside human forces against robots in the year 2243; Halo and Deus Ex, which feature singularity themes; and Sid Meier's Alpha Centauri, which culminates in the Singularity-like "Ascent to Transcendence."
One of the earliest fictional treatments of a technological singularity occurs in "Answer", a short-short story written by science fiction writer Fredric Brown in 1954.
Movies and television
One of the earliest examples of smarter-than-human AI in film is Colossus: The Forbin Project. In the 1969 film, a U.S. defense supercomputer becomes self-aware and unilaterally imposes peace on humanity. The Matrix is set in a world in which AI has dominated and subjugated humans to serve its own ends. In The Terminator, the AI Skynet becomes self-aware and launches nuclear weapons to exterminate humanity.
Anime has also explored Singularity-related themes of the kind proposed by Vinge and Kurzweil: Ghost in the Shell takes place in a world in which wetware is nearly ubiquitous and machine consciousness has begun to emerge, while Serial Experiments Lain explores the downloading of consciousness. In Bubblegum Crisis Tokyo 2040, an AI emerges and gains a powerful ability to alter reality.
Accelerating change, transhumanism, and the Singularity are discussed at length by St. Edward's University chemist Eamonn Healy in the film Waking Life. He describes the acceleration of evolution by breaking it down into "two billion years for life, six million years for the hominid, a hundred-thousand years for mankind as we know it", then describes the acceleration of human cultural evolution: ten thousand years for agriculture, four hundred years for the scientific revolution, and one hundred fifty years for the industrial revolution. He concludes that we will eventually create "neohumans" who will usurp humanity's present role in guiding sociotechnological evolution and allow the exponential trend of accelerating change to continue past the limits of human ability.
Organizations and other prominent voices
The Singularity Institute for Artificial Intelligence (SIAI) is a nonprofit research think tank and public interest institute for the study and advancement of beneficial artificial intelligence and ethical cognitive enhancement. Since cognitive ability influences how well difficult problems can be solved, the Institute aims to further the safe and significant enhancement of cognition, making contemporary humanitarian challenges generally more solvable. It has the additional goal of fostering broader discussion and understanding of Friendly artificial intelligence, and it focuses on Friendly AI because it believes strong AI will enhance cognition before human cognition can be enhanced by neurotechnologies or somatic gene therapy. The Institute employs AI researcher Eliezer Yudkowsky as a research fellow for Friendly AI; prominent Singularitarian writer Michael Anissimov serves as the group's advocacy director.
The Acceleration Studies Foundation (ASF), an educational nonprofit, was formed to attract broad business, scientific, technological, and humanist interest in acceleration and evolutionary development studies. They produce Accelerating Change, an annual conference on multidisciplinary insights into accelerating technological change held at Stanford University, and maintain Acceleration Watch, an educational site discussing accelerating technological change.
Other prominent voices:
- Robin Hanson has written on the economics of artificial intelligence.
- Bill Hibbard is a scientist at the University of Wisconsin-Madison working on visualization and machine intelligence.
- Mike Lorrey is a prominent Extropian transhumanist, and Libertarian political activist.
- Marvin Minsky is an American scientist in the field of artificial intelligence (AI), co-founder of MIT's AI laboratory, and author of several texts on AI and philosophy.
- Hans Moravec is a permanent resident research professor at the Robotics Institute of Carnegie Mellon University known for his work on robotics, artificial intelligence, and writings on the impact of technology.
- Max More, formerly known as Max T. O'Connor, is a philosopher and futurist who writes, speaks, and consults on advanced decision making and foresight methods for handling the impact of emerging technologies.
See also
- Clarke's three laws
- Doomsday argument
- End of civilization
- Omega point
- Outside Context Problem
- Technological evolution
- Techno-utopianism
- Tipping point
- Portal:Singularity
References
- Broderick, D. (2001). The Spike: How Our Lives Are Being Transformed by Rapidly Advancing Technologies. New York: Forge. ISBN 0312877811.
- Bostrom, N. (2003). "Ethical Issues in Advanced Artificial Intelligence". Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, vol. 2, pp. 12-17. http://www.nickbostrom.com/ethics/ai.html
- Bostrom, N. (2002). "Existential Risks". Journal of Evolution and Technology, vol. 9. http://www.nickbostrom.com/existential/risks.html
- Good, I. J. (1965). "Speculations Concerning the First Ultraintelligent Machine". In Franz L. Alt and Morris Rubinoff (eds.), Advances in Computers, vol. 6, pp. 31-88. Academic Press.
- Joy, B. (April 2000). "Why the future doesn't need us". Wired Magazine 8.04. http://www.wired.com/wired/archive/8.04/joy.html
- Kurzweil, R. (2001). "The Law of Accelerating Returns". http://www.kurzweilai.net/articles/art0134.html
- Kurzweil, R. (2005). The Singularity Is Near. New York: Viking. ISBN 0670033847.
- Singularity Institute for Artificial Intelligence, Inc. (2005). "Why Artificial Intelligence?". http://www.singinst.org/intro/whyAI.html (accessed February 18, 2006).
- Ulam, S. (May 1958). "Tribute to John von Neumann". Bulletin of the American Mathematical Society, vol. 64, nr. 3, part 2, pp. 1-49.
- Vinge, V. (1993). "The Coming Technological Singularity". http://en.wikisource.org/wiki/The_Coming_Technological_Singularity
External links
Essays
- A Critical Discussion of Vinge's Singularity Concept
- Is a singularity just around the corner? by Robin Hanson
- Brief History of Intellectual Discussion of Accelerating Change by John Smart
- Michael Anissimov's Singularity articles
- One Half Of A Manifesto by Jaron Lanier — a critique of "cybernetic totalism"
- One Half of An Argument — Kurzweil's response to Lanier
- A Singular Rapture
Portals and wikis
- KurzweilAI.net
- Acceleration Watch
- Accelerating Future
- The SL4 Wiki
- Singularity! A Tough Guide to the Rapture of the Nerds