Lentis/The Singularity

=Overview=

The Technological Singularity, as defined by Ray Kurzweil, is "a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed". This hypothetical phenomenon has the following characteristics:

 * Technological event horizon: after this point, the rate of technological change is so rapid that current models predicting technological advancement no longer hold true. This is due in large part to:
 * Development of artificial superhuman intelligence: when machines transcend human intelligence ("wake up") or when humans transcend their own natural intelligence through technological means. Either scenario leads to:
 * Intrinsic link between technology and biology: the human experience can no longer be a purely biological one; it is highly dependent on technology.

The term "singularity" was coined by Vernor Vinge, who found the breakdown of conventional laws predicting technological advancement after the singularity similar to the breakdown of the laws of physics around a black hole.

Although singularity has become the accepted term for the event, there are many schools of thought regarding its value and significance. Singularitarians, such as Ray Kurzweil, believe in the singularity and embrace its arrival. Others, such as the Neo-Luddites, fear the singularity and work to prevent its arrival. Still others, such as Rodney Brooks, do not believe a technological singularity, as defined, will occur. In this chapter, we will explore the viewpoints and scenarios that support and oppose the idea.

=Support=

Singularitarians
"Singularitarian" refers to a person who believes that the singularity is both possible and desirable. Ray Kurzweil defines a Singularitarian as a person who "understands the Singularity and who has reflected on its implications for his or her own life." Kurzweil is an important figure within the Singularitarian community, as his book The Singularity is Near was instrumental in bringing the viewpoint into the public domain.

Ray Kurzweil is both an interesting and polarizing figure. A computer prodigy from a young age, Kurzweil wrote his first computer program at age 15. At age 20, he sold a program he wrote for $100,000. Kurzweil has been at the forefront of pattern recognition and artificial intelligence development for over four decades. However, more recently Kurzweil has been recognized for his contributions towards and campaigns for the singularity. Starting with his 2004 book Fantastic Voyage: Live Long Enough to Live Forever, Kurzweil advanced the idea that humans may be able to transcend their physical limitations to reverse the aging process, overcome disease, and live forever. In Fantastic Voyage, Kurzweil proposes that these advances will begin to take hold before the year 2060. Kurzweil has further elaborated on these views in his 2009 book Transcend: Nine Steps to Living Well Forever.

Kurzweil first addressed the topic of the singularity in his 1990 book The Age of Intelligent Machines. He updated the ideas presented there in 1999 with The Age of Spiritual Machines, and in 2005 with The Singularity is Near. Kurzweil bases his claims on the assumptions that the singularity is achievable, that technology is progressing at an exponential rate, and that the human brain is quantifiable in terms of processing power and equivalent technology. Kurzweil expands upon Moore's Law to argue that technology has been progressing at an exponential rate throughout human history. He argues that once the singularity has occurred, humans and machines will intermingle to produce superhumans, hence the subtitle of the book, When Humans Transcend Biology.

Although Kurzweil has been integral to the formation of the Singularitarian movement, he is far from its only contributor. With the cooperation of Google and NASA, Kurzweil and Dr. Peter H. Diamandis founded Singularity University with a mission to "assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity's grand challenges". However, a recent controversy suggests that the university's management is tightening its grip on what were once freely shared intellectual innovations. The university's gradual transition from a non-profit organization to a for-profit one has been defended by faculty members such as David Orban, who argue "that seizing more control of its intellectual property will help the university more rapidly and effectively respond to technological progress and social change". They also believe the best way "to help humanity adapt to new technologies is to hold secrets close and to convert them into profits".

Another non-profit organization that promotes the ideals of Singularitarianism is the Singularity Institute. Eliezer Yudkowsky, an advocate for the emergence of the Singularity and the promotion of artificial intelligence, is one of its founders, along with Brian Atkins and Sabine Stoeckel. The Singularity Institute for Artificial Intelligence works towards the development of machines that can bring about the Singularity, citing Einstein's quote that "the significant problems we face cannot be solved at the same level of thinking we were at when we created them." Less formally, the Singularity Weblog writes that the singularity will bring "a better future, [and a] better you."
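Kurzweil's exponential-growth argument is easy to illustrate numerically. The sketch below assumes a Moore's-Law-style doubling every two years; the starting value and doubling period are illustrative assumptions, not figures taken from Kurzweil's books:

```python
# Illustrative projection of Moore's-Law-style exponential growth.
# The initial value and doubling period are assumptions for illustration.

def project(initial, doubling_period_years, years):
    """Return the projected value after `years` of steady doubling."""
    return initial * 2 ** (years / doubling_period_years)

# With a doubling every 2 years, ten doublings occur in 20 years,
# so a chip with 1 million transistors grows roughly 1024-fold.
growth = project(1_000_000, 2, 20) / 1_000_000
print(round(growth))  # 1024
```

The toy projection shows why linear intuition fails for exponential processes: twenty years of steady doubling yields a factor of about a thousand, not a factor of ten.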

A common motivation for Singularitarians is to improve the future through superintelligent machines. They believe that these machines will solve the larger problems plaguing humanity such as poverty, hunger, and disease. Some believe that it will result in complete mastery of our planetary resources, a Kardashev Type I civilization. However, as first noted by Vinge, it may be difficult to predict the actions of our intellectual superiors.

=Opposition=

Neo-Luddism
Unlike the Singularitarians, there are groups who view the occurrence of the Singularity as an event that would be detrimental to humanity. The Neo-Luddites, who are opposed to many forms of modern technology, are one such group. (It may be interesting to note that Luddism is not a new phenomenon: it has its roots in the 19th century.) Neo-Luddites often view the development of technology as coming at the expense of other values, such as spirituality or a sense of community with fellow man. Their opposition to the Singularity can be viewed as a specific instance of a broader opposition to technological change.

Perhaps the most notorious Neo-Luddite is Theodore Kaczynski, better known as the Unabomber. Kaczynski opposed technology for a number of reasons, including the possibility that the Singularity would result in a small group of elites or superintelligent computers subjugating the world's population. In Kaczynski's view, if the human race were to survive at all, it would be reduced to the status of domesticated animals. While there are others who agree with Kaczynski's arguments, his methods for promoting them were extreme. Over a period of almost 20 years, Kaczynski operated a bombing campaign that killed 3 people and injured 23 others. Kaczynski targeted computer scientists, geneticists, and a computer store owner with the goal of impeding technological progress and creating political tension. Nicholas Carr addresses Neo-Luddite concerns in a more conventional manner through his blog, "Rough Type", and several books. Carr, unlike Kaczynski, expresses less urgency and uses humor to make his point. At times, he also mocks the Singularitarians directly, including Ray Kurzweil.

These two approaches highlight the diversity of opposition to the Singularity. Kaczynski made an explicit argument that a few scientists' lives were worth avoiding the enslavement or destruction of the human race. Carr, however, uses public discourse, not violence, to minimize the risks of technological progress. In both cases, the end goal is the same, but the methods to achieve it have vastly different ethical implications.

Technology Researchers
Opposition to the Singularity is not limited to Neo-Luddites. Bill Joy, co-founder of Sun Microsystems and a principal developer of the Java programming language, is also wary of the Singularity. As a developer of and proponent for computer technology for much of his life, Joy had always viewed his work as having a positive impact on the world. A conversation with Kurzweil made him aware of the Singularity and concerned that unchecked technological progress could lead to disaster. His concern is that unintended consequences are inevitable in complex systems, e.g. the widespread use of antibiotics leading to the emergence of antibiotic-resistant bacteria. Joy also coined a new term to explain his unease: Knowledge-enabled Mass Destruction, or KMD. KMD refers to the destructive potential of powerful new technologies that are often capable of self-replication (e.g. genetics or nanotechnology), coupled with the fact that these technologies are available to small groups or individuals. Joy believes the second attribute to be especially important and contrasts it with conventional weapons of mass destruction (WMD) such as the atomic bomb, an undertaking that required the backing and resources of a nation-state. Because of these factors, Joy is concerned that advancing technology (e.g. the Singularity) may create the conditions for mankind to destroy itself.

However, unlike the Neo-Luddites, Joy does not wish to turn back the clock, technologically speaking. He states: "I have always, rather, had a strong belief in the value of the scientific search for truth and in the ability of great engineering to bring material progress. The Industrial Revolution has immeasurably improved everyone's life over the last couple hundred years, and I always expected my career to involve the building of worthwhile solutions to real problems, one problem at a time". Instead, he advocates research into new technological methods to keep genetics, nanotechnology, and robotics safe.

Scientists and engineers frequently have to consider the ethics of pursuing technological progress, as in the development of the atomic bomb and chemical weapons. Furthermore, although computer intelligence in its current state is a generally peaceful and productive technology, it has enormous potential for both positive and negative unintended consequences. Joy's response, like that of many others in the technology industry, was not to stop technological progress at all costs or merely disseminate information on the drawbacks of technology, but to work to mitigate its negative consequences.

Now Movement
Hylke Muntinga, the founder of the social singularity group known as the Now Movement, fears that "social shifts in combination of radical technology acceleration" will bring about the next singularity within five years. However, the use of the word 'next' suggests that Muntinga has a different conception of the Singularity than the generally accepted version (if such a thing exists), in which the Singularity is a specific turning point in history, not something with multiple occurrences. Citing these rapid changes, he states:

The first birth signs of social singularity are readily apparent...social shifts in combination of radical technology acceleration will lead to an unimaginable era.

Muntinga calls for the integration of social media and crowdsourcing platforms through which the world can share innovations, concepts, and projects. This, he argues, would create a "smart net" in which an explosion of social intelligence enhances our "ability to anticipate, plan for, and adapt to change" caused by the Singularity.

=Skeptics=

Not everyone agrees that the Singularity can be reached. Rodney Brooks, a roboticist at the Massachusetts Institute of Technology and Chief Technical Officer of iRobot Corp., says:

We need not fear our machines because we, as human-machines, will always be a step ahead of them, the machine-machines, because we will adopt the new technologies used to build those machines right into our own heads and bodies.

Brooks believes that because we will build machines incrementally, we will have the ability to choose which traits to instill in the machines. This will allow us to avoid building machines with "specific conditions necessary for a runaway, self-perpetuating artificial-intelligence explosion that runs beyond our control and leaves us in the dust." This concept, however, assumes that those creating these machines will have the awareness, ability, and desire to instill only benevolent traits in their creations.

Bloggers like Jonathan Stray, Joel Shepherd, and Amit Patel are even more reluctant to accept the inevitability of the Singularity and the evidence that Singularitarians like Ray Kurzweil provide. Stray proposes that the term Singularity is ill-defined: "Nobody knows what machine intelligence is. Or how we'd recognize one if we met it...The phrase 'machine consciousness' is even less useful, because we can't even define the word 'consciousness' for humans."

Similarly, Amit Patel criticizes Ray Kurzweil's staple graph "Canonical Milestones" by pointing out that the shape of the graph (and consequently, the conclusion reached) is highly dependent on the choice of events for the graph. Patel believes "that change is accelerating, but it's not accelerating as quickly as Kurzweil believes." As these bloggers show, the Internet has provided the ability for all people, not just published authors, to share their views globally.

=Approaching the Singularity=

In his 1993 paper "The Coming Technological Singularity: How to Survive in the Post-Human Era," mathematician and science fiction writer Vernor Vinge lists the four ways he expects the singularity could occur:

 * The development of individual computers with superhuman intelligence
 * An "awakening" of computers in a large network
 * Dependence on and operation of computer technology so extensive that users can be considered more intelligent than humanly possible
 * Advancement in the field of biological cognitive enhancement to the point where users are superhumanly intelligent

These potential developments represent a wide range of expertise on the engineering spectrum. Linking them together is the common theme of blurring the line between man and machine. Although technology has not yet progressed to the singularity, there have been recent developments in several of the areas Vinge identified that may bring the singularity closer.

Biological Cognitive Enhancement
Biological Cognitive Enhancement is the broad term given to any artificial change in the biology or chemistry of a person that results in superior mental capabilities. Nootropics, also called smart drugs, are the cognitive enhancements most often used. Some, such as Ritalin and Adderall, are used to treat behavioral disorders. Others, such as caffeine and tobacco, are common stimulants. Those who subscribe to the transhumanism movement look forward to the day when the use of these enhancements is widespread. They believe even a small increase in intelligence over a large population will greatly benefit society, with an enormous decrease in poverty and increase in standards of living.

However, critics have raised ethical concerns. If these enhancements are expensive, only the wealthiest would be able to afford them. With increased cognitive function, those using the enhancements would presumably be able to outperform those not using them. The gap between the wealthiest and poorest may then widen dramatically, all around the world.

On a smaller scale, some educators raise concerns about current levels of cognitive enhancement drug abuse. Adderall, Concerta, and Ritalin in particular have been singled out as problematic. While these drugs don't necessarily enhance intelligence, they do improve focus, allowing students to accomplish more with them than without. Seventy years after Ritalin was first prescribed, the debate over whether taking these pills is considered cheating continues.

Brain Computer Interfaces
Brain computer interface (BCI) systems are a relatively recent development in the biomedical field. A BCI system contains a sensor that analyzes a patient's brain waves and converts them into motion outputs. Currently, this technology is being developed to allow people recovering from strokes to regain physical functionality, even with a damaged nervous system. Although BCI technology is still in its infancy, it has already proven moderately successful in clinical trials. While the technology has clear medical benefits, its use has raised some alarm. Some believe that the ability to analyze brain waves is too powerful a technology for anyone to use, or worry that it may fall into the wrong hands and threaten freedom of thought. Others fear that the technology may develop to the point where brain waves can be not only analyzed but also controlled.
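The signal path described above, from sensor reading to feature extraction to motion output, can be sketched as a toy pipeline. This is a hypothetical illustration, not a real clinical BCI: the mean-power feature, the threshold value, and the command names are all invented for the example.

```python
import math

# Toy sketch of a BCI-style pipeline: a sampled "brain wave" is reduced
# to a single feature (mean signal power) and mapped to a motion command.
# The feature, threshold, and command names are invented for illustration.

def mean_power(samples):
    """Average squared amplitude of the sampled signal."""
    return sum(s * s for s in samples) / len(samples)

def to_motion_command(samples, threshold=0.25):
    """Map signal power to a coarse motion output."""
    return "MOVE" if mean_power(samples) > threshold else "REST"

# A strong oscillation maps to MOVE, a weak one to REST.
strong = [math.sin(t / 5.0) for t in range(100)]       # amplitude ~1
weak = [0.1 * math.sin(t / 5.0) for t in range(100)]   # amplitude ~0.1

print(to_motion_command(strong), to_motion_command(weak))  # MOVE REST
```

Real BCI systems replace the single threshold with trained classifiers over many frequency-band features, but the overall shape of the pipeline, signal in and discrete command out, is the same.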

=Technology Impacted by Fear of the Singularity=

Facebook Facial Recognition
In June 2011, the social networking website Facebook released a series of new features that use facial-recognition technology. These features were designed to facilitate the task of tagging friends in photographs, which, according to Facebook, many users reported as a chore they wanted automated for large numbers of photographs. Response to the new feature was immediate. Within three days of the release, the Electronic Privacy Information Center (EPIC) filed a "Complaint, Request for Investigation, Injunction, and Other Relief" with the United States Federal Trade Commission. The major complaints were:
 * Unfair and deceptive practices
 * Users being automatically "opted in" without consent or warning to an "invasion of privacy"
 * Collection of biometric data from children, in violation of the Children's Online Privacy Protection Act of 1998

Even "tech-savvy" interest groups that have historically celebrated technological advances becoming available to the public, such as the technology magazine PCWorld, released emotionally charged articles in opposition to Facebook's actions.

Outrage against the facial recognition features continued across the globe. Under a little-known European data protection law, Austrian law student Max Schrems requested a copy of his personal information from Facebook. The 1,200-page document he received sparked a political and media uproar in Europe over how much private data Facebook was storing about its users, including facial recognition profiles, with inadequate security that could allow the data to fall into malicious hands. This led to an audit of Facebook's privacy and security practices by Ireland's Data Protection Commissioner, who has jurisdiction over Facebook Europe's Dublin headquarters. The audit was highly publicized, and on October 15, 2011, Facebook officially terminated the facial recognition software for all European users.

According to Denis Arter, of Columbia Audit Resources, the technological advances that will bring about the singularity are growing at an exponential rate. However, this rate is difficult for humans to detect because of the way our brain stores memories. Arter argues that because human brains have limited storage capacity for long-term memories, the brain relies heavily on recent memories when predicting the future. Arter illustrates this point with examples of technologies, such as Google, Wikipedia, MRI and CAT Scans, statistical analysis of crime, and contact data stored in mobile phones, that did not exist a decade ago, but have become so commonplace that people rely on them to make decisions without question. The number of Facebook's users makes this recent (2011) event a "recent memory" for a significant portion of the population, and thus valuable for study in relation to the Singularity.

If the singularity comes, it will likely be caused by the growth of many technologies and by the many social factors influencing their use. The Facebook case gives a picture of what opposition to Singularity-enabling technologies might look like. When people come face to face with technology they find unsettling or threatening, the response can be an immediate and powerful push to destroy it at all costs.

=Ethical Concerns=

Robotics


Well, when it gets to the point where we humans empathize with a machine to the point of not wanting to turn it off, then in some sense what is the difference between that and it being living? We’re treating it as living.
 * -Rodney Brooks

Imagine a world...in which the boundary between our perceptions of robots and our perceptions of our fellow humans has become so blurred that most of us treat robots as though they are mental, social, and moral beings...I am convinced that this is how the world will be by the year 2050.
 * -David Levy, author of Love and Sex with Robots

Science fiction writer Isaac Asimov's Three Laws of Robotics are a famous example of trying to explicitly take into account the values that technology embodies. Many concerns about the Singularity stem from the idea that intelligent robots or other forms of technology may become capable of subverting human values in pursuit of their own goals (e.g. The Matrix or Terminator). Asimov's Three Laws were an attempt to create a set of rules governing robots that, when obeyed, would ensure that the robots acted in man's interests, or at least not against them. Numerous stories have been written based on these laws, many exploring the idea that observance of the laws would itself create undesired consequences. One angle that is especially relevant to the Singularity hinges on the distinction between 'human' and 'robot'. While this distinction may be easy to draw currently, come the Singularity it will be much harder to distinguish between the two. Indeed, it is a matter of debate whether maintaining such a distinction would even be desirable.

Surrogates, a sci-fi thriller directed by Jonathan Mostow, draws on this notion. In its future, humans inhabit their idealized forms by controlling androids from the safety of their homes. The central conflict is the meaning of identity: with these android surrogates, people are shielded from society's crimes, but the value of an individual's personality is diminished as human experience is mediated by machines.

Aside from the technical issues involved with the feasibility of programming values into robots with artificial intelligence, there is another, just as formidable challenge. The values that people hold vary widely among cultures, within cultures, and across time. How are these differences to be reconciled? Can they be reconciled?

As machines come to resemble humans, a number of other ethical questions arise. Do human laws and subsequent punishments apply also to robots? Are robots protected under the law? Are machines held responsible for their actions? If the answer to any of these questions is yes, then when does a machine cross the threshold between merely a machine and a sentient being that commands human-like treatment?

Humanity+
Humanity+, formerly known as the World Transhumanist Association, addresses the notion of sentience in machines. It is an international non-profit organization that promotes the expansion of human capabilities through the ethical use of technology, though questions persist about the side effects of such human-machine integration. The organization's position is captured in its adopted fundamental principle:

We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise.
 * -Transhumanist Declaration

The declaration makes clear that its authors value sentience in its own right, even when exhibited by machines. This raises a social dilemma: how comfortable would an individual be in contact with an advanced, machine-based copy of themselves? The scenario invokes the notion of the uncanny valley, in which near-human likeness provokes revulsion.

=Conclusion=

The nebulous character of the Singularity makes it difficult to describe technically: what technology is necessary for the Singularity, when will it occur, what will the consequences be? Indeed, there is debate over whether it ever can occur. But this disagreement provides fertile ground for a sociotechnical analysis of the Singularity. Because the Singularity is only loosely-defined, people often project their own views of technology onto it. Those who advocate for efforts to bring the Singularity to realization may be thought of as having faith in technology’s improvement of the human condition. Others, such as the Luddites, take up a position that indicates they see technological progress as coming at the expense of other values. And then there is the middle ground, occupied by people such as Bill Joy, who see the potential for both sides and believe we must actively work to ensure that technology serves our interests. However, even within these groups there are differing viewpoints. From letter bombs to online blogs, these diverse views are also expressed in very different ways. As the projected date for the Singularity draws near, opposing groups will face further pressure to improve their methods of promoting their views. This suggests that the Singularity will continue to present complex sociotechnical issues.

=Further Analysis=

The following areas are good candidates for further research:
 * The social cost of restricting technological advancement. For example, the impact of technology mitigation on quality of life.
 * The ethical implications of research towards Singularity-enabling technologies being funded by military organizations.
 * The presentation of the Singularity in popular culture.
 * The different ways in which the Singularity can occur and their societal consequences.
 * What recent technologies, laws, policies, or other practices have been accused of contributing to the Singularity?
 * What groups created the initial accusation?
 * What groups found the accusation and influenced real action for or against the given technology, etc.?
 * What recent technologies, laws, policies, or other practices could contribute to the singularity but have been overlooked by researchers, policy makers, the public, and the media? Why were they overlooked?
 * Does how well-known a factor is influence how much it contributes to the Singularity?
 * Is the "creepiness factor" of a technology a good measure of how well it relates to the Singularity?
 * What are lawmakers doing about the Singularity directly? How are they indirectly or unknowingly affecting the approach of the Singularity?
 * The theological implications of the singularity. For example, the authors of the blog The Speculist pose the following four questions:
 * Does the Singularity bring us closer to God?
 * Does God show up at the Singularity?
 * Are we going to somehow create God?
 * Are we going to somehow become God?
