A Deep Dive into the Fascinating and Controversial Concept of Technological Singularity

Zainab Mosunmola
5 min read · Jan 6, 2023


Welcome to the wild and woolly world of the singularity, a concept that has been both celebrated and feared for its potential to fundamentally transform humanity as we know it. I, for one, enjoy the conversations and debates.

The singularity, the hypothetical point at which technological progress accelerates and transforms humanity, has long been a source of fascination and debate. While its occurrence is uncertain, the idea captures the imagination and raises questions about the limits of human knowledge and technology. In the context of science and technology, the singularity is often linked to the creation of superintelligent artificial intelligence. Some see the event as a catalyst for rapid technological advancement, while others have concerns about the risks of creating something that surpasses human intelligence.

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

I. J. Good

Provided we get to keep these machines “docile”, the intelligence explosion would do the world a lot of good. Superhuman intelligence, whether artificial or an amplified human mind, would significantly improve problem-solving and invention. A seed AI, as it is called, has the potential to autonomously improve its own software and hardware and design an even more advanced machine, leading to recursive self-improvement. This process could potentially continue indefinitely, resulting in dramatic qualitative changes. It is believed that this type of AI would surpass human cognitive abilities after numerous iterations.
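
To make that feedback loop concrete, here is a minimal toy sketch in Python. Everything in it is an assumption of the illustration, not a claim about real AI systems: capability is reduced to a single number, each generation improves its successor in proportion to its own ability, and a fixed “human baseline” exists.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumptions: "capability" is a single number, each generation's design
# skill scales the gains it can hand to its successor, and a fixed human
# baseline exists. None of this describes any real AI system.

HUMAN_BASELINE = 100.0    # hypothetical human-level design capability
SEED_CAPABILITY = 1.0     # the "seed AI" starts far below that level
IMPROVEMENT_FACTOR = 0.5  # fraction of current capability turned into gains

def intelligence_explosion(capability: float, generations: int = 20) -> None:
    """Print capability per generation until it passes the human baseline."""
    for gen in range(1, generations + 1):
        # Each generation designs a successor better in proportion to its
        # own capability; this is the recursive step in Good's argument.
        capability += IMPROVEMENT_FACTOR * capability
        print(f"generation {gen:2d}: capability = {capability:8.1f}")
        if capability > HUMAN_BASELINE:
            print(f"passes the assumed human baseline at generation {gen}")
            return

if __name__ == "__main__":
    intelligence_explosion(SEED_CAPABILITY)
```

Under these assumptions the growth is geometric, which is exactly the “explosion” in Good’s argument: changing IMPROVEMENT_FACTOR shifts the timescale, but the curve stays explosive.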

Evolution is an inherent characteristic of humans, and it is our greatest strength. In the past, we never imagined that we would fly or that exoplanets existed, yet now we are achieving these incredible things. I see these advancements as a continuation of evolution, a way in which we are enhancing and expanding our intelligence.

There is a debate about whether advances in artificial intelligence will lead to reasoning systems that surpass human cognitive abilities, or whether humans will evolve or directly enhance their biology to become radically more intelligent. Some futures studies scenarios suggest that humans may merge with computers or upload their minds to computers, significantly increasing intelligence. — Wikipedia


There are two main approaches to creating advanced or transhuman minds (a state in which humans have significantly extended lifespans and new physical and mental abilities): intelligence amplification of human brains and artificial intelligence. The amplification route includes bioengineering, genetic engineering, nootropic drugs, AI assistants, brain-computer interfaces, and mind uploading. This path of augmenting humans directly rather than building a separate machine intelligence, closely associated with transhumanism, is sometimes called the non-AI singularity. Given that all of these methods are likely to be pursued, the likelihood of a singularity occurring increases. There are two primary drivers of intelligence improvement: increases in computational speed and improvements to algorithms. These factors are independent, but they compound to enhance intelligence.
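
As a rough, back-of-the-envelope illustration of how those two drivers compound, here is a short Python sketch. The yearly growth rates and the multiplicative model are invented for the illustration; they are not measurements or established results.

```python
# Toy illustration of two independent drivers of capability growth
# (hardware speed and algorithmic efficiency) compounding together.
# The annual growth rates below are invented for this sketch.

SPEED_GROWTH = 1.4       # assumed yearly gain from faster hardware
ALGORITHM_GROWTH = 1.3   # assumed yearly gain from better algorithms

def effective_capability(years: int, baseline: float = 1.0) -> float:
    """Capability after `years` if both drivers improve independently."""
    return baseline * (SPEED_GROWTH * ALGORITHM_GROWTH) ** years

if __name__ == "__main__":
    for years in (1, 5, 10):
        print(f"after {years:2d} years: "
              f"speed alone x{SPEED_GROWTH ** years:7.1f}, "
              f"algorithms alone x{ALGORITHM_GROWTH ** years:7.1f}, "
              f"combined x{effective_capability(years):9.1f}")
```

The point of the sketch is simply that independent improvements multiply: under these assumed rates, a 40% speed gain and a 30% algorithmic gain together yield roughly an 82% gain in a single year, and the gap widens every year after that.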

Apart from ultraintelligence and the non-AI singularity, immortality is another great feat that could be achieved through the singularity. The final moments of Lucy in the film “Lucy” capture this so well, albeit in a fanciful way, of course.

Drexler (1986), a pioneer in the field of nanotechnology, proposed using cell repair devices, including ones that operate within cells and utilize biological machines that are currently only hypothetical. According to Moravec (1988), it may be possible to “upload” the human mind into a human-like robot, achieving quasi-immortality through the transfer of the mind to successive robots as the old ones wear out. He also predicts an exponential acceleration of subjective experience of time, leading to a subjective sense of immortality. Kurzweil (2005) believes that medical advances will allow people to protect their bodies from the effects of aging, leading to an indefinite life expectancy. He argues that technology will enable us to continually repair and replace defective components in our bodies, extending life to an unknown age. Lanier, on the other hand, proposes “Digital Ascension,” in which people’s consciousness is uploaded to a computer after they die in the flesh, allowing them to remain conscious. — Wikipedia

Several potential problems have been raised about the singularity, whether it is triggered by artificial intelligence (AI) or other technological developments. Some of the main concerns include the following:

  1. Loss of control: One concern is that if an AI becomes more intelligent than humans, it may be difficult or impossible for humans to control its actions. This could lead to unintended consequences and potentially harmful outcomes. Even though some dismiss this as the result of too many movies, news that a Google engineer believed the company’s AI had become sentient flooded the tabloids not too long ago.
  2. Unemployment: If machines become capable of performing tasks more efficiently than humans, there is a risk of widespread unemployment as humans are replaced by machines. I agree AI will take over a good percentage of jobs, but I still think we should see it as a kind of superpower that improves our productivity. We will forever be the ones with the ideas, not the machines.
  3. Inequality: If only a small group of people can access the technologies that enable singularity, it could significantly widen the gap between the haves and have-nots.
  4. Ethical concerns: There are also ethical concerns related to using technology to enhance human capabilities beyond their natural limitations, such as the potential for discrimination or creating a two-tiered society.
  5. Loss of human values: Some people are concerned that singularity could lead to the loss of human values and the dehumanization of society.
  6. Security risks: There is also the potential for security risks if an AI becomes too powerful, such as the possibility of cyber-attacks or the misuse of sensitive information.

Both the benefits and the threats are still highly speculative, so it is difficult to predict the future. As we continue to explore the limits of science and technology, the concept of singularity will no doubt remain a source of intrigue and speculation. While we may never know for certain whether or when the singularity will occur, it is a concept that forces us to think critically about the future of humanity and the role of technology in shaping that future.
