The speed of computer chips has grown exponentially for several decades, a trend commonly known as Moore's Law. In fact, virtually all digital technologies follow exponential curves, be it microprocessors, RAM, DNA sequencing, magnetic storage, the number of Internet hosts, Internet traffic, the resolution of imaging technologies, and so on.
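As a rough illustration (the two-year doubling period used here is an assumption for the sake of the example, not a figure from this article), exponential growth of the kind Moore's Law describes can be written as

$$P(t) = P_0 \cdot 2^{t/T},$$

where $P_0$ is today's performance, $T \approx 2$ years is the assumed doubling period, and $t$ is the elapsed time. Under that assumption, twenty years of doubling already yields a factor of $2^{10} = 1024$, which is why these curves look flat for a long while and then seem to explode.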
In the last ten years or so, the power of a $1,000 computer has gone from the equivalent of an insect brain to that of a mouse brain. By 2016, supercomputers should be able to simulate a human brain, and by around 2023 to replicate virtually all of its processes. After that, AI will become hundreds, then thousands, then millions, then billions of times more powerful than the human brain.
The singularity describes a tipping point at which the accelerating pace of technological progress leads to a hyperbolic, unstoppable growth in artificial intelligence, relegating humans to a secondary role in future scientific and technological developments.
It goes without saying that nobody knows what will happen once machines become zillions of times more intelligent than humans. So is it really a good idea to let AI slip out of human control? That is the subject of this article.
Should we wish for the singularity to happen, and could it happen without human intervention?