Our brain is the most complex system in the known universe. Will it stay that way? How much longer?
Even Gordon Moore, the famous co-founder of Intel, said so back in 2007: Moore’s law, according to which the transistor density and thus the performance of a computer chip doubles roughly every 18 months, will soon no longer hold. Intel itself stopped planning around it in 2016, because the performance of conventional computers will no longer grow exponentially; the quantum-mechanical limits will soon be reached. Parallel computing keeps performance growing, but nowhere near as fast. Is this really the end of the computer revolution?
What is decisive is the computing power, not the technology that delivers it. And great things lie ahead: scientists have long been working on new computing concepts for the post-silicon age, such as quantum computers and bio-computers.
Neurocomputers, whose architecture takes the information processing of the human brain as its biological model, are ushering in a new era in computer technology. Neural networks with artificial neurons and synapses designed to mimic the human brain are being developed around the world. Their information processing is based not on high speed alone but, as in our brains, primarily on massive parallelism.
The performance of the human brain with its 100 billion nerve cells and 100 trillion synapses still exceeds that of today’s computers many times over. This is mainly due to parallelism.
Small neural networks have been simulated on ordinary hardware for some time, but at high energy cost and, so far, with fairly modest performance.
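To make the idea concrete: even a laptop can simulate a toy network of artificial neurons and synapses in a few lines of code. The sketch below in Python/NumPy is purely illustrative; the network size, the random synaptic weights and the simple leaky integrate-and-fire dynamics are assumptions for demonstration, not a description of any real neuromorphic chip. The point is that every synapse is updated in a single matrix operation per time step: massive parallelism in miniature.

```python
import numpy as np

# Toy leaky integrate-and-fire network: all synapses are updated in one
# matrix operation per time step, i.e. massive parallelism in miniature,
# simulated on ordinary hardware.
rng = np.random.default_rng(0)

n_neurons = 100                                          # illustrative network size
weights = rng.normal(0.0, 0.4, (n_neurons, n_neurons))   # random synaptic strengths
potential = np.zeros(n_neurons)                          # membrane potentials
threshold, leak = 1.0, 0.9                               # firing threshold and decay factor

spike_counts = np.zeros(n_neurons)
for step in range(1000):
    external_input = rng.random(n_neurons) * 0.2         # stimuli from the environment
    spikes = potential >= threshold                      # which neurons fire this step
    potential[spikes] = 0.0                              # reset neurons that just fired
    # every synapse is evaluated at once: input from all firing neurons
    potential = leak * potential + weights @ spikes + external_input
    spike_counts += spikes

print("mean firing rate per neuron:", spike_counts.mean() / 1000)
```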
In the US, IBM researchers have unveiled ‘TrueNorth’, a neuromorphic processor inspired by the workings of the neocortex, which has 256 million synapses. The latest IBM success in this field is the development of artificial neurons made of phase-change material (link), which, like their biological models, can be stimulated by electrical impulses and store information in analog form.
These systems are still very far from the performance of the human brain. But what can we expect if progress is exponential here as well? Will computers then be able to think better?
The coupling of high computing speed and massive parallelism in information processing will lead to breakthroughs hardly imaginable today, especially in the field of artificial intelligence. Neuromorphic computers can learn from experience. They are not determined by pre-programmed algorithms.
This also has consequences for the software developer’s profession: future computers could largely program themselves in response to information and stimuli from their environment. At ETH Zurich, software has been developed that estimates from photos of random people how attractive, or more precisely ‘howhot’, he or she is. What counts as beautiful is something the software had to work out for itself, based on millions of ratings of thousands of user profiles from the dating app ‘Blinq’.
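How such self-taught judgment works in principle can be sketched in a few lines. The example below is hypothetical and is not the ETH system: the “photo features” and ratings are synthetic stand-ins, and a plain least-squares fit stands in for whatever model the researchers actually used. What it illustrates is the key point: nobody programs a definition of “attractive”; the model infers one from millions of example ratings.

```python
import numpy as np

# Hypothetical sketch of learning a rating from examples instead of
# hand-coded rules. The "photo features" and ratings below are synthetic
# stand-ins; this is not the ETH Zurich system.
rng = np.random.default_rng(1)

n_photos, n_features = 5000, 64
photo_features = rng.normal(size=(n_photos, n_features))  # placeholder for features extracted from photos
hidden_taste = rng.normal(size=n_features)                # the pattern implicit in the crowd's ratings
ratings = photo_features @ hidden_taste + rng.normal(0.0, 0.5, n_photos)

# No one writes down what "attractive" means; the model infers weights
# that best reproduce the example ratings (ordinary least squares).
learned_taste, *_ = np.linalg.lstsq(photo_features, ratings, rcond=None)

new_photo = rng.normal(size=n_features)                   # an unseen "photo"
print("predicted rating:", new_photo @ learned_taste)
```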
In ten to fifteen years, neurocomputers could be in use virtually everywhere as specialists in pattern recognition. Based on data from billions of networked digital archives, devices and objects, they will help make decisions and discover hidden connections.
For example, they will help
- analyze complex hazard situations and recommend courses of action,
- optimize the traffic flow of autonomous vehicles,
- make far better diagnoses by evaluating and analyzing disease symptoms, and
- make better weather forecasts.
By 2030, our brain could be in second place. Then it would no longer be the most complex system in the known universe. Then that could be a neurocomputer. Then the crown of creation would have made itself obsolete.
The more successful the systems become, the more we will entrust ourselves to them – and hand over responsibility to them in certain areas. In the medium term, neurocomputers will thus also compete with us on the labor market.
It would be not only a new era of computer technology, but above all a new era for humanity.
And now?
There is still plenty of time before we need to upgrade our brains. But make sure that you personally focus on the kinds of tasks that will remain reserved for humans: above all, tasks with little routine and a lot of emotion and humanity.
And make sure that your company does not ignore the emerging opportunities, but seizes them in time to remain competitive.
More on the topic: Why does the machine still need humans?
Follow these links as well:
► The Future Strategy Program for SMEs
► Free video crash course THE FUTURE OF YOUR BUSINESS
► BUSINESS WARGAMING for robust business and future opportunities
► KEYNOTES by Pero Mićić for your employees and customers
Have a bright future!