Humans are currently the most intelligent beings on the planet – the result of a long history of evolutionary pressure and adaptation. But could we some day design and build machines that surpass the human intellect?

This is the concept of superintelligence, a growing area of research that aims to improve our understanding of what such machines might be like, how they could come to exist, and what they would mean for humanity’s future.

Oxford philosopher Nick Bostrom’s recent book Superintelligence: Paths, Dangers, Strategies discusses a wide range of technological paths that could lead to superintelligent artificial intelligence (AI), from mathematical approaches to the digital emulation of human brain tissue.

And although it sounds like science fiction, a group of experts, including Stephen Hawking, wrote an article on the subject noting that “there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains.”

Brain as computer

The idea that the brain performs “computation” is widespread in cognitive science and AI, since the brain deals in information, converting a pattern of input nerve signals to output nerve signals.

Another widely accepted theory is that physics is Turing-computable: that whatever goes on in a particular volume of space, including the space occupied by human brains, could be simulated by a Turing machine, a kind of idealised information processor. Physical computers perform the same sort of information processing, though they are not yet at the level of Turing’s hypothetical device.

These two ideas come together to give us the conclusion that intelligence itself is the result of physical computation. And, as Hawking and colleagues go on to argue, there is no reason to believe that the brain is the most intelligent possible computer.

In fact, the brain is limited by many factors, from its physical composition to its evolutionary past. Brains were not selected exclusively for intelligence, but to maximise human reproductive fitness more generally. Brains are not only tuned to the tasks of the hunter-gatherer, but also designed to fit through the human birth canal; supercomputing clusters and data centres have no such constraints.

Synthetic hardware has numerous advantages over the human brain in both speed and scale, but the software is what creates the intelligence. How could we possibly get smarter-than-human software?

Evolving intelligence 2.0

Evolution has produced intelligent entities – dogs, dolphins, humans – so it seems theoretically possible that humans could recreate the process. Methods known as “genetic” algorithms allow computer scientists and engineers to harness the power of natural selection to find solutions or designs with incredible efficiency.

Evolutionary algorithms keep plugging away, exploring the options, automatically assessing what works, discarding what doesn’t, and thus evolving towards the researchers’ desired outcomes (a minimal sketch of this loop appears after the quote below). In Superintelligence, for instance, Bostrom recounts a genetic algorithm’s surprising solution to a hardware design problem:

[The experimenters] discovered that the algorithm had, MacGyver-like, reconfigured [the] sensor-less motherboard into a makeshift radio receiver, using the printed circuit board tracks as an aerial to pick up signals generated by personal computers that happened to be situated nearby in the laboratory.
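To make the process concrete, here is a minimal sketch of the basic genetic-algorithm loop, written in Python. It is purely illustrative – a toy fitness function and made-up parameters, not the experiment Bostrom describes:

    import random

    # Toy genetic algorithm: evolve bit strings towards an all-ones target.
    # The fitness function stands in for whatever a researcher actually wants
    # to optimise (a circuit layout, a control policy, and so on).
    GENOME_LENGTH = 32
    POPULATION_SIZE = 100
    MUTATION_RATE = 0.01
    GENERATIONS = 200

    def fitness(genome):
        return sum(genome)  # toy objective: count of 1-bits

    def mutate(genome):
        return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

    def crossover(a, b):
        point = random.randrange(1, GENOME_LENGTH)
        return a[:point] + b[point:]

    population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)]
                  for _ in range(POPULATION_SIZE)]

    for generation in range(GENERATIONS):
        # Assess what works: rank the population by fitness.
        population.sort(key=fitness, reverse=True)
        # Discard what doesn't: keep the top half as parents.
        parents = population[: POPULATION_SIZE // 2]
        # Breed and mutate to refill the population.
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POPULATION_SIZE - len(parents))]
        population = parents + children

    print("Best fitness after evolution:", fitness(max(population, key=fitness)))

The loop never needs to be told how to solve the problem; it only needs a way of scoring candidate solutions and a way of varying them.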

Of course, it is substantially harder to evolve a brain than a radio receiver. Bostrom considers the case of simulating the evolution of the central nervous system. A back-of-the-napkin estimate puts the number of neurons on our planet today at roughly 10²⁵ (1 followed by 25 zeroes) and assumes that this population has been evolving for a billion years.

Current models of neurons that mimic the computation in the brain require up to about 10⁶ calculations per second per neuron, or about 10¹³ per year.

If we were to use these numbers to recreate evolution in (for instance) one year of computation, it would require a computer that could perform about 10³⁹ calculations per second – far beyond our present-day supercomputers.
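For those who want to check the arithmetic, the estimate multiplies out roughly as follows (a back-of-the-napkin calculation in Python, using the figures above and about 3.15 × 10⁷ seconds per year):

    # Rough arithmetic behind the estimate (illustrative, not Bostrom's exact working).
    neurons = 1e25                 # neurons on Earth today, assumed constant over time
    calcs_per_neuron_year = 1e13   # ~10^6 calculations/second, times ~3e7 seconds/year
    years_of_evolution = 1e9       # a billion years of nervous-system evolution
    seconds_per_year = 3.15e7

    total_calculations = neurons * calcs_per_neuron_year * years_of_evolution  # ~10^47
    required_speed = total_calculations / seconds_per_year                     # ~3 x 10^39 per second
    print(f"Total: {total_calculations:.1e} calculations; "
          f"needed in one year: {required_speed:.1e} per second")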

It can be hard to put such large numbers into context, but the key point is that such raw computing power is not likely to be available to us any time soon. Bostrom notes that “even a continued century of Moore’s law would be insufficient to close this gap.”
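A rough extrapolation shows why. Assuming, for illustration, that today’s fastest supercomputers manage on the order of 10¹⁸ calculations per second and that performance doubles every two years (both figures are assumptions made here, not taken from Bostrom):

    # Rough Moore's-law extrapolation (assumed figures, for illustration only).
    current_speed = 1e18        # assumed: order of today's fastest supercomputers, calc/s
    doubling_period_years = 2   # assumed doubling time
    years = 100

    speed_after_century = current_speed * 2 ** (years / doubling_period_years)
    print(f"After a century: {speed_after_century:.1e} calc/s "
          f"(target is ~1e39 calc/s)")   # ~1.1e33, still about a million times short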

But aside from brute force, there are other ways we could close the gap. Natural evolution is wasteful in this context, since it doesn’t select only for intelligence. It is possible that we could find many shortcuts, although it is unclear exactly how much faster a human-directed process could arrive at smarter-than-human digital brains.

The Star Trek vision of the future of intelligence – robots that top out at the level of mathematically talented humans and go no further – is itself a failure of the human imagination.

In any case, the evolutionary approach is only one possible strategy. Branches of machine learning, cognitive science and neuroscience have used our limited understanding of the human brain together with algorithms to break CAPTCHAs, translate books and manage railway systems. Managing more abstract and strategic plans (including plans for developing AI) may well be where we’re headed, and there is little reason to believe that AI will come to an abrupt stop at the human level.

This article was originally published at theconversation.com