Sir Martin Rees wrote these words last year: “Few doubt that machines will gradually surpass more and more of our distinctively human capabilities—or enhance them via cyborg technology. Disagreements are basically about the timescale.”
Count astrobiologist Caleb Scharf among those few doubters. In an Aeon piece, he argues that the Singularity may not be near (or even far), and that an explosion of intelligence dwarfing the Cambrian is not exactly a done deal. Scharf believes we could instead become a hive-mind civilization that settles rather than an exponential one that soars. Or maybe we opt for “turning away from machine fantasies, back to a quieter but more efficient, organic existence.” My very inexpert brain disagrees with both notions, but it’s an excellent essay.
An excerpt:
Superficially, the logic behind the conjectures about cosmic machine intelligence appears pretty solid. Extrapolating the trajectory of our own current technological evolution suggests that with enough computational sophistication on hand, the capacity and capability of our biological minds and bodies could become less and less attractive. At a certain point we’d want to hop into new receptacles, custom-built to suit whatever takes our fancy. Similarly, that technological arc could take us to a place where we’ll create artificial intelligences that are either indifferent to us, or that will overtake and subsume (or simply squish) us.
Biology is not up to the task of sustaining pan-stellar civilisations or the far-future human civilisation, the argument goes. The environmental and temporal challenges of space exploration are huge. Any realistic impetus to become an interstellar species might demand robust machines, not delicate protein complexes with fairly pathetic use-by dates. A machine might live forever and copy itself perfectly, unencumbered by the error-prone flexibility of natural evolution. Self-designing life forms could also tailor themselves to very specific environments. In a single generation they could adapt to the great gulfs of time and space between the stars, or to the environments of alien worlds.
Pull all of these pieces together and it can certainly seem that the human blueprint is a blip, a quickly passing phase. People take this analysis seriously enough that influential figures such as Elon Musk and Stephen Hawking have publicly warned about the dangers of all-consuming artificial intelligence. At the same time, the computer scientist Ray Kurzweil has made a big splash from books and conferences that preview an impending singularity. But are living things really compelled to become ever-smarter and more robust? And is biological intelligence really a universal dead-end, destined to give way to machine supremacy?
Perhaps not. There is quite a bit more to the story.•