“Human Redundancy May Follow The Creation Of Copyable Human Capital”

A collection of brief notes about the potential future of AI from the “Emerging Risks” section of the Global Challenges Report, which outlines species-threatening possibilities:

1. The advantages of global coordination and cooperation are clear if there are diminishing returns to intelligence and a plethora of AIs, but less clear if there is a strong first-mover advantage for the first group to produce AI: in that case, the decisions of that first group matter more than the general international environment.

2. Military AI research will result in AIs built for military purposes, but possibly with more safeguards than other designs.

3. Effective regulatory frameworks would be very difficult to design without knowledge of what forms AIs will ultimately take.

4. Uncontrolled AI research (or research by teams unconcerned with security) increases the risk of potentially dangerous AI development.

5. “Friendly AI” projects aim to directly produce AIs with goals compatible with human survival.

6. Reduced impact and Oracle AI are examples of projects that aim to produce AIs whose abilities and goals are restricted in some sense, to prevent them having a strong negative impact on humanity.

7. General mitigation methods will be of little use against intelligent AIs, but may help in the aftermath of conflict.

8. Copyable human capital – software with the capability to perform tasks with human-like skill – would revolutionise economic and social systems.

9. Economic collapse may follow from mass unemployment as humans are replaced by copyable human capital.

10. Many economic and social set-ups could inflict great suffering on artificial agents, a great moral negative if they are capable of feeling such suffering.

11. Human redundancy may follow the creation of copyable human capital, as software replaces human jobs.

12. Once invented, AIs will be integrated into the world’s economic and social system, barring massive resistance.

13. An AI arms race could result in AIs being constructed with pernicious goals or with inadequate safety precautions.

14. Uploads – human brains instantiated in software – are one route to AIs. Such AIs would likely have safer goals and a lower likelihood of extreme intelligence, but would be more likely to be capable of suffering.

15. Disparate AIs may amalgamate by sharing their code or negotiating to share a common goal to pursue their objectives more effectively.

16. There may be diminishing returns to intelligence, limiting the power of any one AI, and leading to the existence of many different AIs.

17. Partial “friendliness” may be sufficient to control AIs in certain circumstances.

18. Containing an AI attack may be possible, if the AIs are of reduced intelligence or are forced to attack before they are ready.

19. New political systems may emerge in the wake of AI creation, or after an AI attack, and will profoundly influence the shape of future society.

20. AI is the domain with the largest uncertainties; it isn’t clear what an AI is likely to be like.

21. Predictions concerning AI are very unreliable and underestimate uncertainties.