Existential Risks

Existential risks are risks that could cause our extinction and destroy the potential of the millions of human generations ahead of us.[1]

Humanity has been its own worst enemy: advances such as fossil fuels and machine intelligence have created risks that were impossible to foresee.

Types of Risk


Our future depends on how we deal with these challenges.

Natural risks

Natural risks such as asteroid strikes, pandemics and climate cycles have existed throughout history and have been responsible for the extinction of past species.

Anthropogenic Risks

Anthropogenic risks are a more recent phenomenon, caused by humans themselves (technology, climate change, governance failures). The biggest existential risks will probably arise from new technological developments such as advanced artificial intelligence, and from the shrinking barriers that keep bad actors, such as terrorists, from causing harm on a massive scale. [2]

RISK ASSESSMENT

To classify existential risks, Nick Bostrom proposed a chart with two dimensions: scope (how many people are at risk) and severity (how badly each individual would be affected). A third useful measure is an estimate of each risk's probability.[2]

Of particular importance are the most neglected and least well-known risks. Even small reductions in existential risk have enormous expected value for humanity.
[Figure: scope and severity of existential risks]
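To see why, here is a rough back-of-the-envelope sketch. The only input taken from the article is the "millions of generations" framing above; the population figures and the size of the risk reduction are illustrative assumptions, not estimates.

```python
# Back-of-the-envelope expected value of a small reduction in extinction risk.
# All inputs are illustrative assumptions, not figures taken from this article.

future_generations = 1_000_000            # "millions of generations ahead of us"
people_per_generation = 10_000_000_000    # assume ~10 billion people per generation
potential_future_lives = future_generations * people_per_generation

risk_reduction = 0.0001                   # cutting extinction risk by 0.01 percentage points

expected_lives_saved = risk_reduction * potential_future_lives
print(f"Potential future lives:      {potential_future_lives:.0e}")
print(f"Expected future lives saved: {expected_lives_saved:.0e}")
```

Even under far more conservative assumptions, the expected value of shaving a fraction of a percent off existential risk remains enormous.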

History


Human Progress

Humanity has made incredible historical progress: from the agricultural revolution to science and technology. While technological progress holds the possibility of enormous gains, it also comes with enormous risks. Our present period is the most prosperous in human history, but plausibly also the most dangerous. The first destructive technology of this kind was nuclear weapons.[3]

Near Extinctions in the past

Nuclear close calls, incidents that could have led to nuclear war, have occurred a number of times since 1945 and remain a looming threat. They highlight how close we have already come to nuclear war. [4]
[Figure: human progress through history]

Cognitive Limits


Biases

We are affected by cognitive biases: our brains are not optimised to think about existential risk. We are wired to make sense of linear trends, but most of our greatest risks are non-linear: beyond a certain threshold, change can be rapid and even exponential. Because of a range of cognitive biases, most of us drastically under- or overestimate these probabilities.

Underestimating Risk

The researcher Spencer Greenberg surveyed Americans [5] on their estimate of the chances of human extinction within 50 years. The result was that many think the chances are extremely low, with over 30% guessing they are under one in ten million. Our mechanisms for dealing with such risks have historically been shaped by trial and error (e.g. Chernobyl, smallpox, near misses with nuclear war), with little foresight or preventive action.
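To make the gap concrete, a quick comparison of the most common lay guess with the expert figure from the survey table below. The two numbers use different time horizons (50 years versus the rest of the century), so treat the ratio only as a rough indication of how far intuition and expert opinion diverge.

```python
# Compare the common lay guess from Greenberg's survey [5] (extinction odds
# below one in ten million within 50 years) with the 19% per-century expert
# estimate in the table below. Horizons differ, so this is only indicative.

lay_guess = 1 / 10_000_000     # "under one in ten million"
expert_estimate = 0.19         # 19% chance of extinction before 2100

print(f"Lay guess:       {lay_guess:.7f}")
print(f"Expert estimate: {expert_estimate:.2f}")
print(f"Ratio:           {expert_estimate / lay_guess:,.0f}x")
```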

Inherent uncertainty

Emerging technological risks from artificial intelligence or synthetic biology might seem far-fetched, but only 100 years ago climate change and nuclear risks had not even been discovered. Developing a more rigorous understanding of emerging risks, and acting on it, will determine our future.

Estimated probability of human extinction before 2100

Risk                                 Probability
Overall probability                  19%
Molecular nanotechnology weapons     5%
Superintelligent AI                  5%
All wars (incl. civil wars)          4%
Engineered pandemic                  2%
Nuclear war                          1%
Nanotechnology accident              0.5%
Natural pandemic                     0.05%
Nuclear terrorism                    0.03%

Table source: Future of Humanity Institute, 2008 [6]
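One way to read the table: if the overall 19% per-century figure stayed constant over time (a simplifying assumption made purely for illustration, since real risk levels would change with technology and institutions), humanity's survival odds would shrink geometrically with each passing century.

```python
# If the 19% per-century risk in the table stayed constant (a simplifying
# assumption, purely for illustration), survival odds shrink geometrically.

per_century_risk = 0.19

for centuries in (1, 2, 5, 10):
    survival = (1 - per_century_risk) ** centuries
    print(f"Chance of surviving {centuries:>2} centuries: {survival:.1%}")
```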

Fields

Nuclear Warfare
Artificial Intelligence
Climate Change
Pandemics
Biological & Chemical Warfare
Molecular Nanotech

Nuclear Warfare

Tensions between nuclear states have greatly diminished since the end of the Cold War and disarmament efforts have reduced arsenals, but the prospect of a nuclear war remains present.

The soot and dust lofted into the atmosphere by nuclear explosions and the fires they ignite would lead to dramatic cooling of the planet, with severe ecological consequences and agricultural collapse.

To reduce the risk of nuclear warfare we need better global conflict management, with continuing attention to arsenal reduction and geopolitical tensions.

Artificial Intelligence

Rapid progress has been made in the last decade due to better algorithms, increased computing power and massive amounts of data.

AI is already widely integrated into everyday technologies and is used to predict everything from our perfect partner to professional opportunities.

AI could be the most positive development for humanity, advancing us in every domain and helping us solve enormous problems; but it also comes with the enormous risk of misaligned AI. The field of beneficial and aligned AI is growing rapidly, with organisations such as OpenAI and DeepMind leading the way.

The danger of entities more intelligent than us can be understood by considering the power we humans have drawn from being the smartest creatures on the planet.

If the goals of powerful AI systems are misaligned with ours or the architecture is mildly flawed, they might harness extreme intelligence towards purposes that turn out to be catastrophic for humanity. This is particularly concerning as most organizations developing artificial intelligence systems today focus on functionality much more than ethics.

Autonomous weapons could lead to a number of existential threats. They could be designed to be extremely difficult to simply “turn off,” so humans could plausibly lose control of such a situation. This risk is even present with narrow AI, but grows as levels of AI intelligence and autonomy increase.

Even if an AI is programmed to do something beneficial, it could develop a destructive method for achieving its goal: this can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult. If a superintelligent system is tasked with an ambitious societal project, it might wreak havoc as a side effect and view human attempts to stop it as a threat to be countered. [7]


Proposed policy measures include: [8]
- Public agencies responsible for "high stakes" domains such as criminal justice, healthcare, welfare and education should no longer use "black box" AI and algorithmic systems.
- Before releasing an AI system, companies should run rigorous pre-release trials to ensure that it will not amplify biases and errors due to issues with the training data, algorithms or other elements of system design.
- AI bias research and mitigation strategies should be expanded beyond a narrowly technical approach; bias issues are long-term and structural, and contending with them requires deep interdisciplinary research.

In surveys of AI researchers, the median estimate is a 50% chance that we will develop high-level machine intelligence within 45 years, and a 75% chance by the end of the century.

Climate Change

The last 50 years of human activity have pushed us away from the environmental stability of the past 12,000 years.

Runaway global warming

As global temperatures continue to rise, the possibility of catastrophic runaway global warming increases. There is scientific consensus that climate change is a non-linear phenomenon in which tipping points play a crucial role: when warming rises above a certain threshold, self-reinforcing feedback loops set in and the concentration of greenhouse gases increases rapidly (for example, melting ice reflects less of the sun's energy back into space, while thawing permafrost releases additional methane).
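A minimal toy sketch of this dynamic (the parameters are invented for illustration; this is in no way a climate model): once a threshold is crossed, a self-reinforcing feedback term makes warming outrun any straight-line extrapolation.

```python
# Toy illustration of a climate tipping point (invented parameters, not a model):
# warming creeps up linearly until a threshold is crossed, after which a
# self-reinforcing feedback (less reflected sunlight, released methane) kicks in.

def warming_after(years: int, with_feedback: bool) -> float:
    temp = 0.0                                  # warming above pre-industrial, in degrees C
    for _ in range(years):
        temp += 0.02                            # steady baseline forcing per year
        if with_feedback and temp > 1.5:        # tipping point crossed
            temp += 0.1 * (temp - 1.5)          # feedback grows with the overshoot
    return temp

print(f"Linear intuition, 100 years: {warming_after(100, False):.1f} degrees C")
print(f"With feedback past 1.5 C:    {warming_after(100, True):.1f} degrees C")
```

The numbers are invented; the point is the shape of the curve: past the threshold, warming pulls away from what linear intuition predicts.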

Discussions of climate change typically focus on low- to mid-range scenarios, with temperature increases of 1°C to 3°C. Even these would have severe consequences, with potentially devastating effects on the environment and human societies.

'Tail-end' Risk

There is a non-negligible and less often considered 'tail-end' risk that temperatures rise even further, causing unprecedented loss of landmass and ecosystems. In mid-range scenarios, entire ecosystems would collapse and much agricultural land would be lost, as would most reliable freshwater sources, leading to large-scale suffering and instability. In high-end scenarios, the scale of destruction is beyond our capacity to model, with a high likelihood of human civilisation coming to an end.

Current estimates give roughly a 10% chance of warming over 6°C, and perhaps a 1% chance of warming of 9°C. That would render large fractions of the Earth functionally uninhabitable, requiring at least a massive reorganisation of society. It would also probably increase conflict and make us more vulnerable to other risks.

[Figure: tail-end risks of global warming] [9]

Geoengineering

Geoengineering, the deliberate manipulation of the atmosphere, is one possible way to reduce climate risk:
- Carbon removal: directly removing carbon dioxide from the atmosphere; the main challenges are scale and cost.
- Solar geoengineering: reflecting some of the sun's light and heat back into space, particularly by injecting aerosols or other particles into the stratosphere. So far this exists only in computer models and should now be tested more rigorously.

Pandemics

Historically, pandemics and epidemics have several times affected approximately 15% of the global population over the course of a few decades. Vaccination allowed us to eradicate smallpox, while Guinea worm and polio are close to being eradicated.

The biggest threat is the emergence of a new infectious disease which could cause a major outbreak, with particularly high mortality and rapid spread in our dense and interconnected world.

THE DEADLIEST PANDEMICS IN HISTORY

1. 165-180, the Antonine Plague: the outbreak lasted for 15 years, killing an estimated 5 million people.

2. 541-542, the Plague of Justinian: took 25 million lives, or about 13% of the global population at the time.

3. 1347-1351, the Black Death: caused at least 75 million deaths from a global population of 450 million, with some estimates putting the figure as high as 200 million deaths.

4. 1918-1919, the Spanish Influenza: estimated to have killed more than 50 million people out of a global population of 1.6 billion.

5. 1970s-present, HIV/AIDS: has so far killed more than 25 million people. [11]

Biological & Chemical Warfare

Toxicity

Toxic chemicals could be aerosolized or placed into water supplies, eventually contaminating an entire region. Biological weapons possess greater catastrophic potential, as released pathogens might spread worldwide, and cause a pandemic.

Warfare Agents

Biological warfare agents such as bacteria or viruses can be lethal or cause serious illness. Besides the human population, they can also be directed at crops and livestock. More than 180 pathogens have been researched or employed as biological weapons. The risk becomes existential because of the technological capability to create agents that are both highly lethal and highly infectious. Such weapons are also comparatively easy to obtain and can be developed at relatively low cost. [12]

Molecular Nanotech

Molecular manufacturing may pose several risks of global scope and high probability. Building nanoscale structures and machines at the atomic level, able to guide molecular chemical processes, could pose an existential threat.

The release of nanoscale structures and particles into the environment could also cause inadvertent harm.

In general, molecular nanotechnology may advance or augment other technologies and thus contribute to the risks they present (AI, biological and chemical weapons), or help alleviate existential risks such as climate change. The greatest near-term threat appears to be war fought with nano-weapons. [13]

What now?

We need to raise awareness and take a proactive approach by researching open questions and possible solutions.

Preventive Action

Trial and error does not work for existential risk. Our greatest risks today would have seemed unimaginable to people in 1900, and today's still-unimagined risks might turn out to be even bigger than the ones already on our map. [14]

Given its importance for humanity, it is surprising how little research and action has been devoted to existential risk. We need to act with foresight and take preventive action to ensure a bright future for coming generations.

Raising the profile of existential risks

We need more public awareness and discussion of the subject and the various risks to facilitate actions and countermeasures.

We need more resources devoted to research of existential risks: detailed studies of particular aspects of specific risks as well as more general investigations of associated ethical, methodological, security and policy issues.

Governance

We cannot rely on our current institutions, norms and attitudes. We need new global organisations that effectively tackle existential risks.

The most important action seems to be building strong international organisations, under umbrellas such as the United Nations, to ensure that existential risks are collectively better understood and countered; the IPCC and the IAEA are existing examples. This seems especially urgent for artificial intelligence and geoengineering, where global political institutions are largely absent.

No one global institution can address all the dimensions of existential risk governance. Governance must be bottom-up as well as top-down, and span processes and institutions in interconnected ways. Civil society, the private sector and philanthropy must all work together on reducing existential risk.

Colonising other planets

One of the most logical steps to reduce existential risk is colonising other planets. By spreading humanity across several planets, we ensure that a catastrophe on any single planet cannot end our species. Elon Musk has famously made this his priority with SpaceX.

Our species' time on Earth is very likely finite, so we should be doing everything we can to colonise nearby worlds, particularly Mars, to increase our odds of survival. A permanent human colony on Mars would be an insurance policy against a civilisation-ending catastrophe here at home.

If we let the opportunity to colonise other planets pass, we will remain confined to Earth, where we will eventually go extinct.

Get involved

Help raise awareness by sharing this post or other resources.


Donate

Donate to the leading organisations in this space. Even small amounts make a huge difference.

Future of Humanity Institute (Oxford)
Centre for the Study of Existential Risk (Cambridge)
Machine Intelligence Research Institute
Leverhulme Centre for the Future of Intelligence
Future of Life Institute (MIT)
Far Future Fund (Effective Altruism Foundation)
Centre for Effective Altruism
80,000 Hours


    References

  1. Benjamin Todd ¬ "Why despite global progress, humanity is probably facing its most dangerous time ever"
  2. Nick Bostrom ¬ "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards"
  3. Phil Torres ¬ Morality, Foresight, and Human Flourishing [ISBN: 1634311426]
  4. Wikipedia ¬ "List of nuclear close calls"
  5. Spencer Greenberg ¬ "Social Science as Lens on Effective Charity: results from four new studies"
  6. Future of Humanity Institute ¬ "Global Catastrophic Risks Survey" (2008)
  7. Tim Urban ¬ "The AI Revolution: The Road to Superintelligence"
  8. AI Policy Challenges and Recommendations
  9. Roman Duda ¬ "Climate Change: Extreme Risks"
  10. Global Catastrophic Risk Report 2018 (PDF)
  11. Wikipedia ¬ "Pandemic"
  12. Matthew E. Smith & Michael A. Hayoun ¬ "Biologic Warfare Agent Toxicity"
  13. Nick Bostrom ¬ Global Catastrophic Risks [ISBN: 0199606501]
  14. Jaan Tallinn ¬ "Existential Risk"


