The greatest existential risks over the coming decades or century arise from certain anticipated technological breakthroughs that we might make, in particular machine superintelligence, nanotechnology, and synthetic biology. Each of these has enormous potential for improving the human condition by helping to cure disease, poverty, etc. But one could also imagine them being misused to create powerful weapon systems, or leading to some kind of accidental destructive scenario in which we suddenly find ourselves in possession of a technology far more powerful than we are able to control or use wisely.
Nick Bostrom

We should not be confident in our ability to keep a super-intelligent genie locked up in its bottle forever.

Nick Bostrom

The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

Nick Bostrom

It’s unlikely that any of those natural hazards will do us in within the next 100 years if we’ve already survived 100,000. By contrast, we are introducing, through human activity, entirely new types of dangers by developing powerful new technologies. We have no record of surviving those.

Nick Bostrom

Our approach to existential risks cannot be one of trial-and-error. There is no opportunity to learn from errors. The reactive approach - see what happens, limit damages, and learn from experience - is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate new types of threats and a willingness to take decisive preventive action and to bear the costs (moral and economic) of such actions.
Nick Bostrom