Will humanity survive this century? This astronomer predicts 'a bumpy ride' ahead

Martin Rees
Martin Rees is Britain's Astronomer Royal and director of the Institute of Astronomy at the University of Cambridge. Copyright Shivani Khattar
By Denise Chow with NBC News Tech and Science News

"There is the idea that we should despair and evacuate this planet and go somewhere else. That's a dangerous delusion."


Humanity is under threat. At least according to Sir Martin Rees, one of Britain's most esteemed astronomers.

In his new book, "On the Future," Rees turns his focus closer to home, examining the existential threats that face humanity over the next century. From cyber attacks to advances in biotechnology to artificial intelligence to climate change, Britain's Astronomer Royal says we are living at a critical juncture — one that could define how the human species fares.

To learn more about which technologies worry him most, his prospects for humanity's survival and why he thinks we'll eventually enter a period of "post-human evolution," MACH recently sat down with Rees in New York City. This interview has been edited for clarity and brevity.

MACH: Why did you write this book now?

Rees: Over the last few years, I've had an opportunity to interact with technologists and science policy people. I was concerned that they weren't worrying enough about some potential threats and some potential opportunities. I thought it was good to try and tie together my various thoughts, which I'd expressed in lectures and articles, in a book which I hope is fairly small and readable.

In your 2003 book, "Our Final Hour," you gave humanity about a 50-50 chance of surviving the 21st century. Where do you think we stand today?

Well, we've survived 18 years so far, but I do think we will have a bumpy ride through the century. I think it's unlikely we'll wipe ourselves out, but I do feel that there are all kinds of threats which we are in denial about and aren't doing enough about. I'm thinking about climate change and the associated loss of biodiversity. We are not dealing with those urgently. We also need to contend with the fact that the world population is getting larger. It will be at least 9 billion by mid-century. That is going to be a big stress on resources as well as on food production.

Apart from those two predictable trends — a warming world and a more crowded world — there are also other concerns. We have new technologies, which are wonderful and powerful, but which involve risks as well. I'm thinking of biotechnology, cyber technology and artificial intelligence. Biotechnology is wonderful. It's allowed us to grow more food. It's improved health. It's allowed us to eliminate some diseases, but it also has a downside. It allows us to modify viruses like the influenza virus to make it more virulent and more transmissible. It allows us to change human beings and animals in ways that might be ethically unacceptable. All these new technologies are developing so fast that we are not sure that we can really cope with them well.


It's important to get a wider audience to take these seriously. Although these are scientific questions, the way science is applied is really a matter for all the public. Scientists shouldn't be the only people to decide how science is used.

What technologies worry you most right now?

I think at the moment the main worries are cyber technology and biotechnology. The problem of both cyber threats and misuse of biotechnology is that this can be done by just a few people. It doesn't need huge specialist facilities, like making an atomic bomb.

There was a report by the United States Defense Department in 2012 which said that a cyber attack on the electricity grid on the east coast of the U.S. was so serious that it might, I quote, "demand a nuclear response." That's the level of threat that we are under from cyber attacks.

These two kinds of technologies enable just a few people to have a hugely wide-ranging and maybe even global cascading effect. This leads to big problems of governance because you'd like to regulate the use of these things, but enforcing regulations worldwide is very, very difficult. Think how hopeless it is to enforce the drug laws globally or the tax laws globally. To actually ensure that no one misuses these new technologies is just as difficult. I worry that we are going to have to minimize this risk by actions which lead to a great tension between privacy, liberty and security.

Do you see ways that we can use and develop these technologies in a responsible way?

We've got to try. We can't put the genie back in the bottle. We've just got to make sure that we can derive benefits and minimize risks. When I say we have a bumpy ride, I think it is hard to imagine that there won't be occasions when there are quite serious disruptions caused by either error or by design using these new powerful technologies.

You identified climate change as one of your big worries. What do you make of the idea that we should colonize another planet as an insurance policy against global warming?

There is the idea that we should despair and evacuate this planet and go somewhere else. That's a dangerous delusion. I know it's been promoted by Elon Musk and also by my late colleague Stephen Hawking, but I think there's no Planet B. The world's problems can't be solved by escaping from the world. They've got to be tackled here.


Although a few pioneers are going to live on Mars [in the future], I think we are going to have to ensure that the bulk of humanity is able to live safely and comfortably here on this planet. It's a dangerous delusion to think otherwise, because terraforming Mars is much, much harder than ensuring we have a sustainable situation here and avoid massive climate change.

But you think we'll eventually have humans living on Mars?

I think there is a likelihood that by the end of the century there will be a community of people living on Mars. I think they will be people who are thrill-seeking adventurers rather than normal people. I think they will go there, not through a NASA program, but through one of these private space endeavors, like Elon Musk's SpaceX or Jeff Bezos's Blue Origin. I don't think they'll be followed by large numbers.

These people on Mars — I think they will be important for the far future of the 22nd century and beyond, because they will be in an environment to which they're ill adapted. They will have every incentive to use bio-modification and maybe cyborg techniques — linking to electronic machines — to adapt to their alien environment. They will quite quickly become like a new species.

You talk about this prospect of "post-human evolution" in your book. What does that mean?


It's a fundamentally new development in that it's a kind of evolution [that] is not the Darwinian natural selection, which over three-and-a-half billion years has led simple life to us human beings. It will be a version of secular intelligent design [where] we design entities which may have greater capacity than humans. This could happen in a few centuries rather than the few thousand centuries which Darwinian selection requires in order to produce a new species. The key question is to what extent it will be flesh-and-blood, organic intelligence and to what extent it will be electronic.

If that happens, would we still be considered human?

This, of course, raises all kinds of philosophical questions. People imagine that we can download human brains one day into electronic machines. The question is: is that really still you? If you were told your brain had been downloaded, would you be happy to be destroyed? What would happen if, for instance, many copies were made of you? Which would be your personal identity?

There's also the question of consciousness. We know that what's special about us is not only that we can do all kinds of things that demand intelligence, but that we are self-aware. We have feelings and emotions. It's a big uncertainty whether an electronic intelligence, which manifests the same capabilities as us, will necessarily have self-awareness. It could be that self-awareness emerges in any entity that's especially complex and is plugged into the external world. It could be [that] it's something which is peculiar to hardware made of flesh and blood, like we are, and would not be replicated by electronic machines.

What's your overall message for humanity?


This century is crucial because if you're very pessimistic, you can imagine that we will misuse powerful technology and snuff ourselves out or foreclose a bright, longer-term future. On the other hand, if we use technology wisely, then it allows us to perhaps jump-start an even more exciting kind of civilization here on Earth and far beyond. That's why even though the Earth has existed for 45 million centuries, and will go on existing for many million more, this century is special: it's the one which is seeing the transition from natural evolution to maybe artificial evolution — whether biological or cyborg — and also the era when, for the first time, we can escape from this planet and perhaps start exploring others.
