If Anyone Builds It, Everyone Dies
Why Superhuman AI Would Kill Us All
First published: 2025
Average rating: 4.50
Pages: 272

"May prove to be the most important book of our time.”—Tim Urban, Wait But Why The scramble to create superhuman AI has put us on the path to extinction—but it’s not too late to change course, as two of the field’s earliest researchers explain in this clarion call for humanity. In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next. For decades, two signatories of that letter—Eliezer Yudkowsky and Nate Soares—have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us—and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn’t even be close. How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive. The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies. “The best no-nonsense, simple explanation of the AI risk problem I've ever read.”—Yishan Wong, Former CEO of Reddit

Average rating: 4.50 · 2 ratings (via Goodreads)
5 stars: 50%
4 stars: 50%
3 stars: 0%
2 stars: 0%
1 star: 0%

Authors

Eliezer Yudkowsky
Author · 20 books

From Wikipedia: Eliezer Shlomo Yudkowsky is an American artificial intelligence researcher concerned with the singularity and an advocate of friendly artificial intelligence, living in Redwood City, California. Yudkowsky did not attend high school and is an autodidact with no formal education in artificial intelligence. He co-founded the nonprofit Singularity Institute for Artificial Intelligence (SIAI, since renamed the Machine Intelligence Research Institute) in 2000 and works there as a full-time Research Fellow. Yudkowsky's research focuses on artificial intelligence theory for self-understanding, self-modification, and recursive self-improvement (seed AI), and on AI architectures and decision theories for stably benevolent motivational structures (Friendly AI, and Coherent Extrapolated Volition in particular).

Apart from his research work, Yudkowsky has written explanations of various philosophical topics in non-academic language, particularly on rationality, such as "An Intuitive Explanation of Bayes' Theorem". Yudkowsky was, along with Robin Hanson, one of the principal contributors to the blog Overcoming Bias, sponsored by the Future of Humanity Institute of Oxford University. In early 2009 he helped to found Less Wrong, a "community blog devoted to refining the art of human rationality". The Sequences on Less Wrong, comprising over two years of blog posts on epistemology, artificial intelligence, and metaethics, form the largest single body of Yudkowsky's writing.
