
In The Engines of Cognition, the writers set out to understand key elements of the art of rationality. Starting with the simple epistemic question of when and how to trust different sources of information, the essays in this collection move through the lens of incentives, an exploration of when and why complex systems become modular, and finally into a discussion of failure, both personal and civilizational.

This book set is for people who want to read the most interesting ideas LessWrong has recently explored. It's for people who read best away from screens, away from distractions. It's for people who do not check the site regularly but would still like to get the ideas within. For such people, this is intended to be the ideal way to read LessWrong.

Essays in this book set take a variety of forms: thought experiments, literature reviews, book reviews, interviews, personal stories, microeconomic arguments, mathematical explanations, research advice, philosophical musings, published papers, disagreements-with-Robin-Hanson, forecasts for the future, survey data, cartoons, and more. Authors featured include Eliezer Yudkowsky, Scott Alexander, Zvi Mowshowitz, and over 30 more LessWrong writers.

The essays were originally published on LessWrong in 2019, and for the first time are available in edited, illustrated print form. In addition, given recent advances in machine learning art, each essay is accompanied by a unique piece of ML-generated art based on its title, some of which are shown below.
Authors
Scott Alexander is a pen name used by the blogger behind Slate Star Codex, Astral Codex Ten, and other sites.

From Wikipedia: Eliezer Shlomo Yudkowsky is an American artificial intelligence researcher concerned with the singularity and an advocate of friendly artificial intelligence, living in Redwood City, California. Yudkowsky did not attend high school and is an autodidact with no formal education in artificial intelligence. He co-founded the nonprofit Singularity Institute for Artificial Intelligence (SIAI) in 2000 and continues to be employed there as a full-time Research Fellow.

Yudkowsky's research focuses on Artificial Intelligence theory for self-understanding, self-modification, and recursive self-improvement (seed AI), and on artificial-intelligence architectures and decision theories for stably benevolent motivational structures (Friendly AI, and Coherent Extrapolated Volition in particular). Apart from his research work, Yudkowsky has written explanations of various philosophical topics in non-academic language, particularly on rationality, such as "An Intuitive Explanation of Bayes' Theorem".

Yudkowsky was, along with Robin Hanson, one of the principal contributors to the blog Overcoming Bias, sponsored by the Future of Humanity Institute of Oxford University. In early 2009, he helped found Less Wrong, a "community blog devoted to refining the art of human rationality". The Sequences on Less Wrong, comprising over two years of blog posts on epistemology, Artificial Intelligence, and metaethics, form the largest single body of Yudkowsky's writing.