Timeless Decision Theory
First Published: 2010
Average Rating: 3.65
Number of Pages: 117

Disputes between evidential decision theory and causal decision theory have continued for decades, and many theorists state dissatisfaction with both alternatives. Timeless decision theory (TDT) is an extension of causal decision networks that compactly represents uncertainty about correlated computational processes and represents the decision-maker as such a process. This simple extension enables TDT to return the one-box answer for Newcomb's Problem, the causal answer in Solomon's Problem, and mutual cooperation in the one-shot Prisoner's Dilemma, for reasons similar to human intuition. Furthermore, an evidential or causal decision-maker will choose to imitate a timeless decision-maker on a large class of problems if given the option to do so.

PDF: http://intelligence.org/files/TDT.pdf
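The Newcomb's Problem mentioned in the abstract can be illustrated numerically. The sketch below is not from the paper; it uses the standard payoffs from the literature ($1,000,000 in the opaque box iff the predictor predicted one-boxing, $1,000 always in the transparent box) and computes the evidential expected value of each choice as a function of predictor accuracy. One-boxing comes out ahead once accuracy exceeds about 50.05%; TDT reaches the same one-box answer through its own formalism (treating the agent's decision and the predictor's model of it as outputs of the same computation), which this toy calculation does not capture.

```python
# Toy Newcomb's Problem calculation (illustration only, not the paper's formalism).
# Standard payoffs: opaque box holds $1,000,000 iff the predictor predicted
# one-boxing; the transparent box always holds $1,000.

def expected_value(one_box: bool, accuracy: float) -> float:
    """Evidential expected payoff of a choice, conditioning on the
    predictor being correct with probability `accuracy`."""
    big, small = 1_000_000, 1_000
    if one_box:
        # Predictor correct -> opaque box filled; wrong -> empty.
        return accuracy * big
    # Predictor correct -> opaque box empty; wrong -> filled; plus the small box.
    return (1 - accuracy) * big + small

for p in (0.5, 0.9, 0.99):
    print(f"accuracy={p}: one-box={expected_value(True, p):,.0f}, "
          f"two-box={expected_value(False, p):,.0f}")
```

Solving `p * 1,000,000 > (1 - p) * 1,000,000 + 1,000` gives the crossover at p = 0.5005, so even a barely-better-than-chance predictor makes one-boxing the higher-expected-value choice on this evidential calculation.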

Avg Rating: 3.65 (20 ratings, via Goodreads)
5 stars: 25%
4 stars: 30%
3 stars: 35%
2 stars: 5%
1 star: 5%

Author

Eliezer Yudkowsky
Author · 20 books

From Wikipedia: Eliezer Shlomo Yudkowsky is an American artificial intelligence researcher concerned with the singularity and an advocate of friendly artificial intelligence, living in Redwood City, California. Yudkowsky did not attend high school and is an autodidact with no formal education in artificial intelligence. He co-founded the nonprofit Singularity Institute for Artificial Intelligence (SIAI) in 2000 and continues to be employed there as a full-time Research Fellow. Yudkowsky's research focuses on artificial intelligence theory for self-understanding, self-modification, and recursive self-improvement (seed AI), and on artificial-intelligence architectures and decision theories for stably benevolent motivational structures (Friendly AI, and Coherent Extrapolated Volition in particular). Apart from his research work, Yudkowsky has written explanations of various philosophical topics in non-academic language, particularly on rationality, such as "An Intuitive Explanation of Bayes' Theorem". Yudkowsky was, along with Robin Hanson, one of the principal contributors to the blog Overcoming Bias, sponsored by the Future of Humanity Institute of Oxford University. In early 2009, he helped to found Less Wrong, a "community blog devoted to refining the art of human rationality". The Sequences on Less Wrong, comprising over two years of blog posts on epistemology, artificial intelligence, and metaethics, form the single largest body of Yudkowsky's writing.

© 2025 Paratext Inc. All rights reserved