
"You wouldn't expect that a man of such great power and wickedness would be in the business of helping any person who requested it. But whether it makes any sense or not, that's the reputation the Dark Lord If you approach the Dark Lord for help, he'll give you an answer and your goal will be achieved. The price is that his answer might violate the rules of righteous conduct. To put it another way, he's like an ancient wisewoman who lives in a high mountain cave and speaks in riddles, except that he's a villainous lord." The country of Santal is perishing, and nobody knows why. His country's plight has driven Prince Nama over far roads to consult the famed Dark Lord for answers. On arriving there, he finds a mightily muscled sage in black armor, just as the stories say. And, chained to the Dark Lord's throne, a mysterious slave-woman with round ears. Round ears? There's nowhere in the world where people have round ears, is there? Content sexual abuse, economics.
Author

From Wikipedia: Eliezer Shlomo Yudkowsky is an American artificial intelligence researcher concerned with the singularity and an advocate of friendly artificial intelligence, living in Redwood City, California. Yudkowsky did not attend high school and is an autodidact with no formal education in artificial intelligence. He co-founded the nonprofit Singularity Institute for Artificial Intelligence (SIAI) in 2000 and continues to be employed there as a full-time Research Fellow. Yudkowsky's research focuses on Artificial Intelligence theory for self-understanding, self-modification, and recursive self-improvement (seed AI), and on artificial-intelligence architectures and decision theories for stably benevolent motivational structures (Friendly AI, and Coherent Extrapolated Volition in particular). Apart from his research work, Yudkowsky has written explanations of various philosophical topics in non-academic language, particularly on rationality, such as "An Intuitive Explanation of Bayes' Theorem". Yudkowsky was, along with Robin Hanson, one of the principal contributors to the blog Overcoming Bias, sponsored by the Future of Humanity Institute of Oxford University. In early 2009, he helped to found Less Wrong, a "community blog devoted to refining the art of human rationality". The Sequences on Less Wrong, comprising over two years of blog posts on epistemology, Artificial Intelligence, and metaethics, form the largest single body of Yudkowsky's writing.