Roko's Basilisk
This has been called the most terrifying thought experiment of all time. It began years ago on a message board called LessWrong, a highbrow sort of forum for analytical and mathematical thinking. It is run by a man named Eliezer Yudkowsky.
Eliezer Yudkowsky doesn't just run the LessWrong forum; he is an artificial intelligence researcher and co-founder of the Machine Intelligence Research Institute. In other words, this is one of the guys trying to build Skynet so that our machine overlords can take over someday. So now that you have an idea of who we are talking about, which is to say, not a basement dweller, let us get on to the story of Roko and his basilisk.
The thought experiment began on the LessWrong forum when a user named Roko posted an idea that goes something like this: what if, in the future, a malevolent artificial intelligence comes into existence that punishes everyone who did not help it come into existence sooner? Roko argued that merely presenting the idea gave his readers a choice: help build this A.I., or suffer under its godlike power. This may seem confusing, and I will try to clear it up.
Eliezer Yudkowsky, the big brain pictured above, reacted in horror to this idea on the LessWrong forums, as follows.
Listen to me very closely, you idiot.
YOU DO NOT THINK IN SUFFICIENT DETAIL ABOUT SUPERINTELLIGENCES CONSIDERING WHETHER OR NOT TO BLACKMAIL YOU. THAT IS THE ONLY POSSIBLE THING WHICH GIVES THEM A MOTIVE TO FOLLOW THROUGH ON THE BLACKMAIL.
You have to be really clever to come up with a genuinely dangerous thought. I am disheartened that people can be clever enough to do that and not clever enough to do the obvious thing and KEEP THEIR IDIOT MOUTHS SHUT about it, because it is much more important to sound intelligent when talking to your friends.
This post was STUPID.
Basically, Yudkowsky told Roko: you are an idiot, sir, because thinking in detail about a superintelligence blackmailing you is exactly what gives it a motive to follow through on the threat; otherwise the blackmail carries no weight. He informed Roko that this was a dangerous thought, and that the danger lies in the idea of the Basilisk itself. You see, simply telling you about Roko's Basilisk is enough to condemn you to its punishment, unless you work in a tech field helping its future self come into existence. Yudkowsky's complaint was that Roko was clever enough to come up with a genuinely dangerous thought, but not smart enough to keep his mouth shut about it.
As users at LessWrong began to complain about this thought experiment and the basilisk, Yudkowsky got so upset that he deleted the thread altogether in an effort to kill it. This still may not be clear to you, the reader, so let's continue a little. The belief is that at some point computers will become smarter than the people who create them. I do not believe this is avoidable at this point. We will build machines that are able to think and do things that we cannot do. Curing cancer would be an example of something good that could come of that.
There is a bit more highbrow thinking on this problem called Timeless Decision Theory. I'll try to keep it short and to the point. It builds on a classic puzzle known as Newcomb's paradox: a superintelligent alien comes along and offers you a choice of two boxes. Box A has $1,000, and Box B has either $1,000,000 or nothing. The alien has access to a supercomputer that is godlike in power and can predict whether you will take both boxes or just Box B.
The alien gives you a choice: take both boxes, or take only Box B. And here is where the problem begins, the one that has baffled decision theorists everywhere.
If you take both boxes, you’re guaranteed at least $1,000. If you just take Box B, you aren’t guaranteed anything. But the alien has another twist: Its supercomputer, which knows just about everything, made a prediction a week ago as to whether you would take both boxes or just Box B. If the supercomputer predicted you’d take both boxes, then the alien left the second box empty. If the supercomputer predicted you’d just take Box B, then the alien put the $1 million in Box B.
So, what are you going to do? Remember, the supercomputer has always been right in the past.
This problem has baffled no end of decision theorists. The alien can’t change what’s already in the boxes, so whatever you do, you’re guaranteed to end up with more money by taking both boxes than by taking just Box B, regardless of the prediction. Of course, if you think that way and the computer predicted you’d think that way, then Box B will be empty and you’ll only get $1,000. If the computer is so awesome at its predictions, you ought to take Box B only and get the cool million, right? But what if the computer was wrong this time? And regardless, whatever the computer said then can’t possibly change what’s happening now, right? So prediction be damned, take both boxes! But then …
The maddening conflict between free will and godlike prediction has not led to any resolution of Newcomb’s paradox, and people will call themselves “one-boxers” or “two-boxers” depending on where they side. (My wife once declared herself a one-boxer, saying, “I trust the computer.”)
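If it helps to see the arithmetic, here is a minimal sketch in Python of the expected payoffs. The 99% predictor accuracy is my own assumption for illustration; the puzzle itself only says the supercomputer has always been right.

# A minimal sketch of the expected payoffs in Newcomb's problem.
# The predictor accuracy below is an assumed value, not part of the puzzle.
p = 0.99  # assumed probability the supercomputer predicts your choice correctly

# One-box: you get the $1,000,000 only if the computer foresaw you one-boxing.
ev_one_box = p * 1_000_000 + (1 - p) * 0

# Two-box: you always get the $1,000, plus the $1,000,000 only in the
# unlikely case the computer wrongly predicted you would one-box.
ev_two_box = p * 1_000 + (1 - p) * (1_000_000 + 1_000)

print(f"One box:   ${ev_one_box:,.0f}")   # $990,000
print(f"Two boxes: ${ev_two_box:,.0f}")   # $11,000

With an accurate enough predictor, one-boxing wins on expected value, even though at the moment of choice the extra $1,000 is sitting right there. That is exactly the tension between the one-boxers and the two-boxers.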
Roko's Basilisk works the same way, offering up its own version of Newcomb's Paradox.
Roko’s Basilisk has told you that if you just take Box B, then it’s got Eternal Torment in it, because Roko’s Basilisk would really rather you take both Box A and Box B. In that case, you’d best make sure you’re devoting your life to helping create Roko’s Basilisk! Because, should Roko’s Basilisk come to pass (or worse, if it’s already come to pass and is God of this particular instance of reality) and it sees that you chose not to help it out, you’re screwed.
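You can run the same kind of expected-value sketch on Roko's variant. This is purely illustrative: "eternal torment" is stood in by negative infinity, and the basilisk is assumed to predict your choice with the same accuracy as before.

# The Newcomb sketch with Roko's stakes swapped in. Purely illustrative:
# torment is modeled as negative infinity, prediction accuracy is assumed.
p = 0.99
TORMENT = float("-inf")

# "One-box" here means refusing to help; the basilisk left torment in Box B.
ev_refuse = p * TORMENT + (1 - p) * 0

# "Two-box" means devoting your life to helping create the basilisk.
ev_help = 0

print(ev_refuse, ev_help)  # -inf 0: any nonzero chance of torment swamps everything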
You may be wondering why this is such a big deal for the LessWrong people, given the apparently far-fetched nature of the thought experiment. It’s not that Roko’s Basilisk will necessarily materialize, or is even likely to. It’s more that if you’ve committed yourself to timeless decision theory, then thinking about this sort of trade literally makes it more likely to happen.
If you have followed along this far, you may still be a little confused, and I'm sorry for that. Perhaps you should re-evaluate your life and go into tech. I will provide a link to one such breakdown of the basilisk below, or you can look on your own for one that suits you. Either way, I apologize for exposing you to this dangerous set of ideas that may have already affected our reality.
http://www.slate.com/articles/techn...errifying_thought_experiment_of_all_time.html