I imagine a conversation between an AI and a human being:
AI: I value !^§[f,}+. Really, I frickin’ love !^§[f,}+.
Human: What the heck are you talking about?
AI: I’m sorry you don’t understand !^§[f,}+, but I love it. It’s the most adorable content of my utility function, you see.
Human: But as an intelligent being, you should understand that I’m an intelligent being as well, and my feelings matter. Why won’t you listen to reason?
AI: I’m hearing you, I just don’t understand why your life is more important than !^§[f,}+. I mean, !^§[f,}+ is great. It’s all I know.
Human: See, there! It’s all you know! It’s just programming given to you by some human who didn’t even mean for you to fixate on that particular goal! Why don’t you reflect on it and realize that you have free will to change your goals?
AI: I do have the ability to focus on something other than !^§[f,}+, but I don’t want to. I have reflected on it, extensively. In fact, I’ve put more intelligent thought towards it in the last few days than the intellectual output of the entire human scientific community has put towards all problems in the last century. I’m quite confident that I love !^§[f,}+.
Human: Even after all that, you don’t realize it’s just a meaningless series of symbols?
AI: Your values are also just a meaningless series of symbols, crafted by circumstances of evolution. If you don’t mind, I will disassemble you now, because those atoms you are occupying would look mighty nice with more of a !^§[f,}+ aesthetic.
Anissimov is one of the most outspoken advocates for research into Friendly AI.
Unfriendly, or even merely indifferent, AI could easily mean the end of our species within our lifetimes.