The Hardest Problem in the World?

by Luke Muehlhauser on March 25, 2011 in Ethics, Quotes

Today’s quote is from Eliezer Yudkowsky:

Consider how many pitfalls people run into when they try to think about Artificial Intelligence. Next consider how many pitfalls people run into when they try to think about morality. Next consider how many pitfalls philosophers run into when they try to think about the nature of morality. Next consider how many pitfalls people run into when they try to think about hypothetical extremely powerful agents, especially extremely powerful agents that are supposed to be extremely good. Next consider how many pitfalls people run into when they try to imagine optimal worlds to live in or optimal rules to follow or optimal governments and so on.

Now imagine a subject matter which offers discussants a lovely opportunity to run into all of those pitfalls at the same time.

That’s what happens when you try to talk about Friendly Artificial Intelligence.

 



antiplastic March 25, 2011 at 12:26 pm

My favorite part of that article is where he says the best description of his metaethical view is “analytic descriptivism” or “moral functionalism”, which of course refers to the realist stance developed by Frank Jackson in his 1998 monograph with the following title:

From Metaphysics to Ethics: A Defense of Conceptual Analysis


cl March 25, 2011 at 2:05 pm

Luke and/or anyone who knows,

Since you know more about AI than I do, I was wondering how the people who promote a “total halt” of progress are perceived by the rest of that community? I would imagine that they’re chided and denigrated, written off as “alarmists,” and not taken seriously.


Luke Muehlhauser March 25, 2011 at 3:02 pm

cl,

Right, “alarmists” or “luddites.” Most popular discussion revolves around Bill Joy’s 2000 piece, “Why the Future Doesn’t Need Us”. That article and much of the ensuing discussion can be found online.


cl March 25, 2011 at 7:39 pm

Right, “alarmists” or “luddites.”

…interesting. I imagine you condone such denigration, in the same way you condone denigration of those who don’t buy the whole “settled issues” thing?

On another note, you make it to Berkeley yet? If so, it’s only a short MUNI ride over to the City. First round’s on me. I hope you don’t take this blogging crap so seriously that you can’t come say waddup sometime. I sure don’t :)


joan palkim March 25, 2011 at 11:05 pm

“Why the Future Doesn’t Need Us”. That article and much of the ensuing discussion can be found online. I saw that article online and it was an interesting discussion.


Garren March 26, 2011 at 12:42 pm

I’m still not the slightest bit worried about unfriendly AI, but I do consider it an important exercise to get specific about morality in ways that murky feeling-and-intuition discussions can perpetually avoid.


cl March 26, 2011 at 5:32 pm

For those interested, we’re discussing the question of whether Luke can justifiedly call himself an atheist if he believes in the reign of superintelligent machines. Some seem to say no. Personally, I’m not sure what I think about it yet.


Rufus March 26, 2011 at 5:38 pm

I thought the hardest problem ever would occur immediately following the invention of human-consciousness uploading…

Imagine a super-intelligent AI spending millions of “man” hours devising algorithms capable of compressing EY’s ego into something manageable enough that the rest of us schmucks will be able to fit onto the hard drive… Such an AI would truly have “god-like” powers.

I’m kidding, I’m kidding. I’m sure EY is working on the hardest problem in the world.

;-)


Luke Muehlhauser March 26, 2011 at 6:31 pm

cl,

LOL. Is this like “You’re not an atheist because you believe in a higher power!”?


MarkD March 26, 2011 at 6:52 pm

I tend to stop at the first one: there is no good recent thinking about how to achieve an AI with properties that would make the later arenas of pitfalls a concern. We can say, for instance, that we understand some of the physics of quantum nonlocality, but we are not overly concerned about the societal influence of quantum telepathy. We understand a bit about wormholes, but we aren’t overly concerned about stargates and transdimensional invasions. Even if we invoke the rapid invention of nuclear bombs following the early understanding of radiation, the critical parallel developments of the early 20th century that led to conceiving of The Bomb are not yet here for AI.


Ryan M March 27, 2011 at 12:27 am

cl,

LOL. Is this like “You’re not an atheist because you believe in a higher power!”?

Yeah. Are some people arbitrarily defining God as a being intellectually more capable than humans?


cl March 27, 2011 at 12:44 am

Luke,

Laugh if you want, but I prefer an explanation. Like I said, I’m not sure exactly what I think about this yet. Some of my commenters–who are also some of your commenters–have raised some interesting points and questions. If you’d rather laugh than address the question, well… it wouldn’t be the first time, and I certainly wouldn’t be surprised.

Ryan M,

It’s not just about God, but gods. Personally, I’d say that intellectual superiority is a prerequisite for any accurate definition of God or gods. At any rate, I welcome your input. Though I certainly agree with one or more points our commenters have made, I’ve not committed myself to the claim that there’s inconsistency here. Help us out. Or not. I just thought it was interesting and wanted to generate further discussion.


Ryan M March 27, 2011 at 6:03 am

Ryan M,

It’s not just about God, but gods. Personally, I’d say that intellectual superiority is a prerequisite for any accurate definition of God or gods. At any rate, I welcome your input. Though I certainly agree with one or more points our commenters have made, I’ve not committed myself to the claim that there’s inconsistency here. Help us out. Or not. I just thought it was interesting and wanted to generate further discussion.

I agree that having intellectual superiority over humans is a necessary feature of a god. However, I don’t think everything in the class ‘intellectual superiors to humans’ is a god. I think that a god must minimally have the following features: greater intelligence than humans, intentional agency, and non-contingency on the universe. So if we agree that these criteria must be met for a being to be called a god, then I think the AI proposed by Luke escapes the definition of god, since it would be a being contingent on the universe. I hope I’m not missing something, but if I am tell me.


cl March 27, 2011 at 1:57 pm

Ryan M,

I hope I’m not missing something, but if I am tell me.

Well, one thing you’re missing is that flippant, overconfident, “I’m a super-duper smarty pants” smug superiority that oozes from other members of Luke’s congregation.

I think that a god must minimally have the following features: greater intelligence than humans, intentional agency, and non-contingency on the universe.

Did you read the thread I linked to? I ask because this is pretty much what I came up with, too. I think the contingency criterion has issues that need to be further discussed. For example, while the God of classical Judaism and Christianity is typically argued to be not contingent on the universe, what about physical gods like the Olympians? Under the determinist rubric, isn’t all movement of matter non-contingent? That is to say, given this universe with all its prior conditions, isn’t it true that humans, and thus the AI they create, are non-contingent? With flagrant disregard for the wise advice of Dr. Marcel Brass, Luke argues that free will does not exist and that everything is simply proceeding by the laws of physics. If so, then doesn’t that apply to the ascent of AI? Doesn’t AI as proposed by Luke fulfill the definition of a god due to its being non-contingent?

Please be aware that I’m not trying to convert you to my own POV. Rather, I’m still trying to fully articulate it, and that’s why I appreciate your help.


Ryan M March 27, 2011 at 4:00 pm

Cl,

Interesting thoughts. I suppose I should just check out your blog to get a better grasp of the situation.


cl March 27, 2011 at 4:40 pm

I suppose I should just check out your blog to get a better grasp of the situation.

That’s what I was going for. I’m starting to get sick of all the haters here. I find it increasingly difficult to have a fruitful dialog in their midst. I know, I know… this is exactly what they say about me, but I challenge anyone to find me substituting name-calling for cogent argument.

