News Bits

by Luke Muehlhauser on November 20, 2011 in News

New Less Wrong post: Existential Risk.

Draft of a book chapter I wrote with Louie Helm: The Intelligence Explosion and Machine Ethics.

I’ve begun posting snippets from a forthcoming Friendly AI FAQ for feedback. I’m doing the same thing with an overview analysis of intelligence explosion.




Reginald Selkirk November 20, 2011 at 7:48 am

Flicking your eyes back and forth can improve your memory, but only if you’re strongly right-handed.

This sounds like the kind of BS that gives science a bad name.


g November 20, 2011 at 11:15 am

Reginald — And therefore it’s especially interesting if it actually turns out to be true.


Reginald Selkirk November 20, 2011 at 11:57 am

The linked article is from 2009. Has the result been replicated yet?


MarkD November 21, 2011 at 1:44 am

A few notes on reading The Intelligence Explosion and Machine Ethics.

(1) It’s not something I would typically read, because it lacks relevance to the mainline problems in my purview; therefore I am an ideally neutral reviewer ;-) In general, I found it quite readable and skimmable, with no obvious technical problems.

(2) The tank example is a lesson about overfitting. I think you grasp that, but the solutions to overfitting are well understood: (a) bottlenecking in the learning process and/or (b) expanding the exemplars in the training epochs. The example can remain, but I would expand the discussion to mention that the unintended consequences can be obviated by larger training data pools.
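MarkD’s point (b) — that a larger training pool obviates the tank-style failure — can be sketched with a toy curve-fitting example. This is a hypothetical illustration, not anything from the chapter: a high-capacity model trained on a handful of exemplars memorizes them (and misses whole regions of the input space, like the sunny-versus-cloudy tank photos), while the same model trained on many exemplars generalizes.

```python
import warnings
import numpy as np

def heldout_mse(n_train, degree=9, seed=0):
    """Fit a high-capacity polynomial to noisy samples of sin(2*pi*x)
    and measure mean squared error on a held-out grid over [0, 1]."""
    rng = np.random.RandomState(seed)
    x_train = rng.uniform(0, 1, n_train)
    y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, n_train)
    x_test = np.linspace(0, 1, 200)
    y_test = np.sin(2 * np.pi * x_test)
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")  # polyfit may warn about conditioning
        coeffs = np.polyfit(x_train, y_train, degree)
    return np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

mse_small = heldout_mse(n_train=10)   # few exemplars: memorizes noise, misses regions
mse_large = heldout_mse(n_train=500)  # many exemplars: same model generalizes well
print(mse_small, mse_large)
```

The degree and sample sizes are arbitrary; the point is only that expanding the exemplar pool, with the model held fixed, shrinks the held-out error.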

(3) I don’t understand “reflective equilibrium” (p. 14); without further elaboration, the term should be dropped. Or supply that elaboration.

(4) There is a mention of the problem of self-modification (or even human modification) as a short-circuit method for making the ethical calculus work out according to whatever decision tree and optimization plan might be at work in the AI. That course of action always looms in the background. Indeed, we can do the same kind of thing by ablating parts of our brain right now. Some might even do this (a lobotomy, for instance), but I think it is the threat of cognitive decoherence and pain that prevents us from doing it en masse. Why wouldn’t a similar problem exist for these superoptimizers? We probably can’t bank on it, but since we currently haven’t a clue how to produce AIs with human-like behavioral plasticity, the most likely future AI that achieves at least human intelligence must have human-like developmental evolution. Does that also mean a problematic path towards self-modification that also preserves self-identity?

(5) “AI makes philosophy honest” is at the sharp tip of the spear regarding turning ideas into logical procedures. I suppose it can be applied to the domain of moral decision making, but we can easily see that a Turing-equivalent machine can process any logical procedure, so once we specify a consequentialist calculus and a data-gathering framework, we know it can be computed. So, as is noted towards the end, it is the specification of the logical procedure that is lacking, and I think it is safe to circle back to arguing about different ethical systems rather than worrying about AI ones that don’t yet exist.
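The claim that a fully specified consequentialist calculus is mechanically computable can be made concrete with a minimal sketch. The actions, probabilities, and utility numbers below are invented placeholders, not a real ethics; the point is only that once they are written down, choosing an action is ordinary computation — the specification is the hard part.

```python
# A toy expected-utility "consequentialist calculus": given outcome
# probabilities and utilities per action, action selection is computable.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

def choose(actions):
    """Pick the action whose outcome distribution maximizes expected utility."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

# Hypothetical decision problem; the numbers are placeholders.
actions = {
    "act":  [(0.6, 10.0), (0.4, -5.0)],   # expected utility 4.0
    "wait": [(1.0, 1.0)],                 # expected utility 1.0
}
print(choose(actions))  # -> act
```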


WitheredWillardWomper November 21, 2011 at 8:32 am

The writing style is amateurish. “Let us consider the implications of programming a machine superoptimizer to implement particular moral theories.”

The conversational style used here destroys what little relation this paper has with academic papers. You don’t need these “Let us consider” statements. You back into your points, and your bibliography is simply amateurish: rather than grabbing just the precisely relevant works, you belabor it with all manner of questionable manuscripts, such as EY’s long list of unpublished tomfoolery.

You should never cite unpublished works; moreover, talking of an unpublished idea as a seriously considered theory, as you do with CEV, shows just how out of touch with reality you are.


Reginald Selkirk November 21, 2011 at 12:20 pm

This is not common sense atheism:

Meet the Raelians

Luckily, says Claude, the Raelians are working on a solution to both problems. Their scientists are close to perfecting clones with accelerated growth. That way, they can produce a fully adult human clone, with no mind of his or her own.

The Raelian scientists are also close to being able to transfer someone’s mind into a new body, making true immortality possible.

The main subject of today’s Raelian meeting is the political philosophy known as Paradism, which is described as being like Communism — except that the Proletariat are replaced with robots and nanobots, who do all the work for humanity.


a/k/a Casey Barnes November 21, 2011 at 1:27 pm

Discussions of “GOD” and “IF”, “WHO”, “WHAT”, or “NOT” may provoke some people.
Whether or not the definition “Anything that is worshiped can be termed a god, inasmuch as the worshiper attributes to it might greater than his own and venerates it” is acceptable to anyone…
Whether or not this is all feckless frop, it is not for me to pontificate.
However, you appear to worship your own mind and line of thinking. For you to acclaim this personal godship, it leaves the question open: what are you, or what do you have, to appeal to common sense that you should be esteemed with that entitlement?
Oh, yes, I am just a fool. And yes, I can utilize sophomoric, elitist rhetoric and that gives me nothing of value to confront your God-Profile.
Now, I’m going to have another drink and silently laugh myself to sleep.

“Answer someone stupid according to his foolishness, that he may not become someone wise in his own eyes.”


Luke Muehlhauser November 21, 2011 at 7:23 pm

WitheredWillardWomper,

The chapter is written for a book in Springer’s Frontiers Collection, so its tone is matched to that of the books in that series nearest in content (e.g. the ones on SETI and BCIs). If anything, our chapter is less chatty.

As for the bibliography, I spoke with the editors and they were fine with us citing unpublished manuscripts and even blog posts. Also: Journal articles regularly cite unpublished manuscripts in every field of science and philosophy with which I am familiar.

Simply put, you are uninformed.


a/k/a Casey Barnes November 22, 2011 at 8:04 am

WitheredWilliamWomper,

It seems you sent your comment to the wrong person… LOL
The following is my worthless remark: [It does not seem to be the same thread as yours.]
{Discussions of “GOD” and “IF”, “WHO”, “WHAT”, or “NOT” may provoke some people.
Whether or not the definition “Anything that is worshiped can be termed a god, inasmuch as the worshiper attributes to it might greater than his own and venerates it” is acceptable to anyone…
Whether or not this is all feckless frop, it is not for me to pontificate.
However, you appear to worship your own mind and line of thinking. For you to acclaim this personal godship, it leaves the question open: what are you, or what do you have, to appeal to common sense that you should be esteemed with that entitlement?
Oh, yes, I am just a fool. And yes, I can utilize sophomoric, elitist rhetoric and that gives me nothing of value to confront your God-Profile.
Now, I’m going to have another drink and silently laugh myself to sleep.

“Answer someone stupid according to his foolishness, that he may not become someone wise in his own eyes.”}

Please excuse me for not prostrating myself to you… I don’t feel religious today.


a/k/a Casey Barnes November 22, 2011 at 11:14 am

WitheredWilliamWomper,

Please stop the public masturbation of your ego in my pitiful presence!


Adito November 22, 2011 at 4:29 pm

Why should we make an AI rather than modify ourselves so that we slowly become superior preference optimizers? This would be less efficient because of our hardware, but it seems to completely avoid all the potential problems with creating a moral machine. It also forgoes some of the benefits, such as immediately having a perfect world formed for us, but that’s a price that may be worth paying.

Given a coin toss with maximum positive utility on one side (an AI made paradise) and maximum or near maximum negative utility on the other (an AI made hell) the best choice is to not flip the coin.
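Adito’s coin-toss argument can be put in numbers. On the stated assumption that the negative side is at “maximum or near maximum” negative utility — i.e. an AI-made hell is at least marginally worse than an AI-made paradise is good, or equivalently the agent is at all risk-averse — the expected utility of flipping falls below the status quo. The utility values below are hypothetical placeholders:

```python
# Hypothetical utilities, with the status quo as the baseline at 0.
# The downside is assumed marginally larger in magnitude than the upside.
p_paradise = 0.5
u_paradise = 1.0
u_hell = -1.05        # assumed slightly worse than paradise is good
u_status_quo = 0.0

ev_flip = p_paradise * u_paradise + (1 - p_paradise) * u_hell
print(ev_flip)  # negative: flipping the coin is worse than not flipping
```

With exactly symmetric stakes the flip would merely equal the status quo in expectation; any asymmetry toward the downside, or any risk aversion, makes not flipping strictly better.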


a/k/a Casey Barnes November 22, 2011 at 4:46 pm

All you have to offer is feckless frop and the flip of a coin?
You’re selling out… not me… I got it all! [Keep the change!]
a/k/a Casey Barnes November 22, 2011 at 12:11 pm
Each person has a minimum capacity to think, observe, test out and form a conclusion.
That is yours, with whatever results it brings.
Judging you is not an entitlement of any human. It is not possible to know all the dynamics that make you “you”.
There is one thing on which I feel on solid ground: religion, from Babylon to today, is a ruthless scheme. There is much more to this conclusion; it would be burdensome to say more to you.
However, I know where I have been, and what I have experienced, and why, and what is happening now, and where I am going and how to succeed in getting there. I am not fooled by extravagant thinking or any person or group or philosophy or agenda. I do not permit so-called “faith” to touch me. All things needed are completely and reliably clear to me!
I have no delusions or personal agenda. As I read a bit of your discourse, I felt a quiet calmness. I already have what you say you have been looking for! I would be a fool to try to fathom out what is impeding you or to offer you anything.
You have your next breath and your mind and your heart. Like it is said at the beginning, how you use them is your entitlement, along with the results.
All that you have studied, pondered and reasoned on has nothing to do with any of my conclusions. Yet, uncorrupted confidence is mine. Little you say makes me think it would be appropriate to say anything more to you. If you think you have a solid reason, then ask. My response depends on you. Only direct contact by email will be available to you. I will not revisit this site.
~a/k/a Casey Barnes KC_Barnes@MSN.com


a/k/a Casey Barnes November 22, 2011 at 4:48 pm

Adito November 22, 2011 at 4:29 pm
You present as the same feckless frop as all the others.
I have nothing in common with you.
Bye
a/k/a Casey Barnes November 22, 2011 at 12:11 pm
Each person has a minimum capacity to think, observe, test out and form a conclusion.
That is yours, with whatever results it brings.
Judging you is not an entitlement of any human. It is not possible to know all the dynamics that make you “you”.
There is one thing on which I feel on solid ground: religion, from Babylon to today, is a ruthless scheme. There is much more to this conclusion; it would be burdensome to say more to you.
However, I know where I have been, and what I have experienced, and why, and what is happening now, and where I am going and how to succeed in getting there. I am not fooled by extravagant thinking or any person or group or philosophy or agenda. I do not permit so-called “faith” to touch me. All things needed are completely and reliably clear to me!
I have no delusions or personal agenda. As I read a bit of your discourse, I felt a quiet calmness. I already have what you say you have been looking for! I would be a fool to try to fathom out what is impeding you or to offer you anything.
You have your next breath and your mind and your heart. Like it is said at the beginning, how you use them is your entitlement, along with the results.
All that you have studied, pondered and reasoned on has nothing to do with any of my conclusions. Yet, uncorrupted confidence is mine. Little you say makes me think it would be appropriate to say anything more to you. If you think you have a solid reason, then ask. My response depends on you. Only direct contact by email will be available to you. I will not revisit this site.
~a/k/a Casey Barnes KC_Barnes@MSN.com

P.S.: I have way more than some cheap coin-flip.


Beelzebub November 24, 2011 at 3:36 am

I have heard of Foldit before, but this spurred me to download and try it. Very fascinating. One of my pet ideas is a software project that would allow crowd-sourcing of general problem solving. I’m wondering how feasible it would be to enlist masses of online people to actually do, for instance, web-based information gathering and research, deductive or inductive argument, or perhaps even democratic voting to solve difficult problems.
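The democratic-voting part of this idea has a well-known statistical basis (Condorcet’s jury theorem): if each participant is independently right a bit more than half the time on a binary question, a majority vote is far more reliable than any individual. A small simulation sketch, with arbitrary placeholder parameters:

```python
import numpy as np

def majority_accuracy(n_voters, p_correct, n_trials=2000, seed=0):
    """Estimate how often a simple majority of independent voters answers
    a binary question correctly, when each voter is right with
    probability p_correct."""
    rng = np.random.RandomState(seed)
    # Each row is one question; each entry is whether that voter was right.
    votes = rng.random_sample((n_trials, n_voters)) < p_correct
    return np.mean(votes.sum(axis=1) > n_voters / 2)

individual = 0.6   # each person is only slightly better than chance
crowd = majority_accuracy(n_voters=101, p_correct=individual)
print(crowd)       # the crowd is far more reliable than any single voter
```

The independence assumption is the catch in practice — online crowds share sources and biases, which is exactly what the theorem does not cover.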

