News Bits

by Luke Muehlhauser on December 4, 2011 in News

New posts at Facing the Intelligence Explosion: Why Spock is Not Rational and The Laws of Thought.

New at Less Wrong: Hack Away at the Edges and Why study the cognitive science of concepts?

From Twitter:



Alexander Kruel December 4, 2011 at 4:57 am

That laser stuff is very dangerous!

Infrared lasers are particularly hazardous, since the body’s protective “blink reflex” response is triggered only by visible light. For example, people exposed to a high-power Nd:YAG laser emitting invisible 1064 nm radiation may not feel pain or notice immediate damage to their eyesight. A pop or click noise emanating from the eyeball may be the only indication that retinal damage has occurred: the retina was heated to over 100 °C, resulting in localized explosive boiling and the immediate creation of a permanent blind spot.

Many cheap lasers emit scattered infrared light and can damage the eye even if you don’t stare directly into the beam.

Lasers are not toys. There are already too many kids running around with lasers that can blind you instantly.


Reginald Selkirk December 4, 2011 at 4:43 pm

A new research institute in Japan meant for generalist researchers.

I don’t know about that. Being at an institute where everyone else is a generalist might not be best. It seems to me a generalist would want lots of specialists around to collaborate with.


Reginald Selkirk December 4, 2011 at 4:45 pm

Chocolate: have you tried Valrhona?


Luke Muehlhauser December 5, 2011 at 12:27 am

Re: chocolate. No.


Tarun December 5, 2011 at 5:52 pm


Given your criticism of philosophers’ use of intuition as support for arguments, I was struck by some of the things you say in The Laws of Thought. The justifications you offer for Bayesianism and utility maximization are essentially, “These strategies are uniquely rational if you accept a few really, really intuitive assumptions.” But beware the allure of really, really intuitive assumptions! Their intuitiveness is often just an artifact of presentation. As anyone who has taught philosophy to undergraduates is probably aware, if you are clever enough in how you phrase your claims and cherry-pick your thought experiments, you can get students to accept almost any thesis as intuitively obvious.

Von Neumann and Morgenstern’s continuity axiom is an example of this dark art. Phrased as it usually is, it does not seem particularly problematic, but consider what happens if you explicitly state that it rules out lexically ordered preferences. You’re offered the following gamble: I will roll a p-sided die. If it shows a 1, I will kill an innocent human being. If it shows any other number, I will give you a cheeseburger. If the von Neumann–Morgenstern axioms are supposed to characterize a rational decision-maker, it is irrational to refuse this gamble for all values of p: a rational human being should be willing to risk some minuscule probability of an innocent dying in order to acquire a cheeseburger. Does the axiom still sound obvious? I could make similar criticisms of Cox-style derivations of Bayesianism.
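For readers who haven’t seen it spelled out, the continuity axiom Tarun is alluding to is usually stated along these lines (a standard textbook formulation, not a quote from the post):

```latex
% Continuity (von Neumann--Morgenstern): for any lotteries A, B, C ranked
% A \succeq B \succeq C, some mixing probability p makes the agent
% indifferent between B and the compound lottery pA + (1-p)C.
A \succeq B \succeq C
  \;\Longrightarrow\;
  \exists\, p \in [0,1] :\; B \;\sim\; p\,A + (1-p)\,C
```

Lexically ordered preferences violate exactly this: if avoiding an innocent death takes absolute priority over cheeseburgers, then every mixture putting positive weight on the death outcome is strictly worse than the status quo, so no indifference point exists — which is what the die example dramatizes.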

Does this mean I think orthodox decision theory and Bayesian epistemology are not normative? No! What I’m complaining about is a common strategy in elementary presentations of these theories: the confident assertion that they follow from totally innocuous axioms. I think a better presentation would say, “Look, these things can be derived from axioms that many have found convincing, but there are others who fully understand the axioms and are not convinced.” Don’t use the rhetoric of authority to convince people that your axioms are unproblematic.


Scott December 5, 2011 at 8:07 pm

I’m surprised you haven’t posted this video yet:


Forrest December 5, 2011 at 11:37 pm


I don’t know, it still seems pretty intuitively obvious to me that for some value of p you should accept that deal, and although I can’t speak for what most people take to be intuitively obvious, I am willing to say that most people take deals like this, for some value of p, all the time. Every time you pay to eat a cheeseburger, you have a nonzero probability of dying from a heart attack or from E. coli, so there is some p-sided die you are already willing to roll in which you die if you roll a one and you have to PAY for a cheeseburger if you roll anything else. A situation where someone else dies if you roll a one and you get a FREE cheeseburger if you roll anything else is even better than that (assuming you care about a random stranger less than or equal to how much you care about yourself). So yeah, I wouldn’t hesitate to roll that die for some sufficiently large value of p any more than I would hesitate to eat a cheeseburger. Sorry, random innocent stranger.
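Forrest’s “some sufficiently large value of p” can be made concrete with expected utilities. A minimal sketch — the utility numbers below are made-up assumptions for illustration, not figures from the thread:

```python
# Expected utility of the die gamble from Tarun's example: roll a p_sides-sided
# die; a 1 means an innocent dies, anything else means a free cheeseburger.

def gamble_ev(p_sides, u_death, u_burger):
    """Expected utility of accepting the gamble, relative to 0 for declining."""
    p_death = 1.0 / p_sides
    return p_death * u_death + (1.0 - p_death) * u_burger

U_DEATH = -1_000_000.0   # assumed disutility of an innocent person dying
U_BURGER = 1.0           # assumed utility of a free cheeseburger
U_DECLINE = 0.0          # utility of refusing the gamble

# For a small die the gamble is terrible; for a huge one it beats declining.
assert gamble_ev(6, U_DEATH, U_BURGER) < U_DECLINE
assert gamble_ev(10**8, U_DEATH, U_BURGER) > U_DECLINE

# Breakeven: (1/n) * U_DEATH + (1 - 1/n) * U_BURGER = 0
# solves to n = 1 + |U_DEATH| / U_BURGER sides.
breakeven_sides = 1 + abs(U_DEATH) / U_BURGER
print(f"accept once the die has more than ~{breakeven_sides:,.0f} sides")
```

Under these assumed numbers the threshold is about a million-sided die; with lexically ordered preferences of the kind Tarun describes, no finite `U_DEATH` exists and no number of sides suffices.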


Tarun December 6, 2011 at 12:22 am


There is some value of p for which I would take the deal, too. That’s not the point. The point is that if someone wouldn’t take the deal for any value of p, say because they believe there is an absolute deontological commandment against gambling with another’s life for personal gain, it seems weird to call them irrational. Now, you might say it is impossible to actually live in modern society without undertaking gambles of just this sort. Perhaps that is true. But a person may still hold the principle that one should minimize the number of such gambles one accepts, so that when confronted with a completely avoidable gamble, the right thing to do is always to refuse. Such a person would have a set of values different from mine, but nothing about them seems clearly irrational.


Tarun December 6, 2011 at 12:24 am

Oh, and on the fact that you’re gambling with your own life when you eat a cheeseburger, I don’t think it’s unreasonable to have stronger strictures against gambling with another’s life than with one’s own, even if one does care about oneself more than others. I care about my car more than my dad’s car, but there are risks I will take when driving my car that I would never take in my dad’s car, and not just because I’m worried he’d be mad at me.


Forrest December 6, 2011 at 2:00 am


Fair enough, but when you get in your car and drive to get a cheeseburger, you have a nonzero probability of getting into a lethal car accident and killing another person; everyone still takes this risk. I am sympathetic to the view you outlined of minimizing the gambles one accepts, but this seems to me to be a separate question from whether there are always some odds at which you should take the gamble. I think firing a gun into a dark hallway where someone may or may not be standing is an unnecessary gamble that should be avoided, but I’d happily fire a gun into the hallway if doing so (somehow) cured cancer.

We agree, I think, that there is a point where we ourselves would take these bets. Our disagreement might not be about our conceptions of rationality so much as about what we are tactically comfortable calling irrational. For me at least, if someone refused to drive to get a cheeseburger because they thought there was literally no point at which they should risk someone else’s life for a cheeseburger, I would feel pretty comfortable calling that irrational.


Reginald Selkirk December 6, 2011 at 9:59 am

The Turing Test

The phrase “The Turing Test” is most properly used to refer to a proposal made by Turing (1950) as a way of dealing with the question whether machines can think.

The strangest part of Turing’s paper is the few paragraphs on ESP. Perhaps it is intended to be tongue-in-cheek, though, if it is, this fact is poorly signposted by Turing. Perhaps, instead, Turing was influenced by the apparently scientifically respectable results of J. B. Rhine. At any rate, taking the text at face value, Turing seems to have thought that there was overwhelming empirical evidence for telepathy (and he was also prepared to take clairvoyance, precognition and psychokinesis seriously). Moreover, he also seems to have thought that if the human participant in the game was telepathic, then the interrogator could exploit this fact in order to determine the identity of the machine—and, in order to circumvent this difficulty, Turing proposes that the competitors should be housed in a “telepathy-proof room.”

Turing, A. (1950), “Computing Machinery and Intelligence,” Mind, 59 (236): 433–60.


Stephen R. Diamond December 7, 2011 at 4:44 pm

I am sympathetic to the view you outlined of minimizing the gambles one accepts, but this seems to me to be a separate question from whether there are always some odds at which you should take the gamble.

Isn’t there a problem that arises before you can even reach the question of “some odds at which you should take the gamble”: the problem of whether there are always “some odds”? Perhaps whether there are *ever* “some odds.” Bayesian decision theory assumes that there *is* a way to assign probabilities dictated by logic, but this is a simplification of reality, and the question is whether it is an over-simplification. Strictly speaking, any attempt to apply Bayesian theory in a rigorous fashion to a decision problem fails because of a vicious regress. While the regress expresses itself at all points in the likelihood-estimation process, its vicious character reveals itself at the last step in any justification of likelihood estimates: there is *some* probability that we are wrong in our belief that Bayesian analysis is even mathematically coherent. At the very least, the analysis ends in assigning a zero probability to the hypothesis that Bayesianism is logically invalid. Yet we know that’s wrong, and can only hope that it’s trivially wrong, that the probability is so small we can simply disregard it.

I suppose that to avoid this impasse, you approach Bayesianism as a foundation for conclusions rather than as a hypothesis, but that isn’t to say you have the epistemic right to do so. And the end point of the regress only clarifies the problem; it doesn’t exhaust it. If we do an actual Bayesian application, we assume, for example, that we’ve entered the data without error.

The point is that Bayesian decision theory *must* assume certain background knowledge. Of course, that’s the way we in fact acquire knowledge, by accepting background assumptions, but what this contradicts is the claim that we could ever (again, speaking rigorously) actually apply such a theory rationally. We should factor in the probability that we erred in entering the data; we *do* factor it in, but we don’t (can’t) do it in Bayesian fashion. Our heuristics for likelihood estimation must be more basic than Bayesian principles. Of course, anything that *contradicts* Bayesian axioms must be false (this we know as surely, at least, as we know anything). But what isn’t clear is that any actual decision actually invokes operations on probabilities; in that case, there’s nothing for Bayesianism to contradict.
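The data-entry point can be made concrete. A minimal sketch (all numbers are assumptions for illustration): a textbook Bayes update on a recorded observation, versus the same update once you admit a small probability eps that the observation was recorded wrongly. Note that eps itself has to come from outside the model, which is exactly the regress described above.

```python
def bayes_update(prior, lik_if_h, lik_if_not_h):
    """Posterior P(H | observation) by Bayes' rule."""
    evidence = prior * lik_if_h + (1 - prior) * lik_if_not_h
    return prior * lik_if_h / evidence

# Assumed numbers: hypothesis H has prior 0.5, and the recorded observation
# is 9x more likely under H than under not-H.
naive = bayes_update(0.5, 0.9, 0.1)

# Now admit a probability eps that the data point was entered wrongly, in
# which case the record carries no information (likelihood 0.5 either way).
eps = 0.01
lik_h     = (1 - eps) * 0.9 + eps * 0.5
lik_not_h = (1 - eps) * 0.1 + eps * 0.5
hedged = bayes_update(0.5, lik_h, lik_not_h)

print(naive)   # 0.9: the textbook update
print(hedged)  # 0.896: slightly less confident, but eps was assumed, not derived
```

The hedged posterior is itself only as good as the chosen eps, and hedging *that* choice would require yet another layer of probabilities.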


Luke Muehlhauser December 7, 2011 at 10:55 pm


Good point.

