New posts at *Facing the Intelligence Explosion*: Why Spock is Not Rational and The Laws of Thought.

New at Less Wrong: Hack Away at the Edges and Why study the cognitive science of concepts?

From Twitter:

- The machine that would predict the future.
- Build your own chocolate bar, $4.50 and up.
- Congress is considering a bill for censoring the Internet. Despite public outcry, the bill could pass at any time.
- A blog of great Spotify playlists
- Laser toy. Do want.
- Comparison of artificial brain projects.
- A new research institute in Japan meant for generalist researchers.
- Reuters’ best photos of the year.


13 comments:

That laser stuff is very dangerous!

Many cheap lasers emit scattered infrared light and can damage the eye even if you don't look directly into the beam.

Lasers are not toys. There are already too many kids running around with lasers that can blind you instantly.

Alexander Kruel(Quote)

"A new research institute in Japan meant for generalist researchers." I don't know about that. Being at an institute where everyone else is a generalist might not be best. It seems to me a generalist would want lots of specialists around to collaborate with.

Reginald Selkirk(Quote)

Chocolate: have you tried Valrhona?

Reginald Selkirk(Quote)

Re: chocolate. No.

Luke Muehlhauser(Quote)

Luke,

Given your criticism of philosophers' use of intuition as support for argument, I was struck by some of the things you say in The Laws of Thought. The justifications you offer for Bayesianism and utility maximization are essentially, "These strategies are uniquely rational if you accept a few really, really intuitive assumptions." But beware the allure of really, really intuitive assumptions! Their intuitiveness is often just an artifact of presentation. As anyone who has taught philosophy to undergraduates is probably aware, if you are clever enough in how you phrase your claims and cherry-pick your thought experiments, you can get students to accept that basically any thesis is intuitively obvious.

Von Neumann and Morgenstern's continuity axiom is an example of this dark art. Phrased as it usually is, it does not seem particularly problematic, but what if you explicitly state that it rules out lexically ordered preferences? You're offered the following gamble: I will roll a p-sided die. If it shows a 1, I will kill an innocent human being. If it shows any other number, I will give you a cheeseburger. If the von Neumann–Morgenstern axioms are supposed to characterize a rational decision-maker, it is irrational to refuse this gamble for all values of p: a rational human being should be willing to risk some minuscule probability of an innocent dying in order to acquire a cheeseburger. Does the axiom still sound obvious? I could make similar criticisms of Cox-style derivations of Bayesianism.

Does this mean I think orthodox decision theory and Bayesian epistemology are not normative? No! What I'm complaining about is a common strategy in elementary presentations of these theories: the confident assertion that they follow from totally innocuous axioms. I think a better presentation would say, "Look, these things can be derived from axioms that many have found convincing, but there are others who fully understand the axioms and are not convinced." Don't use the rhetoric of authority to convince people that your axioms are unproblematic.

Tarun(Quote)
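For reference, the continuity axiom at issue can be stated in its standard form (this statement is added for illustration and is not quoted from the post or the comments):

```latex
% Von Neumann–Morgenstern continuity: for any lotteries A, B, C with
% A weakly preferred to B and B weakly preferred to C, some mixture
% of the two extremes is exactly as good as the middle option.
A \succeq B \succeq C
\;\Longrightarrow\;
\exists\, \alpha \in [0,1] \text{ such that }
\alpha A + (1-\alpha)\, C \;\sim\; B
```

Lexically ordered preferences violate this: if avoiding any chance of an innocent death lexically outranks cheeseburgers, then no mixture that puts positive weight on the death outcome is ever indifferent to the status quo, which is exactly the refusal the axiom forbids.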

I’m surprised you haven’t posted this video yet:

http://www.youtube.com/watch?v=HHIz-gR4xHo&feature=youtu.be

Scott(Quote)

Tarun,

I don't know; it still seems pretty intuitively obvious to me that for some value of p you should accept that deal. And although I can't speak for what most people take to be intuitively obvious, I am willing to say that most people take deals like this for some value of p all the time. Every time you pay to eat a cheeseburger, you have a nonzero probability of dying from a heart attack or from E. coli, so there is some p-sided die you are already willing to roll in which you die if you roll a one and you have to PAY for a cheeseburger if you roll anything else. A situation where someone else dies if you roll a one, and you get a FREE cheeseburger if you roll anything else, is even better than that (assuming you care about a random stranger no more than you care about yourself). So yeah, I wouldn't hesitate to roll that die for some sufficiently large value of p any more than I would hesitate to eat a cheeseburger. Sorry, random innocent stranger.

Forrest(Quote)
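Forrest's "some sufficiently large value of p" claim can be made concrete with a toy expected-utility calculation. The utility numbers below are invented purely for illustration; they are not taken from anyone's comment:

```python
import math

# Toy expected-utility version of the p-sided-die gamble.
# These utility values are illustrative assumptions only.
U_BURGER = 1.0           # utility of a free cheeseburger
U_DEATH = -1_000_000.0   # utility of an innocent person dying

def expected_utility(p: int) -> float:
    """Expected utility of rolling a fair p-sided die where a 1 kills
    an innocent person and any other face yields a cheeseburger."""
    return (1 / p) * U_DEATH + (1 - 1 / p) * U_BURGER

# Solving (1 - 1/p)*U_BURGER + (1/p)*U_DEATH > 0 for p gives
# p > (U_BURGER - U_DEATH) / U_BURGER, so the smallest die worth rolling is:
p_min = math.floor((U_BURGER - U_DEATH) / U_BURGER) + 1
print(p_min)                        # 1000002
print(expected_utility(p_min) > 0)  # True
```

With these made-up numbers, any die with more than about a million sides makes the gamble positive in expectation; change the utilities and the threshold moves, but for any finite (negative) value placed on the death, some p clears it — which is the structure of Forrest's argument.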

Forrest,

There is some value of p for which I would take the deal, too. That's not the point. The point is, if someone wouldn't take the deal for any value of p, say because they believed there is an absolute deontological commandment against gambling with another's life for personal gain, it seems weird to call them irrational.

Now you might say it is impossible to actually live in modern society without undertaking gambles of just this sort. Perhaps that is true. But a person may still have the principle that one should minimize the number of such gambles one accepts, so that when confronted with a completely avoidable gamble, the right thing to do is always to refuse. Such a person would have a set of values different from mine, but nothing about them seems clearly irrational.

Tarun(Quote)

Oh, and on the fact that you’re gambling with your own life when you eat a cheeseburger, I don’t think it’s unreasonable to have stronger strictures against gambling with another’s life than with one’s own, even if one does care about oneself more than others. I care about my car more than my dad’s car, but there are risks I will take when driving my car that I would never take in my dad’s car, and not just because I’m worried he’d be mad at me.

Tarun(Quote)

Tarun,

Fair enough but when you get in your car and drive to get a cheeseburger you have a nonzero probability of getting in a lethal car accident and killing another person; everyone still takes this risk. I am sympathetic to the view you outlined of minimizing the gambles one accepts, but this seems to me to be a separate question from whether there are always some odds at which you should take the gamble. I think firing a gun into a dark hallway where someone may or may not be standing is an unnecessary gamble that should be avoided, but I’d happily fire a gun into the hallway if doing so (somehow) cured cancer.

We agree, I think, that there is a point at which we ourselves would take these bets. Our disagreement might not really be about our conceptions of rationality; it might boil down to what we are comfortable with calling irrational. For me at least, if someone refused to drive to get a cheeseburger because they thought there was literally no point at which they should risk someone else's life for a cheeseburger, I would feel pretty comfortable calling that irrational.

Forrest(Quote)

The Turing Test

Reginald Selkirk(Quote)

Isn't there a problem that arises before you can even reach the question of "some odds at which you should take the gamble": the problem of whether there are always "some odds"? Perhaps whether there are *ever* "some odds"? Bayesian decision theory assumes that there *is* a way to assign probabilities dictated by logic, but this is a simplification of reality, and the question is whether it is an over-simplification. Strictly speaking, any attempt to apply Bayesian theory rigorously to a decision problem fails because of a vicious regress. While the regress expresses itself at all points in the likelihood-estimation process, its vicious character reveals itself at the last step in any justification of likelihood estimates: there is *some* probability that we are wrong in our belief that Bayesian analysis is even mathematically coherent. At the very least, the analysis ends up assigning a zero probability to the hypothesis that Bayesianism is logically invalid. Yet we know that's wrong, and can only hope that it's trivially wrong, that the probability is so small we can simply disregard it.

I suppose that to avoid this impasse, you approach Bayesianism as a foundation for conclusions rather than as a hypothesis, but that isn’t to say you have the epistemic right to do so. And the end point of the regress only clarifies the problem; it doesn’t exhaust it. If we do an actual Bayesian application, we assume, for example, that we’ve entered the data without error.

The point is that Bayesian decision theory *must* assume certain background knowledge. Of course, that's the way we in fact acquire knowledge, by accepting background assumptions, but what this contradicts is the claim that we could ever (again, speaking rigorously) actually apply such a theory rationally. We should factor in the probability that we erred in entering the data, and we *do* factor it in, but we don't (and can't) do it in Bayesian fashion. Our heuristics for likelihood estimation must be more basic than Bayesian principles. Of course, anything that *contradicts* Bayesian axioms must be false (this we know as surely, at least, as we know anything). But what isn't clear is that any actual decision actually invokes operations on probabilities; in that case, there's nothing for Bayesianism to contradict.

Stephen R. Diamond(Quote)

Scott,

Good point.

Luke Muehlhauser(Quote)