How Cults Work

by Luke Muehlhauser on September 23, 2011 in Video

Sound familiar?

{ 45 comments… read them below or add one }

Tristan Vick September 23, 2011 at 5:12 am

Ha ha. Yeah. Sounds familiar. A little too familiar.

Steve September 23, 2011 at 8:35 am

Yeah, sounds like the Institute of Scientific Divinity (Machine Intelligence Research Institute) fits in just nicely.

Valerie September 23, 2011 at 10:05 am

“Sound familiar?”

LessWrong?

James Babcock (jimrandomh) September 23, 2011 at 10:17 am

Nope, not familiar at all. I sometimes hear “cult!” from people who only read Less Wrong and have never met anyone from the community. But I’ve actually met the prominent contributors, and attended SingInst’s rationality boot camp, and it really didn’t bear that out at all. A cult is a kind of power dynamic; they disempower members to increase their leader’s influence over them. Rationality boot camp was all about doing *the exact opposite*. Most of the people in the community are fiercely individualistic and would never accept strong control over them. And there’s no one with anything remotely close to enough power to call them a cult leader.

Which is too bad; a few dozen cultists would be very useful. Does anyone know of a good cult SingInst could appropriate?

Rorschach September 23, 2011 at 11:46 am

This is probably the introductory course of the Discovery Institute.

Friedman September 23, 2011 at 11:58 am

Most of the people in the community are fiercely individualistic

LULZ.

Except when it comes to upvoting everything Big Yud writes on LW.

And there’s no one with anything remotely close to enough power to call them a cult leader.

Big Yud on LW. Upvote, down vote, isolate dissenters, etc.

Where are cl and bossmanham to provide some honest criticism of these unjustified assumptions?

DaVead September 23, 2011 at 1:07 pm

“… we have the secrets to self-improvement, you can join us and be special, join our elite mission to save the world, we can teach you special powers, personal power…”

LOL.

James Babcock (jimrandomh) September 23, 2011 at 1:13 pm

The ability to moderate a web site is insignificant next to the power of the force.

Newton Baines September 23, 2011 at 1:26 pm

The ability to moderate a web site is insignificant next to the power of the force.

One must have the force to moderate a site.
The force is a white belt.
Website moderation is a black belt.
Tell me now, which is greater?

You must upgrade to a Black Belt Bayesian to understand this.
Such knowledge is not for all; it is special and belongs only to a true member of LessWrong and MIRI.

Patrick who is not Patrick September 23, 2011 at 1:30 pm

You know, I think the whole Institute thing is pretty silly, but it’s not exactly tough to see that it has none of the markers of a cult. I’m going to assume that people calling it a cult are literally sitting at home in their pajamas, thinking to themselves, “Hey, an opportunity to insult Luke and maybe make him feel bad! I can’t pass THIS up! It’s what Jesus would want!”

Bill Maher September 23, 2011 at 1:43 pm

What Patrick who is not Patrick said. Flaming on the internet just makes you sound like a basement-dwelling virgin.

Alrenous September 23, 2011 at 1:50 pm

I find Less Wrong very closely matches the popular conception of ‘creepy,’ though I don’t like the connotations of that particular word. I went looking for logical fallacies to see if I was seeing things, but found lots of them.

In fact, the perception of individuality and the declaration of (empowering members)/(questioning authority) are extraordinarily useful in preventing those exact things, precisely because they allow cultish techniques to work on people who would never accept strong control. Superficial empowerment and questioning allow fundamental control to escape unnoticed.

The main problem with LW in particular is that Yudkowsky doesn’t realize he’s kooky. That’s an unfalsifiable hypothesis for him, because he won’t consider it a possibility. His hypocrisy circuits do all the cult-work for him.

The other problem is that LW’s mission is necessarily very similar to a cult. If indeed the world is overrun with ‘irrationality’ – better seen as epistemic incompetence – then many cultlike behaviours are the common-sense response. (And I agree on the point.) Epistemic incompetence is more like anti-competence that actively tries to spread, like a virus, and intellectual quarantine is…a cult, basically.

Thirdly, it makes sense. If you think Yudkowsky is a kook, how much time will you spend on LW? Ergo, it attracts solely Yuddites in the first place.

I think the falsification of this is straightforward: set up an LW competitor. How does Yudkowsky respond? Does he celebrate the advent of more people utterly dedicated to truth, or does he attack them for criticizing his nonsense on e.g. many worlds?

Patrick:
Principle: humans natively make excellent observations, and these observations must be explained. (Otherwise your assertions are bare and pointless.)
In your case, I can see your analysis must be broken. From here.

A) Any group that says you must belong to be saved is a cult.
“Be rational or be pointless.” How does HPMOR put it? You’re a muggle?

Do you have to support LW to not be a muggle?

B) Character assassination is a sign of a cult.

C) Healthy organizations are not threatened by openly debating ideas. [Of the leadership.]
Reducing dissent by direct punishment is passé. Now it’s all done implicitly, for the purposes of plausible deniability. In the case of LW, it is probably subconscious. This kind of suppression is endemic and you can pick up the techniques by osmosis. And, as a matter of fact, LW lacks serious dissent – the thing is accomplished.

D) Instructed to not read stuff critical of the group.
I don’t know if this happens or not. However, if it is in fact accomplished, it doesn’t matter how it happens.
But your insults are one way this happens implicitly.

E) Never-ending compulsory meetings and tasks are a sign of a cult.
Well…they’re not compulsory. But you get more LW brownie points for attending. Positive, not negative, reinforcement.

Of course, there are also many key points which LW does not conform to. But the evidence strongly suggests that you’re rejecting unfavourable evidence. Either that or you’re pig ignorant. (I would say it nicely, but there’s no nice way to say that.)

Moreover, the principle works in both directions, and the intuition of LW as culty must be accounted for. Fact is, it appears culty to many people. Why is that? (Hint: look above.)

Bill, pwned.

Newton Baines September 23, 2011 at 1:55 pm

You know, I think the whole Institute thing is pretty silly, but it’s not exactly tough to see that it has none of the markers of a cult.

How is the Institute silly?
Can you demonstrate?
Have you mastered the Core Sequences and the Yudkowsky-Hanson AI Foom Debate?

AarobED September 23, 2011 at 4:01 pm

You know, I think the whole Institute thing is pretty silly,

Computing existential risks is silly? Helping to save humanity is silly?
O RLY?
Since when?

Mark Tanner, PhD September 23, 2011 at 4:25 pm

I must admit that it is pretty funny to encounter people who have no knowledge of Solomonoff Induction, Kolmogorov Complexity, and Bayesian updating. I mean, the people in this video and some of the commenters on this thread are in need of some rationality updating!

Anonymous September 23, 2011 at 6:40 pm

…it sounds a lot like grad school… with subtle adjustments (e.g. there are many other menial tasks that thwart real intellectual thought).

Steve September 23, 2011 at 11:11 pm

In the 19th century there was Comte and the various “religions” of science; in the 20th there was Marxism and its “scientific” deciphering of history and Man; in the 21st there are these millennial faiths in technology and reason. The circle turns and returns. As with Marxism et al., the goals are to “save” civilization by reordering Man in accord with “correct” scientific principles. The formal and intolerant ideology of the SI, the breathless enthusiasm and hyper-intellectualised language of its adherents, with its own “inner” terms, codes and referents, resemble to a T the Marxist groups (Leninists, Trotskyites) of 50 years ago. They were only useful as negative markers for what could in fact actually be achieved, in reality, by us humans, condemned as we are under the inertia of history.

Had a friend who was an officer in the army, grew disillusioned and resigned. Soon after, his enthusiasm rebloomed and he joined the Jesuits. Or was it the other way round?

PDH September 24, 2011 at 7:39 am

Alrenous wrote,

B) Character assassination is a sign of a cult.

Interesting.

whoreOfdeath September 24, 2011 at 8:06 am

Reminds me of Orthodox Judaism.

joseph September 24, 2011 at 6:01 pm

Reminds me of my employment.

orthonormal September 25, 2011 at 11:57 am

If LW/MIRI were actually a cult, then Luke’s decision to post this video (and follow it up with “Sound familiar?”) would be the weirdest Xanatos Gambit I’ve ever seen.

Tu Brut September 25, 2011 at 3:43 pm

If LW/MIRI were actually a cult, then Luke’s decision to post this video (and follow it up with “Sound familiar?”) would be the weirdest Xanatos Gambit I’ve ever seen.

One more person who can’t detect sarcasm. Shocking that CSA posters can’t pick up on such things.

Luke, please just focus on LW posts. CSA was great, but the people who comment on this blog are frightening.

Tige Gibson September 25, 2011 at 11:05 pm

Since most people are in families wholly in the cult, this information sounds very backwards. The video tells you that it sounds like your family doesn’t really love you, but the reality is if you want to leave the cult, your family really will act like they don’t love you.

jamesm September 26, 2011 at 7:41 am

Wait, I really did invent air

Aris Katsaris September 27, 2011 at 3:06 am

Except when it comes to upvoting everything Big Yud writes on LW.

Friedman, I’ve seen comments of Yudkowsky downvoted several times, or staying at zero points. Just from the last couple pages of his comments (http://lesswrong.com/user/Eliezer_Yudkowsky) you can find a comment of his that’s at -9 and another comment that’s at -6.

So your claim is false. Will you now revise your opinion?

God September 27, 2011 at 5:24 am

So your claim is false. Will you now revise your opinion?

Silence, sinner.

Only I can make those demands of humans.

Alrenous September 27, 2011 at 12:06 pm

Interesting point, Katsaris.

The -9 comment is now at page 4 and is all-caps, violating netiquette. I could not find the -6.
The zero-score comments are contentless, such as ‘agreed’ or questions, except for two which involved points of esoteric math.

Your claim is misleading. Will you now revise your opinion?

Aris Katsaris September 27, 2011 at 3:08 pm

The -9 comment is now at page 4, and is all-caps, violating netiquette.

Of course: he deserved those downvotes. My point was that he both deserved them and *got* them — so he’s not just being upvoted mindlessly no matter what he does.

You can set your preferences to display 25 comments per page instead of 10 per page — so it was at page 2 for me. Either way, sorry, I should have just pointed you to it instead of making you search for it.

I could not find the -6.

The -6 is at http://lesswrong.com/lw/6lq/followup_on_esp_study_we_dont_publish_replications/4i8h

But more important than whether he’s downvoted or not (which he is) is the fact that disagreement with his opinion, or even wider criticism against him, is also often upvoted — e.g. in the responses to the comment I just linked above.

My point is — this doesn’t look like a cult to me. Eliezer has high status in the LessWrong community, sure. But it’s a far cry from being a “cult leader”.

Sonia September 27, 2011 at 8:15 pm

My point is — this doesn’t look like a cult to me. Eliezer has high status in the LessWrong community, sure. But it’s a far cry from being a “cult leader”.

You failed at making your point. Will you now revise your opinion?

Aris Katsaris September 28, 2011 at 12:55 am

Of course, Sonia — and already done: I’ve severely revised downwards my opinion of the average participant here, and of the potential usefulness of trying to discuss with people in this place.

The lack of serious counterargument to any of my points also makes for a small update upwards of my estimation of LessWrong (but since the quality of the discussion here was so low, it’s a very small increase).

Tim September 28, 2011 at 7:18 am

Of course, Sonia — and already done: I’ve severely revised downwards my opinion of the average participant here, and of the potential usefulness of trying to discuss with people in this place.

You are so wise and mighty! Do you have a shrine where we can worship you and never question what you say?

Alrenous September 28, 2011 at 10:07 am

Why is it that Yudkowsky doesn’t have a single downvoted article?
Why is it that none of his substantial comments are downvoted? It’s great that counter-arguments are upvoted… except your personal opinion isn’t worth much as evidence, especially as all Yudkowsky’s score<1 comments are one line, aside from the math ones. (And I don’t even know what the -6 is supposed to mean.)

I can guarantee that I cannot both be honest and consistently write non-downvoted articles on LW. Your (plural) response will, of course, be to ‘severely revise downward your opinion’ of my ability to write about truth. ‘People who disagree with me are just stupid’ works logically but isn’t falsifiable. Conversely, ‘everyone at LW agrees with Yudkowsky because he’s just that good’ is the same structure.

I can admit it’s reasonable that it doesn’t look like a cult to you. Will you admit it’s reasonable to say that effectively 100% success is suspicious?

Aris Katsaris September 28, 2011 at 11:33 am

Ooph, I made a long answer, Alrenous, but I don’t see that it has been posted, so I’m afraid it may have been lost.
In summary of what I wrote:
– Eliezer does have at least some downvoted articles (e.g. http://lesswrong.com/lw/18/slow_down_a_little_maybe/ at -3, http://lesswrong.com/lw/ku/natural_selections_speed_limit_and_complexity/ at -1) — so really all the evidence you claimed is now disproven, and if you honestly claimed it as evidence, then you ought to admit that the evidence now points the *other* way from what you claimed.
– Lots of contributors to LessWrong don’t have a single downvoted article — e.g. Yvain, Wei_Dai, Alicorn. Not having downvoted articles is relatively easy, though it’s not something Eliezer himself has managed to do.
– Some of these heavily upvoted people hold opinions contrary to much of LessWrong on several key issues — e.g. Alicorn, who’s a non-consequentialist, or calcsam, who’s a believing Mormon; so upvotes and downvotes are not about agreement with the “cult leader” either.
– Where’s this 100% success you’re talking about? People still disagree with Eliezer that 3^^^3 people getting dust specks is worse than a single person being tortured for 50 years. His AI-Box experiment was successful only 3 out of 5 times. His decision theory (TDT) has gained interest, but people have been just as eager to call it flawed and propose their own alternative decision theories instead (e.g. UDT). Eliezer hasn’t even convinced people that his latest My Little Pony omake was a good piece of work. Not everyone believed it’s his metabolism that’s to blame for his weight problems either.

So where’s this 100% success you claim he has? Not in convincing LessWrong members about *anything* really.

Aris Katsaris September 28, 2011 at 11:57 am

Also: get lost, Tim. If you don’t want to know how I revise my opinion given new information, people always have the option of *not asking* — but once asked, and if I choose to respond, I’ll definitely respond honestly.

People were even *persistent* in having me respond, by repeating the question — so it’s doubly uncivil of you that you now choose to pretend offense at the honest content of said response. You surely would have found a different offense to pretend at if I hadn’t responded at all.

Yoshimoto September 28, 2011 at 12:35 pm

Also: get lost, Tim. If you don’t want to know how I revise my opinion given new information, people always have the option of *not asking* — but once asked, and if I choose to respond, I’ll definitely respond honestly.

I have studied Bayes, Solomonoff and meditation techniques for many years.

The way of Bayes is to walk humbly and respond only if a response is worth offering.
You have violated both of these ways.

Scholarship is akin to being a Monk. One must spend years training their character. I advise you to do the same.

Aris Katsaris September 28, 2011 at 1:24 pm

Point taken, and thanks for the advice, Yoshimoto.

Alrenous September 28, 2011 at 1:26 pm

Now that I have downvoted Yudkowsky articles to work with, I can analyze the difference. My goal was never to change your mind, because I honestly believe it is impossible. (Falsifiable.) My goal was to have an opportunity to change my own by provoking you into giving me evidence. Goal succeeded.

Again, ‘people’ disagree. Okay, which people? The only substantial disagreement here is about the dust specks vis-à-vis torture. And even that I can explain by noting that pre-LW or meta-LW social norms are overriding LW norms.

The first article was extremely short and a question. I asked if you can see how it’s reasonable to predict LW looks cultlike, and this post reminded me by doing exactly that.

As for the second, “(Warning: A simulation I wrote to verify the following arguments did not return the expected results. See addendum and comments.)” Why need I say more? (I did like the LW wiki article on that subject. Thanks.)

Is this really the best you can do?
The point is there’s no serious disagreement with Yudkowsky. If he held forth on the weather and got downvoted, it wouldn’t falsify the hypothesis. 97% ± 3% is not meaningfully different from 100%.
Instead, these tiny downvote subjects are exactly the fig leaves I predicted above. Disagreement about specks makes it look like you can disagree on how to be rational.

As for Alicorn etc, I said -and- be honest. I could parrot variations on Yudkowsky and probably do quite well. (I’ve successfully executed this strategy to test that it works; just not on LW.) All this proves is that Alicorn is good at following social norms. The idea was to prove that LW has at most weak social norms. I seriously doubt her non-consequentialism is a core issue. This calcsam fellow proselytizes Yuddism, not Mormonism – though there are places that would fault him just for being a Mormon, so LW gets points for that. (I’ve personally tested this as well.)

Of course overall I don’t care if LW is a cult or not because I’m not likely to join, and it does reveal contradictions in my moral system, where I say ‘specks’ one moment and ‘torture’ the next, depending on context.

But it makes a good illustration, because now I’ve solved it (with help) I have a perspective that is (almost) completely outside the LW universe. (I skimmed the entire comment thread. http://lesswrong.com/lw/kn/torture_vs_dust_specks/ Tim7 gets it right, is ignored). The answer is specks, because of consent.
Would the torturee consent to being tortured to save the people from dust specks? No.
Would the people consent to blinking to save the torturee from specks? Yes.
Well, that was easier than I expected. And that’s assuming you can’t just ask. Once again, I find that philosophy is just the art of not playing silly buggers with yourself.
Just to get entirely out of the LW universe: generally, the operation ‘adding’ is not valid for qualia.

More specifically, I want to see someone doubt the Singularity and not receive the equivalent of the torches-and-pitchforks treatment.

Aris Katsaris September 28, 2011 at 1:27 pm

Though I’m not sure I would call it the way of Bayes — surely that label fits epistemic rationality better than utilitarian rationality, which is what I violated?

Aris Katsaris September 28, 2011 at 3:27 pm

But it makes a good illustration, because now I’ve solved it (with help) I have a perspective that is (almost) completely outside the LW universe. (I skimmed the entire comment thread. http://lesswrong.com/lw/kn/torture_vs_dust_specks/ Tim7 gets it right, is ignored). The answer is specks, because of consent.

Despite what you think, that’s hardly a novel or unexplored position in LW. Besides Tim, whom you mentioned, it also forms the core of Alicorn’s preference for specks in http://lesswrong.com/lw/12b/revisiting_torture_vs_dust_specks/wpj , and Phil_H says something similar in http://lesswrong.com/lw/n9/the_intuitions_behind_utilitarianism/u6m . Probably more people, but those are the ones that came out on top when I used the search engine for it.

A counterargument is what Nick_Tarleton says at http://lesswrong.com/lw/n9/the_intuitions_behind_utilitarianism/u6n : “A sufficiently altruistic person would accept 25 years of torture to spare someone else 50, but that doesn’t mean it’s better to torture 3^^^3 people for 25 years (even if they’re all willing) than one person for 50 years.”

So the issue of consent isn’t *sufficient* to definitively solve the problem. (And of course in the least convenient world, you wouldn’t have said consent either way, but that’s a different counterargument.)

My position falls on the side of torture — my own argument is that we have to consider the *repeated* issue, where a billion intelligent agents have to make that decision independently once each. By choosing the specks, they’ll collectively have condemned whole universes of beings to effectively permanent eye loss, though individually they only contributed a single dust speck each.

As for the “Singularity”, I’m not sure exactly which of the claims you want to see challenged: That it’s possible to create an AI that will grow to have greater general intelligence than the human beings that made it? That such an AI may be able to self-modify for even faster intelligence and produce a hard take-off of its intelligence in a span of ten years or less?

Alrenous September 28, 2011 at 3:48 pm

Your links to Alicorn and Phil only illustrate how unique my position is.

Your reference to altruism confirms that you don’t understand my point.

Aris Katsaris September 28, 2011 at 3:51 pm

If you do have a point, indeed I don’t understand it, Alrenous.

Alrenous September 28, 2011 at 3:52 pm

Was that intended to be insulting?

Aris Katsaris September 28, 2011 at 3:53 pm

No. I confess to not understanding any point you may have potentially had.

Alrenous September 28, 2011 at 3:56 pm

And I confess to being unwilling to explain myself, so the blame is at least partially mine.

gwern December 23, 2011 at 7:17 pm

Except when it comes to upvoting everything Big Yud writes on LW.

Big Yud on LW. Upvote, down vote, isolate dissenters, etc.

If anyone is curious, a full ranking of Yudkowsky comments can be found at http://www.ibiblio.org/weidai/lesswrong_user.php?u=Eliezer_Yudkowsky – including the negative-scored ones.
