The Ethics of Rapture

by Luke Muehlhauser on June 2, 2011 in Ethics, Guest Post

Today’s post on ethics is written by Alonzo Fyfe of Atheist Ethicist. (Keep in mind that questions of applied ethics are complicated and I do not necessarily agree with Fyfe’s moral calculations.)

It appears that the Seattle Atheists are formally calling for something I suggested on my own blog a week ago . . . an investigation into whether fraud was committed by Harold Camping’s “Family Radio” company.

(See: Seattle Atheists Call For Fraud Investigation)

Harold Camping is the self-professed prophet who predicted that the Rapture would take place on May 21st, followed by the end of the world on October 21st. The first prediction did not play out so well, but Camping is now arguing for the end of the world on October 21st. (A merciful god has apparently decided to spare us the five months of torment between the two events.)

Family Radio is the company responsible for advertising this event and, importantly in this context, collecting and organizing the contributions of believers.

The accusation of fraud asks whether responsible agents actually believed what they were claiming. Were they motivated by a desire to report to others what they thought to be true? Or were they motivated by a desire to acquire money and in-kind contributions and believed that these claims, though false, were a useful means to that end?

This invites us to ask how we can determine what they believed. The answer comes from an application of the principle that intentional agents act so as to fulfill their desires, given their beliefs.

We look at their actions and determine the belief-desire set that best explains those actions.

Did these agents make any contributions to a 401(k) plan or an IRA account?

Did they make plans to attend conventions or any other gathering after May 21?

Did they purchase bonds or make other loans that had a maturity date after May 21?

Are there emails in which they gave private assurances to friends – people they actually liked and decided not to defraud – not to worry about these predictions?

These types of actions would allow us to conclude that the responsible agents did not actually believe their own claims.
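
To make the inference pattern concrete, here is a small, purely illustrative sketch in Python. Everything in it (the hypotheses, the actions, the likelihood numbers) is invented for the example rather than drawn from any actual records: we list candidate belief-desire sets, estimate roughly how likely each one is to produce the observed actions, and pick the one that best explains the evidence.

# Purely illustrative sketch: score candidate belief-desire hypotheses
# by how well they explain the observed actions. All hypotheses,
# actions, and likelihood numbers below are invented for the example.

OBSERVED_ACTIONS = [
    "contributed to a 401(k) after making the prediction",
    "bought bonds maturing after May 21",
    "privately told friends not to worry",
]

# For each hypothesis: a rough probability that an agent holding that
# belief-desire set would perform each action.
HYPOTHESES = {
    "sincerely believed the May 21 prediction": {
        "contributed to a 401(k) after making the prediction": 0.05,
        "bought bonds maturing after May 21": 0.05,
        "privately told friends not to worry": 0.05,
    },
    "did not believe it; wanted the contributions": {
        "contributed to a 401(k) after making the prediction": 0.8,
        "bought bonds maturing after May 21": 0.7,
        "privately told friends not to worry": 0.6,
    },
}


def best_explanation(actions, hypotheses):
    """Return the hypothesis under which the observed actions are most likely."""
    def likelihood(action_probs):
        total = 1.0
        for action in actions:
            total *= action_probs.get(action, 0.5)  # 0.5 = uninformative default
        return total

    return max(hypotheses, key=lambda name: likelihood(hypotheses[name]))


print(best_explanation(OBSERVED_ACTIONS, HYPOTHESES))
# -> "did not believe it; wanted the contributions"

A real investigation would weigh subpoenaed documents rather than made-up numbers, but the underlying logic is the same: actions serve as evidence about what the agents believed and wanted.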

I do not know what can be required by law. However, people often willingly do that which they are not required by law to do. If the decision-makers at Family Radio have nothing to hide, then they could voluntarily release their financial records and uncensored emails and demonstrate to the world that there is no evidence of fraud. If they do not do so . . . why not?

Answer: Because if they can get others to believe these claims, then those others can be convinced to give up their money and to donate untold hours convincing still others to give up money as well.

Their pitch puts any potential victim in a very uncomfortable spot.

Only those who truly believe will enjoy the benefits of rapture. If you hold on to any wealth or spend your time on any activity that implies you will still be here after May 21st, then you do not really believe. At the same time (not coincidentally) we are here asking for your contributions so that we can continue our efforts. Your contributions will help to buy you a ticket on board the rapture train.

Given this pitch, a certain percentage of the population – given to fear and self-doubt, insufficiently trained or equipped to practice the science of reason, and raised on a diet of “faith is a virtue” – is ripe fruit ready to be harvested by anybody willing to make it.

I am not a lawyer, and I do not know if the law allows the state to investigate these actions. It may take a lawsuit on the part of an aggrieved party – a victim willing to step forward in the name of justice, even though she may face ridicule from those who would say, “You deserved what you got for being such an idiot.” (Con artists typically depend on their victims’ sense of shame and embarrassment for protection. It is one of the shields they hide behind.)

It’s possible that the decision-makers at Family Radio actually did believe what they said and there was no fraud. But that would not let them off the moral hook.

The sentiments that people generally have many and strong reasons to promote would almost certainly include some type of adverse reaction to giving people guarantees that cause them harm.

If I were to guarantee an investment opportunity to my friends, family, and those who trusted me – if, with my encouragement, they invested everything they had in this opportunity while I stood with them saying, “Go ahead. You have nothing to lose. Put it all in.” – and they lost everything, I would feel terrible. I should feel terrible – which means that people generally have many and strong reasons to trigger the reward-learning system in others so as to promote such a terrible feeling. And the reason they have for promoting such a feeling is to prevent the harms that those who lack it inflict on others.

To make matters worse, we would have to assume that I did not just guarantee that they would make a profit if they trusted me in making this investment. Our analogy would also have to include claims that they would – guaranteed – suffer hell on earth if they did not. As it turns out, those who did not listen to me lost nothing. Now, I would feel even worse.

Now, let’s add the fact that I have over $100 million in assets. Furthermore, I acquired some of these assets from people now left destitute because they trusted my advice.

Here, again, the sentiments that people generally have many and strong reasons to promote say that those whose failure led others to suffer harm should make some effort to compensate those who were harmed. If I failed to use a good portion of my $100 million in assets to help those whose lives I destroyed, it would prove that I cared more about money than about the people who trusted me.

Instead, the people at Family Radio seem to have decided to say, “No, wait! The big payoff is on October 21st. Just hold on a little bit longer and you’ll share in the big payoff. I guarantee it. Bail out now – show your lack of belief, betray your lack of faith – and you will suffer the consequences.”

After October 21st, they will set a new date and string their victims along as long as they can, extracting as much money and as many in-kind contributions as they can, profiting as much as they can and, in the end, giving nothing in return.

“No. Wait! Wait! You’ll get the big payoff after you die. I guarantee it. If you discover that I am wrong, then you can come back and I promise I will make it up to you. Somehow. Just . . . trust me. Have faith. You will get your just reward in the end. I promise.”

- Alonzo Fyfe

22 comments

cl June 2, 2011 at 8:47 am

I’m with you on this one, Alonzo. Along these lines, we can also ask, “Was Camping following the teachings of Scripture in good faith?” To me, the answer seems to be an unequivocal “no,” he was not. After all, the Bible says that nobody knows the date or time–not even Jesus. People like Camping bring grief on the remainder of Christians who actually do take the Bible seriously in this regard.

BTW–glad to see you at least alluding to evidence in this one. This post stands out from your others in that regard. Much less preachy.

Zeb June 2, 2011 at 10:45 am

So Luke, why the change in boilerplate at the top?

Rufus June 2, 2011 at 7:05 pm

I wonder if Ray Kurzweil will issue an apology on January 1, 2046 after it becomes evident that the Singularity has not arrived to save us all from ourselves.

Alonzo Fyfe June 3, 2011 at 4:03 am

Rufus

You opt to ignore the fact that the issue of fraud has nothing to do with the fact that Harold Camping was wrong. It has a lot to do with the fact that a business enriched itself by making claims that its own people did not believe. Or, even if fraud cannot be demonstrated, they made guarantees that others acted on in ways that resulted in significant harms.

Is there any evidence that Ray Kurzweil sought to enrich himself by making claims that he did not believe, or that he made predictions that caused people to act in ways that resulted in significant harm?

If not, why are you seeking to change the subject?

Rufus June 3, 2011 at 5:30 am

Mr. Fyfe,

You opt to ignore the fact that the issue of fraud has nothing to do with the fact that Harold Camping was wrong. It has a lot to do with the fact that a business enriched itself by making claims that its own people did not believe. Or, even if fraud cannot be demonstrated, they made guarantees that others acted on in ways that resulted in significant harms.

Is there any evidence that Ray Kurzweil sought to enrich himself by making claims that he did not believe, or that he made predictions that caused people to act in ways that resulted in significant harm?

If not, why are you seeking to change the subject?

You are correct. The two cases are not perfectly analogous. I would concede that it is more likely that Camping did not believe his prediction than it is that Kurzweil does not believe his prediction. Both men have made money off of their predictions, but Kurzweil has done so through the free market and not through donations, which is to say that Camping is, in my estimation, a complete parasite. At least Kurzweil has produced inventions that are of value. So now to the question of why I brought up Kurzweil at all…

I bring this up for two reasons: 1) to point out that futurists, who pride themselves on having perfected the science of reasoning, are not immune to utopian “rapture-like” claims (and even setting dates for the event to create a sense of urgency) and 2) to suggest there really are people who may be throwing away a great deal of resources, time, and talent in the effort to bring this fantastic* event about. IMHO those resources could be better utilized in the service of humanity in other ways. Historically, we have seen many attempts at utopia through a combination of technology and political revolution. They have all proved to be destructive and oppressive (think of the French Revolution, the communist revolutions, the fascist revolutions). This is not to say that technology is bad in itself. Rather, technology combined with a belief in the perfectibility of the human estate has proved to be an extremely lethal combination. So I think any attempt to bring about utopia through the Singularity is at best a complete waste of time and resources and at worst will lead to consequences that are extremely harmful to humanity. Kurzweil’s prediction has raised awareness about Singularity-belief and has made it more likely that people will donate to organizations and researchers trying to bring this event about. It is in this way that I think Kurzweil’s prediction is analogous to that of Camping.

*I think the event is fantastic not so much because I think super-intelligent AI is unlikely to be brought about, but because I think utopia is impossible.

DaVead June 3, 2011 at 8:26 am

I don’t understand why Christians kept citing Matthew 24:36 in response to this guy. I even heard one pastor say on the news, “As soon as someone makes a prediction about when the Lord will return, God says, ‘Nope, not that day!’” I think this response conflates knowledge with true belief. The Bible does not say no man will truly believe that Jesus will return on a particular day.

Camping was, though, making knowledge claims, so that would be an appropriate way to refute his claim on Christian grounds… but it would not further guarantee that Jesus was not going to come back on said day.

In this way, I disagree with you, cl, that in making a prediction he was not following the teachings of Scripture in good faith. If you refrain from a knowledge claim, there doesn’t seem to be anything Biblical against claiming, “Probably, Jesus will return on day X.”

Jeff H June 3, 2011 at 12:10 pm

So DaVead, in other words, if I just keep predicting day after day that Jesus will return, I could prevent the rapture from ever happening? Sweet!

Hey everyone, Jesus is going to come back tomorrow!

:)

Lorkas June 3, 2011 at 1:48 pm

I think that’s the opposite of what DaVead said, Jeff H.

cl June 3, 2011 at 2:42 pm

Does anyone else find it odd that Fyfe takes such issue with Rufus? Okay, so the analogy wasn’t 100% spot-on. Big deal. At least Rufus isn’t out there preaching against claims without evidence, then making claims without evidence. IMO, baseless moral crusading is a far more flagrant offense to rational thought.

Rufus,

IMHO those resources could be better utilized in the service of humanity in other ways.

I agree. Funny thing is, elsewhere, Luke — Mr. “intuition should not be relied upon in the search for truth” — claims the direct opposite with no evidence whatsoever. Remember when he said that friendly AI research was the “single most important” thing we could do with our charity dollars? I guess all those sermons against intuition don’t apply when it comes time for Luke to preach his ethos of choice, eh?

Davead,

In this way, I disagree with you, cl, that in making a prediction he was not following the teachings of Scripture in good faith. If you refrain from a knowledge claim, there doesn’t seem to be anything Biblical against claiming, “Probably, Jesus will return on day X.”

Yeah, but the problem is, Camping didn’t refrain from a knowledge claim. He didn’t say “probably.” He claimed absolute certainty, so, my criticism stands.

mojo.rhythm June 4, 2011 at 6:09 am

Cl,

Does anyone else find it odd that Fyfe takes such issue with Rufus? Okay, so the analogy wasn’t 100% spot-on. Big deal. At least Rufus isn’t out there preaching against claims without evidence, then making claims without evidence. IMO, baseless moral crusading is a far more flagrant offense to rational thought.

You are starting to sound like a broken record. Is there a person on the other end, or is this an automatic response generator?

cl June 4, 2011 at 10:06 am

Nah, it’s actually just some PHP code that sniffs out inconsistency and crafts responses thus… ;)

Rufus June 4, 2011 at 12:31 pm

cl,

I agree. Funny thing is, elsewhere, Luke — Mr. “intuition should not be relied upon in the search for truth” — claims the direct opposite with no evidence whatsoever. Remember when he said that friendly AI research was the “single most important” thing we could do with our charity dollars? I guess all those sermons against intuition don’t apply when it comes time for Luke to preach his ethos of choice, eh?

I do remember that post and found it shocking. There are so many great ways to give back to one’s community, but Luke has advocated that the lion’s share should go towards AI research. It appears that he has also taken the Singularity claims seriously enough to redirect his career path and dedicate his time and talent to an organization that hopes to bring about “friendly” AI. I have to say that to dedicate one’s life to something like that takes a tremendous amount of faith. It is a faith that I don’t share. I suppose that I am too skeptical to think that the major philosophical issues necessary to believe in the possibility of utopia and friendly “AI” are settled (and that we urgently need to move forward). When you are dealing with technology that could potentially wipe out humanity, and all other sentient life, you might want to make sure you’ve carried all of your philosophical “ones”, as it were.

Pax!

cl June 5, 2011 at 8:41 am

mojo.rhythm,

To give you a more serious answer, I realize people probably roll their eyes and go, “Oh, there’s cl criticizing Fyfe for making claims without evidence again,” but I’m not really in this to maintain good favor amongst the local cadre of internet atheists. I will repeat my criticism over and over until Alonzo Fyfe takes some responsibility and admits that “people generally” have reason to promote an aversion to real-world claims without evidence–himself included. He owes us an explanation. You know it, I know it, and several others know it. No offense intended; it just seems odd that you jest at me instead of holding Fyfe accountable. We all know that if I were the one making claims without evidence here, y’all would be having my ass for it.

Rufus,

I do remember that post and found it shocking.

What did you find shocking? The lack of even a single iota of evidence for Luke’s claim? The fact that it appears to be sheer intuition while Luke rails against intuition every other day of the week? Screw 2046; the fact that Luke is apparently wholly unconcerned with C/2010 X1, which–if the scientific evidence is to be believed–has the potential to wallop us with a cataclysmic event this fall? Yeah, color me shocked, too. If anything, now might be the time to run away from the Bay Area! At least ’til this thing passes over [pun intended].

I have to say that to dedicate one’s life to something like that takes a tremendous amount of faith. It is a faith that I don’t share.

Exactly. You see, we can’t escape it: we all live by faith. The epistemological veil forces us to do so. Luke and other atheists would have us think they’re above this, but they’re not.

Rufus June 5, 2011 at 11:38 am

cl,

What did you find shocking? The lack of even a single iota of evidence for Luke’s claim? The fact that it appears to be sheer intuition while Luke rails against intuition every other day of the week?

I think it is the certitude. I mean, talk about putting all of your eggs in one basket. What about shelter for the homeless, food for the hungry, medicine for the sick, what about those who suffer today? Should we divert most of our resources to bring about some technocratic utopia in the far future?

cl June 5, 2011 at 12:52 pm

Rufus,

What about shelter for the homeless, food for the hungry, medicine for the sick, what about those who suffer today?

Screw ‘em! We’ve got robots to program with desirism!

Should we divert most of our resources to bring about some technocratic utopia in the far future?

I don’t know, I’d refer you to Alonzo Fyfe on that one. He’s apparently got the knowledge and computing power available to make calculations that lead to conclusions such as, “We’d be better off without spectator sports and reality TV.”

Michael June 5, 2011 at 3:00 pm

cl,

I think it is the certitude. I mean, talk about putting all of your eggs in one basket. What about shelter for the homeless, food for the hungry, medicine for the sick, what about those who suffer today? Should we divert most of our resources to bring about some technocratic utopia in the far future?

Wouldn’t be so surprised. Under many forms of consequentialism, this is exactly the ends-justify-the-means type of reasoning that is used. The standard response is that we could never actually know whether the end result would be worth it, so you can’t make that judgement; but then you have an epistemological objection against the theory on your hands.

woodchuck64 June 5, 2011 at 6:15 pm

Rufus,

When you are dealing with technology that could potentially wipe out humanity, and all other sentient life, you might want to make sure you’ve carried all of your philosophical “ones”, as it were.

That seems very much like the point. “Friendly AI” is not just computer science but ethics and philosophy. If you appreciate how quickly technology is moving, how potentially powerful a strong AI could be, and how potentially dangerous an AI without ethics could be, you want to use all the tools at your disposal now.

Rufus June 5, 2011 at 7:43 pm

woodchuck64,

That seems very much like the point. “Friendly AI” is not just computer science but ethics and philosophy. If you appreciate how quickly technology is moving, how potentially powerful a strong AI could be, and how potentially dangerous an AI without ethics could be, you want to use all the tools at your disposal now.

I agree with you that technology is moving fast. It is sadly moving much more quickly than our moral progress. Until we figure out the proper moral theory, I hope they have the good sense to program Asimov’s “Three Laws” and a fail-safe switch into every AI.

Bob Seidensticker June 6, 2011 at 7:04 am

The Freedom From Religion Foundation has made a formal request that the California attorney general investigate this matter.

http://ffrf.org/news/releases/ffrf-calls-for-fraud-probe-into-rapture-campaign/

woodchuck64 June 6, 2011 at 8:11 am

Rufus,

I agree with you that technology is moving fast. It is sadly moving much more quickly than our moral progress. Until we figure out the proper moral theory, I hope they have the good sense to program Asimov’s “Three Laws” and a fail-safe switch into every AI.

Agreed. That’s why I think Luke might be correct that AI ethics is the most important task facing humanity today. Are Asimov’s “Three Laws” and a fail-safe enough? I have doubts about that simply because AIs will likely be software and the temptation to allow AIs to improve their own software is too great. We need to figure out the proper moral theory for AIs, and that requires understanding our own morality.

cl June 6, 2011 at 11:19 am

woodchuck64,

That’s why I think Luke might be correct that AI ethics is the most important task facing humanity today.

Well sure, he “might” be correct, but Christians “might” be correct, Scientologists “might” be correct, and a host of other ideas “might” be correct. For me, that’s not the point. Luke espouses this standard by which we are to eschew claims based on intuition, right? Well, when it suits him, he does the same damn thing. It’s a matter of holding oneself to the same standard one holds others to. That’s the issue, at least for me, and it irks me to see others apparently unconcerned with this inconsistency. It’s as if atheists would rather make excuses for Luke than hold him to the same standard he holds theists.

Rufus June 6, 2011 at 6:50 pm

woodchuck64,

Agreed. That’s why I think Luke might be correct that AI ethics is the most important task facing humanity today. Are Asimov’s “Three Laws” and a fail-safe enough? I have doubts about that simply because AIs will likely be software and the temptation to allow AIs to improve their own software is too great. We need to figure out the proper moral theory for AIs, and that requires understanding our own morality.

I have my doubts that such things would be enough too. If the Singularity is possible, it will arrive before we have figured out a perfect moral system. You can take that to the bank.

In the immortal words of Dr. Ian Malcolm:

…[S]cientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.

In the past 2,400 years of debate, we have not figured out an infallible moral theory. Do I think a few computer geeks will crack morality for us? No. They will program something flawed, and we will not be able to figure out exactly what the consequences of those programs will be until they are beyond our power to control. So, I don’t think we should pursue this sort of research. But anyone who says this will be labeled “anti-technology” and “backward”. People will do it anyways. You can’t stop “progress.”
