The Nature of Mind

by Luke Muehlhauser on July 20, 2009 in General Atheism,Reviews

I’m blogging my way through Sense and Goodness Without God, Richard Carrier’s handy worldview-in-a-box for atheists. (See the post index for all sections.) In my previous post, I talked about Carrier’s sections on abstract objects and reductionism. Today I begin section III.6 The Nature of Mind.

The only evidence for souls we have is really just evidence of minds, and recent science has shown that there is never a mind without a brain. So, your mind is really just your brain in action. As Steven Pinker wrote, “The mind is what the brain does.”

Carrier writes:

The word “mind” refers to a particular pattern of brain content and activity, which includes not only the input and processing of data (sensation), but also the recognition of patterns in that data (perception), the analysis of relationships among those patterns (reason), and above all, the ability to recognize a particular pattern: that of a self.

A simple brain (like a worm’s) seems to perform only very simple calculations (“move toward light”). But an advanced brain, like a dog’s, can simulate a realistic hunt while the dog is dreaming – so realistic that the dog reacts to it as though it were real.
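That “move toward light” computation is simple enough to write down in a few lines. A toy sketch, where the sensor names and threshold are invented for illustration, not a model of any real nervous system:

```python
def worm_step(left_light: float, right_light: float) -> str:
    """Toy phototaxis: steer toward whichever side senses more light."""
    if abs(left_light - right_light) < 0.05:  # roughly equal: keep going
        return "forward"
    return "turn_left" if left_light > right_light else "turn_right"

print(worm_step(0.9, 0.1))  # → turn_left
```

A rule this small needs no memory, no model of the world, and no self – which is exactly the contrast Carrier draws with the dreaming dog.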

This capacity probably evolved because it’s useful to be able to simulate certain situations. It’s useful to train in a flight simulator before flying a real plane. And it’s useful for a dog to simulate a hunt and imagine different possible outcomes and decisions without exposing itself to the danger and energy expense of a real hunt.

This is how we reason, how we notice and simulate the “inner self,” how we imagine and plan, how we use symbols (including words) to stand for things in the real world.

John Searle claims to have “disproved” all this with his Chinese Room thought experiment, but Carrier ably rebuts it. Likewise with Searle’s assertion that “syntax is not sufficient for semantics.” I won’t dwell on those arguments here.

Mind as Machine

According to Carrier (and myself), the mind is a machine – perhaps the most complex machine in the universe. It’s so complex that it produces things quite unlike anything else – until recently.

For example, it produces thoughts, which are “the brain’s own sensory perception of… calculation and analysis.” Just as the eyes receive light and translate it into electrical signals that are recognized to fit certain patterns, there are also brain regions that act as “eyes” that watch the brain itself – the calculations and processes it is carrying out.

The most obvious comparison to be made is the computer. The computer takes input from the outside world – through a mouse, keyboard, webcam, etc. – and then processes it, fits it into certain patterns, and does things to the input based on which patterns it matches. The mouse, keyboard, and webcam are the “eyes” of the computer.

But the computer also has internal eyes to watch its own processes. Antivirus software, for example, watches everything your computer does for patterns that are known to cause harm – and reacts if it sees one.
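The antivirus analogy can be made concrete: an “internal eye” is just a process that watches a stream of the machine’s own actions and reacts when one matches a known-harmful pattern. A minimal sketch, with all the signatures and action names invented for illustration:

```python
# Known-harmful patterns of behavior ("signatures"), invented for illustration.
HARMFUL_SIGNATURES = {("write", "boot_record"), ("open", "password_store")}

def internal_eye(actions):
    """Watch the machine's own actions; return those matching a signature."""
    return [action for action in actions if action in HARMFUL_SIGNATURES]

flagged = internal_eye([("read", "photo.jpg"), ("write", "boot_record")])
print(flagged)  # → [('write', 'boot_record')]
```

The point of the analogy is only that the watcher and the watched are parts of the same machine – the “eye” here is just more code inspecting the machine’s own activity.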

The brain also has abilities – reinforced neural pathways that produce a certain output for a given input. We also have memories – stored recognition-sets, related to each other in various ways. And we have traits – reinforced neural pathways for emotional responses and impulses sent out in response to certain inputs, much like abilities.

A computer can also do these things. It has skills – ways of responding in a useful way to certain types of input. It can store information and memories. It can also have traits. Microsoft Windows, for example, seems to have many “reinforced pathways” that lead to “freezing” and “crashing” in response to – well, damn near anything.

A century ago, there was nothing else besides the brain that could do all these things – not even close. Now, the computer’s ability to mimic the brain grows more impressive every day. The one last thing that a brain can do and a computer can’t is recognize itself. That requires an awful lot of computing power, and computers aren’t quite there… yet.

But computing power doubles every two years. At this rate, most experts expect a supercomputer to surpass the power of the human brain somewhere between 2030 and 2080. Of course, for a computer to recognize itself as human brains do, it must not only have the raw processing power necessary, it must also be programmed to recognize itself – just as evolution programmed human neural circuits to recognize their own processes. That will require some inventive programming, and more work decoding how the brain already does that.
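The arithmetic behind that kind of projection is just repeated doubling. A sketch, where the shortfall factor is a made-up illustration rather than any real measurement of brains or supercomputers:

```python
import math

# If capability doubles every 2 years, how long until a machine that is
# 1/1000th of the brain's capacity reaches parity?
shortfall = 1000        # hypothetical gap factor, for illustration only
doubling_period = 2     # years per doubling (the trend cited above)
doublings = math.ceil(math.log2(shortfall))
print(doubling_period * doublings)  # → 20 (years)
```

Even a thousandfold gap closes in about two decades under steady doubling, which is why such projections cluster within a few decades rather than centuries.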

Frankly, it will not surprise me if humans create an artificial consciousness and intelligence that rivals human consciousness and intelligence within my lifetime. That’s not science fiction. It’s just a reasonable prediction based on current trends.

Qualia

It is often claimed that naturalism cannot explain what philosophers call “qualia” – subjective experiences like the “blueness” of blue or the “urgency” of a desire.

And this is true, so far. But that says no more than that in the year 1500 naturalism could not account for lightning. The reason I expect naturalism will eventually be found to account for qualia is very simple: everything else we’ve been able to thoroughly investigate has turned out to be explained by natural processes. To expect otherwise is, well, against all the facts we do have. And certainly, “Goddidit” is no explanation.

But Carrier has some guesses as to what qualia are:

I suspect qualia are like the pawns in a game of computer chess: they don’t really exist anywhere, beyond the shifting and complicated pattern of behavior of certain electrical signals on a grid. Just as a difference of pattern in this activity makes all the difference between a pawn and a knight, so also a different pattern of activity in the brain makes all the difference between an experience of ‘redness’ and an experience of ‘loudness’, or nothing at all.

…I think it is safe to say that any process that produces virtual models, and analyzes and reacts to them intelligently, probably experiences qualia, since ‘perceiving’ the attributes of a perceptual model is exactly the same thing as perceiving its qualities… qualia. And we know all higher animals do this. So if we could get a mouse to talk, it could probably tell us all about what it is ‘like’ to ‘see’ a light at the end of a tunnel or to ‘feel’ the heat of a stove. How could it not? Would it make any sense for the mouse to say it was ambling toward a bright light rather than a dim one… and at the same time report to us that it was having no experiences of [lightness or dimness]? I can’t make any sense of such a thing… To explain perception is to explain qualia: they are one and the same thing. And advanced computers, still pure machines, have achieved perception, at least on the level of a mouse. I think it follows they have experienced qualia. They just can’t appreciate or talk about them, any more than a mouse can.
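Carrier’s chess analogy is easy to make literal: in a chess program, a “pawn” exists nowhere in the hardware – it is only a pattern of values in memory, and changing the pattern is all it takes to change the piece. A minimal sketch:

```python
EMPTY, PAWN, KNIGHT = 0, 1, 2

# A chess "board" is just a list of integers; a pawn is nothing but the
# pattern 1 sitting at some index.
board = [EMPTY] * 64
board[12] = PAWN      # "there is a pawn on e2"
board[12] = KNIGHT    # the very same memory now "is" a knight
print(board[12] == KNIGHT)  # → True
```

Nothing physical was created or destroyed when the pawn “became” a knight; only the pattern changed – which is Carrier’s suggestion for how a pattern of brain activity can make the difference between “redness” and “loudness.”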

Consider blindsight. Some people lose their “consciousness” or “qualia” of vision, and yet they can still point to things (though less reliably). Due to brain damage, their vision has been cut off from those “eyes” that watch internal brain processes, and sight has moved from the conscious to the subconscious. And yet, if we could hook up that disconnected vision center to a new set of internal “eyes” that watches its own processes, and a system for processing language and speech, we could probably ask that part of the brain what it is like to see the things it knows to point at, and it would tell us.

In fact, if we can learn how to rewire the brain such that previously isolated parts are plugged into those internal eyes (which are located in the cerebral cortex, by the way), we may be able to consciously experience all the other things our brain does that we aren’t consciously aware of. I’m not sure I’d want to, but I’m sure those who dig LSD would think that quite a trip.

Knowledge

And knowledge, of course, is a set of information stored in the brain that makes us confident of a particular proposition. The greater the amount, clarity, and coherence of information, the more we are confident about the conclusions we draw from it. All this is a pattern of neurons in the brain, just like computer information is a pattern of electrons.

The human’s great advantage is that we evolved language and culture. This allows us to transmit knowledge accumulated and refined over millions of years into the mind of a 5-year-old. No other animal can compete with that.

This has been a very brief overview of what the mind is according to naturalism. Next, we’ll discuss III.6.6 The Evidence for Mind-Body Physicalism.


Comments

Zak July 20, 2009 at 10:20 pm

Very interesting read.
However, I would recommend the book “On Intelligence” by Jeff Hawkins (a computer scientist turned neuroscientist). In my opinion, he demolishes the idea that some sort of intelligent (or self-aware) computer is just in need of more processing power. The real problem is that computers and brains are, where it matters, completely different.
Anyway, it is a really interesting read. I’m sure you would enjoy it. He also has a TED talk.
 


Yair July 20, 2009 at 11:37 pm

I think it is safe to say that any process that produces virtual models, and analyzes and reacts to them intelligently, probably experiences qualia, since ‘perceiving’ the attributes of a perceptual model is exactly the same thing as perceiving its qualities

Why is that “safe to say”? The whole question is how does it come about that a physical process starts ‘perceiving’ something, rather than reacting mindlessly. That the virtual model exists in the process means that the process reacts to it, not that it perceives it.
 
I do not agree the mind-body problem is akin to other problems that naturalism has resolved. There is a real conceptual gap between subjective and objective descriptions, a gap that doesn’t exist in as-yet-unexplained objective natural phenomena such as high-temperature superconductivity (to take a modern version of the lightning problem). Consciousness is, as David Chalmers put it, a Hard Problem.
 
I have yet to run across a good solution to this problem. The theistic solutions are of course childish and naive; they don’t solve the problem, they just make it more mysterious, and do not fit with neuroscience. Substance dualism likewise is a non-starter, incompatible with the brain.
The closest I’ve come to resolving the issue for myself is the (surprising) conclusion of the neutral monism espoused by Bertrand Russell – or, to call the baby by its true name, panpsychism: the belief that there is no dead matter, but rather that matter is fundamentally conscious, fundamentally aware, fundamentally perceives. This does not, however, completely resolve the problem; it merely blunts it somewhat. For even if each individual particle is aware (of its internal states, presumably?), it isn’t clear how a singular consciousness, the self, can emerge from a structure such as the brain. This is the Composition Problem that the Hard Problem is reduced to under neutral monism. I’ve yet to find a reasonable answer to it, although information seems to be the key.


Gaylen Moore July 21, 2009 at 5:12 am

I agree that we need a naturalistic (or at least non-theistic, non-dualistic) theory of mind and qualia. But based on what I read in your essay, I think you are still a long way from anything like a genuine theory of qualia. (Not your fault, of course, we are all struggling here.) You wave your hands at the problem, saying that somehow the brain produces qualia, but I don’t think this approach can ever work. I don’t think it is a matter of “production.” If you buy into the “production” metaphor, then you have unwittingly bought into a form of dualism. If X produces Y, then X is not the same as Y. And you’ve also bought into a form of subjectivism that leads to skepticism about the external world: if what you perceive is not the world, but only a mental representation of the world (a model of the world built out of purely private qualia), then clearly qualia cannot simply be the material world.

Personally, I am working on a theory according to which qualia are not “in the brain” or “produced by” the brain. I will argue that the world itself is qualitative (“blue” is in some sense “in the world itself” – not just “in my mind”). To make this approach work, I am taking guidance from process philosophy (A.N. Whitehead, and others), but I am trying to avoid the sort of panpsychism that this approach typically implies.

The trick here is to understand the various philosophical traditions (and many mystical traditions) that dismantle the “subject/object” dichotomy at the deepest metaphysical level. The idea is that the world is essentially qualitative, and I am essentially composed of the world (I am, in a manner of speaking, “the world itself” from a given perspective). This leads to a radical re-thinking of what it means to say that the world is “out there” – but I believe that this process-philosophy approach is, in the long run, the sort of approach that is most consistent with science.

In any sort of explanation, “substance” is always a dead end. Once you posit a fundamental substance, any deeper explanation becomes impossible (by the very nature of “fundamental” and “substance”). But a fundamental process is different. You still have an explanatory “dead end” because you can’t explain why a truly fundamental process is what it is, but on the other hand, a process (even a fundamental one) is always – by its very nature – complex and dynamic, and thus it can be modeled (dynamical systems modeling), and the models can be used for further and further explanation.

Bottom line: the world is a self-organizing qualitative system and “I” am essentially a unique “center of perspective” COMPOSED OF the world. I don’t “produce” qualia in my brain, I AM qualia (self-organizing qualia, to be more precise). And my brain is basically a portion of the world that, due to its complexity, is capable of self-reference. In other words, a physical brain is the world’s way of experiencing itself from a limited perspective.


Reginald Selkirk July 21, 2009 at 5:50 am

Gaylen Moore: I agree that we need a naturalistic (or at least non-theistic, non-dualistic) theory of mind and qualia. But based on what I read in your essay, I think you are still a long way from anything like a genuine theory of qualia.

I agree that we need a naturalistic theory of mind, but I am not certain that it will include qualia. Perhaps we should discard it as the medieval idea that it is, and start fresh building up from what we know of neuroscience: synapses, action potentials, neural networks. Qualia may be another phlogiston.
 


Reginald Selkirk July 21, 2009 at 5:53 am

As for building computers which model the human brain/mind – it might be a useful approach to attempting to understand the mind, but it does not strike me as the best way to actually carry out computation. We know a great deal about perceptual limitations, cognitive biases, etc., which I have invoked on several occasions. Starting from scratch, building in silicon, we should be able to do better than that.


Reginald Selkirk July 21, 2009 at 5:55 am

This capacity probably evolved because it’s useful to be able to simulate certain situations. It’s useful to train in a flight simulator before flying a real plane. And it’s useful for a dog to simulate a hunt and imagine different possible outcomes and decisions without exposing itself to the danger and energy expense of a real hunt.

This is a bit off the mainline topic, but this paragraph reads like a just-so story. I have seen several putative explanations for dreaming, but I don’t think there is any convergence of explanation, or any reason to believe that we have a correct understanding of its purpose – if it even has one.



lukeprog July 21, 2009 at 6:43 am

Zak, I specifically did not say that all computers need is more processing power. Also, note that modern computers are Turing-complete, and can simulate structures more similar to brains – for example, neural networks.


lukeprog July 21, 2009 at 6:46 am

Yair,

Piero Scaruffi is also (mostly) a proponent of panpsychism. He has a free online book that helpfully summarizes all of the scientific and philosophical research on the problem of mind. It’s called The Nature of Consciousness.


lukeprog July 21, 2009 at 6:54 am

Gaylen Moore,

Neither Carrier nor myself pretend to be able to explain how the mind works. We are only just beginning to understand.


lukeprog July 21, 2009 at 6:56 am

Cool, I didn’t know there was a video of that!


g July 21, 2009 at 2:27 pm

In case you didn’t notice, Luke, the comment from “Julian Roberts” is blogspam.


Dace July 21, 2009 at 3:41 pm

“For even if each individual particle is aware (of its internal states, presumably?), it isn’t clear how a singular consciousness, the self, can emerge from a structure such as the brain” ~ Yair
“And my brain is basically a portion of the world that, due to its complexity, is capable of self-reference.” ~ Gaylen Moore
Looks to me like Panpsychism and the Process-Philosophy approach are still going to invoke an arrangement of parts in their explanations, which presumably explains human consciousness in terms of the functionality of this arrangement. If that’s so, I don’t really see what the advantage is in these views over a straightforward materialism.
“The trick here is to understand the various philosophical traditions (and many mystical traditions) that dismantle the “subject/object” dichotomy at the deepest metaphysical level. ” ~ Gaylen Moore
We certainly need to overcome this, but I don’t think we need to do anything drastic. Though ‘subjective’ and ‘objective’ can have meanings that are mutually exclusive, ‘subject’ and ‘object’ do not, so there is no general difficulty in supposing that some objects are subjects. If a functional theory of mind is correct, then this is easily explicable: objects are subjects because objects in certain arrangements have functional properties, and the right functional properties will instantiate a mind.


lukeprog July 21, 2009 at 4:59 pm

@cartesian:

I’m still looking forward to your next response here.


Kip July 21, 2009 at 5:40 pm

Here’s the video that Zak mentioned:  http://www.ted.com/talks/jeff_hawkins_on_how_brain_science_will_change_computing.html
I just watched it.  Very cool stuff.


Kip July 21, 2009 at 5:55 pm

My guess is that we will eventually have both:  artificial intelligence that replicates human intelligence, but has a separate processing system for analytical/computational intelligence.  It will be the best of both worlds:  very fast recognition and “fuzzy logic”, with parallel processing for computationally-intense operations.


Zak July 21, 2009 at 7:59 pm

Luke,
My bad. I misunderstood the paragraph regarding computer power, etc.
Regardless, check out Hawkins’s book if you get a chance. It is extremely thought-provoking. He also discusses neural networks quite a bit, and their limitations.


Yair July 21, 2009 at 9:23 pm

Dace: “For even if each individual particle is aware (of its internal states, presumably?), it isn’t clear how a singular consciousness, the self, can emerge from a structure such as the brain” ~ Yair

Looks to me like Panpsychism and the Process-Philosophy approach are still going to invoke an arrangement of parts in their explanations, which presumably explains human consciousness in terms of the functionality of this arrangement. If that’s so, I don’t really see what the advantage is in these views over a straightforward materialism.

Well, under panpsychism consciousness is not explained by the functionality of the ingredients. Rather, consciousness-of-x – the feeling-like-something, subjective experience – is taken as a primitive. Human consciousness is (supposedly) explained by how these separate sensations merge into one by their functional relations, but consciousness as such is not. So the advantage is resolving the hard problem (how is it that dead matter feels), but panpsychism by itself doesn’t solve the composition problem (how is it that the complex matter structure of the brain has a single, unified consciousness).
 

However, panpsychism is the idea that all matter (energy?) senses, and is not in principle limited to reductionism. For example, one can imagine a panpsychism that works somewhat like the holistic aspects of quantum-mechanical descriptions. In this case a system’s mental states would be akin to the geometrical projection of the mental states of the greater system it is a part of, and any division of the whole of reality has its mental states determined this way. A simple analogy is a vector (arrow) in three dimensions; you can project it onto any two-dimensional plane (the x-y plane, the x-z plane, a plane between the two…), creating a two-dimensional vector (arrow), or onto any one-dimensional axis (in any direction).

[Imagine reality as composed of a very high dimensional space, and you essentially have the quantum mechanical state (just make the vectors Complex, instead of Real, and you're basically there). The extra dimensions allow more ways to divide reality, more directions and choices of axis to project onto.]

Though ‘subjective’ and ‘objective’ can have meanings that are mutually exclusive, ‘subject’ and ‘object’ do not, so there is no general difficulty in supposing that some objects are subjects. If a functional theory of mind is correct, then this is easily explicable: objects are subjects because objects in certain arrangements have functional properties, and the right functional properties will instantiate a mind.

That is not the meaning of subjective being used here. The intention is to refer to the feeling-of, to subjective experience, to awareness-of. It is at least not clear why functional relations would imply subjective relations, and indeed it seems implausible that this is the whole story. Consider that the same physical structure can be described in several functional ways, and harbor multiple functional levels, and that the same functional relations can be implemented in very different physical structures.
 


Dace July 22, 2009 at 4:40 pm

@Yair.
Well, the composition problem seems hard enough that I don’t think panpsychism presents an advantage. It seems to me that the skepticism which motivates a panpsychic approach takes it as obvious that subjective experience is unanalyzable into further bits, but if so, then I fail to see why human consciousness should be analyzable into bits, even if we stipulate that these are conscious.
Actually, the idea reminds me of the monadology. I wonder if anyone else has noticed that.
“That is not the meaning of subjective being used here. The intention is to refer to the feeling-of, to subjective experience, to awareness-of. ”
‘Subjective’ means ‘of a subject’, and as I have explained, a subject can be an object as well. I don’t see your problem – all I’m saying is that the terminology need not determine in advance the failure of materialistic theories of mind.
“It is at least not clear why functional relations would imply subjective relations, and indeed it seems implausible that this is the whole story. Consider that the same physical structure can be described in several functional ways, and harbor multiple functional levels, and that the same functional relations can be implemented in very different physical structures.”
The story is likely to be a long and detailed one. Given that, I find it premature to say that functionality can’t do the job. Indeed, I’d put against your basic assumption that all matter is conscious a basic assumption that all functional systems which are self-monitoring are conscious – an assumption which is far less presumptive.


Yair July 22, 2009 at 10:21 pm

 

Dace: @Yair. Well, the composition problem seems hard enough that I don’t think panpsychism presents an advantage.

I consider it a rather triumphant success. :)

It seems to me that the skepticism which motivates a panpsychic approach takes it as obvious that subjective experience is unanalyzable into further bits, but if so, then I fail to see why human consciousness should be analyzable into bits, even if we stipulate that these are conscious.

Not really accurate. Panpsychism takes it as true that subjective experience cannot be analyzed into objective descriptions. So human consciousness can be decomposed into bits, but only to conscious bits. No amount, complexity, or relations of dead matter will result in living experience.

Actually, the idea reminds me of the monadology. I wonder if anyone else has noticed that.

There are definitely similarities. I’ve not read any scholarly work on it, however, and I don’t know enough to make much more intelligent comments on that myself.

‘Subjective’ means ‘of a subject’, and as I have explained, a subject can be an object as well. I don’t see your problem – all I’m saying is that the terminology need not determine in advance the failure of materialistic theories of mind.

That’s just not the meaning of subjectivity used here. Don’t let language get in the way of communication. The argument for panpsychism is that dead matter cannot lead to consciousness, no matter how it’s arranged. The simplest assumption is that all matter is conscious, so that’s what panpsychism assumes. That consciousness belongs to subjects is… besides the point.

The story is likely to be a long and detailed one. Given that, I find it premature to say that functionality can’t do the job. Indeed, I’d put against your basic assumption that all matter is conscious a basic assumption that all functional systems which are self-monitoring are conscious – an assumption which is far less presumptive.

I think it’s fair to say that functionality alone is not enough, because of the arguments I raised. Furthermore, even if functionality is the answer, it leaves consciousness very mysterious – it isn’t at all clear WHY consciousness suddenly emerges, it just determines that it does. There is no underlying theory, no real understanding of the phenomena. It’s like Newton’s theory of gravity, which even he himself realized was mysterious.
 
 
Your suggested assumption is more presumptive – it supposes that a new kind of property, an intrinsic property or “feeling”, arises under particular circumstances. I maintain that this sort of property is endemic. My assumption is simpler, more uniform. It doesn’t involve a mysterious emergence of subjective properties out of objective existence; the emergence of a “this feels like” from “this behaves like”; the emergence of a “this is aware of” from “this responds to”.

