
Consciousness and the Paranormal — Part 13


I don't remember our reading papers by Cleeremans here in the past. Let us look at this one:


HYPOTHESIS AND THEORY ARTICLE
Front. Psychol., 09 May 2011 | https://doi.org/10.3389/fpsyg.2011.00086
The radical plasticity thesis: how the brain learns to be conscious
Axel Cleeremans*
  • Consciousness, Cognition and Computation Group, Université Libre de Bruxelles, Bruxelles, Belgium
In this paper, I explore the idea that consciousness is something that the brain learns to do rather than an intrinsic property of certain neural states and not others. Starting from the idea that neural activity is inherently unconscious, the question thus becomes: How does the brain learn to be conscious? I suggest that consciousness arises as a result of the brain’s continuous attempts at predicting not only the consequences of its actions on the world and on other agents, but also the consequences of activity in one cerebral region on activity in other regions. By this account, the brain continuously and unconsciously learns to redescribe its own activity to itself, so developing systems of meta-representations that characterize and qualify the target first-order representations. Such learned redescriptions, enriched by the emotional value associated with them, form the basis of conscious experience. Learning and plasticity are thus central to consciousness, to the extent that experiences only occur in experiencers that have learned to know they possess certain first-order states and that have learned to care more about certain states than about others. This is what I call the “Radical Plasticity Thesis.” In a sense thus, this is the enactive perspective, but turned both inwards and (further) outwards. Consciousness involves “signal detection on the mind”; the conscious mind is the brain’s (non-conceptual, implicit) theory about itself. I illustrate these ideas through neural network models that simulate the relationships between performance and awareness in different tasks.

Consider the humble but proverbial thermostat. A thermostat is a simple device that can turn a furnace on or off depending on whether the current temperature exceeds a set threshold. Thus, the thermostat can appropriately be said to be sensitive to temperature. But is there some sense in which the thermostat can be characterized as being aware of temperature? Contra Chalmers (1996), I will argue that there is no sense in which the thermostat can be characterized as being aware of temperature. There are two important points that I would like to emphasize in developing this argument. The first is that there is no sense in which the thermostat can be characterized as being aware of temperature because it does not know that it is sensitive to temperature. The second point is that there is no sense in which the thermostat can be characterized as being aware of temperature because it does not care about whether its environment is hot or cold. I will further argue that these two features – knowledge of one’s own internal states and the emotional value associated with such knowledge – are constitutive of conscious experience. Finally, I will argue that learning (or, more generally, plasticity) is necessary for both features to emerge in cognitive systems. From this, it follows that consciousness is something that the brain learns to do through continuously operating mechanisms of neural plasticity. This I call the “Radical Plasticity Thesis.”
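Cleeremans' point about the thermostat can be made concrete. A minimal Python sketch (my illustration, not from the paper): the device's entire repertoire is a single comparison, with no record of, and no valuation over, the states it passes through:

```python
class Thermostat:
    """A bare threshold device: sensitive to temperature, but with
    no representation of its own sensitivity and no preference for
    one of its states over another."""

    def __init__(self, setpoint):
        self.setpoint = setpoint

    def furnace_on(self, temperature):
        # The entire "behavior": one comparison. Nothing here
        # records, redescribes, or evaluates the state the
        # device currently finds itself in.
        return temperature < self.setpoint


t = Thermostat(setpoint=20.0)
print(t.furnace_on(18.0))  # True: below threshold, furnace on
print(t.furnace_on(22.0))  # False: above threshold, furnace off
```

On Cleeremans' argument, what is missing is not complexity but precisely the two features he names: the device holds no knowledge that it is sensitive to temperature, and nothing about either state matters to it.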

Information processing can undoubtedly take place without consciousness, as abundantly demonstrated not only by empirical evidence (the best example of which is probably blindsight), but also by the very fact that extremely powerful information-processing machines, namely computers, have now become ubiquitous. Only but a few would be willing to grant any quantum of conscious experience to contemporary computers, yet they are undeniably capable of sophisticated information processing – from recognizing faces to analyzing speech, from winning chess tournaments to helping prove theorems. Thus, consciousness is not information processing; experience is an “extra ingredient” (Chalmers, 2007a) that comes over and beyond mere computation.

With this premise in mind – a premise that just restates Chalmers’ (1996) hard problem, that is, the question of why it is the case that information processing is accompanied by experience in humans and other higher animals – there are several ways in which one can think about the problem of consciousness.
One is to simply state, as per Dennett (e.g., Dennett, 1991, 2001) that there is nothing more to explain. Experience is just (a specific kind of) information processing in the brain; the contents of experience are just whatever representations have come to dominate processing at some point in time (“fame in the brain”); consciousness is just a harmless illusion. From this perspective, it is easy to imagine that machines will be conscious when they have accrued sufficient complexity; the reason they are not conscious now is simply because they are not sophisticated enough: They lack the appropriate architecture perhaps, they lack sufficiently broad and diverse information-processing abilities, and so on. Regardless of what is missing, the basic point here is that there is no reason to assume that conscious experience is anything special. Instead, all that is required is one or several yet-to-be-identified functional mechanisms: Recurrence, perhaps (Lamme, 2003), stability of representation (O’Brien and Opie, 1999), global availability (Baars, 1988; Dehaene et al., 1998), integration and differentiation of information (Tononi, 2003, 2007), or the involvement of higher-order representations (Rosenthal, 1997, 2006), to name just a few.

Another perspective is to consider that experience will never be amenable to a satisfactory functional explanation. Experience, according to some (e.g., Chalmers, 1996), is precisely what is left over once all functional aspects of consciousness have been explained. Notwithstanding the fact that so defined, experience is simply not something one can approach from a scientific point of view, this position recognizes that consciousness is a unique (a hard) problem in the Cognitive Neurosciences. But that is a different thing from saying that a reductive account is not possible. A non-reductive account, however, is exactly what Chalmers’ Naturalistic Dualism attempts to offer, by proposing that information, as a matter of ontology, has a dual aspect – a physical aspect and a phenomenal aspect. “Experience arises by virtue of its status as one aspect of information, when the other aspect is found embodied in physical processing” (Chalmers, 2007b, p. 366). This position leads him to defend the possibility that experience is a fundamental aspect of reality. Thus, even thermostats, for instance, may be endowed with very simple experiences, in virtue of the fact that they can toggle in two different states.

What, however, do we mean when we speak of “subjective experience” or of “quale”? The simplest definition of these concepts (Nagel, 1974) goes right to the heart of the matter: “Experience” is what it feels like for a conscious organism to be that organism. There is something it is like for a bat to be a bat; there is nothing it is like for a stone to be a stone. As Chalmers (2007a) puts it: “When we see, for instance, we experience visual sensations: The felt quality of redness, the experience of dark and light, the quality of depth in a visual field” (p. 226).

Let us try to engage in some phenomenological analysis at this point to try to capture what it means for each of us to have an experience. Imagine you see a patch of red (Humphrey, 2006). You now have a red experience – something that a camera recording the same patch of red will most definitely not have. What is the difference between you and the camera? Tononi (2007), from whom I borrow this simple thought experiment, points out that one key difference is that when you see the patch of red, the state you find yourself in is but one among billions, whereas for a simple light-sensitive device, it is perhaps one of only two possible states – thus the state conveys a lot more differentiated information for you than for a light-sensitive diode. A further difference is that you are able to integrate the information conveyed by many different inputs, whereas the chip on a camera can be thought of as a mere array of independent sensors among which there is no interaction.
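Tononi's contrast between you and the light-sensitive diode can be put in plain information-theoretic terms. A small sketch (my illustration, assuming equally likely, discriminable states; the state counts are purely illustrative):

```python
import math

def bits(n_states):
    # Information conveyed by discriminating one state among
    # n equally likely alternatives, measured in bits.
    return math.log2(n_states)

# A photodiode that only distinguishes light from dark:
print(bits(2))      # 1.0 bit

# A system that can occupy on the order of a billion
# distinguishable states when it "sees red":
print(bits(2**30))  # 30.0 bits
```

The point of the thought experiment is that the very same stimulus is vastly more differentiated for the brain than for the diode, before integration across inputs is even considered.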

Hoping not to sound presumptuous, it strikes me, however, that both Chalmers’ (somewhat paradoxically) and Tononi’s analyses miss fundamental facts about experience: Both analyze it as a rather abstract dimension or aspect of information, whereas experience – what it feels like – is anything but abstract. On the contrary, what we mean when we say that seeing a patch of red elicits an “experience” is that the seeing does something to us – in particular, we might feel one or several emotions, and we may associate the redness with memories of red. Perhaps seeing the patch of red makes you remember the color of the dress that your prom night date wore 20 years ago. Perhaps it evokes a vague anxiety, which we now know is also shared by monkeys (Humphrey, 1971). To a synesthete, perhaps seeing the color red will evoke the number 5. The point is that if conscious experience is what it feels like to be in a certain state, then “What it feels like” can only mean the specific set of associations that have been established by experience between the stimulus or the situation you now find yourself in, on the one hand, and your memories, on the other. This is what one means by saying that there is something it is like to be you in this state rather than nobody or somebody else: The set of memories evoked by the stimulus (or by actions you perform, etc.), and, crucially, the set of emotional states associated with each of these memories. This is essentially the perspective that Damasio (2010) defends.
Thus, a first point about the very notion of subjective experience I would like to make here is that it is difficult to see what experience could mean beyond (1) the emotional value associated with a state of affairs, and (2) the vast, complex, richly structured, experience-dependent network of associations that the system has learned to associate with that state of affairs. “What it feels like” for me to see a patch of red at some point seems to be entirely exhausted by these two points. Granted, one could still imagine an agent that accesses specific memories, possibly associated with emotional value, upon seeing a patch of red and who fails to “experience” anything. But I surmise that this would be mere simulation: One could design such a zombie agent, but any real agent that is driven by self-developed motivation, and that cannot help but be influenced by his emotional states will undoubtedly have experiences much like ours.

Hence, there is nothing it is like for the camera to see the patch of red simply because it does not care: The stimulus is meaningless; the camera lacks even the most basic machinery that would make it possible to ascribe any interpretation to the patch of red; it is instead just a mere recording device for which nothing matters. There is nothing it is like to be that camera at that point in time simply because (1) the experience of different colors do not do anything to the camera; that is, colors are not associated with different emotional valences; and (2) the camera has no brain with which to register and process its own states. It is easy to imagine how this could be different. To hint at my forthcoming argument, a camera could, for instance, keep a record of the colors it is exposed to, and come to “like” some colors better than others. Over time, your camera would like different colors than mine, and it would also know that in some non-trivial sense. Appropriating one’s mental contents for oneself is the beginning of individuation, and hence the beginning of a self.
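The hypothetical camera that "keeps a record of the colors it is exposed to" can be sketched in a few lines of Python (my illustration, not Cleeremans' model). Here "liking" is crudely modeled as familiarity, and `favorite()` stands in for the camera registering a fact about its own history, the beginning of the individuation the passage describes:

```python
from collections import Counter

class LearningCamera:
    """A camera that records what it has seen and develops a
    valuation over colors from its own exposure history."""

    def __init__(self):
        self.history = Counter()

    def expose(self, color):
        # Keep a record of every color encountered.
        self.history[color] += 1

    def preference(self, color):
        # A crude stand-in for "liking": relative familiarity.
        total = sum(self.history.values())
        return self.history[color] / total if total else 0.0

    def favorite(self):
        # The camera reports something about its own states.
        return self.history.most_common(1)[0][0]


cam = LearningCamera()
for c in ["red", "red", "blue", "red", "green"]:
    cam.expose(c)
print(cam.favorite())          # red
print(cam.preference("red"))   # 0.6
```

Two such cameras with different histories would come to "like" different colors, which is all the paragraph needs: the contents are now appropriated by this device rather than that one.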

Thus a second point about experience that I perceive as crucially important is that it does not make any sense to speak of experience without an experiencer who experiences the experiences. Experience is, almost by definition (“what it feels like”), something that takes place not in any physical entity but rather only in special physical entities, namely cognitive agents. Chalmers’ (1996) thermostat fails to be conscious because, despite the fact that it can find itself in different internal states, it lacks the ability to remove itself from the causal chain which it instantiates. In other words, it lacks knowledge that it can find itself in different states; it is but a mere mechanism that responds to inputs in certain ways. While there is indeed something to be experienced there (the different states the thermostat can find itself in), there is no one home to be the subject of these experiences – the thermostat simply lacks the appropriate machinery to do so. The required machinery, I surmise, minimally involves the ability to know that one finds itself in such or such a state. . . .
 

Attachments

  • Consciousness_PlasticityThesis.pdf
    1.6 MB · Views: 0
The Debrief - Scientist Proposes A New Theory For The Mystery Of Consciousness

“A recent study published by the eminent Oxford University Press journal of Neuroscience of Consciousness claims to have solved the age-old mystery of human consciousness.
According to the paper’s author, Dr. Johnjoe McFadden, a molecular geneticist and director of quantum biology at the University of Surrey, consciousness is merely the brain’s energy field and a result of interactions between brain matter and electromagnetic energy.”


The Paper - Integrating information in the brain’s EM field: the cemi field theory of consciousness

“Consciousness is the experience of nerves plugging into the brain’s self-generated electromagnetic field to drive what we call ‘free will’ and our voluntary actions,” said McFadden in a statement.”
 

Attachments

  • NeuroscienceOfConsciousness-McFadden.pdf
    413 KB · Views: 2
I don't remember our reading papers by Cleeremans here in the past. Let us look at this one:

The radical plasticity thesis: how the brain learns to be conscious ...
So it looks like we're not the only ones contemplating these concepts. It seems that this thread and its contributors are constantly on the leading edge of this subject. Much of this Plasticity Thesis incorporates the idea of nonlinear causality, a subject that has popped up recently here. The problem I see with Cleeremans' approach is that we cannot assume that learning is any indication of consciousness. Hypothetically, a non-conscious intelligence could also learn, and that is exactly what may ( or may not ) be happening with the new generation of machine learning and AI chips.

I say "may ( or may not ) be happening" on this subject because it is equally problematic to assume that these new devices have zero awareness. What if they actually do have some form of conscious experience, and yet we're carting them around in our pockets, switching them off and on at will, and discarding them like trash when the next model comes out. This is one of the main points in the movie AI that seemed to go over a lot of people's heads, despite the graphic portrayals. Should we be messing with this sort of thing? Will we even know when we've let the Genie out of the bottle?

 
The Debrief - Scientist Proposes A New Theory For The Mystery Of Consciousness

“A recent study published by the eminent Oxford University Press journal of Neuroscience of Consciousness claims to have solved the age-old mystery of human consciousness.
According to the paper’s author, Dr. Johnjoe McFadden, a molecular geneticist and director of quantum biology at the University of Surrey, consciousness is merely the brain’s energy field and a result of interactions between brain matter and electromagnetic energy.”


The Paper - Integrating information in the brain’s EM field: the cemi field theory of consciousness

“Consciousness is the experience of nerves plugging into the brain’s self-generated electromagnetic field to drive what we call ‘free will’ and our voluntary actions,” said McFadden in a statement.”
One reason why I wrote my paper on causation and information is in critical response to the kind of position indicated by the statement ‘information is physical’ (see abstract in linked paper). First, there is the explanatory void in what it is about information that makes it qualify as a 'physical thing'.

Second is the problematic issue of connecting information to meaning: for example, how does one connect this 'physical information thing' to qualitative biological meaning? In the absence of answers to these kinds of problems, the stance remains a theory only of correspondence, i.e., there is a correspondence between consciousness and energy fields. There is also a correspondence between consciousness and brains, but... that observation isn't terribly helpful.
 
One reason why I wrote my paper on causation and information is in critical response to the kind of position indicated by the statement ‘information is physical’ (see abstract in linked paper). First, there is the explanatory void in what it is about information that makes it qualify as a 'physical thing'.

Second is the problematic issue of connecting information to meaning: for example, how does one connect this 'physical information thing' to qualitative biological meaning? In the absence of answers to these kinds of problems, the stance remains a theory only of correspondence, i.e., there is a correspondence between consciousness and energy fields. There is also a correspondence between consciousness and brains, but... that observation isn't terribly helpful.
These are excellent points. It seems to me that this issue is all a matter of context around labels. When I say consciousness is physical, I'm typically talking about the phenomenon as a whole, not about various facets of it. This can be likened to the pile-of-bricks analogy: at what point does a pile of bricks become a house? If we only look at any individual brick, we cannot tell whether it is part of a pile of bricks or part of a house. If it is part of a house, then it has this "extra quality" about it that cannot be detected unless we take a more holistic approach.

Also, if we construct a house out of bricks, we obviously have a physical structure, and when we look at an individual part of a house e.g. the chimney, we can also clearly see it is physical, but it alone isn't a "house".

Similarly, if we accept that consciousness as a whole is physical, what we need to do is recognize that all subsets of consciousness will also have a physical structure, even if they are not "consciousness" per se, just as a chimney alone isn't a "house" per se. Logically then, while information will always have a physical carrier, that carrier can only have meaning in the context of the whole, which requires a holistic rather than reductive approach ( both of which have value ).
 
I would disagree to the extent that while interaction between minds and bodies is self evident, it took @marduk to point out ( to me at least ) that the consequence of this situation logically necessitates that the mind must be physical. I don't know whether or not marduk deduced this on his own, or acquired it from another source. Either way, it was through his post on it that this particular piece of the puzzle was illuminated ( for me ).

It was actually one of my junior high school teachers, who was into philosophy and despised dualism, who made a very simple point: if the spirit is something other than physical, then how does it trigger our physical body to do stuff? That interaction should be measurable by definition, because it must be at least in part a physical process, given that our bodies are physical.

People have looked for it for thousands of years, and found nothing. Therefore, dualism (in his opinion) was a dumb idea to begin with and a source of bad things for humanity all round.

I don't think it's a new idea: Dualism (Stanford Encyclopedia of Philosophy)
 
"How can we explain why there is something it is like to entertain a mental image, or to experience an emotion? It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises."

- David Chalmers

Chalmers himself is open to traditionally “non-physical” albeit natural explanations of the mind-body problem, but the hard problem does assume consciousness arises from physical processes.

The hard problem is just a question Chalmers is posing. It doesn’t mean he believes its premise.

One can easily trigger a program - a neural network, say - to 'entertain a mental image.' I can't say how (or if) that neural network experiences it, but the process of triggering such an event is straightforward.

You can also trigger all kinds of 'mental images' by poking electrodes into specific parts of the brain, injecting hallucinogens, putting on a VR helmet, or simply putting the brain into complex magnetic fields.

I'm not sure at all why this is considered a big problem.
 
One can easily trigger a program - a neural network, say - to 'entertain a mental image.' I can't say how (or if) that neural network experiences it, but the process of triggering such an event is straightforward.

You can also trigger all kinds of 'mental images' by poking electrodes into specific parts of the brain, injecting hallucinogens, putting on a VR helmet, or simply putting the brain into complex magnetic fields.

I'm not sure at all why this is considered a big problem.
The question wasn’t if you can “trigger” an experience or mental image but how and why we can.
 
It was actually one of my junior high school teachers, who was into philosophy and despised dualism, who made a very simple point: if the spirit is something other than physical, then how does it trigger our physical body to do stuff? That interaction should be measurable by definition, because it must be at least in part a physical process, given that our bodies are physical.

People have looked for it for thousands of years, and found nothing. Therefore, dualism (in his opinion) was a dumb idea to begin with and a source of bad things for humanity all round.

I don't think it's a new idea: Dualism (Stanford Encyclopedia of Philosophy)
The trouble with physicalism, though, is that all physical explanations address the objective world (the stuff out there). Even when the objective–subjective divide is bridged by physical explanation, it is still impotent in addressing individuated subjective identity. So physicalism is no less a hopeless, dumb faith than dualism... what is the (<sensible!>) alternative?
 
One can easily trigger a program - a neural network, say - to 'entertain a mental image.' I can't say how (or if) that neural network experiences it, but the process of triggering such an event is straightforward.

You can also trigger all kinds of 'mental images' by poking electrodes into specific parts of the brain, injecting hallucinogens, putting on a VR helmet, or simply putting the brain into complex magnetic fields.

I'm not sure at all why this is considered a big problem.
Caveman A to Caveman B: "When I push a magnetic north pole towards another magnetic north pole, they repel. I'm not sure at all why this is a problem worthy of an explanation."
 
The question wasn’t if you can “trigger” an experience or mental image but how and why we can.

The how is easy: you can trigger a mental image via a physical process.

The why is an interesting question. Can you say more?
 
The trouble with physicalism, though, is that all physical explanations address the objective world (the stuff out there). Even when the objective–subjective divide is bridged by physical explanation, it is still impotent in addressing individuated subjective identity. So physicalism is no less a hopeless, dumb faith than dualism... what is the (<sensible!>) alternative?

Not sure it’s a ‘dumb faith’ to side with physicalism.
Here’s my logic:
1. there’s no evidence for a non-physical universe (that I’m aware of).
2. if there were a non-physical universe, and our mind exists in that universe, then it has to interact with the physical universe to do stuff. That interaction should be measurable because it exists at least partly in the physical universe. And yet nothing like that has ever been found.
3. no other physical process known requires a non-physical component to happen.
4. mind/consciousness is a hard problem and doesn’t really have a good physical explanation. A non-physical universe could explain it as far as the physical universe is concerned (because it moves the problem outside the scope of the physical), but it brings its own problems:
- one needs to explain how this other universe solves the mind problem: why would a mind be a simpler problem in a non-physical universe?
- one needs to explain how a non-physical mind interacts with a physical brain in ways that are not detectable
- one needs to account for the massively non-parsimonious nature of the proposed solution: inventing a whole new universe to solve the consciousness problem

So why is that dumb faith? It seems logical to me.
 
Not sure it’s a ‘dumb faith’ to side with physicalism.
Here’s my logic:
1. there’s no evidence for a non-physical universe (that I’m aware of).
2. if there were a non-physical universe, and our mind exists in that universe, then it has to interact with the physical universe to do stuff. That interaction should be measurable because it exists at least partly in the physical universe. And yet nothing like that has ever been found.
3. no other physical process known requires a non-physical component to happen.
4. mind/consciousness is a hard problem and doesn’t really have a good physical explanation. A non-physical universe could explain it as far as the physical universe is concerned (because it moves the problem outside the scope of the physical), but it brings its own problems:
- one needs to explain how this other universe solves the mind problem: why would a mind be a simpler problem in a non-physical universe?
- one needs to explain how a non-physical mind interacts with a physical brain in ways that are not detectable
- one needs to account for the massively non-parsimonious nature of the proposed solution: inventing a whole new universe to solve the consciousness problem

So why is that dumb faith? It seems logical to me.
I understand the problem with dualism. You said that your teacher described it as 'a dumb idea'. I am just saying that physicalism has no less a problem, which makes it no less dumb. I am not advocating or defending either position... Personally, I don't think that an explanation of consciousness would address the mind-body problem in any way. I suppose I think of mind as something that extends beyond consciousness.
 
The how is easy: you can trigger a mental image via a physical process.

The why is an interesting question. Can you say more?
“We have hundreds of precise correlations between conscious experiences on the one hand and patterns of brain activity on the other. The hard problem of consciousness is this: Correlations are not a theory.

We would like a scientific theory that explains why conscious experiences are correlated with brain activity. Remarkably, there is no place anywhere in [the] scientific literature that can explain even one conscious experience.

Tell me the brain activity that must be or must cause the smell of vanilla, and why that brain activity could not be the taste of chocolate or the sound of a trumpet. There is nothing that’s published on that and no ideas.”

- Donald Hoffman

@marduk you bring up a good point that I’ve been trying to find a way to ask @Pharoah about.

Pharoah thinks of consciousness and quality/meaning in a very particular way: that quality/meaning is dependent on an organism's relationship to the environment it co-evolved in. I certainly agree with this, but not in the strong sense that I believe he does.

For example, as Marduk says, we can stimulate an individual's brain with precisely placed electrodes and induce reports of presumably real qualitative experiences of all kinds: smells, tastes, etc.

So while the particular smells, tastes, sounds, etc that any particular organism can experience may be a product of evolution, these qualitative conscious experiences can be evoked with electrical stimulation in real time in the absence of environmental stimuli that these experiences typically correspond to.

How does HCT account for this?
 
I understand the problem with dualism. You said that your teacher described it as 'a dumb idea'. I am just saying that physicalism has no less a problem, which makes it no less dumb. I am not advocating or defending either position... Personally, I don't think that an explanation of consciousness would address the mind-body problem in any way. I suppose I think of mind as something that extends beyond consciousness.

That's interesting and I didn't realize it earlier. I've long thought that what we understand by 'mind' -- our own and those of our historical and pre-historical conspecifics and all their recorded works -- would not have been possible without experiential grounding in prereflective and eventually reflective consciousness as we explore the world we exist in. But I totally agree with you regarding the philosophically naive limitations of physicalism.
 
. . . For example, as Marduk says, we can stimulate an individual's brain with precisely placed electrodes and induce reports of presumably real qualitative experiences of all kinds: smells, tastes, etc.

I'm moved to say "so what?" What is the reasoned basis for concluding from this that all the lived experiences we and others have outside artificial situations in which we are lab subjects are not 'real', do not occur, and do not have meaning?

So while the particular smells, tastes, sounds, etc that any particular organism can experience may be a product of evolution, these qualitative conscious experiences can be evoked with electrical stimulation in real time in the absence of environmental stimuli that these experiences typically correspond to.

That's not enough to draw the conclusions you, Marduk, and probably Randall draw.
 
I understand the problem with dualism. You said that your teacher described it as 'a dumb idea'. I am just saying that physicalism has no less a problem, which makes it no less dumb. I am not advocating or defending either position... Personally, I don't think that an explanation of consciousness would address the mind-body problem in any way. I suppose I think of mind as something that extends beyond consciousness.
That is a really interesting position. I'm not sure it's exactly the same as the one I was pondering the other day, but if it is, I'm tempted to say it's a bit of synchronicity. Specifically, it's the idea that explaining why bodies should accompany minds is just as problematic as explaining why minds should accompany bodies.

As a thought experiment, we might try reversing the problem by imagining a universe where everything is a mental fabrication except the bodies associated with those minds. Those minds wouldn't be able to provide any explanation for how or why their purely mental processes should give rise to something material.
 
Pharoah thinks of consciousness and quality/meaning in a very particular way: that quality/meaning is dependent on an organism's relationship to the environment it co-evolved in. I certainly agree with this, but not in the strong sense that I believe he does.
What do you mean by "but not in the strong sense"?
How does HCT account for this?
Account for what exactly?
I'm not sure it's exactly the same as the one I was pondering the other day, but if it is, I'm tempted to say it's a bit of synchronicity. Specifically, it's the idea that explaining why bodies should accompany minds is just as problematic as explaining why minds should accompany bodies.
I claim that HCT indicates why certain mental attributes (attributes we regard as constituting conscious experience and, therefore, 'mind') come to accompany bodies (this conforms with physicalism). But that account is not addressing the existential question of one's own mind.
 