
Consciousness and the Paranormal — Part 2

The "integration of experience" on my view is synonymous with the "integration of information."


Do you believe that jellyfish have the same "kind of mind" that humans, dolphins, and whales have? I don't. There is a real difference.

The distinctions I'm making may only be conceptual or by degree, but I do believe (1) minds - like organisms - have evolved, and (2) minds - like organisms - are different. Thus, I think there are a variety of "minds" in our reality, and they have a variety of differences.
Sure, there's a difference: information capacity, processing capacity, and hardware specialization.
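As a toy illustration of what "integration of information" gestures at, here is ordinary mutual information over a two-unit system. This is not Tononi's actual Φ, just a hint of the underlying idea that a coupled whole carries information its parts don't:

```python
import math

def mutual_information(joint):
    """I(X;Y) in bits, from a dict mapping (x, y) -> probability."""
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    return sum(p * math.log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two perfectly coupled binary units: knowing one tells you the other
coupled = {(0, 0): 0.5, (1, 1): 0.5}
# Two independent units: the whole says nothing beyond its parts
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}

print(mutual_information(coupled))      # 1.0 bit
print(mutual_information(independent))  # 0.0 bits
```

The coupled system "integrates" one bit across its parts; the independent one integrates nothing, however much information each part holds on its own.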

This may help: [attached chart: Mora3.jpg]


The human brain's processing capacity is estimated to be somewhere in the region of 10^9 MIPS (million instructions per second), based on its massively parallel architecture.

The MacBook Pro I'm typing this on has around 6.5×10^3 MIPS, or about five orders of magnitude less than the brain that's thinking these words.
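For what it's worth, that gap can be checked in a couple of lines, using the post's own rough figures (which are ballpark estimates, not measurements):

```python
import math

brain_mips = 1e9       # rough estimate from the post: ~10^9 MIPS
laptop_mips = 6.5e3    # ~6,500 MIPS for the MacBook Pro in question

ratio = brain_mips / laptop_mips
print(f"ratio: {ratio:,.0f}x")                           # ~153,846x
print(f"orders of magnitude: {math.log10(ratio):.1f}")   # ~5.2
```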
 
OK, I'll say it again.

Just like Hofstadter did.

I'm a strange loop -- the result of a lot of continuous feedback loops of my neurons firing in certain patterns that can become somewhat aware of their own activity.

I am not my neurons, because a bunch of non-functioning neurons would no longer be me. I'm the result of my physical substrate firing in certain patterns. Those patterns are me.

iTunes is not the silicon; iTunes is the software playing my music. But it's still materialistic, and it exists in the physical universe.

Take away the silicon, and iTunes goes away. Describe the silicon to a sufficient degree, including the electrons going through it, and I can reproduce the state iTunes is in right now with 100% fidelity.

I make the same assertion about my mind. It should be replicable with 100% fidelity, and since it exists in the material universe, I should be able, through equivalence, to make the substrate abstract - replace it with any substrate capable of simulating what the neurons do.

Yes, I read Hofstadter; this is clear to me - it's emergence.

The other possibility is panpsychism - and it's been discussed at length on this thread, from Part 1 on - I posted a summary last night and I'm working on a summary of the combination problem and one possible solution: phenomenal bonding.
 
No, I see it as:

(1) Phenomenal Experience (Awareness)
(2) Phenomenal Experience of Phenomenal Experience (Meta-Awareness)

It seems that Chalmers has made a distinction between phenomenal experience and cognition, but I view them both as aspects of mind. As we've noted in this thread, thoughts/cognition do have a phenomenal "feel" to them. I don't think phenomenal experience and thoughts are ontologically (?) different. They are both constituted of information.


Another way of conceptualizing what I am saying (and there are a few) is to say:

(1) A physical organism (structure) has a corresponding informational structure. The physical body is "aware" in the sense that it has phenomenal experiences.

(2) A self-aware physical organism has a corresponding informational structure that is aware of itself! That is, the physical body has phenomenal experiences (awareness), and this awareness is able to be aware of itself.

A tentative outline/timeline might be:

Physical structure - Proto-Mind
Living Physical structure - Mind
Self-Aware Physical structure - Meta-Mind

I will need to think more about it, but I do believe that meta-mind has causal influence on the organism. Thus, I would not be an epiphenomenalist in the strong sense. That is to say, I do think mind can have causal influence on the organism.

OK - I'll post the other part on Panpsychism, which covers the combination problem and phenomenal bonding. That should leave us with consistent terminology on the basics of Panpsychism, and then I'm done with this topic. You can refer back through this thread and Part 1, or the SEP.

If you see anything wrong in my two summaries, let me know. I want a good, clear record left on this thread.
 
Yes, I read Hofstadter; this is clear to me - it's emergence.

The other possibility is panpsychism - and it's been discussed at length on this thread, from Part 1 on - I posted a summary last night and I'm working on a summary of the combination problem and one possible solution: phenomenal bonding.
OK, now I'm with you, I think I'm finally starting to pick up what you're laying down.

What you're positing is that since consciousness emerged from our brains, it could emerge from equivalently complex things.

It emerges from complexity kind of like life emerged from non-living chemicals.
 
The "integration of experience" on my view is synonymous with the "integration of information."

I realize that that is your view, but its validity has not been demonstrated. In my view it is oversimplified to the extent that it ignores the nature of experienced reality in organisms both simple and complex.


Do you believe that jellyfish have the same "kind of mind" that humans, dolphins, and whales have? I don't. There is a real difference.

Of course there is. The cognitive science point of view has in general been that the differences arise solely from physical causes that can be fully accounted for by neuroscience or, lately, by the accumulation of 'information' in the brain. The cognitive phenomenology viewpoint is that the embodied protoconscious and conscious experiences of evolving organisms play an equally significant role in the evolution of consciousness to that played by information exchange in purely physical systems. This is essentially what separates our (your and my) approaches.

The distinctions I'm making may only be conceptual or by degree, but I do believe (1) minds - like organisms - have evolved, and (2) minds - like organisms - are different. Thus, I think there are a variety of "minds" in our reality, and they have a variety of differences.

I don't think anyone would argue with that statement except for the dualism it seems to imply.

Here is a recent book from Oxford UP that I wish we could read online. The table of contents is provided at the link:

Contents
1: Michelle Montague and Tim Bayne: Cognitive Phenomenology: An Introduction
2: Peter Carruthers and Bénédicte Veillet: The Case Against Cognitive Phenomenology
3: Terry Horgan: From Agentive Phenomenology to Cognitive Phenomenology: A Guide for the Perplexed
4: Uriah Kriegel: Cognitive Phenomenology as the Basis of Unconscious Content
5: Joseph Levine: On The Phenomenology of Thought
6: Michelle Montague: The Phenomenology of Particularity
7: David Pitt: Introspection, Phenomenality, and the Availability of Intentional Content
8: Jesse Prinz: The Sensory Basis of Cognitive Phenomenology
9: William Robinson: A Frugal View of Cognitive Phenomenology
10: Christopher Shields: On Behalf of Cognitive Qualia
11: Charles Siewert: Phenomenal Thought
12: Maja Spener: Disagreement about Cognitive Phenomenology
13: Galen Strawson: Cognitive Phenomenology: real life
14: Michael Tye and Briggs Wright: Is There a Phenomenology of Thought?
15: David Woodruff-Smith: Phenomenology of Consciously Thinking

Cognitive Phenomenology (hardback), Tim Bayne - Oxford University Press

Now in paperback at £22:

Cognitive Phenomenology (paperback), Tim Bayne - Oxford University Press


I will search out online versions of some of the articles.
 
In the physicalist account, it's the nerves firing that causes the heart to beat faster. "Fear" is a mental event, an overflow that cannot cause anything physical.

How does this work for you on the property dualist account? What makes the heart beat faster?
I love emotions because they blur the line between the body and mind like nothing else. Is terror an emotion or a physiological response? How about anger? The fight or flight response? The stress response?

I think emotions blur the line because the line is blurry. What makes the heart beat faster? The answer is fear, because fear is both a physical state and a phenomenal state. Again, Chalmers and others have said that the phenomenal state is "extra" but I think that's so only in the conceptual sense, not the real sense. You can't have a body in a physical state of fear without simultaneously having a phenomenal state of fear. There's just no way. A zombie is not conceivable on my view.

Constance had said that the emotions were "presentations" rather than representations. I thought that was interesting.

Emotions are incredibly hard to define. I tend to think of them on two axes: affect and intensity. That is, is the affect positive or negative and how intense is it? Happy/excited, grumpy/furious.
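That two-axis model could be sketched as a tiny data structure (a hypothetical illustration; the coordinates below are made-up placements, not empirical values):

```python
from dataclasses import dataclass

@dataclass
class Emotion:
    name: str
    affect: float     # -1.0 (negative) to +1.0 (positive)
    intensity: float  #  0.0 (mild) to 1.0 (intense)

# Illustrative placements on the two axes
happy   = Emotion("happy",   affect=+0.8, intensity=0.4)
excited = Emotion("excited", affect=+0.8, intensity=0.9)
grumpy  = Emotion("grumpy",  affect=-0.6, intensity=0.3)
furious = Emotion("furious", affect=-0.9, intensity=0.95)
```

On this picture "happy" and "excited" share an affect but differ in intensity, and likewise "grumpy" and "furious" on the negative side.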

Emotions can be thought of as the body's alert system; they are the body's physiological response to stimuli from the environment. They prepare/compel the body to action.

And sensations can blur into emotions too. Tired? Antsy? Agitated? And what is the state of confusion? An emotion or a cognitive state?

Earlier, a researcher was quoted as saying all phenomenal experience is representation. I tend to agree with this to a degree. I would say even emotions/affect states are "representative" of the state of the body. Other phenomenal experience is "representative" of the state of the not-body.

But I'm actually not comfortable with the term "representation." That implies that our phenomenal experience of, say, a car is a representation of a "real" car outside of the skull. I think the spread mind theory is correct in that the act of perceiving (our physical body receiving physical energy) is an act of "creation." That is, when a human and a butterfly look at a car (process), they perceive two completely different things.

Organisms possess the most extraordinary sensory receiving and processing organs (hardware) which they use to gather information (software) from the environment. They use this information to survive and reproduce.
 
I assert that emotions are the physiological response.

Take away all the subjective hand-wringing about it.

What is anxiety? You feel tension in the muscles, cortisol levels go up, your body is hypersensitive to external stimuli, you feel physical fatigue, etc. In other words, your body experiences anxiety.

What is anger? A rush of adrenaline and testosterone; your body instinctively ramps up for a fight.

What I'm proposing is that these are natural evolutionary responses to situations in our environment, which then get interpreted as emotion by our mind.

Etc.
 
I realize that that is your view, but its validity has not been demonstrated. In my view it is oversimplified to the extent that it ignores the nature of experienced reality in organisms both simple and complex.
I agree that its validity hasn't been demonstrated. If it is valid, I suspect we won't know for certain until an AI is able to explain it to us.

How does this view ignore the nature of experienced reality? It certainly doesn't. The "feel" of consciousness doesn't somehow change if it turns out to be constituted of information, right?

The cognitive phenomenology viewpoint is that the embodied protoconscious and conscious experiences of evolving organisms play an equally significant role in the evolution of consciousness...
You've said this same thing many times and I've never understood it: "consciousness has played a role in the evolution of consciousness."

What does that mean?
 
OK, now I'm with you, I think I'm finally starting to pick up what you're laying down.

What you're positing is that since consciousness emerged from our brains, it could emerge from equivalently complex things.

It emerges from complexity kind of like life emerged from non-living chemicals.

Physicalist views of consciousness are emergentist. Hofstadter is an emergentist.

There are problems with emergence in general and in explaining consciousness in particular.

It's hard to articulate a view of emergentism on which consciousness isn't merely epiphenomenal - with no causal efficacy, i.e. it doesn't do anything.

That is what I'm laying down.

The only other viable view now is Panpsychism. Panpsychism is the view that consciousness is a fundamental property. On this view mind does not emerge from matter because fundamental particles have mental/phenomenal properties.

If you hope for free will, this is where you will want to look.
 
I assert that emotions are the physiological response.

Take away all the subjective hand-wringing about it.

What is anxiety? You feel tension in the muscles, cortisol levels go up, your body is hypersensitive to external stimuli, you feel physical fatigue, etc. In other words, your body experiences anxiety.

What is anger? A rush of adrenaline and testosterone; your body instinctively ramps up for a fight.

What I'm proposing is that these are natural evolutionary responses to situations in our environment, which then get interpreted as emotion by our mind.

Etc.

That's a good description of physicalism, emergentism, and epiphenomenalism ... you didn't know you'd been speaking prose all your life, did you?

If you found out tomorrow that you have free will, what would you do differently?
 
Since AI seems to be "emerging" as a topic on this thread, here is a classic in the field.

This is from 1992, but it may be more true now as cognitive science and consciousness studies have evolved:

It has also become more common for AI researchers to seek out and study philosophy.

What Computers Still Can't Do | The MIT Press

When it was first published in 1972, Hubert Dreyfus's manifesto on the inherent inability of disembodied machines to mimic higher mental functions caused an uproar in the artificial intelligence community. The world has changed since then. Today it is clear that "good old-fashioned AI," based on the idea of using symbolic representations to produce general intelligence, is in decline (although several believers still pursue its pot of gold), and the focus of the AI community has shifted to more complex models of the mind. It has also become more common for AI researchers to seek out and study philosophy. For this edition of his now classic book, Dreyfus has added a lengthy new introduction outlining these changes and assessing the paradigms of connectionism and neural networks that have transformed the field.
At a time when researchers were proposing grand plans for general problem solvers and automatic translation machines, Dreyfus predicted that they would fail because their conception of mental functioning was naive, and he suggested that they would do well to acquaint themselves with modern philosophical approaches to human beings. What Computers Can't Do was widely attacked but quietly studied. Dreyfus's arguments are still provocative and focus our attention once again on what it is that makes human beings unique.
 
Regarding the Singularity and AI, it's often asserted:

AI -----> AI+ ---> AI++

with the length of the arrows indicating each step occurs more rapidly than the last ...

But in a talk by David Chalmers on the Singularity that I posted above (in Part 1, I think), someone in the Q&A pointed out something Chalmers didn't seem to have considered: that AI or AI+ might be smart enough not to pursue the next level (as it looks like we are not), so we'd stop at AI or AI+ ...

And it also occurs to me the Singularity is not about intelligence, just as evolution is not (unless you insert teleology) but about adaptation.

The standard idea being that once AI takes over and the progression above occurs, it will happen so fast that we will literally not know what is going on ... but, shouldn't we expect, just like in organic evolution, all the various niches will be exploited and our AI successors will have to deal with viral AIs and parasitic AIs and others that exploit niches where intelligence is not supreme ...

And if merely adapting to the environment and competing for resources is the name of the game, then it seems the Universal Replicator (i.e. grey goo) wins out in the end?
 
The only other viable view now is Panpsychism. Panpsychism is the view that consciousness is a fundamental property. On this view mind does not emerge from matter because fundamental particles have mental/phenomenal properties.

If you hope for free will, this is where you will want to look.

Consciousness is a fundamental property of what? Matter?

And why can't it do anything if it emerges?

Life emerged, and does a whole hell of a lot. And life isn't a fundamental property of anything, but matter is a fundamental property of life. At least how we know it.
 
And if merely adapting to the environment and competing for resources is the name of the game, then it seems the Universal Replicator (i.e. grey goo) wins out in the end?
I would argue that nature already solved the universal replicator problem.

We just happen to call it life.
 
I agree that its validity hasn't been demonstrated. If it is valid, I suspect we won't know for certain until an AI is able to explain it to us.

How does this view ignore the nature of experienced reality? It certainly doesn't. The "feel" of consciousness doesn't somehow change if it turns out to be constituted of information, right?

That depends on what information 'feels' like. If Tononi's theory traffics in 'experienced reality' to any extent he would have to be able to identify kinds of 'information' we receive through phenomenal experiences had through consciousness (at levels from protoconsciousness upward). I've asked since we first began discussing Tononi for someone who supports his theory to flesh it out with some tangible, concrete experiential details. It's fallen to you to do so if you wanted to, since you've been the longstanding advocate for IIT. You haven't provided it yet despite my asking you to do so. Maybe @marduk can make this attempt if he's persuaded enough by Tononi's theory to undertake the effort.

You've said this same thing many times and I've never understood it: "consciousness has played a role in the evolution of consciousness."

What does that mean?

Have you read the Panksepp papers, Varela and Thompson, Gallagher and Zahavi, Merleau-Ponty? Any introduction to phenomenology, such as the one I posted from Sartre's Being and Nothingness in Part I of this thread? What I mean would be clear if you had, since my posts apparently haven't helped.


ps: do you actually anticipate sitting at the knees of an AI that can explain to you the nature of your own experienced being? Or even your unexperienced being, whatever that might consist of?
 
Consciousness is a fundamental property of what? Matter?

And why can't it do anything if it emerges?

Life emerged, and does a whole hell of a lot. And life isn't a fundamental property of anything, but matter is a fundamental property of life. At least how we know it.

First off, I'm still committed to moving on to other topics. But ...

Yes, Panpsychism asserts that consciousness is a fundamental property of matter. (take a deep breath and keep reading)

I'll give you two things:

1. you acknowledged "scientific hand waving" - hand waving is hand waving, be it philosophical or scientific
2. you were careful to note: "At least how we know it."

I'm not selling anything. I've been through this several times and it's all posted here on the thread, have a look (with an open mind) or don't. And to anticipate every possible cause for limbic response, as far as I know, there are no necessary religious implications of Panpsychism.

I'm not the spokesperson for Panpsychism (the idea that "there is something it is like" to be an electron is deeply weird to my sensibilities, but it has some theoretical advantages) - I just wanted a succinct account of Panpsychism on the thread for reference.

Why can't it do anything if it emerges? ... that is the part you are still missing. I can't explain it any better than I have ... I've never been very successful at getting it across, that's why I am writing Nagel. All I can do is say do the readings on emergentism and epiphenomenalism. It is hard to have an emergent theory of mind which does not entail epiphenomenalism and causal impotency.
 
I love emotions because they blur the line between the body and mind like nothing else. Is terror an emotion or a physiological response? How about anger? The fight or flight response? The stress response?

I think emotions blur the line because the line is blurry. What makes the heart beat faster? The answer is fear, because fear is both a physical state and a phenomenal state. Again, Chalmers and others have said that the phenomenal state is "extra" but I think that's so only in the conceptual sense, not the real sense. You can't have a body in a physical state of fear without simultaneously having a phenomenal state of fear. There's just no way. A zombie is not conceivable on my view.

Constance had said that the emotions were "presentations" rather than representations. I thought that was interesting.

Emotions are incredibly hard to define. I tend to think of them on two axes: affect and intensity. That is, is the affect positive or negative and how intense is it? Happy/excited, grumpy/furious.

Emotions can be thought of as the body's alert system; they are the body's physiological response to stimuli from the environment. They prepare/compel the body to action.

And sensations can blur into emotions too. Tired? Antsy? Agitated? And what is the state of confusion? An emotion or a cognitive state?

Earlier, a researcher was quoted as saying all phenomenal experience is representation. I tend to agree with this to a degree. I would say even emotions/affect states are "representative" of the state of the body. Other phenomenal experience is "representative" of the state of the not-body.

But I'm actually not comfortable with the term "representation." That implies that our phenomenal experience of, say, a car is a representation of a "real" car outside of the skull. I think the spread mind theory is correct in that the act of perceiving (our physical body receiving physical energy) is an act of "creation." That is, when a human and a butterfly look at a car (process), they perceive two completely different things.

Organisms possess the most extraordinary sensory receiving and processing organs (hardware) which they use to gather information (software) from the environment. They use this information to survive and reproduce.

Interesting, up until the usual reduction of the human to a computer. But it's progress, better late than never.
 
I would argue that nature already solved the universal replicator problem.

We just happen to call it life.

In this instance, I am referring more to Grey goo. The idea is that it is the simplest machine that self replicates and so it rapidly consumes all available matter ... something life has not yet done. As far as we know.

This idea came out of early work in nanotechnology, it may have been one of Drexler's ideas ... I'm not sure.

Grey goo - Wikipedia, the free encyclopedia

Yes, here it is:

Grey goo (also spelled gray goo) is a hypothetical end-of-the-world scenario involving molecular nanotechnology in which out-of-control self-replicating robots consume all matter on Earth while building more of themselves,[1][2] a scenario that has been called ecophagy ("eating the environment").[3] The original idea assumed machines were designed to have this capability, while popularizations have assumed that machines might somehow gain this capability by accident.
Self-replicating machines of the macroscopic variety were originally described by mathematician
John von Neumann, and are sometimes referred to as von Neumann machines. The term gray goo was coined by nanotechnology pioneer Eric Drexler in his 1986 book Engines of Creation.[4] In 2004 he stated, "I wish I had never used the term 'gray goo'."[5] Engines of Creation mentions "gray goo" in two paragraphs and a note, while the popularized idea of gray goo was first publicized in a mass-circulation magazine, Omni, in November 1986 [1].
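The "consumes all available matter" part of the scenario is just exponential doubling. A quick back-of-the-envelope sketch (all parameters hypothetical, chosen only to show how fast doubling runs away):

```python
import math

# Hypothetical parameters -- illustrative only, not Drexler's figures
replicator_mass_kg = 1e-15   # ~1 picogram nanomachine
earth_mass_kg = 5.97e24
doubling_time_s = 1000       # assume one replication cycle per ~17 minutes

doublings = math.log2(earth_mass_kg / replicator_mass_kg)
total_time_days = doublings * doubling_time_s / 86400

print(f"doublings needed: {doublings:.0f}")              # ~132
print(f"time to consume Earth: {total_time_days:.1f} days")
```

Even with these arbitrary numbers, only about 130 doublings separate a single picogram machine from the mass of the planet, which is why the scenario turns on the doubling time rather than the starting quantity.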
 
But in a talk by David Chalmers I posted above on the Singularity (Part 1 I think) someone in the Q&A pointed out something that Chalmers hadn't seemed to have thought of: that AI or AI+ might be smart enough not to pursue the next level (as it looks like we are not) and so we stop with AI or AI+ ...

I missed that comment. It's good news.
 