Consciousness and the Paranormal — Part 2

If the sense of free will is a cognitive illusion that exists as a result of determined, evolutionary processes, what adaptive function does it provide?
 
The functional role of free-will illusion in cognition: “The Bignetti Model”

Abstract

When performing a voluntary action the agent is firmly convinced that he has freely decided to perform it. This raises two questions: “Is this subjective perception of free will (FW) an illusion?” and “Does it serve a useful purpose?”. The answers are tentatively given by “The Bignetti Model” (TBM) as follows: (1) The so-called “voluntary” action is decided and performed by the agent’s unconscious mind (UM) by means of probabilistic responses to inner and outer stimuli; (2) After a slight delay, the agent becomes aware of the ongoing action through feedback signals (somatosensory, etc.) that are conveyed to the brain as a consequence of its performance. Thus, the agent’s conscious mind (CM) always lags behind unconscious activity; (3) Owing to this delay, the CM cannot know the unconscious work that precedes awareness, thus the CM erroneously believes it has freely decided the action. Though objectively false, this belief is subjectively perceived as true (FW illusion). It is so persistent and deep-rooted in the mind that the CM is unwilling to abandon it; (4) The FW illusion satisfies a psychological need to secure the arousal of the senses of agency (SoA) and of responsibility (SoR) of the action. Both SoA and SoR inevitably lead the CM to self-attribute reward or blame depending on action performance and outcome; (5) Both reward and blame are motivational incentives which foster learning and memory in the CM; the updating of knowledge will provide new information and the skill required for further action (restart from point 1).

Introduction

The American philosopher John Searle believes that mind and body are not two different entities; that consciousness is an emergent property of the brain, and that consciousness is a series of qualitative states (Searle, 1997). With regard to the old philosophical question of duality and FW, Searle is astonished that the problem of duality has not yet been resolved, and thus asks himself why we find the conviction of our own FW so difficult to abandon. He writes: “The persistence of the traditional free will problem in philosophy seems to me something of a scandal”. Nevertheless, many thinkers have studied this issue and many papers have been written, but it appears that little progress has been made. He questions: “Is there some conceptual problem we have simply ignored? Why is it that we have made so little progress compared with our philosophical ancestors?” He is not able to provide a philosophical solution to the question, and rather than adding further proposals, none of which would be convincing, he bypasses the obstacle by stating that “the philosophical mind–body problem seems to me not very difficult. However, the philosophical solution kicks the problem upstairs to neurobiology, where it leaves us with a very difficult neurobiological problem. How exactly does the brain do it, and how exactly are conscious states realised in the brain? What exactly are the neuronal processes that cause our conscious experience, and how exactly are these conscious experiences realised in brain structures?”

We agree with Searle when he claims to be astonished by this evidence, but we do not agree with him when he suggests that we should “kick the question upstairs to neurobiology” as if FW were not an intriguing issue anymore. This paper will attempt to take a significant step forward on this issue.

Material events can be described by an external observer as a chain of causes and effects which, in turn, may be causes for other effects and so on. Conversely, when we voluntarily cause an event, we do not feel that we are part of a chain; rather we consider our action to be the result of free will (FW). Wegner states that scientific explanations account for our decisions and the illusion of FW (Wegner, 2002). There must always be an objective mechanism, i.e., a precise relationship between causes and effects, underlying a voluntary action. We think that we consciously will what we are doing because we feel “free from causes” and because we experience this feeling many times a day (Wegner, 2002). ...
 
If the sense of free will is a cognitive illusion that exists as a result of determined, evolutionary processes, what adaptive function does it provide?

I could respond that it isn't a result of DEPs ... it's cultural. The Mindless Babylonians didn't even have a sense of self, remember? No self, no free will. Many religions and philosophies have been deterministic - Calvinism? And materialism is probably way older than Democritus.
 

There you go ... I've said before in meditation (really just paying close attention) you can become aware of a thought and what kind it is before it fully forms and then not have that thought ... so I don't know how this experiment would run if different instructions were given or if it were run in reverse as a kind of feedback to train subjects to become aware of their thoughts earlier ...
 
I'm not sure of your meaning here, but I want to be clear that I'm not trying to "trick" Marduk. It's an honest question. I'm curious to hear his thoughts.

If it's the term "quale" that you're balking at, let's just focus on phenomenal experience.
Well, it made me laugh.
 
While we're at it...

http://www.yale.edu/acmelab/articles/Morsella_2005.pdf

The Function of Phenomenal States: Supramodular Interaction Theory

Abstract

Discovering the function of phenomenal states remains a formidable scientific challenge. Research on consciously penetrable conflicts (e.g., “pain-for-gain” scenarios) and impenetrable conflicts (as in the pupillary reflex, ventriloquism, and the McGurk effect [H. McGurk & J. MacDonald, 1976]) reveals that these states integrate diverse kinds of information to yield adaptive action. Supramodular interaction theory proposes that phenomenal states play an essential role in permitting interactions among supramodular response systems—agentic, independent, multimodal, information-processing structures defined by their concerns (e.g., instrumental action vs. certain bodily needs). Unlike unconscious processes (e.g., pupillary reflex), these processes may conflict with skeletal muscle plans, as described by the principle of parallel responses into skeletal muscle (PRISM). Without phenomenal states, these systems would be encapsulated and incapable of collectively influencing skeletomotor action.

Introduction

Discovering the function of phenomenal states remains one of the greatest challenges for psychological science (Baars, 1998, 2002; Bindra, 1976; Block, 1995; Chalmers, 1996; Crick & Koch, 2003; Donald, 2001; Dretske, 1997; Jackendoff, 1990; James, 1890; Mandler, 1998; Searle, 2000; Shallice, 1972; Sherrington, 1906; Sperry, 1952; Wegner & Bargh, 1998). These enigmatic phenomena, often referred to as “subjective experience,” “qualia,” “sentience,” “consciousness,” and “awareness,” have proven to be difficult to describe and analyze but easy to identify, for they constitute the totality of our experience. Perhaps they have been best defined by Nagel (1974), who claimed that an organism has phenomenal states if there is something it is like to be that organism—something it is like, for example, to be human and experience pain, love, breathlessness, or yellow afterimages. Similarly, Block (1995) claimed, “The phenomenally conscious aspect of a state is what it is like to be in that state” (p. 227). In this article, I present a theory that addresses a simple question: What do these states contribute to the cognitive apparatus and to the survival of the human organism? ...
 
In explaining your thoughts on how the brain and consciousness are related, you've used the analogy of a computer and software.

When it comes to computers and software (input), it's easy to see how both are physical. I think. Marduk, of what would you say computer software is constituted? How about output?
Software on a hard drive is made of discrete quanta of magnetic fields arranged such that you can recover the 1's and 0's.
Software on a flash drive is made up of floating-gate memory cells arranged to store the same 1's and 0's.
Software in memory is a series of 1's and 0's stored in a different kind of memory gates that happen to be pretty close to the processor. The processor takes those 1's and 0's, processes them through logic gates into different 1's and 0's.

Some of these 1's and 0's are interpreted by specialized hardware to paint pixels on a screen, turn a magnet on and off really fast to make sound, or just sit there and listen for other 1's and 0's from, say, a mouse.

The process of execution falls into different domains, be it single-threaded, multi-threaded, but at the end of the day you're describing a series of 1's and 0's sitting in memory registers that get shifted through logic gates to perform functions according to instruction sets described in other registers and logic that is "frozen" into the hardware.

One can describe the process of software execution in different kind of domains, but I've taken you pretty far into the guts of the machine and close to the metal.
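To make the "1's and 0's shifted through logic gates" picture concrete, here's a toy sketch in Python. It's purely illustrative — no claim about how any particular chip is wired — showing a half adder, one of the simplest useful gate circuits: two gates combining two input bits into a sum bit and a carry bit.

```python
# Toy logic gates operating on single bits (0 or 1).
def AND(a, b):
    return a & b

def XOR(a, b):
    return a ^ b

def half_adder(a, b):
    """Combine two bits into (sum, carry), the way real adder circuits do."""
    return XOR(a, b), AND(a, b)

# 1 + 1 in binary is 10: sum bit 0, carry bit 1.
print(half_adder(1, 1))  # (0, 1)
```

Chain enough of these together and you get arithmetic; chain enough arithmetic and control logic together and you get a processor executing instructions — it's gates all the way down.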

What are you after? How semiconductors work, how logic gates work, how processors or memory busses work, or OS's, or the applications running on the OS?

Maybe this will help. A computer by definition is a state machine. A state machine is a system to which you can ascribe a set of distinct states, together with the rules for getting from one state to another.

For example, you could interpret a human being as a simple state machine in one of two states: alive or dead.

The process of getting from alive to dead means you have to die, and you don't come back. So the description is simple:
A -> D, with the -> being a one-way transition from alive to dead.

A computer is like this, only really really complicated.

I think the human brain in execution could also be described as a series of really really simple state machines at the neuron level, only we have a whole hell of a lot of them, with the state machines interacting and looping with each other.
And I'm wondering about whether the output of a computer and software fit into the brain/consciousness analogy. Is consciousness input, output or both?
I think it's both. When you create a state machine that can consume its own output, you make a feedback loop.
In other words, lucky you -- you get to influence your own software at run-time.
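That run-time feedback loop can itself be sketched in a few lines. The update rule below is an arbitrary toy (not a claim about brains or any real system); the point is only the wiring — each output is fed back in as the next input:

```python
def update(state, signal):
    """Toy update rule: combine the current state with an input signal."""
    return (3 * state + signal + 1) % 7

# Feedback loop: the machine consumes its own output.
state = 0
trace = []
for _ in range(6):
    state = update(state, state)  # last output becomes the next input
    trace.append(state)

print(trace)  # [1, 5, 0, 1, 5, 0] — the loop settles into a cycle
```

Even this trivial machine develops dynamics (here, a repeating cycle) that you can't see in the update rule alone; they only appear once the output is looped back in.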
One of the things Chalmers helped me understand is that it's not easy to see of what phenomenal experience — one aspect of consciousness — is constituted. A common example is the experience of the color green. We can call this "experience of green" a quale. (Sorry for the semantics. Shred it if needed.) Of what is a green quale "made?" Quarks? Atoms? Chemicals? Neurons?
It would be the state that gets created in your brain when you look at green or think about it.
Here's an example of a cat's "quale" being mapped:
 
That's part of the problem with interpreting experiments of that type - it shouldn't be surprising that we get a spike on an EEG or whatever the device is before we become aware of a thought - that's how we experience it subjectively too, and we talk about it that way too: thoughts come out of nowhere or pop into our heads. We don't say "now I'm going to have this thought," but as I said, with attention we can gain awareness, and the brain also rewires itself as a result of how we think ... so it's sometimes the chicken, sometimes the egg, for free will.

The experiments I've seen have been for simple intentions or actions so I don't know if it would scale up ... but we also talk about our intentions or actions that way too ... We may experience them coming from somewhere "outside" our consciousness (though we can usually shed light on it if we look hard) or as irresistible impulses ... at one time at least the law even recognized this.
 
Software on a hard drive is made of discrete quanta of magnetic fields arranged such that you can recover the 1's and 0's.
Software on a flash drive is made up of floating-gate memory cells arranged to store the same 1's and 0's.
Software in memory is a series of 1's and 0's stored in a different kind of memory gates that happen to be pretty close to the processor. The processor takes those 1's and 0's, processes them through logic gates into different 1's and 0's.

Some of these 1's and 0's are interpreted by specialized hardware to paint pixels on a screen, turn a magnet on and off really fast to make sound, or just sit there and listen for other 1's and 0's from, say, a mouse.

The process of execution falls into different domains, be it single-threaded, multi-threaded, but at the end of the day you're describing a series of 1's and 0's sitting in memory registers that get shifted through logic gates to perform functions according to instruction sets described in other registers and logic that is "frozen" into the hardware.

One can describe the process of software execution in different kind of domains, but I've taken you pretty far into the guts of the machine and close to the metal.

What are you after? How semiconductors work, how logic gates work, how processors or memory busses work, or OS's, or the applications running on the OS?

Maybe this will help. A computer by definition is a state machine. A state machine is a system to which you can ascribe a set of distinct states, together with the rules for getting from one state to another.

For example, you could interpret a human being as a simple state machine in one of two states: alive or dead.

The process of getting from alive to dead means you have to die, and you don't come back. So the description is simple:
A -> D, with the -> being a one-way transition from alive to dead.

A computer is like this, only really really complicated.

I think the human brain in execution could also be described as a series of really really simple state machines at the neuron level, only we have a whole hell of a lot of them, with the state machines interacting and looping with each other.

I think it's both. When you create a state machine that can consume its own output, you make a feedback loop.
In other words, lucky you -- you get to influence your own software at run-time.

It would be the state that gets created in your brain when you look at green or think about it.
Here's an example of a cat's "quale" being mapped:

All that skips over the subjective experience ... the "what it's like to be" ...

Objectively getting to the subjective is as concise a statement of "the hard problem" of consciousness as I can formulate.

I'm sure you've read Nagel's "What Is It Like to Be a Bat?"
 
That's part of the problem with interpreting experiments of that type - it shouldn't be surprising that we get a spike on an EEG or whatever the device is before we become aware of a thought - that's how we experience it subjectively too, and we talk about it that way too: thoughts come out of nowhere or pop into our heads. We don't say "now I'm going to have this thought," but as I said, with attention we can gain awareness, and the brain also rewires itself as a result of how we think ... so it's sometimes the chicken, sometimes the egg, for free will.

The experiments I've seen have been for simple intentions or actions so I don't know if it would scale up ... but we also talk about our intentions or actions that way too ... We may experience them coming from somewhere "outside" our consciousness (though we can usually shed light on it if we look hard) or as irresistible impulses ... at one time at least the law even recognized this.
Sure, but now we're just talking about scale, which is an engineering problem, not a philosophical one.
 
All that skips over the subjective experience ... the "what it's like to be" ...

Objectively getting to the subjective is as concise a statement of "the hard problem" of consciousness as I can formulate.

I'm sure you've read Nagel's "What Is It Like to Be a Bat?"
Are you asking me what it's like for the software to be executed?

I would have no idea.

Again, I'm speculating that consciousness is an emergent property of some self-referential highly complex systems that can receive and respond to external stimuli.

It's not like I know this to be true, but at least it's being tested somewhat, and at least it doesn't require some mystical "stuff" that's not part of the material universe to exist.
 
Are you asking me what it's like for the software to be executed?

I would have no idea.

Again, I'm speculating that consciousness is an emergent property of some self-referential highly complex systems that can receive and respond to external stimuli.

It's not like I know this to be true, but at least it's being tested somewhat, and at least it doesn't require some mystical "stuff" that's not part of the material universe to exist.

That's just scientific hand-waving! ;-)

Seriously, I don't think anyone here is saying that about mystical "stuff" ... as Soupie has said, matter is "ethereal" enough on its own, and saying everything is made of matter doesn't of itself rule very much out ... many religions have come to terms with it.

The hard problem has been the core of this discussion from the beginning and we've looked at a lot of possibilities.

So where your thinking has ended until further evidence comes in is exactly where we've started speculating.

The value of that? Like other philosophical aporia, thinking about it gets you no closer to a solution, but it does sharpen Soupie's saw, and as we've seen in our readings it's generated a lot of very rich ideas ... you see the same thing in mathematics.

By the way ... mathematics, created or discovered?
 
I've been wanting to talk maths for a while ... we've just never gotten to it ... I posted a few articles here and in part one.
 
[A]t the end of the day you're describing a series of 1's and 0's...
Is it fair to say that software is a pattern of information [embodied as physical bits]?

What are you after? How semiconductors work, how logic gates work, how processors or memory busses work, or OS's, or the applications running on the OS?
I think it's easy to see how the brain and a computer are alike but not so easy to see how phenomenal experience and "a series of 1's and 0's" are alike.

Having said that, my current regard of the analogy may be even more literal than yours.

I do think mind literally is a pattern of information. A very, very complex, dynamic pattern. (Having said that, I'm not suggesting that it's the complexity that gives rise to mind.)

It's not clear to me what physical substance a qualitative experience might be reduced to, nor do I see how a quale could emerge from physical processes. (That is, not exist beforehand in any form, but then exist post-hand.)

Thus I believe the constituents of qualia — phenomenal experiences — exist as a fundamental aspect of physical reality.

I think qualitative experiences — and the rest of consciousness — are constituted of information.

I think it's both. When you create a state machine that can consume its own output, you make a feedback loop.
In other words, lucky you -- you get to influence your own software at run-time.
As I believe the mind is information, I think it's constituted of both the incoming and outgoing (so to speak) information.

[A quale] would be the state that gets created in your brain when you look at green or think about it.
I'm not sure what you mean by "the state" here. Can you go a little farther here? The state of neurons? How does this state give rise to or create qualitative experience?
 
...before we become aware of a thought - that's how we experience it subjectively too - and we talk about it that way too, thoughts come out of nowhere or pop into our heads. ...

We may experience them coming from somewhere "outside" our consciousness (though we can usually shed light on it if we look hard) or as irresistible impulses ... at one time at least the law even recognized this.
"Popping into our heads" and "coming from outside our consciousness" may be the subjective experience of our bodies receiving and integrating information; that is, the process of received, pre-integrated information (unconscious mind) being processed into integrated information (conscious mind).

This particular meditative experience you've described many times — the formation and rejecting of thoughts — may be your sense of self (conscious mind or integrated information) observing (and ultimately denying) the integration of new information.
 
I know I'm on a serious information theory/philosophy of mind kick right now, but bear with me.

I recently heard a very insightful explanation for the subjective experience of "not being able to think straight."

I was told (or read, can't remember) that this subjective experience is one's working memory dropping out. It often happens when one is stressed, nervous, or panicked.

I'm not sure if it is legit, but I think it's interesting.
 
"Popping into our heads" and "coming from outside our consciousness" may be the subjective experience of our bodies receiving and integrating information; that is, the process of received, pre-integrated information (unconscious mind) being processed into integrated information (conscious mind).

This particular meditative experience you've described many times — the formation and rejecting of thoughts — may be your sense of self (conscious mind or integrated information) observing (and ultimately denying) the integration of new information.

Right. What did you think I was saying?
 