
Consciousness and the Paranormal — Part 2

I just dropped in (to see what condition my condition was in): How ‘blind insight’ changes our view of metacognition | NeuroBanter

Neuroscientists long appreciated that people can make accurate decisions without knowing they are doing so. This is particularly impressive in blindsight: a phenomenon where people with damage to the visual parts of their brain can still make accurate visual discriminations while claiming to not see anything. But even in normal life it is quite possible to make good decisions without having reliable insight into whether you are right or wrong.

In a paper published this week in Psychological Science, our research group – led by Ryan Scott – has for the first time shown the opposite phenomenon: blind insight. This is the situation in which people know whether or not they’ve made accurate decisions, even though they can’t make decisions accurately!

This is important because it changes how we think about metacognition. ...

The discovery of blind insight changes the way we think about decision-making. ... Exactly what is going on remains an open question. Several possible mechanisms could account for our results. One exciting possibility appeals to predictive processing, which is the increasingly influential idea that perception depends on top-down predictions about the causes of sensory signals. If top-down influences are also involved in metacognition, they could carry the additional information needed for blind insight. This would mean that metacognition, like perception, is best understood as a process of probabilistic inference. ...

This brings us to consciousness (of course). Metacognitive judgments are often used as a proxy for consciousness, on the logic that confident decisions are assumed to be based on conscious experiences of the signal (e.g., the dim light was consciously seen), whereas guesses signify that the signal was processed only unconsciously. If metacognition involves top-down inference, this raises the intriguing possibility that metacognitive judgments actually give rise to conscious experiences, rather than just provide a means for reporting them.
In the comments, the author is asked to expand on how this research relates to consciousness:

Well that’s the highly speculative bit! In visual perception, we are already starting to see how top-down expectations can have dramatic influences on conscious contents – or on what reaches consciousness in the first place (we are writing up some of these experiments now; a key issue here is to distinguish expectation from attention, which is difficult but possible). So the thought is that if metacognition involves top-down processes – perhaps in shaping expectations about the statistics of probability distributions underlying perceptual decision – then the act of making a metacognitive judgement could actually shape (or maybe even ‘give rise to’ in the sense of crossing a threshold) – the corresponding conscious contents. This fits nicely with the idea that behavioural report of an experience (which by definition involves metacognition) is an action, not just a passive read-out of some pre-existing information. And actions can shape how perceptions arise from sensations. What we need to do next is to develop theoretical models of how metacognitive judgements actually arise, and then do the experiments to show whether (and how) engaging in these judgements changes the conscious contents these judgements are about.
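As an aside, it helps to see how the dissociation described above is even possible: people can be at chance on the decision itself yet still know when they are right. Here is a minimal toy simulation of that pattern, purely my own sketch in Python (not the model used by Scott et al.): first-order decisions are pure guesses, while a separate, weakly informative evidence channel drives the confidence judgment.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical toy model of "blind insight" (my illustration, not the paper's):
# the stimulus is +1 or -1, the first-order decision is a pure guess,
# but a separate noisy evidence channel still carries some real signal.
stimulus = rng.choice([-1, 1], size=n)
decision = rng.choice([-1, 1], size=n)               # chance-level choices
evidence = stimulus + rng.normal(0.0, 2.0, size=n)   # weak but genuine signal

correct = decision == stimulus
# Call a trial "high confidence" when the evidence agrees with the choice made.
high_confidence = np.sign(evidence) == decision

print("overall accuracy:         ", correct.mean())                    # ~0.50
print("accuracy, high confidence:", correct[high_confidence].mean())   # ~0.69
print("accuracy, low confidence: ", correct[~high_confidence].mean())  # ~0.31

In this toy setup first-order accuracy stays at chance while the high-confidence trials are correct far more often than the low-confidence ones, which is the flavour of the dissociation the paper reports; the real study of course measures this with proper signal detection analysis, not this cartoon.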

My own thinking has been that an organism can have subjective experience, but lack self-aware consciousness (and the ability to reflect on its subjective experience).

However, the author's musings about metacognition and consciousness, as well as Graziano's theory, imply that subjective experience — the "what it's like" — arises in part due to meta-awareness.

This is not my current belief.

How consciousness works – Michael Graziano – Aeon

Lately, the problem of consciousness has begun to catch on in neuroscience. How does a brain generate consciousness? In the computer age, it is not hard to imagine how a computing machine might construct, store and spit out the information that ‘I am alive, I am a person, I have memories, the wind is cold, the grass is green,’ and so on. But how does a brain become aware of those propositions? The philosopher David Chalmers has claimed that the first question, how a brain computes information about itself and the surrounding world, is the ‘easy’ problem of consciousness. The second question, how a brain becomes aware of all that computed stuff, is the ‘hard’ problem.

The related question I have asked is: How does information become aware of itself?

This question is scientifically approachable, and the attention schema theory supplies the outlines of an answer.

One way to think about the relationship between brain and consciousness is to break it down into two mysteries. I call them Arrow A and Arrow B. Arrow A is the mysterious route from neurons to consciousness. If I am looking at a blue sky, my brain doesn’t merely register blue as if I were a wavelength detector from Radio Shack. I am aware of the blue. ...

The attention schema theory does not suffer from these difficulties. It can handle both Arrow A and Arrow B. Consciousness isn’t a non-physical feeling that emerges. Instead, dedicated systems in the brain compute information. Cognitive machinery can access that information, formulate it as speech, and then report it. When a brain reports that it is conscious, it is reporting specific information computed within it. It can, after all, only report the information available to it. In short, Arrow A and Arrow B remain squarely in the domain of signal-processing. ...

What are out-of-body experiences then? One view might be that no such things exist, that charlatans invented them to fool us. Yet such experiences can be induced in the lab, as a number of scientists have now shown. A person can genuinely be made to feel that her centre of awareness is disconnected from her body. The very existence of the out-of-body experience suggests that awareness is a computation and that the computation can be disrupted. Systems in the brain not only compute the information that I am aware, but also compute a spatial framework for it, a location, and a perspective. Screw up the computations, and I screw up my understanding of my own awareness. ...

I think Graziano has something here, but it's not the answer to the Hard Problem. Like the first paper on metacognition, it might indicate that self-awareness (or meta-awareness) is a necessary ingredient for subjective experience.

If this is the case, then organisms lacking meta- or self-awareness will not have human-like phenomenal experience — not just the inability to report them.

Graziano suggests that his model provides an answer to the hard problem, and while attention/awareness may ultimately play a role in the realization of consciousness, I think what Graziano's model does best is describe how the brain creates a model of the mental self. The "I" that resides inside my body instead of the big tree out front.

http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3223025/pdf/nihms328502.pdf

... Second, people routinely compute the state of awareness of other people. A fundamental part of social intelligence is the ability to compute information of the type, “Bill is aware of X.” In the present proposal, the awareness we attribute to another person is our reconstruction of that person’s attention. This social capability to reconstruct other people’s attentional state is probably dependent on a specific network of brain areas that evolved to process social information, though the exact neural instantiation of social intelligence is still in debate.

Third, in the present hypothesis, the same machinery that computes socially relevant information of the type, “Bill is aware of X,” also computes information of the type, “I am aware of X.” When we introspect about our own awareness, or make decisions about the presence or absence of our own awareness of this or that item, we rely on the same circuitry whose expertise is to compute information about other people’s awareness.

Fourth, awareness is best described as a perceptual model. It is not merely a cognitive or semantic proposition about ourselves that we can verbalize. Instead it is a rich informational model that includes, among other computed properties, a spatial structure. A commonly overlooked or entirely ignored component of social perception is spatial localization. Social perception is not merely about constructing a model of the thoughts and emotions of another person, but also about binding those mental attributes to a location. We do not merely reconstruct that Bill believes this, feels that, and is aware of the other, but we perceive those mental attributes as localized within and emanating from Bill. In the present hypothesis, through the use of the social perceptual machinery, we assign the property of awareness to a location within ourselves. ...

Again, I think Graziano has something here: the same brain processes that allow us to project awareness and intention onto other objects (oftentimes erroneously) likely play a role in the creation of our own "I" centered in our bodies.
 
Graziano's argument is this (I'm paraphrasing): (1) Because we know the brain processes information for motor function, and (2) we can verbally report our conscious experiences, then (3) conscious experiences must be information, (4) because only information can cause stuff to happen in the brain-body.

I don't find that argument (underscored above) to be persuasive. Would you cite the paper in which he expresses it? I can agree that 'information' in nature enables creatures to experience the world, and to be aware of themselves as present in their experiences, but I don't see how it accounts for what experience is, how it differs from 'information'. Thus the hard problem remains unresolved.
The argument is outlined at the end of Graziano's paper linked above: How Consciousness Works

My understanding of the hard problem is that it asks how and why we have phenomenal experiences, a feeling of "what it's like."

My current thinking is that how we have it is via meaningful data, i.e., information or meaning. When organisms process received physical data into meaningful physical information, it becomes phenomenal physical information.

My current thinking as to why organisms have it is because it's adaptive.

Isn't epiphenomenalism the 'idea' that phenomenal experience is not real and does not depend on 'consciousness' for its existence, thus that what 'looks like consciousness' is a computational byproduct of information processing in the brain? I might be missing something here; it's a long time since I've tangled with epiphenomenalism.
No. Epiphenomenalism is the idea that conscious, phenomenal experience has no causal influence on the physical body-brain.

How can we know the ontology of consciousness if we don't know what consciousness is? It seems clear that we can't know what it is until we resolve the hard problem. Denying that the hard problem exists is not sufficient to dissolve it.
Knowing what it is and knowing what it is made of seem to be two different things.

I think most people agree what consciousness is: the feeling of "what it's like."

What consciousness is made of is the million dollar question.

I don't see how experience can be understood to be non-subjective, nor how experience can be seen as a substance of any kind. Except to rescue the materialist/physicalist paradigm, I don't see the purpose in trying to establish any substance as constitutive of consciousness.
I'm not suggesting phenomenal experience is "non-subjective." You must have misunderstood me.

I believe for something to exist, it must be something rather than nothing. So if consciousness exists, which I believe it does, it must consist of something. Of course, I could be wrong on both counts.

But, organisms definitely exist and are definitely made of something, and if consciousness can causally interact with organisms, then it definitely must be made of something, something that can interact with organisms. A difference that makes a difference.

I still have not seen an explanation of how consciousness can be defined as 'information'. Doyle does not attempt to do so, judging by your summary of his approach here:
I think consciousness is meaning. I also think information is meaning. Ergo, consciousness is information. Information is consciousness.

I'm pretty certain Doyle believes the mind is ontologically information embodied by the body-brain. But neither he nor I think the human body-brain processes data like a computer.

That's a hypothesis, I believe, rather than something proved.
Yes, that phenomenal and cognitive consciousness is generated at the neuronal level is hardly proved.

do you think the same 'information' involved in generating and sustaining human bodies and brains likewise shapes or even determines what can be felt and thought by humans?
I believe that what is felt and thought by humans is information.

The mental self is information embodied in our body self.

That is, humans are not triune, but biune; we do not consist of a physical self, an informational self, and a mental self. The informational self and the mental self are ontologically the same: thus, we consist of the physical self and the mental (informational) self.

Why does meaningful data — information — feel like something? I don't know. How does an organism make meaningless data into meaningful data (information)? I don't know, but there are a lot of biologists, neuroscientists, and computer scientists trying to figure it out.

Finally: Are we living in the age of the brain? | Prospect Magazine

As Marcus points out, it seems reasonable to suppose that the brain is a kind of computer but we still have no idea what kind of computer it is: how it manipulates and organizes information. The temptation has been to imagine that it must be a computer like the ones we build, using the same principles of computation that were outlined by pioneers of computational theory such as John von Neumann and Alan Turing. But that might not be true. According to artificial intelligence specialist Rodney Brooks of the Massachusetts Institute of Technology, “I believe that we are in an intellectual cul-de-sac, in which we model brains and computers on each other, and so prevent ourselves from having deep insights that would come with new models.” We don’t understand, for example, why it is that the human brain finds so easy tasks that tax the best supercomputer (such as parsing text), and vice versa.

It could also be a mistake to imagine the brain as some optimized device that uses just a few fundamental principles. It has, after all, been cobbled together by evolution, and like so much else shaped that way it only has to work “well enough.” Cognitive scientist VS Ramachandran has suggested that the brain might simply be a “bag of tricks,” or what Marcus has dubbed a “kluge:” a clumsy, makeshift solution that does the job but without any particular elegance. If that’s so, understanding the brain is going to be even harder than we might imagine. And it won’t be done simply by mapping it down to the last synapse.
 
Here's a link to the article referenced in the last article snippet above:

Is the Brain a Good Model for Machine Intelligence?

To celebrate the centenary of the year of Alan Turing’s birth, four scientists and entrepreneurs assess the divide between neuroscience and computing. ...

To advance AI, we need to better understand the brain’s workings at the algorithmic level — the representations and processes that the brain uses to portray the world around us. For example, if we knew how conceptual knowledge was formed from perceptual inputs, it would crucially allow for the meaning of symbols in an artificial language system to be grounded in sensory ‘reality’. ...

Brains differ from computers in a number of key respects. They operate in cycles rather than in linear chains of causality, sending and receiving signals back and forth. Unlike the hardware and software of a machine, the mind and brain are not distinct entities. And then there is the question of chemistry.

http://www.gatsby.ucl.ac.uk/~demis/TuringSpecialIssue(Nature2012).pdf
This brief article is the best I've read outlining how little we know about the brain.
 
Fascinating!!! Thank you for posting this, Flipper. It inspires me to read Vedantic and Yogic texts and perhaps even to pursue their practices. That state of dreamless sleep, that lowest not-quite-'contentless' state, seems to me to signify pure presence, a sense of pure being. Husserl was therefore wrong to claim "no consciousness but by virtue of/in awareness of things" as a corollary of "no things but in consciousness." I want to read Thompson's new book on this neurophenomenological research.
That's the first I've heard Thompson speak. Very impressive. I will be starting his book that you recommended long ago as soon as I finish the IIT article and Pharoah's book.

Re: Husserl's claim being wrong: I don't think it is, as an awareness of being is still awareness of some "thing."

Thompson used a lot of the language you use, and at the very end described the idea you've expressed many times — this core, biological awareness of the body, of living — and I was very much able to follow it intellectually. I'm anxious to read his book.
 
Here's a link to the article referenced in the last article snippet above:

Is the Brain a Good Model for Machine Intelligence?

To celebrate the centenary of the year of Alan Turing’s birth, four scientists and entrepreneurs assess the divide between neuroscience and computing. ...

To advance AI, we need to better understand the brain’s workings at the algorithmic level — the representations and processes that the brain uses to portray the world around us. For example, if we knew how conceptual knowledge was formed from perceptual inputs, it would crucially allow for the meaning of symbols in an artificial language system to be grounded in sensory ‘reality’. ...

Brains differ from computers in a number of key respects. They operate in cycles rather than in linear chains of causality, sending and receiving signals back and forth. Unlike the hardware and software of a machine, the mind and brain are not distinct entities. And then there is the question of chemistry.

http://www.gatsby.ucl.ac.uk/~demis/TuringSpecialIssue(Nature2012).pdf
This brief article is the best I've read outlining how little we know about the brain.

Now, didn't I tell you long ago we didn't know much!? ;-)

It could also be a mistake to imagine the brain as some optimized device that uses just a few fundamental principles. It has, after all, been cobbled together by evolution, and like so much else shaped that way it only has to work “well enough.” Cognitive scientist VS Ramachandran has suggested that the brain might simply be a “bag of tricks,” or what Marcus has dubbed a “kluge:” a clumsy, makeshift solution that does the job but without any particular elegance. If that’s so, understanding the brain is going to be even harder than we might imagine. And it won’t be done simply by mapping it down to the last synapse.

1. If Ramachandran is right, then this idea, like every idea, is itself the product of a kludge!
2. I wonder if any "solution" to the problem of the brain is a kludge rather than an optimized device that uses just a few fundamental principles?

If 2 - then "well enough" is optimized ... or as @Constance might maintain ... it isn't nice to fool Mother Nature.

I still like the Smolin? critique of AI as being a level 5 or 6 problem. Given time and physical constraints (gravity in relation to bone in relation to neural transmission speed, etc) we might be the most intelligent possible beings ... at least in our corner of the galaxy.
 
Given time and physical constraints (gravity in relation to bone in relation to neural transmission speed, etc) we might be the most intelligent possible beings ... at least in our corner of the galaxy.
If life and intelligence (meaning making) are substrate dependent, then this might be correct.

Side note: Under the section "related" at Evan Thompson's wiki page is a link to Jordan Peterson's wikipage. :)
 
If life and intelligence (meaning making) are substrate dependent, then this might be correct.

Side note: Under the section "related" at Evan Thompson's wiki page is a link to Jordan Peterson's wikipage. :)

And, because of time - it could be correct even if they are substrate independent. It could even be correct without time ... the things of which we are made have some very interesting properties that, say, silicon does not.

If we do make a brain, it's a good bet we might use hydrogen and oxygen in combination with carbon - not to mention the fact that there is lots of it! So it's an interesting assumption you make that the best possible materials aren't already pressed into the best possible configuration ... you know, considering it's a kludge.
 
And, because of time - it could be correct even if they are substrate independent. It could even be correct without time ... the things of which we are made have some very interesting properties that, say, silicon does not.

If we do make a brain, it's a good bet we might use hydrogen and oxygen in combination with carbon - not to mention the fact that there is lots of it! So it's an interesting assumption you make that the best possible materials aren't already pressed into the best possible configuration ... you know, considering it's a kludge.
The fact that life as we know it consists of an organic substrate may have more to do with abiogenesis than organics being the "best possible materials."

But it might not.

If life and intelligence can and did make a "jump" to non-organic materials — with the aid of human teleology — would you consider that natural, evolution, or even natural evolution?

You obviously dislike the idea of the brain/body/mind being a kludge. Why?
 
And, because of time - it could be correct even if they are substrate independent. It could even be correct without time ... the things of which we are made have some very interesting properties that, say, silicon does not.

If we do make a brain, it's a good bet we might use hydrogen and oxygen in combination with carbon - not to mention the fact that there is lots of it! So it's an interesting assumption you make that the best possible materials aren't already pressed into the best possible configuration ... you know, considering it's a kludge.
It remains to be seen if life and intelligence are indeed substrate dependent. If they are not, then it remains to be seen at which level they emerge, arise, whatever.

Life appears to arise at the cellular level, right? Is DNA alive? Does life reside at the molecular level?

At what level does intelligence reside? Individual neurons? Neural networks? Brains? Bodies? Environments?

Just thinking out loud.
 
The fact that life as we know it consists of an organic substrate may have more to do with abiogenesis than organics being the "best possible materials."

But it might not.

If life and intelligence can and did make a "jump" to non-organic materials — with the aid of human teleology — would you consider that natural, evolution, or even natural evolution?

You obviously dislike the idea of the brain/body/mind being a kludge. Why?

Now ... you're just wanting to argue ... ;-)

The fact that life as we know it consists of an organic substrate may have more to do with abiogenesis than organics being the "best possible materials."

Once again, in English?

But it might not.

YES! might might is where play happens ... watch this:

might = play
play = good
good = right

might = right!

If life and intelligence can and did make a "jump" to non-organic materials — with the aid of human teleology — would you consider that natural, evolution, or even natural evolution?

Yes ... but then I also walk to school and carry my lunch. On the other hand - if humans are abiogenic ...

You obviously dislike the idea of the brain/body/mind being a kludge. Why?

I so obviously don't! I posted the alien technology article that explored this idea in depth, remember? And brain = kludge goes back to the naturalism argument by Plantinga - all the way back to the pre-C&P era:

BC&PE
 
It remains to be seen if life and intelligence are indeed substrate dependent. If they are not, then it remains to be seen at which level they emerge, arise, whatever.

Life appears to arise at the cellular level, right? Is DNA alive? Does life reside at the molecular level?
At what level does intelligence reside? Individual neurons? Neural networks? Brains? Bodies? Environments?

Just thinking out loud.

I don't have a dog in the fight either way - because I don't as yet see where anything else follows from substrate (in)dependence - it's not a materialist or non-materialist argument; either could be true, and substrate dependence could be true or not true. In other words, consciousness could be immaterial and the brain substrate dependent ... or any other combination.

The rest of the questions I think aren't helpful except in terms of how life is defined for the context it's used ... so it's kind of a logical fallacy to go looking for "life" as a thing other than how we define it - a useful logical fallacy though ... PLAY!

PLAY OUT LOUD

But look, again - we have our old argument that is evidence for intelligence as non-local ... and other theories of consciousness as immaterial ... or whatever ... so as we play with ideas, we keep an eye open for the best possible explanation, not just for what we see - but in order to see what is.
 
Now ... you're just wanting to argue ... ;-)

The fact that life as we know it consists of an organic substrate may have more to do with abiogenesis than organics being the "best possible materials."

Once again, in English?
If life evolved from non-life, organic material may have been more conducive to this process (or the only substrate conducive to it).

However, now that life and intelligence are established, perhaps a transition to non-organics is now feasible.

But it might not.
YES! might might is where play happens ... watch this:
And playing is what I, who am so often accused of proclaiming, am doing.

If life and intelligence can and did make a "jump" to non-organic materials — with the aid of human teleology — would you consider that natural, evolution, or even natural evolution?

Yes ... but then I also walk to school and carry my lunch. On the other hand - if humans are abiogenic ...
My point is that neither non-organic life/intelligence nor human-driven evolution need be considered non-natural. Although one could make the argument.

The crux for me is that the supernatural is excluded, not non-organics and teleology.

You obviously dislike the idea of the brain/body/mind being a kludge. Why?
I so obviously don't! I posted the alien technology article that explored this idea in depth, remember? And brain = kludge goes back to the naturalism argument by Plantinga - all the way back to the pre-C&P era:

BC&PE
Ok. But would you agree that historically man has arrived at incorrect — albeit explanatory — meanings, and that science has helped us develop better — perhaps truer — narratives?
 
in other words, consciousness could be immaterial and the brain substrate dependent ... or any other combination.
Haha!

The rest of the questions I think aren't helpful except in terms of how life is defined for the context it's used ... so it's kind of a logical fallacy to go looking for "life" as a thing other than how we define it - a useful logical fallacy though ...
This is a non-vitalist view, correct. (Don't want to put words in your mouth.)

I think viruses are interesting in this regard, as they are — hell, I don't know what they are — organic information constructs that seem to bridge the gap between non-living matter and living matter. I agree with you — I think — that life is a process not a thing.

And I think consciousness is the same. And that I do not think you'd agree with.

But look, again - we have our old argument that is evidence for intelligence as non-local ... and other theories of consciousness as immaterial ... or whatever ... so as we play with ideas, we keep an eye open for the best possible explanation, not just for what we see - but in order to see what is.
I'm honestly not playing dumb, but what is the argument for non-local consciousness?

What I've gathered from you and Constance has been NDEs, OBEs, past life memories and the current inability to fully describe the apparent relation between the body and experience.
 
Haha!


This is a non-vitalist view, correct. (Don't want to put words in your mouth.)

I think viruses are interesting in this regard, as they are — hell, I don't know what they are — organic information constructs that seem to bridge the gap between non-living matter and living matter. I agree with you — I think — that life is a process not a thing.

And I think consciousness is the same. And that I do not think you'd agree with.


I'm honestly not playing dumb, but what is the argument for non-local consciousness?

What I've gathered from you and Constance has been NDEs, OBEs, past life memories and the current inability to fully describe the apparent relation between the body and experience.

Ha ha! Or do you mean ah ha? Because nothing does follow from material substrateness that I can see.

As to the supernatural, that's simply the idea that something could be outside of nature - so science can't deal with it, meaning it can't reject it. It doesn't make for much discussion on this topic, so let's move on.

I'm honestly not playing dumb, but what is the argument for non-local consciousness?

Read 10 studies from Radin's site - at random, all the way through - or don't ask me again. Also follow up on the dozens of suggestions @Constance has provided.

While you're at it - read McGilchrist - and something on Buddhism. I think you get my point. We can't discuss things you don't have knowledge of -

Now, I don't say Radin's site proves it - all you asked for is evidence.

On the flip side - prove it isn't non-local.
 
I didn't ask for proof or evidence, I asked for an argument/theory.

I don't recall even one suggestion from Constance regarding the non-locality of consciousness. Maybe a book about quantum/entangled consciousness? Constance has certainly never provided a theory/argument for the non-locality of consciousness.

Constance won't even explain how consciousness might have naturally evolved along with biological life but at the same time not be ontologically physical/material.

Re: Radin: If I recall, his work is on psi. That the brain/mind may causally affect non-local physical objects does not mean the mind is non-local. As Constance has noted, she (and apparently Radin himself) has identified quantum entanglement (a notably physical process) as the potential mechanism of psi.

Unless I'm missing something, Radin's work doesn't indicate that the mind itself is non-local, nor does it indicate the mind is non-physical.
 
@Soupie

here's the important idea ... to me

According to artificial intelligence specialist Rodney Brooks of the Massachusetts Institute of Technology, “I believe that we are in an intellectual cul-de-sac, in which we model brains and computers on each other, and so prevent ourselves from having deep insights that would come with new models.” We don’t understand, for example, why it is that the human brain finds so easy tasks that tax the best supercomputer (such as parsing text), and vice versa.

new models ... i think the old dichotomies aren't sufficient

material/immaterial
supernatural/natural

what we need is a tetralemma ...
 
I didn't ask for proof or evidence, I asked for an argument/theory.

I don't recall even one suggestion from Constance regarding the non-locality of consciousness. Maybe a book about quantum/entangled consciousness? Constance has certainly never provided a theory/argument for the non-locality of consciousness.

Constance won't even explain how consciousness might have naturally evolved along with biological life but at the same time not be ontologically physical/material.

Re: Radin: If I recall, his work is on psi. That the brain/mind may causally affect non-local physical objects does not mean the mind is non-local. As Constance has noted, she (and apparently Radin himself) has identified quantum entanglement (a notably physical process) as the potential mechanism of psi.

Unless I'm missing something, Radin's work doesn't indicate that the mind itself is non-local, nor does it indicate the mind is non-physical.

Irreducible Mind

The authors' controversial approach repudiates the conventional theory of human consciousness as a material epiphenomenon that can be fully explained in terms of physical brain processes and advances the mind as an entity independent of the brain or body. They advance an alternative “transmission” or “filter” theory of the mind-brain relationship. In so doing, they are reviving the century-old dualism of the British parapsychologist Frederic W. H. Myers (1843-1901) which was further developed by his friend and colleague the American psychologist and philosopher William James (1842–1910).

posted by @Constance numerous times
 
"I still have not seen an explanation of how consciousness can be defined as 'information'."

How about:
1. all conscious beings (appear to) emanate from a complex substrate that must interact with its environment;
2. interaction leads to coherent meaning - i.e., 'coherent meaning' is a term of value in terms of relating to the survival of the being: for interaction to be incoherent would be to deny the value of interactive relevancy, and consequently, to deny the sanity of coherent relation and henceforth to existence itself; therefore
3. consciousness correlates with an interactive process that is entirely informational - it must largely come about through the institution of interactive coherence.

Sorry I have been absent for so long. Didn't know you all still existed here... thought you all might have become largely incoherent. lol
 
If metacognition involves top-down inference, this raises the intriguing possibility that metacognitive judgments actually give rise to conscious experiences, rather than just provide a means for reporting them.

Merriam-Webster defines metacognition as follows: "awareness or analysis of one's own learning or thinking processes." This has long been my understanding of metacognition -- that its prerequisite is cognition itself, which is generally conscious activity {though it increasingly seems to me that thinking also takes place subconsciously, that we receive ideas and even relationships among ideas from our subconscious minds}. Metacognition arises with increasing self-awareness of what is going on in one's mind at the point when, in Sartre's term, the Reflective Cogito makes its appearance in the midst of the stream of consciousness in which one is ordinarily engaged when awake. This stream of consciousness consists of both self-awareness and prereflective consciousness of one's surroundings. Thinking arises from experience and is itself experienced at some level. But one's first fully conscious brush with the presence of the Reflective Cogito can be shocking and somewhat disorienting. At first it feels as if one has discovered another mind operating within one's ordinary consciousness. I had the experience before I ever read Sartre, and at first I thought I might be 'losing my mind', but one's reflective cogito soon becomes a helpful companion to the prereflective cogito. So I would say that metacognition can't trigger consciousness itself since it can only occur in an already conscious mind, but that becoming aware of the reflective cogito significantly widens and deepens consciousness, enabling it to reflect on itself.

Unless I'm missing something, Radin's work doesn't indicate that the mind itself is non-local, nor does it indicate the mind is non-physical.

It's not that an individual's mind is itself nonlocal, but that it can receive and/or access nonlocal information. As a pantheist sympathizer, you might be more than usually open to these ideas. The properties of nonlocality and entanglement first discovered in quantum physics have suggested that q. entanglement might provide the physical basis for the reception of nonlocal information.
 
"I still have not seen an explanation of how consciousness can be defined as 'information'."

How about:
1. all conscious beings (appear to) emanate from a complex substrate that must interact with its environment;
2. interaction leads to coherent meaning - i.e., 'coherent meaning' is a term of value in terms of relating to the survival of the being: for interaction to be incoherent would be to deny the value of interactive relevancy, and consequently, to deny the sanity of coherent relation and henceforth to existence itself; therefore
3. consciousness correlates with an interactive process that is entirely informational - it must largely come about through the institution of interactive coherence.

Sorry I have been absent for so long. Didn't know you all still existed here... thought you all might have become largely incoherent. lol

Welcome back to our little roundtable, Pharoah. We thought you'd know where to find us.

Re your formulation above, I would say that at point 2, when the biological creature moves out to explore its ecological niche, the informational interaction that takes place becomes a temporal, existential one involving the organism and its environment in which a number of different and undetermined things can happen. As we've seen in Panksepp's affective neuroscience theory, even primordial organisms are 'affected' by their environments, and this affectivity opens a direction or path for the evolutionary development of increasing awareness, protoconsciousness, consciousness, and mind. It seems to be a recent insight in biological and environmental research that both the organism/animal and its ecological niche contribute to what becomes their mutual stability, or one might use your phrase in point 3 -- their 'interactive coherence'. But I see the contributions of protoconscious and conscious animals as active, even proactive, in their gaining a grip on their situations in order not just to survive but to thrive, thus as more than mere correlation with 'information'.

3. consciousness correlates with an interactive process that is entirely informational - it must largely come about through the institution of interactive coherence.
 