Consciousness and the Paranormal — Part 10

"Before McCulloch and Pitts, neither Turing nor anyone else had used the mathematical notion of computation as an ingredient in a theory of mind and brain. The present paper aims, among other things, to point out how McCulloch and Pitts’s theory changed the intellectual landscape, so that many could see neural computations as the most promising way to explain mental activities."

http://www.umsl.edu/~piccininig/First_Computational_Theory_of_Mind_and_Brain.pdf
 
If you are serious about not just consciousness but POM, read the primary texts ... and we'll see you in a few years.
Ah, the sweet jangles of the elitism of academia.

Actually I come here for synthesis, to glean what I can from your fine minds, and to shoot your horses.
 
"Although McCulloch had a keen interest in philosophy and mathematics, in which he took several undergraduate and graduate courses, he was mainly a neurophysiologist and psychiatrist. He believed that the goal of neurophysiology and psychiatry was to explain the mind in terms of neural mechanisms, and that scientists had not seriously tried to construct a theory to this effect.7

While pursuing his medical studies in the mid-1920s, McCulloch claimed that he developed a psychological theory of mental atoms. He postulated atomic mental events, which he called "psychons," in analogy with atoms and genes:

My object, as a psychologist, was to invent a kind of least psychic event, or "psychon," that would have the following properties: First, it was to be so simple an event that it either happened or else it did not happen. Second, it was to happen only if its bound cause had happened...that is, it was to imply its temporal antecedent. Third, it was to propose this to subsequent psychons. Fourth, these were to be compounded to produce the equivalents of more complicated propositions concerning their antecedents.8

McCulloch said he tried to develop a propositional calculus of psychons.

Unfortunately, the only known records of this work are few passages in later autobiographical essays by McCulloch himself.9 The absence of primary sources makes it difficult to understand the nature of McCulloch’s early project. A key point was that a psychon is "equivalent" to a proposition about its temporal antecedent. In more recent terminology, McCulloch seemed to think that a psychon has a propositional content, which contains information about that psychon’s cause. A second key point was that a psychon "proposes" something to a subsequent psychon. This seems to mean that the content of psychons can be transmitted from psychon to psychon, generating "the equivalents" of more complex propositions. These themes would play an important role in McCulloch’s mature theory of the brain."
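To make "neural computation" concrete, here is a minimal Python sketch (ours, for illustration; the names are not from the paper) of a McCulloch-Pitts threshold unit, the 1943 idealized neuron that fires if and only if the weighted sum of its binary inputs reaches a threshold:

```python
# Minimal sketch of a McCulloch-Pitts threshold unit (1943).
# Inputs and outputs are binary; the unit "fires" (returns 1)
# iff the weighted sum of its inputs reaches the threshold.

def mcp_unit(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Elementary logical propositions realized as single units:
AND = lambda a, b: mcp_unit([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcp_unit([a, b], [1, 1], threshold=1)
NOT = lambda a:    mcp_unit([a],    [-1],   threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(1) == 0 and NOT(0) == 1
```

Networks of such units, McCulloch and Pitts argued, can realize any expression of propositional logic, which is what licensed the leap from neurons to "A Logical Calculus of the Ideas Immanent in Nervous Activity" and, later, to computational theories of mind.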
 
Ah, the sweet jangles of the elitism of academia.

Actually I come here for synthesis, to glean what I can from your fine minds, and to shoot your horses.

Quite the opposite! If you're serious about any subject you have to read what others have written about it. Texts ... not textbooks. All freely available on the web. Dr. Sadler's podcasts, linked above, are explicitly outside the academy and are a great, generous gift for all.
 
Recursivity, paradox ... illusion/mirage are good examples of the need for an intelligent being to recognize and step out of loops ... i.e. why we need "artificial common sense" not AI. An ACS system might not develop AI+ for starters.

Are there loops that can be recognized but not stepped out of? Or at least not stepped into?
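A hedged aside on the "recognized" part: for a deterministic process whose successive states we can observe, entering a loop can be recognized mechanically, e.g. with Floyd's tortoise-and-hare method sketched below (the code and names are just ours, for illustration). Recognizing loops in arbitrary programs, by contrast, is the halting problem, which is undecidable, so there is no general procedure that recognizes every loop before it is stepped into.

```python
# Illustrative only: Floyd's "tortoise and hare" cycle detection.
# If iterating f from x0 ever enters a cycle, the slow pointer and
# the twice-as-fast pointer must eventually meet inside it.

def enters_cycle(f, x0, max_steps=100_000):
    tortoise, hare = f(x0), f(f(x0))
    for _ in range(max_steps):
        if tortoise == hare:
            return True                      # a loop has been recognized
        tortoise, hare = f(tortoise), f(f(hare))
    return False                             # none found within the step budget

# Example: the Collatz map eventually falls into the 4 -> 2 -> 1 -> 4 loop
# for every starting value anyone has tried.
collatz = lambda n: n // 2 if n % 2 == 0 else 3 * n + 1
print(enters_cycle(collatz, 27))             # True
```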

This is why I said that if @Michael Allen's right about the fatality of centroid slicing ... then he ... and we with him are already dead.

And yet ...

QED
I had said of @Constance a few posts ago that at times it seems as if she seeks to explain the origin and nature of the personal (consciousness) at the personal level.

And then I said that I was assuming that the personal (aka consciousness) arose at the sub-personal level. The quantum level, say.

But the quantum level still is the personal level.

I take this to be what @Michael Allen means when he says we can't even ask relevant questions about being/questioning-ing.

We—the conscious subject—can't get below the personal level. It's similarly said that we can't get behind consciousness.

But what I take @smcder to be saying is that "ah, but if we say what-is can't be known, then we are saying we know the unknown; that is, that it's unknowable." Or, we can't say what-is is unknowable because that's self-contradictory.

At the same time, that doesn't mean it is knowable. To us.

But it may be. If we assume that minds are essentially computational programs essentially running on essentially a software platform beyond our ken, then the above logic does apply.

But minds may not essentially be computational programs essentially running on essentially a software platform beyond our ken. The origin and nature of being may indeed lie outside the personal level, and thus beyond personal logic and physical mechanisms, but it might not be unknowable.

As @smcder says, humans seem to be able to step out of and/or not step into these loops.

Who knows what's going on? Certainly not the neurologists nor the physicists.
 
I had said of @Constance a few posts ago that at times it seems as if she seeks to explain the origin and nature of the personal (consciousness) at the personal level.

And then I said that I was assuming that the personal (aka consciousness) arose at the sub-personal level. The quantum level, say.

But the quantum level still is the personal level.

I take this to be what @Michael Allen means when he says we can't even ask relevant questions about being/questioning-ing.

We—the conscious subject—can't get below the personal level. It's similarly said that we can't get behind consciousness.

But what I take @smcder to be saying is that "ah, but if we say what-is can't be known, then we are saying we know the unknown; that is, that it's unknowable." Or, we can't say what-is is unknowable because that's self-contradictory.

At the same time, that doesn't mean it is knowable. To us.

But it may be. If we assume that minds are essentially computational programs essentially running on essentially a software platform beyond our ken, then the above logic does apply.

But minds may not essentially be computational programs essentially running on essentially a software platform beyond our ken. The origin and nature of being may indeed lie outside the personal level, and thus beyond personal logic and physical mechanisms, but it might not be unknowable.

As @smcder says, humans seem to be able to step out of and/or not step into these loops.

Who knows what's going on? Certainly not the neurologists nor the physicists.

"I take this to be what @Michael Allen means when he says we can't even ask relevant questions about being/questioning-ing." - we'll know what @Michael Allen means when he's ready to be understood.
 
Maybe not. I can be right and we are mostly dead--%-wise with respect to the immense timescape of the universe.

It's obvious to the universe that we are "not dead yet." And yet...

You can be right. Make a falsifiable claim and we can know if you're right.

Otherwise, this trade in ambiguities can go on forever.
 
I had said of @Constance a few posts ago that at times it seems as if she seeks to explain the origin and nature of the personal (consciousness) at the personal level.

I must have been unclear. My view is that consciousness as we humans experience it is necessarily 'personal' -- i.e., self-referential, self-reflexive -- as 'knowing'/understanding that our individual experiences are 'our own'. Our experiences in the world happen to us as individuals. But I think it's also clear that we participate, consciously and subconsciously, in transpersonal experiences with others, most obviously with others with whom we immediately co-inhabit the environmental niches, societies, cultures into which we are born and continue to exist. @Burnt State or @Michael Allen asked, wondered, at some point in the last few pages how we are able to find any 'meaning' in our experienced worldly situations (which of course include interpersonal actions and reactions, but also include the whole sense we have of the world that encompasses us all, in which we dwell together). The response from phenomenological philosophy and semiotic analysis of human experience and consciousness is that we can't not find meaning in the interpersonal world we inhabit, from early childhood (even infanthood) in our temporal, sensual, and increasingly reflective existences on this planet. ETA: meaning also being grasped in what we see of the lived experiences and behaviors of animals, and in manifold other expressions of nature to which we respond aesthetically and with pleasure and joy except in situations of turmoil, immediate threat to our lives and/or the lives of other creatures, drastic lack of sustenance, sickness and death of others of all kinds.

I also have evidently not been clear enough about my view of the origin of consciousness, in which I follow Panksepp in recognizing that consciousness has originated in protoconsciousness in nature with the appearance of primordial species of life [capable of affectivity and germinal awareness] and evolved in complexity over eons of time in the evolution of species increasingly capable of awareness, self-referentiality, seeking behavior, and cooperative behavior within and sometimes across species.

And then I said that I was assuming that the personal (aka consciousness) arose at the sub-personal level. The quantum level, say.

But the quantum level still is the personal level.

Is it? If it is, how can that be demonstrated rather than simply hypothesized?

I take this to be what @Michael Allen means when he says we can't even ask relevant questions about being/questioning-ing.

I think the history of human philosophy, science, religion, and art stand against that claim.

We—the conscious subject—can't get below the personal level.

Many people I know can and do 'get below the personal level' in their conscious and subconscious responses to the conditions, afflictions, and suffering of others (including animal 'others') in the local world we co-inhabit. Moral and ethical issues have been central since the beginnings of our species' written philosophy, and anthropology and archaeology have disclosed signs of both ontological thinking and moral/ethical concerns among our prehistorical forebears.

It's similarly said that we can't get behind consciousness.

I think that's true, but it is a different kind of question from the questions concerning the nature of reality [locally and beyond visible horizons] and what we humans should do with our lives in shaping our social mores in our existentially lived worlds.

But what I take @smcder to be saying is that "ah, but if we say what-is can't be known, then we are saying we know the unknown; that is, that it's unknowable." Or, we can't say what-is is unknowable because that's self-contradictory.

At the same time, that doesn't mean it is knowable. To us.

I follow and support what Steve has written. What we can 'know' about the nature of 'reality' is contingent on that which we are capable of experiencing and measuring within the natural and cultural 'worlds' in which we exist. The fact is that we experience more than we can measure, account for and define, objectively. A further fact is that there is no firm dividing line between what is objectively measurable and definable in our experience and the subjectivity we bring to everything we see, hear, touch, and otherwise sense in the world in which we feel both 'at home' and 'not at home'. We can't leap from situated, historically qualified, existentially lived experience of be-ing to a view from everywhere or a view from nowhere.

But it may be. If we assume that minds are essentially computational programs essentially running on essentially a software platform beyond our ken, then the above logic does apply.

I can't join you in that assumption, even theoretically, given what we (our species) have learned about the evolution of consciousness as realized in the evolution of species on our planet. As I see it, neither interactions perceivable in the q substrate nor a computational program conjectured to have produced the physical universe/cosmos we can measure [to the extent that we can measure it] from the 'Big Bang' forward can begin to make sense of the lived experiences and developing consciousnesses germinated with the origin of life itself in our own distant past.

But minds may not essentially be computational programs essentially running on essentially a software platform beyond our ken. The origin and nature of being may indeed lie outside the personal level, and thus beyond personal logic and physical mechanisms, but it might not be unknowable.

Right. Thus Heidegger distinguished between Being and being, and concluded that what we can know of 'Being' must begin with an understanding of that which we experience in our own be-ing -- which is always, for phenomenological philosophy, situated "being-in-the-world" that we can know only incompletely and through its phenomenal appearances to us as and where we live.
 
I had said of @Constance a few posts ago that at times it seems as if she seeks to explain the origin and nature of the personal (consciousness) at the personal level.

And then I said that I was assuming that the personal (aka consciousness) arose at the sub-personal level. The quantum level, say.

But the quantum level still is the personal level.

I take this to be what @Michael Allen means when he says we can't even ask relevant questions about being/questioning-ing.

We—the conscious subject—can't get below the personal level. It's similarly said that we can't get behind consciousness.

But what I take @smcder to be saying is that "ah, but if we say what-is can't be known, then we are saying we know the unknown; that is, that it's unknowable." Or, we can't say what-is is unknowable because that's self-contradictory.

At the same time, that doesn't mean it is knowable. To us.

But it may be. If we assume that minds are essentially computational programs essentially running on essentially a software platform beyond our ken, then the above logic does apply.

But minds may not essentially be computational programs essentially running on essentially a software platform beyond our ken. The origin and nature of being may indeed lie outside the personal level, and thus beyond personal logic and physical mechanisms, but it might not be unknowable.

As @smcder says, humans seem to be able to step out of and/or not step into these loops.

Who knows what's going on? Certainly not the neurologists nor the physicists.

My reply to @Michael Allen was a jab at his sense of certainty. He was certain that a particular understanding meant "death" - I was just pointing out that that certainty had the same consequence as actually getting the understanding - I made a reference to the short story by Algernon Blackwood "The Man Who Found Out" - in which a man uncovers an ancient writing that tells man's ultimate purpose or the meaning of life - he reads it and goes mad ... at the end of the story the man's friend is reading the script and of course going mad ... the story was done with very good effect in the second "Outer Limits" series.

As an aside, one of the things you might take from the story is that so far no writing of this kind - that would drive anyone mad - exists - such writings do exist for machines - on the other hand, some individuals can't step out of some loops - but so far, there is no one loop that no one can step out of.

Anyway, if such a manuscript existed and people were reading it and going mad and someone who had not read it, was absolutely certain that if he read it, he too would go mad ... then he could infer that whatever ultimate purpose man had, it was ultimately maddening - he too might go mad without even reading it.

So is @Michael Allen "dead" in whatever sense he means this? I think in a way, that's very possible.

I'll go back to an earlier exchange - re: "Cartesian chauvinism" (I love that phrase!) - we sort of disagreed over the term "robot" vs. "organism".

Right now I think there is still a reasonable difference in a robot and an organism - IF in the future there isn't such a difference, I would say it's just as easy then to call the robot an organism as the organism a robot - if we are machines, we are not machines of a kind we now know ... (see the AI lecture I posted above) ... if organisms are machines, they are not "just" machines - so if we are the kind of machines called "organisms" then I doubt that will be a shock to anyone ... and similarly if machines get complex enough (and maybe not only complex) .... so let's say if machines get "enough" ... then we'll call them "organism" or maybe friend ... and they'll be as wonderful and as fragile as we are and we may have some kinship ... or we may be bitterest enemies but we'll have some understanding of one another. If machines go beyond us - OR as I think the trend right now is - they get "smart" in a brute force or a black box kind of way, then we may well have nothing in common with them. Then we would be living, hopefully, in some as yet unimaginable relation to an alien kind of intelligence - an intelligence we might not call "being".
 
I had said of @Constance a few posts ago that at times it seems as if she seeks to explain the origin and nature of the personal (consciousness) at the personal level.

And then I said that I was assuming that the personal (aka consciousness) arose at the sub-personal level. The quantum level, say.

But the quantum level still is the personal level.

I take this to be what @Michael Allen means when he says we can't even ask relevant questions about being/questioning-ing.

We—the conscious subject—can't get below the personal level. It's similarly said that we can't get behind consciousness.

But what I take @smcder to be saying is that "ah, but if we say what-is can't be known, then we are saying we know the unknown; that is, that it's unknowable." Or, we can't say what-is is unknowable because that's self-contradictory.

At the same time, that doesn't mean it is knowable. To us.

But it may be. If we assume that minds are essentially computational programs essentially running on essentially a software platform beyond our ken, then the above logic does apply.

But minds may not essentially be computational programs essentially running on essentially a software platform beyond our ken. The origin and nature of being may indeed lie outside the personal level, and thus beyond personal logic and physical mechanisms, but it might not be unknowable.

As @smcder says, humans seem to be able to step out of and/or not step into these loops.

Who knows what's going on? Certainly not the neurologists nor the physicists.

McGinn's work on "cognitive closure" is good on this - he imagines minds for whom "the hard problem" might be no problem at all. In the film "Arrival" and, better, the short story it is based on, "Story of Your Life", this idea is revisited - the alien physics is very different from ours - certain problems in physics that are very difficult for us - basically we have only a mathematical understanding - are intuitive for the aliens - and vice-versa - the story reveals "why" this is so ... another point to take from this would be that no mind would be good at everything - so our "hard" problem of consciousness might manifest as say "the hard problem of ..." well, maybe that is fodder for another successful short story.
 
"Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function."

Re-phrasing the "mechanism" specification component of the "easy" problem shows us all too directly why the hard problem is "hard." When we try to examine the relations (i.e. ties) regarding our interactions with the "mechanical components" we find that the answer to the question is the same as the question we started with. In order to specify a mechanism to perform a function we have to embed our own bodies into the very relations of which we are asking the question. The problem is that we try to clarify the very infrastructure to ourselves in a language of meaning that excludes our interactivity within--and through--the same infrastructure which gave us the sense that something needed to be questioned. The source of our ability to question is at question, and when we turn our analysis to that ability we end up recursively falling into a bottomless pit. Recognizing that such a move has this property is the first step in understanding our fundamental and necessary misunderstanding of being. Because our "average and daily understanding of being" is a mirage.

Let's get more context: Hard problem of consciousness - Wikipedia

Easy problems are easy because all that is required for their solution is to specify a mechanism that can perform the function. That is, their proposed solutions, regardless of how complex or poorly understood they may be, can be entirely consistent with the modern materialistic conception of natural phenomena. Chalmers claims that the problem of experience is distinct from this set, and he argues that the problem of experience will "persist even when the performance of all the relevant functions is explained".[3]

The existence of a "hard problem" is controversial and has been disputed by philosophers such as Daniel Dennett[4] and cognitive neuroscientists such as Stanislas Dehaene.[5] Clinical neurologist and skeptic Steven Novella has dismissed it as "the hard non-problem"


OK, so let me see if I can break this down:

Re-phrasing the "mechanism" specification component of the "easy" problem shows us all too directly why the hard problem is "hard."

A specification component seems to be a phrase from programming - for example:

Component Specification (Buckminster) - Eclipsepedia

A Component Specification (CSPEC) is a generic description of a component. It defines its dependencies to other components, what actions can be performed on it and how those actions affect the dependencies. It also defines what artifacts the component can export to other components.

That matches the common sense reading - so I might write this as:

Easy problems can be solved by specifying a mechanism that can perform the function. Re-phrasing the requirement that "easy" problem(s) have a mechanism that can perform the function shows us why the hard problem is "hard".

Unfortunately, no such re-phrasing follows, so it may be that the author thinks such re-phrasing is obvious.

How would we re-phrase the requirement that easy problems have a mechanism etc etc? Is it just that the hard problem is asking for a solution that does not involve a mechanism? That is the very point of the hard problem - and the failure of the functionalist movement underscores that. So you can reject that claim but you still have to account for or deny experience.
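To pin down what "specify a mechanism that can perform the function" looks like, here is a deliberately toy Python sketch (our own hypothetical example, not Chalmers's and not the post's). The point is the one at issue: a functional specification like this can be complete as a mechanism, input to output, while saying nothing about whether anything is experienced in performing it.

```python
# Toy "easy problem" specification: a mechanism that discriminates a
# stimulus and produces a verbal report. The specification is exhausted
# by the input/output mapping; nothing in it mentions experience.

def discriminate(wavelength_nm: float) -> str:
    if 620 <= wavelength_nm <= 750:
        return "red"
    if 450 <= wavelength_nm <= 495:
        return "blue"
    return "something else"

def verbal_report(wavelength_nm: float) -> str:
    return f"I see {discriminate(wavelength_nm)}"

print(verbal_report(680))  # "I see red" -- the function is performed,
                           # whether or not there is anything it is like
                           # to perform it.
```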

When we try to examine the relations (i.e. ties) regarding our interactions with the "mechanical components" we find that the answer to the question is the same as the question we started with.

It's not clear what question we started with - if we take that to mean "what is the relation regarding our interactions with the "mechanical components"?" then the answer is "that is the relation regarding our interactions with the "mechanical components" " - the relationship is the ability to ask the question of ...

In order to specify a mechanism to perform a function we have to embed our own bodies into the very relations of which we are asking the question. The problem is that we try to clarify the very infrastructure to ourselves in a language of meaning that excludes our interactivity within--and through--the same infrastructure which gave us the sense that something needed to be questioned. The source of our ability to question is at question, and when we turn our analysis to that ability we end up recursively falling into a bottomless pit. Recognizing that such a move has this property is the first step in understanding our fundamental and necessary misunderstanding of being. Because our "average and daily understanding of being" is a mirage.

I'm not sure I agree that what follows is a "mirage" - certainly it seems to break down under certain kinds of scrutiny - for example analytical meditation - but it instantly re-assembles itself - this is the sort of thing McGinn says that the human kind of mind just isn't good at - and we might never grasp it intuitively - the only problem with cognitive closure is you'll never know to what you are cognitively closed, not in borderline cases anyway - so there will never be an ultimately persuasive argument that we should stop trying. It also assumes there is no benefit from the trying - so we don't know if this is a moth to the flame situation.
 
Transcript of an interview with Noam Chomsky on AI (2012) - the link to video is broken.

Noam Chomsky on Where Artificial Intelligence Went Wrong

"If you take a look at the progress of science, the sciences are kind of a continuum, but they're broken up into fields. The greatest progress is in the sciences that study the simplest systems. So take, say physics -- greatest progress there. But one of the reasons is that the physicists have an advantage that no other branch of sciences has. If something gets too complicated, they hand it to someone else."
 
Right now I think there is still a reasonable difference in a robot and an organism - IF in the future there isn't such a difference, I would say it's just as easy then to call the robot an organism as the organism a robot - if we are machines, we are not machines of a kind we now know ... (see the AI lecture I posted above) ... if organisms are machines, they are not "just" machines - so if we are the kind of machines called "organisms" then I doubt that will be a shock to anyone ... and similarly if machines get complex enough (and maybe not only complex) .... so let's say if machines get "enough" ... then we'll call them "organism" or maybe friend ... and they'll be as wonderful and as fragile as we are and we may have some kinship ... or we may be bitterest enemies but we'll have some understanding of one another. If machines go beyond us - OR as I think the trend right now is - they get "smart" in a brute force or a black box kind of way, then we may well have nothing in common with them. Then we would be living, hopefully, in some as yet unimaginable relation to an alien kind of intelligence - an intelligence we might not call "being".
Cows experiencing grass for the first time in six months.


Adult elephants rescue adolescent elephant from water

2 Elephants Team Up To Rescue Calf After It Plunges Into Pool | HuffPost

It's hard to observe the behavior of these machines and not think that they are having subjective experiences that would be similar (but not identical) to our own subjective experiences given similar situations.
 
Was having a discussion this morning with two physicalists arguing that phenomenal consciousness is a neuronal computation.

In the discussion I presented some of my argument in a phrasing that I found helpful. Thought I would share here.

If we consider that perception involves representation/simulation/user interface*, then the following:

Concepts < empirical observation < perception < base reality

Also:

Physicalists privilege the contents of perception over the process of perception. For example, when they say consciousness is an illusion but neurons are real.

*we've never found a model of perception that didn't
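One way to picture that ordering, offered only as a toy illustration and not as a reconstruction of Hoffman's formal interface theory, is as a chain of lossy mappings: concepts are abstractions over observations, observations are readings taken off the perceptual interface, and only the interface is in contact with base reality.

```python
# Toy illustration of: concepts < empirical observation < perception < base reality
# Each stage is a lossy mapping over the previous one; nothing downstream
# touches base reality directly.

import random

def base_reality() -> list[float]:
    # Whatever is actually there: high-dimensional, not directly accessible.
    return [random.random() for _ in range(1000)]

def perceive(world: list[float]) -> float:
    # The "user interface": a compressed icon of the world, not the world.
    return sum(world) / len(world)

def observe(percept: float) -> float:
    # Empirical observation is a reading taken off the interface.
    return round(percept, 2)

def conceptualize(readings: list[float]) -> str:
    # Concepts are abstractions over observations, two removes from base reality.
    return "high" if sum(readings) / len(readings) > 0.5 else "low"

readings = [observe(perceive(base_reality())) for _ in range(10)]
print(conceptualize(readings))
```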
 
Cows experiencing grass for the first time in six months.


Adult elephants rescue adolescent elephant from water

2 Elephants Team Up To Rescue Calf After It Plunges Into Pool | HuffPost

It's hard to observe the behavior of these machines and not think that they are having subjective experiences that would be similar (but not identical) to our own subjective experiences given similar situations.

That's fun!

I thought of Bradbury's classic All Summer in a Day

https://www.btboces.org/Downloads/6_All Summer in a Day by Ray Bradbury.pdf
 
Was having a discussion this morning with two physicalists arguing that phenomenal consciousness is a neuronal computation. In the discussion I presented some of my argument in a phrasing that I found helpful. Thought I would share here.
If we consider that perception involves representation/simulation/user interface*, then the following: Concepts < empirical observation < perception < base reality


"If we consider that perception involves representation/simulation/user interface*, then the following:

Concepts < empirical observation < perception < base reality"


This formula sounds very similar to the presuppositions of the early Wittgenstein as linked yesterday by Steve. My impression is that the later Wittgenstein moved beyond these presuppositions. Is there anyone here who knows?


If we consider that perception involves [?]
representation/simulation/user interface*


*we've never found a model of perception that didn't


In the above, the term 'representation' remains ambiguous [our problem for a long time here], and so does the verb 'involves'.

The phenomenological analysis of perception recognizes that in encountering things and others in the world we are presented with their phenomenal appearances to us within our situated lived experience, rather than with 'representations' of things and others already understood conceptually. Sensing and perceiving are fluid interchanges between what an organism is capable of sensing and perceiving and that which the organism senses/perceives and remembers, carrying forward the sense and meaning [sens] of the organism's environment and being within it.
Understanding of the nature of an organism's situatedness in the world is a gradual accomplishment, beginning in prereflective/preconscious experience in primordial organisms and human infants alike. In species like our own [and there must be innumerable types of species like our own in the universe], prereflective experience leads to reflective experience and then to the development of minds that seek adequate concepts with which to describe the nature of their own being and of the possible nature of the Being of all that is.

Re "simulation" and "user interface", the other two components of perception you identify, can you say more about what you mean by 'simulation'? Re 'user interface', I know you take that idea from Hoffman and others who see/want to see consciousness in computational terms [in which lived experience is to be understood to be generated by 'information' physically transmitted directly to the brain from a deep and unobservable source in the universe that generates all being/all that is. Or something like that. While I can't make that leap, I do think there is merit in the idea that living organisms [beginning from the outset of the appearance of life on our planet] interface with that which is/is sensed to be around them and 'other' to them in their environmental niches. This 'self-other' sense begins in the primordial capacity of 'affectivity', as Panksepp and Maturana,Varela have identified it.

Physicalists privilege the contents of perception over the process of perception. For example, when they say consciousness is an illusion but neurons are real.

The question is how, on what explanatory basis, physicalists can claim to know the contents of the perceptions [and thus of the consciousnesses] of any living being other than themselves (individually). It's even doubtful that any of us know the full scope of the 'contents' of our own moment-by-moment overlapping perceptions, prereflectively or reflectively.
 