

Consciousness and the Paranormal — Part 12


Why Computers Will Never Be Truly Conscious
By Subhash Kak, Oklahoma State University, October 16, 2019

Extracts:

". . .
Living organisms store experiences in their brains by adapting neural connections in an active process between the subject and the environment. By contrast, a computer records data in short-term and long-term memory blocks. That difference means the brain's information handling must also be different from how computers work.

Why Computers Will Never Be Truly Conscious

Exception: neural networks running in a computer system can evolve

Thought-provoking, but ultimately wrong: the article assumes that no multilayered convolutional neural net can adjust pre-existing weights with the same end result as a biological network that "grows new neurons and connections."
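To make the weight-adjustment point concrete, here's a minimal sketch in plain Python: a single artificial "synapse" whose weight adapts with experience. This is nothing like a real convolutional net (which has many layers and millions of weights), but the principle of changing existing connection strengths rather than growing new cells is the same.

```python
# Minimal sketch: one artificial "synapse" whose weight adapts with
# experience, loosely analogous to strengthening a neural connection.

def train_synapse(samples, lr=0.1, epochs=100):
    """Fit weight w so that output = w * x approximates the targets."""
    w = 0.0  # initial connection strength
    for _ in range(epochs):
        for x, target in samples:
            error = w * x - target
            w -= lr * error * x  # adjust the existing connection
    return w

# "Experience": (input, desired output) pairs consistent with w = 2
samples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train_synapse(samples)
print(round(w, 2))  # converges toward 2.0
```

No new "neurons" appear here, yet the system's behavior changes with exposure, which is the functional point at issue.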

 
It seems that you think of 'neurophysiology' or 'neurophysiological frameworks' as static rather than as developing and expanding ...
I don't know whatever gave you that idea. Evolution alone would seem to be sufficient evidence that our biological systems, including our brains and nervous systems, are changing over time. Even if we take evolution out of the picture, our brains undergo a lot of changes during life. Not long ago I pointed to a video about how infants can acquire perfect pitch by being exposed to complex tones and music. Maybe you just want to see me as having certain perspectives in order to accommodate your arguments. I don't know.
 
"A conscious person is aware of what they're thinking, and has the ability to stop thinking about one thing and start thinking about another — no matter where they were in the initial train of thought. But that's impossible for a computer to do. More than 80 years ago, pioneering British computer scientist Alan Turing showed that there was no way ever to prove that any particular computer program could stop on its own — and yet that ability is central to consciousness. . . ." Why Computers Will Never Be Truly Conscious

Interesting that we think we have the ability to consciously start and stop thinking... Computers have the ability to switch between threads (and processes) based on semaphores/mutexes and can even change their priorities on the fly. A computer that ends a thread of computation (unfinished) and starts on another -- we assume "starting" and "stopping" as if these were permanent and final. Perhaps we are "thinking" about thinking in the wrong way... the switch between threads is more like moving a spotlight across a vast landscape of parallel processes in our "brain" from one task to another. In our "awareness" we feel that something is "started" and something else is "stopped," but in reality "we are" simply moving the spotlight.
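The spotlight metaphor can be sketched as a toy simulation. The process names and the attention schedule below are purely illustrative -- the point is that every process advances on every tick whether or not it is currently "in awareness":

```python
# Sketch of the "spotlight" idea: all processes keep running; attention
# merely selects which one's activity is noticed.

class Process:
    def __init__(self, name):
        self.name = name
        self.steps = 0  # work done, spotlighted or not

    def tick(self):
        self.steps += 1

def run(processes, attention_schedule):
    """Every process advances each tick; only one is 'in awareness'."""
    noticed = []
    for focus in attention_schedule:
        for p in processes:
            p.tick()                           # background work never stops
        noticed.append(processes[focus].name)  # what "we" experience
    return noticed

procs = [Process("verbal"), Process("imagery"), Process("visceral")]
noticed = run(procs, [0, 0, 1, 2, 1])
print(noticed)         # ['verbal', 'verbal', 'imagery', 'visceral', 'imagery']
print(procs[2].steps)  # 5 -- ran every tick despite being briefly spotlighted
```

From the "inside" (the noticed list), it looks like processes start and stop; from the outside, everything ran continuously and only the spotlight moved.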


I am replying to myself in order to avoid the silly flag that arrives for the moderator due to too many edits :) To continue what I have said above, we "think" we are stopping and moving on to some other "process" or "thread" (the usual computer science distinction between these two terms is irrelevant to the point at the moment) when in fact all threads are constantly processing even when we aren't "spotlighting" them with the other module that brings about the feedback that "processes are working." We see evidence of this in our emergent dreams (where we don't have control yet feel that we should) or in the sudden invasion of one thought on another from some trigger in our environment (added information) that causes excitement in a temporarily darkened cluster of activity (which then activates the global sense to focus and the illusion of a "switch").

A human mind is a cluster of loosely connected computational modules that perform their duties "involuntarily" until the module that brings the sense of final ownership and control focuses (either by chance or by force) on increased activity.

Conclusion: The "we" or "I" does NOT have the ability to stop thinking; it is subject to the same laws and principles as the halting problem. But the "we" or "I" has a knack for burying the halting problem under the illusion that a shift of focus within the processes coupling ourselves to the world somehow means that we've consciously summoned the decision to decide--and act.
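For anyone who hasn't seen the halting problem up close, here's Turing's diagonal argument sketched as runnable Python. The "oracle" is hard-coded for two toy cases purely to make the sketch executable; Turing's result is precisely that a general version cannot be written:

```python
# Classic diagonal argument. 'halts' is a stand-in oracle that only knows
# two example programs; no general oracle can exist.

def halts(func):
    # Hypothetical oracle, hard-coded for the two programs below.
    known = {"returns_immediately": True, "loops_forever": False}
    return known[func.__name__]

def returns_immediately():
    return 42

def loops_forever():
    while True:
        pass

print(halts(returns_immediately))  # True
print(halts(loops_forever))        # False

# The contradiction: a program built to do the opposite of whatever
# the oracle predicts about it.
def contrary():
    if halts(contrary):  # if the oracle says "halts"...
        while True:      # ...loop forever instead
            pass

# The oracle has no consistent answer for contrary() -- that
# impossibility is the heart of Turing's proof.
```

Whether this formal result really transfers to human "stopping of thought," as claimed above, is of course the contested philosophical step.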
 
"Another perspective on this is that the most mundane memory tasks are associated with multiple areas of the brain — some of which are quite large. Skill learning and expertise involve reorganization and physical changes, such as changing the strengths of connections between neurons. Those transformations cannot be replicated fully in a computer with a fixed architecture. . . ."

Actually this perspective is incorrect--a re-organization of elements in the physical memory cells of a computer's RAM and long-term storage achieves the same goal as the physical re-organization of the neurons in a brain. Perhaps a mathematician would balk at my deeming such processes a complicated derivative of what is termed an isomorphism. The assumption rests on a misunderstanding of what "fixed" means in this context--human beings are a fixed architecture at a certain (albeit higher) level, but the fixation and the red-herring demand for "replication" fall flat when you examine the limits of human embryonic development.

Ironically it is these very limits which constrain our imagination regarding the identification of and response to other agents like ourselves. We don't "calculate" the trajectory of a predator chasing us in order to avoid it--we "assume" that something as complex as ourselves ("theory of mind") has us in its sights and will constantly adjust its direction to catch (and eat) us...such an agent may have a "fixed architecture," which is nothing more than saying that such an agent cannot be reprogrammed (see definition of "wild animal"). As agents we imagine our own artificial constructs as permanently limited because we put a lot more meaning on our "role" as "artificer." We may write stories and mythologies to then expand this role into something universal (i.e., "God") and then export our own limbs of the tree of life into the heavens or even into the space between the atoms (quantum-mechanical mysticism). The result is the same...we refuse and disown the very obvious and constantly present foundations of our own existence, displacing them into areas that force self-knowledge beyond our understanding (by default).
 
The questions are the same ones we've been asking and attempting to answer throughout this 5-year thread -- what is consciousness, in our species and others? When, where, and how does it begin and how does it evolve? What purposes does it serve in the evolution, survival, and development of life from primordial species to our own species? To observe and recognize that species of birds communicate in purposeful and beneficial ways with the embryos living and growing inside their eggs points to the reality of both protoconsciousness and consciousness existing -- indeed fostered -- deep in nature. If the scientists making these discoveries are impressed, you should, it seems to me, pay attention to what they report.


And the answers get funnier as you move on (yes, laughter will play a role here)

(1) What is "consciousness?"

The question as framed comes from the language-processing and motor centers of the very thing that leads you to click a button and write symbols that may (or may not) help others understand the foundational "what" underlying your impulse to communicate it to another person -- a person you've already assumed will have some comprehension of that "what" along with the term "consciousness."

I don't think a more loaded question can be constructed than "what is 'consciousness'?" -- even the embedded quotes betray the problem in asking such a question.


I will close on one of my favorite passages from Heidegger's "Being and Time"

As a seeking, questioning needs prior guidance from what it seeks. The meaning of being must therefore already be available to us in a certain way. We intimated that we are always already involved in an understanding of being. From this grows the explicit question of the meaning of being and the tendency toward its concept. We do not know what "being" means. But already when we ask, "What is being?" we stand in an understanding of the "is" without being [sic: the multiordinal usage of 'being' occurs in English, not in German] able to determine conceptually what the "is" means. We do not even know the horizon upon which we are supposed to grasp and pin down the meaning. This average and vague understanding of being is a fact.

From "The Exposition of the Question of the Meaning of Being: The Necessity, Structure, and Priority of the Question of Being"

(Stambaugh trans.)
 
Or at least Heidegger doesn't know, and he has done his best to confuse the rest of us as well.

Yikes...I thought it was very clear. Heidegger is using the "question" as a hint at what has already formulated the question (what we call our "consciousness"). I switch from verbal to imagery to visceral response to stimuli with a kind of seamlessness that may be strange to others...a "question" may arise in the simple act of your arms and legs motivating your body out of your sleeping bag in a tent to search for a lamp...or the grasping of a doorknob and testing which direction you have to turn it (usually counter-clockwise, but not always) in order to open and escape a tedious situation.

Try a word substitution as an experiment:

But already when we ask, "What is consciousness?" we stand in an understanding of the "is" without being able to determine conceptually what the "is" means. We do not even know the horizon upon which we are supposed to grasp and pin down the meaning. This average and vague understanding of consciousness is a fact.
 
Indeed because the capturing, expressing, and translation into "terms" requires a formal description that lies outside the entity creating the same.

OK, I'll bite. What, and where, is 'the formal description that lies outside the entity creating the same'?
Also what is the referent of 'the same'?


Even worse, our "formal" structures lie within the very framework attempting to "undermine" the same through "explanation."

What 'formal' structures are you referring to? And why, 'lying within the very framework of the entity', do these structures attempt to "undermine" the same? To comprehend what you've written [never say die] it is necessary to ask what the referent of 'the same' is in this sentence. Possibilities implied by your sentence include 1) the entity; 2) the entity's 'formal' structures; and/or 3) the 'framework' (a) of the entity, or (b) of the 'formal description that lies outside the entity'?

The engine of explanation attempts to undermine its own basis of generation...

So what, finally, is 'the engine of explanation'? And why does it 'attempt to undermine its own basis of generation'?

I have to say I suspect a good bit of 'woo' in these cryptic statements. Surely you can write more clearly than this. Try.

 
My usage of the word "woo" had nothing to do with consciousness in current computers.

OK, what were you referring to then?

A good article and PDF. Thanks. However, in reading and searching through them, I didn't find the specific sections you cite. Regardless, there are a couple of problems with the quote. Firstly, experiences themselves aren't stored neurophysiologically. Nobody knows how experiences are created yet. At best neurophysiology is causal, but as you have pointed out in the past, at present we only have correlation.

Strange. Look again. The extracts I posted were both from the LiveScience.com article I linked.

Biological memory also has short term and long term "modules", but in any case, it's not that sort of information handling that seems relevant to me. Rather it's the difference between the physical construction of the systems. In other words, no amount of neuron modelling by electronic circuits can make the model into actual neurons. I suspect ( yet to be proven ) that something about the situation with actual brain materials and functions is responsible for consciousness, and those situations might not occur the way they need to with current electronic designs.

The above also has little relevance because so far as we know, computers don't "consciously interpret the data". Additionally, human perception actually is directly related to sensory data. Perhaps it's not always real time sensory data, but then again pattern recognition in computers also works with a combination of real time and stored data.

Computer memory changes its configuration as needed in order to do what it needs to do. It's not a "fixed architecture". Brains can grow new cells, but that's not practically different from simply accessing new unused memory, or installing more memory as needed. For these reasons, while I think the writer is intuitively correct, the specific differences mentioned aren't necessarily at the root of the question, whereas the materials and design are.
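For what it's worth, even a stock Python list shows this kind of reorganization behind a "fixed" interface -- appending can trigger reallocation to a larger block and copying of the contents, invisible to the program using it:

```python
# A "fixed" machine whose memory layout nonetheless reorganizes:
# CPython lists over-allocate and periodically move to larger blocks.

import sys

cells = []
sizes = []
for i in range(100):
    cells.append(i)
    sizes.append(sys.getsizeof(cells))  # allocation grows in jumps

print(sizes[0] < sizes[-1])  # True: the structure expanded as needed
```

Whether this is "practically different" from a brain growing new cells is exactly the judgment call being argued about here.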

Are you saying you are Subhash Kak? Or are you still quoting someone else?

Last sentence of the above segment of Randel's post:
Are you saying you are Subhash Kak? Or are you still quoting someone else?

Also strange. I am not saying that I am Subhash Kak. The individual whose article I quoted is Subhash Kak. (Also the paragraphs preceding this question all appear to be written and posted by you.)

The above implies a leap in logic in that its premise is applied to situations different from the ones it sets out. It also assumes that persons can simply stop thinking on command. I see no evidence for this. Humans can change their minds, or shoot themselves in the head, but they cannot simply switch their brains off. At best we can only fall asleep, and even that isn't always guaranteed. Even then it's not a true off state.

By "the above", are you referring to my posting of extracts from the Live Science article by Subhash Kak in my earlier post or to what I wrote in response to your first response to those extracts? On second reading, it appears that you are referring to a 'leap in logic' by SK, though you haven't quoted his statement here.

In contrast, computers have advanced to the point where they can do rudimentary self-programming and adapt to environmental conditions. Attach a light sensor to a computer and it can dim, brighten, or turn itself off in response to changing lighting conditions without any human intervention. A lot more is also possible. Eventually computers will be at the point where they are no longer designed, programmed, or built by us.
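The light-sensor behavior is trivial to sketch as a feedback rule. The thresholds below are made up for illustration, not taken from any real device:

```python
# Sketch of the light-sensor example: a feedback rule that dims,
# brightens, or shuts off with no human in the loop.

def respond(light_level):
    """Map an ambient light reading (0-100) to a screen action."""
    if light_level < 5:
        return "off"        # effectively dark: power down
    elif light_level < 40:
        return "brighten"   # dim room: raise screen brightness
    elif light_level > 80:
        return "dim"        # bright room: lower screen brightness
    return "hold"

readings = [2, 30, 60, 95]
print([respond(r) for r in readings])  # ['off', 'brighten', 'hold', 'dim']
```

Of course, the open question in this thread is whether any such adaptive loop, however elaborate, involves experience.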

When computers evolve to that point, I have little doubt that if they select an option to turn themselves off, they'll be able to do so, but whether or not they will experience anything in the process is another question altogether.

Gee, that's what I thought. Perhaps a lightbulb turns on for me at this point: you and MA think that neither we nor the computers we build are conscious. . . . But then why are you talking about these notions of yours in a thread devoted to consciousness?
 
I am replying to myself in order to avoid the silly flag that arrives for the moderator due to too many edits :) To continue what I have said above, we "think" we are stopping and moving on to some other "process" or "thread" (the usual computer science distinction between these two terms is irrelevant to the point at the moment) when in fact all threads are constantly processing even when we aren't "spotlighting" them with the other module that brings about the feedback that "processes are working." We see evidence of this in our emergent dreams (where we don't have control yet feel that we should) or in the sudden invasion of one thought on another from some trigger in our environment (added information) that causes excitement in a temporarily darkened cluster of activity (which then activates the global sense to focus and the illusion of a "switch").

I would like to hear more about the phenomenon you refer to above.

A human mind is a cluster of loosely connected computational modules that perform their duties "involuntarily" until the module that brings the sense of final ownership and control focuses (either by chance or by force) on increased activity.
Conclusion: The "we" or "I" does NOT have the ability to stop thinking; it is subject to the same laws and principles as the halting problem. But the "we" or "I" has a knack for burying the halting problem under the illusion that a shift of focus within the processes coupling ourselves to the world somehow means that we've consciously summoned the decision to decide--and act.

True (the underscored statement). But what Subhash Kak actually wrote was this:

"A conscious person is aware of what they're thinking, and has the ability to stop thinking about one thing and start thinking about another — no matter where they were in the initial train of thought. But that's impossible for a computer to do."

You and Randle appear to think that an artificial intelligence built into a computer or robot experiences 'streams of consciousness' or 'thinking' not directed or routed beforehand by their engineers, and that, like us, they can change the subject of their thinking/computing spontaneously, for one reason or another, e.g., when interrupted or distracted or occurring in cases in which a human decides to stop pursuing one train of thought and take up another, or go out for coffee. Interesting, if true. I'll watch the newspapers.
 
OK, what were you referring to then?
I was referring to the assumption that because we have not proven causation between material biological systems and consciousness, but only correlation, that causation isn't the case ( when it probably is the case - but it's just unproven ), which leads to a leap in logic by those who support notions of afterlives, that consciousness ( and all the rest of what defines personhood ) is somehow unaffected by the death of our material biological systems, and can therefore go floating off into woo woo land ( or whatever afterlife believers want to call it ).
 
Yikes...I thought it was very clear. Heidegger is using the "question" as a hint at what has already formulated the question (what we call our "consciousness"). I switch from verbal to imagery to visceral response to stimuli with a kind of seamlessness that may be strange to others...a "question" may arise in the simple act of your arms and legs motivating your body out of your sleeping bag in a tent to search for a lamp...or the grasping of a doorknob and testing which direction you have to turn it (usually counter-clockwise, but not always) in order to open and escape a tedious situation.

Try a word substitution as an experiment:

But already when we ask, "What is consciousness?" we stand in an understanding of the "is" without being able to determine conceptually what the "is" means. We do not even know the horizon upon which we are supposed to grasp and pin down the meaning. This average and vague understanding of consciousness is a fact.
I'm not so sure I would agree with any of those claims. They seem more like word games that are constructed in such a manner as to be ambiguous and therefore responses can be accurate or inaccurate depending on the context of the question, which is not made clear, and can therefore be manipulated by the questioner with impunity. When the context is clear, I don't have any problem with words like "is".
 
I was referring to the assumption that because we have not proven causation between material biological systems and consciousness, but only correlation, that causation isn't the case ( when it probably is the case - but it's just unproven ), which leads to a leap in logic by those who support notions of afterlives, that consciousness ( and all the rest of what defines personhood ) is somehow unaffected by the death of our material biological systems, and can therefore go floating off into woo woo land ( or whatever afterlife believers want to call it ).

Still harping on afterlives when we have all we can do to understand ourselves and our productions in this world?
 
A paper published recently by Subhash Kak in the journal NeuroQuantology:

NeuroQuantology | May 2019| Volume 17 | Issue 05 | Page 71-75| doi: 10.14704/nq.2019.17.05.2359

Is Consciousness Computable?
Subhash Kak

ABSTRACT
This commentary reviews different scientific positions for and against consciousness being a computable property. The role that quantum mechanics may play in this question is also investigated. It is argued that the view which assigns consciousness a separate category is consistent with both quantum mechanics and certain results in cognitive science. It is further argued that computability of consciousness implies the solution to the halting problem which is computationally impossible.
Key Words: cognition, machine consciousness, learning models, quantum Zeno effect

This paper is available in its entirety as a pdf at this link:

https://neuroquantology.com/data-cms/articles/20191022042524pm2359.pdf
 
Is Consciousness Computable?
Asking if consciousness is computable is like asking if gravity is computable. A computer can simulate a star and calculate its gravitational influence, but that will never change the weight of the computer in the process. At best, computation might be able to predict better designs for the structures that make consciousness apparent. A more relevant question might be: Is it possible to engineer consciousness, or the possibility of consciousness into a computer?

Hypothetically I see no reason why not. It just requires the right materials and design. Now all we have to do is figure out what those materials and designs are. But before we start playing God with machines, maybe we should be asking whether or not we should engineer consciousness into machines in the first place.
 

"Digital philosophy is a modern re-interpretation of Gottfried Leibniz's monist metaphysics, one that replaces Leibniz's monads with aspects of the theory of cellular automata. Since, following Leibniz, the mind can be given a computational treatment, digital philosophy attempts to consider some main issues in the philosophy of mind. The digital approach attempts to deal with the non-deterministic quantum theory, where it assumes that all information must have finite and discrete means of its representation, and that the evolution of a physical state is governed by local and deterministic rules.[1]

In digital physics, existence and thought would consist of only computation. (However, not all computation would necessarily be thought.) Thus computation is the single substance of a monist metaphysics, while subjectivity arises from computational universality. There are many variants of digital philosophy; however, most of them are Digital data theories that view all of physical realities and cognitive science and so on, in framework of information theory.[1]"

"Thus computation is the single substance of a monist metaphysics..."
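For readers who haven't played with the cellular automata that digital philosophy substitutes for Leibniz's monads, here is a minimal one-dimensional example (Wolfram's Rule 110, which is known to be Turing-complete): a fully local, deterministic, discrete system that nonetheless generates unbounded structure.

```python
# A one-dimensional cellular automaton (Rule 110): each cell's next
# state depends only on itself and its two neighbors -- local,
# deterministic rules over finite, discrete states.

def step(cells, rule=110):
    """Advance one generation on a circular row of 0/1 cells."""
    n = len(cells)
    out = []
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        idx = (left << 2) | (center << 1) | right  # neighborhood as 3-bit index
        out.append((rule >> idx) & 1)              # look up the rule's answer
    return out

row = [0] * 10 + [1] + [0] * 10  # start from a single live cell
for _ in range(5):
    row = step(row)
print(sum(row) > 1)  # True: local rules have generated growing structure
```

That a system this simple is computationally universal is the kind of fact digital philosophers lean on when claiming computation could serve as a monist "single substance."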

@Soupie the bit on Rudy Rucker in this article, sounds like Donald Hoffman:

"Rucker's second conclusion uses the jargon term 'fact-space'; this is Rucker's model of reality based on the notion that all that exists is the perceptions of various observers."
 

"According to pancomputationalism, all physical systems – atoms, rocks, hurricanes, and toasters – perform computations. Pancomputationalism seems to be increasingly popular among some philosophers and physicists. In this paper, we interpret pancomputationalism in terms of computational descriptions of varying strength—computational interpretations of physical microstates and dynamics that vary in their restrictiveness. We distinguish several types of pancomputationalism and identify essential features of the computational descriptions required to support them. By tying various pancomputationalist theses directly to notions of what counts as computation in a physical system, we clarify the meaning, strength, and plausibility of pancomputationalist claims. We show that the force of these claims is diminished when weaknesses in their supporting computational descriptions are laid bare. Specifically, once computation is meaningfully distinguished from ordinary dynamics, the most sensational pancomputationalist claims are unwarranted, whereas the more modest claims offer little more than recognition of causal similarities between physical processes and the most primitive computing processes. "
 