
Consciousness and the Paranormal — Part 6

Pharoah, I think this paragraph from a post of yours early in part 6 might be a good place to clarify what you think of as 'information':

Steve had written: "But that evaluation assumes the experience in the first place! As the stimuli grow more complex and rapid ... an experience of them "emerges" in order to sort them all out - but what is the reason that that evaluation has to occur at the level of phenomenal experience?"

You responded:

"The phrase, "in order to sort them all out," is putting the horse and the cart the wrong way round: experience doesn't emerge to sort out anything. Rather, it is what comes into being because of the sorting: it is individuated and qualitative and changing on a moment-by-moment basis... that is what experiencing the world is. That is the condition that humans have given the 'phenomenal experience' label to, namely the thing that we are in that is individuated and qualitative... and then we humans label everything in this world-view as that thing with that quale, or this thing with this quale. I am saying phen consc is that process."


Do you mean that 'information' operating in unconscious neurophysiological processes is, as you express it, "what experiencing the world is," and that it is information that delivers phenomenal experience of the world already wrapped up in virtual form to conscious beings? If so, why has nature evolved consciousness in living organisms?
 
"Do you mean that 'information' operating in..."
"in". This word says it all. I don't think of information as something being 'in' or manipulated by a mechanism. It is not external to the mechanism.
As with @Soupie (correct me, Soupie, if I am wrong), I think of mechanism itself as embodying meaning (acquired through interaction)—and mechanism being a construct that thereby has an informed relation to and of the world. The environment itself is not informational and does not require informational translation or organisation to confer meaning for an observing agency.
Similarly, cognitivists think of the brain as "working with information" through neural mechanisms (sending, receiving, processing, etc.). In this context (cf. Velmans) information is being treated like clay that the brain is working with and moulding. This is wrong to my way of thinking. Information is not a commodity that the brain manipulates to confer meaning. Information is the construct of the mechanism (that sentence sounds obtuse :) )
I think Soupie thinks about information in my way when he says things like "phenomenal experience is information," which is why he finds IIT appealing. I hope I am not misquoting or misrepresenting Soupie here.
 
@Soupie, above you wrote that "the same "response" [that Velmans made to Gray's approach] is pertinent to Panksepp's model, @Pharoah 's model, and Thompson/Varela's model."

Can you show why Velmans would respond to Panksepp, Pharoah, and Thompson/Varela in the same way?

I'm still not sure what Pharoah's 'model' is at present, since it has been undergoing development and restatement in recent months, but I can't see why you would or should expect Velmans to react to Panksepp's evolutionary project and Thompson/Varela's neurophenomenology project in the same way he has reacted to Gray, as you seem to imply. Can you explain your reasoning there? That is, what is it in Panksepp's and Thompson/Varela's projects that places them in the same category as Gray's project?

In fact, I think Velmans's paper on the evolution of consciousness supports both Panksepp's and Thompson/Varela's contributions as productive of necessary changes in consciousness studies.
From Velmans:

"This commentary elaborates on Gray's conclusion that his neurophysiological model of consciousness might explain how consciousness arises from the brain, but does not address how consciousness evolved, affects behaviour or confers survival value. The commentary argues that such limitations apply to all neurophysiological or other third-person perspective models. To approach such questions the first-person nature of consciousness needs to be taken seriously in combination with third-person models of the brain."

1) While incorporating the first-person perspective is necessary, it's not sufficient in itself. In the opening of Mind in Life, Thompson states that Neurophenomenology cannot solve the mind/body problem. In fact, he states that Neurophenomenology cannot even tell us whether consciousness exists prior to the emergence of neurons.

This is very telling. All these neurophysiological or neurophenomenological approaches can do is help us determine more precisely which neurophysiological processes mediate which phenomenal processes.

Neurophysiological processes don't "generate" phenomenal processes; they embody phenomenal processes!

2) Contrary to what Velmans seems to indicate, Gray's model of RST is not just a third-person approach; it models the internal, phenomenal affective states of individuals experiencing hope, fear, and anxiety—identifies neurophysiological mechanisms mediating these emotions, and relates them to corresponding approach and avoidance behaviors seen in all mammals.

@Constance, I would ask you the same: can you identify how Panksepp and Neurophenomenology differ from Gray's RST?
 
@Soupie
I get an inkling from your post above that you might be seeing information the way I speak of it, namely, as dependent on the nature of the dynamic construction of the agency interacting with the environment, not as something that exists in the environment independently of agency. That would make two of us in the world with this view!
No, we are not the only two with this view. Check out Robin Faichney's work or David Deutsch who says: "information is physical."

Compare this to the Neurophenomenologists who tell us that consciousness is physical (embodied).

Mind and body are two "sides" of the same coin.

In short, human consciousness creates as well as receives ‘information’ in its world-making, and what we do with the capacities of our conscious being is at least as significant as that which we receive from the affordances of physical and biological evolution, which include ‘information’ of the type that interests you most. That information is far from the whole story of what we are and what we are responsible for {what we are called upon to do} in our responses to our existential situations in the present-day world we have constructed on the earth.
Constance, despite your affinity for Neurophenomenology and "embodied" naturalistic approaches to consciousness, you typically characterize consciousness in dualistic language.

You do the same when speaking of information. Consider the lock example I gave above; I spoke of that situation as involving information processing. However, that same scenario can be described completely as a purely physical process involving no information processing.

How can that be? Because information (processing) is a purely physical process.

Consider Chalmers' Zombie conceivability problem. Similar to the lock scenario above, we can conceivably describe human behavior objectively via the third-person perspective. (And as Velmans points out above, we cannot incorporate the first-person perspective into this functional, third-person perspective.)

How can that be? Because consciousness is a purely physical process.

Consciousness and information are fully embodied by physical processes. When viewed from the purely third-person perspective, they disappear.
 
As with @Soupie (correct me, Soupie, if I am wrong), I think of mechanism itself as embodying meaning (acquired through interaction)—and mechanism being a construct that thereby has an informed relation to and of the world. The environment itself is not informational and does not require informational translation or organisation to confer meaning for an observing agency.
Similarly, cognitivists think of the brain as "working with information" through neural mechanisms (sending, receiving, processing, etc.). In this context (cf. Velmans) information is being treated like clay that the brain is working with and moulding. This is wrong to my way of thinking. Information is not a commodity that the brain manipulates to confer meaning. Information is the construct of the mechanism (that sentence sounds obtuse :))
As I've noted before, Pharoah, I think in many cases this is just a "shorthand" way of talking about information.

It's not unlike the way biologists (who clearly know better) sometimes speak of evolution anthropomorphically, saying things such as "the human ear was designed by evolution."

Evolution is a purely physical process guided by physical laws.

I suggest that it's likewise with information; while laymen might get things confused (as they do with evolution), scientists generally understand that "information" is not "out there" in the physical stimuli/energies of the environment per se.

That is, environmental energies (stimuli) are neutral (objective), and any meaning/information they embody is internal (subjective) to the receiving structure.
 
As I've noted before, Pharoah, I think in many cases this is just a "shorthand" way of talking about information.

It's not unlike the way biologists (who clearly know better) sometimes speak of evolution anthropomorphically, saying things such as "the human ear was designed by evolution."

Evolution is a purely physical process guided by physical laws.

I suggest that it's likewise with information; while laymen might get things confused (as they do with evolution), scientists generally understand that "information" is not "out there" in the physical stimuli/energies of the environment per se.

That is, environmental energies (stimuli) are neutral (objective), and any meaning/information they embody is internal (subjective) to the receiving structure.
I know you would like to think that it is just the innocuous way that people use the language. One could argue that it is just sloppy writing. Alternatively, that nobody knows what they are talking about.
Personally, I think nobody knows what they mean by 'information'. It is just a convenient placeholder-of-a-term. But the placeholder is naive and wrong. It starts with the understandable premise, "the world is what we see it is". The naturalist's endeavour, then, is principally concerned with how the brain neurologically models this world of information. Philosophers are fairly anal about their terminology. I don't see why I should give them any slack.
 
Neurophysical process, information, and consciousness:

Phys. Rev. Lett. 115, 108103 (2015) - Percolation Model of Sensory Transmission and Loss of Consciousness Under General Anesthesia (Original paper)

Physics - Focus: How Anesthesia Switches Off Consciousness (Journal article)

"Entering anesthesia, the mind seems to shut down abruptly and then later re-emerge from the blackness with equal swiftness. A new theoretical model suggests that these changes may result from a sudden, global change in the ability of the network of neurons to transmit information. The model can reproduce the changes in electrical activity (“brain waves”) seen with anesthetized patients. ...

In this model, the chance of a signal being successfully passed along any link is controlled by a probability factor p that applies to the entire network. For a given run of the simulation, p is fixed at a value between 1 (100% probability of transmission) and 0 (zero probability of transmission), and each link's on/off state is re-evaluated periodically. If p is set to 0.6, for example, then there is a 60% chance, at each instant, of a successful transmission between any two connected nodes. Unless p is zero, if a link is turned off at one moment, it could turn on again in the next moment. Decreasing p mimics the effect of anesthetics, which block signals between neurons.

Xu and colleagues found that a computer realization of their branching model, with 7381 nodes in all, captures the electrical signature of anesthesia in patients. Electrodes on the scalp pick up some of the neural signals and generate complicated waveforms (EEGs) that are processed to indicate their component frequencies (using the standard technique of Fourier analysis). The waves are categorized based on these frequency components. An anesthetized brain undergoes a change from the so-called gamma and beta waves associated with consciousness to alpha waves associated with relaxation and drowsiness and delta waves associated with deep sleep.

When the researchers ran their simulations using the random signal known as white noise as the input, they found that a randomly selected node at the output layer of the network (branch tips) produced signals that matched those of patients. The signals switched from predominantly gamma- and beta-like waves at high p (close to 1) to mostly alpha and delta waves at low p (below 0.5).

The researchers also drew on standard information theory to define the amount of information encoded in the input and output signals of the network in terms of their so-called information entropy. The entropy of the output relative to the input dropped abruptly at a p value of about 0.3, meaning that very little information was getting transmitted through the network. The relative abruptness of this transition matched the observation that there is a critical concentration of anesthetic for which consciousness is abruptly and completely lost.

The researchers say that this breakdown of information transmission reflects the fact that at low p it becomes almost impossible for the information to find a continuous path through the network. This loss of a fully connected route is called a percolation transition, resembling the way that a fluid flowing through a random porous network (like hot water through packed coffee grains) “searches” for a complete path. But even at very low p, the researchers say, a route might still open up transiently by chance.

“It is intriguing that a simple model within the traditional percolation theory framework, with a single parameter related to network connectivity, can account for several features of brain dynamics under anesthesia,” says Plamen Ivanov of Boston University. “The beauty of the approach is that it is simple, built from first principles, and generates rich dynamics controlled by a single parameter.” But he cautions that the model is still a long way from explaining actual mechanisms of consciousness."
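The percolation idea described above can be sketched in a few lines of code. This is my own toy illustration, not the authors' simulation: it uses a simple binary branching network rather than their 7381-node architecture, and it only estimates the chance that a signal finds a continuous open path through the network (the percolation quantity), not the EEG waveforms:

```python
import random

def path_exists(depth, p):
    """True if at least one root-to-leaf chain of open links survives
    in a binary branching network of the given depth."""
    frontier = 1  # number of reachable nodes at the current level
    for _ in range(depth):
        reached = 0
        for _ in range(frontier):
            # each reachable node has two outgoing links, each
            # independently open with probability p at this instant
            reached += (random.random() < p) + (random.random() < p)
        if reached == 0:
            return False  # no continuous route: the signal is blocked
        frontier = reached
    return True

def transmission_probability(depth=12, p=0.6, trials=400):
    """Monte Carlo estimate of the chance a signal percolates through."""
    return sum(path_exists(depth, p) for _ in range(trials)) / trials

for p in (0.2, 0.4, 0.5, 0.6, 0.8):
    print(f"p = {p:.1f}  transmission ~ {transmission_probability(p=p):.2f}")
```

For a binary branching network of this kind the transition sits at p = 0.5: below it, almost no signal survives a deep network; above it, a continuous route usually exists. The published model's network structure differs, which is presumably why its entropy drop appears near p ≈ 0.3 rather than at this toy model's threshold.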

This beautiful study presents an objective physical process and a simultaneous subjective process (consciousness), bridged by an information process.

Physical process: A network of nerve cells causally interacting with one another via an electrochemical process. These causal interactions can be characterized as physical gamma, beta, alpha, and delta waves, or synchronized firing patterns.

Information process: The nerve cells can be characterized as "nodes," their electrochemical firings can be characterized as "signals," and the cause-effect interaction of the nerve cells/nodes characterized as "information processing." The initial input into the network is referred to as information as well as the output at the other side of the network.

Phenomenal process: Humans report losing consciousness and/or being conscious.

These three processes occur simultaneously. They are not separate, distinct processes.

When physical brain waves are in the gamma and beta ranges, the probability of information flow is close to 100%, and conscious experience is present.

When physical brain waves are in the alpha and delta ranges, the probability of information flow drops toward 30%, and conscious experience is absent.

Noted is the fact that "an actual mechanism for consciousness" is a long way off. Per Velmans, there will never be such a mechanism: physical processes don't create consciousness ex nihilo.

Consciousness, in some primal form, accompanies all physical processes, including primal physical processes. There is no doubt, however, that the physical processes of living organisms are highly complex and thus the associated conscious experiences are equally complex.
 
From Velmans:

"This commentary elaborates on Gray's conclusion that his neurophysiological model of consciousness might explain how consciousness arises from the brain, but does not address how consciousness evolved, affects behaviour or confers survival value. The commentary argues that such limitations apply to all neurophysiological or other third-person perspective models. To approach such questions the first-person nature of consciousness needs to be taken seriously in combination with third-person models of the brain."

1) While incorporating the first-person perspective is necessary, it's not sufficient in itself. In the opening of Mind in Life, Thompson states that Neurophenomenology cannot solve the mind/body problem. In fact, he states that Neurophenomenology cannot even tell us whether consciousness exists prior to the emergence of neurons.

This is very telling. All these neurophysiological or neurophenomenological approaches can do is help us determine more precisely which neurophysiological processes mediate which phenomenal processes.

Neurophysiological processes don't "generate" phenomenal processes; they embody phenomenal processes!

2) Contrary to what Velmans seems to indicate, Gray's model of RST is not just a third-person approach; it models the internal, phenomenal affective states of individuals experiencing hope, fear, and anxiety—identifies neurophysiological mechanisms mediating these emotions, and relates them to corresponding approach and avoidance behaviors seen in all mammals.

Gray (your point 2 above) makes a useful contribution to understanding how neurophysiological 'mechanisms' are neurologically involved in/connected to three emotional states recognizable in animals and humans, but it hardly answers the questions of a) what consciousness is, and b) what role it plays in the development of -- and differences among -- the lived realities {the experience as a whole} of the various species of life evolved on earth.


@Constance, I would ask you the same: can you identify how Panksepp and Neurophenomenology differ from Gray's RST?

I think they differ in obvious ways that we should be attempting to understand together -- which would require that we all read Gray, Panksepp, Thompson, Varela, and numerous other researchers and theorists still at work on the question of the relationship of the brain and 'information' to the major problematic phenomena of consciousness and mind. I think it should be obvious that, if Gray, Tononi, and others whose neurophysical/informational 'models' you find persuasive were adequate to account fully for the relationship of brain and consciousness/mind, we would not see the continuing proliferation of approaches to that question. The interdisciplinary field of Consciousness Studies is only 25 years old at this point. Far from answering the questions that inspired it, the field as a whole has only begun to recognize the complexity of these questions. In short, if Gray or Tononi or anyone else had answered them, others would not have developed other theories and 'models' and research programs toward reaching an adequate understanding of the two core problems of Consciousness Studies, would they?

I'll come back to your other comments tonight.
 
Rather than continue to talk back and forth on the basis of parts of what it is we all need to understand -- the question of the relationship of consciousness/mind to the brain's neural networks -- why don't we instead discuss a theory other than the ones we personally hold? I suggest as a text for discussion the article by John Searle entitled "Can Information Theory Explain Consciousness?". I suggest this paper because Searle is a thinker to be reckoned with in addressing this question, and also because I don't think any of us fully adheres to his point of view on that question.

Perhaps exploring the ways each of us can agree and disagree with Searle on certain issues will avoid the friction that arises when we react to one another's positions, and thus provide us all with more distance on our own perspectives and also lead us to recognize elements of one another's perspectives we can agree with to some extent {for there is no question that various kinds of information in the physical and biological world are involved in the operation and development of neural nets affecting and enabling consciousness}. Discussing Searle's paper would be a way to confront issues in consciousness studies that neither Searle nor any of us have dealt with sufficiently at this point.

I hope you agree that this might be a productive avenue toward dialogue on complex aspects of the relationship of consciousness/mind and brain that we all need to recognize as problems yet-to-be either resolved or at least understood in CS and our own long discussion here. Here is an extract of the relevant part of Searle's paper:


“. . . Information is one of the most confused notions in contemporary intellectual life. First of all, there is a distinction between information in the ordinary sense in which it always has a content—that is, typically, that such and such is the case or that such and such an action is to be performed. That kind of information is different from information in the sense of the mathematical “theory of information,” originally invented by Claude Shannon of Bell Labs. The mathematical theory of information is not about content, but how content is encoded and transmitted. Information according to the mathematical theory of information is a matter of bits of data where data are construed as symbols. In more traditional terms, the commonsense conception of information is semantical, but the mathematical theory of information is syntactical. The syntax encodes the semantics. This is in a broad sense of “syntax” which would include, for example, electrical charges.

Information theory has proved immensely powerful in a number of fields and may become more powerful as new ways are found to encode and transmit content, construed as symbols. Tononi and Koch want to use both types of information, they want consciousness to have content, but they want it to be measurable using the mathematics of information theory.

To explore these ideas two distinctions must be made clear. The first is between two senses of the objective and subjective distinction. This famous distinction is ambiguous between an epistemic sense (where “epistemic” means having to do with knowledge) and an ontological sense (where “ontological” means having to do with existence). In the epistemic sense, there is a difference between those claims that can be settled as a matter of truth or falsity objectively, where truth and falsity do not depend on the attitudes of makers and users of the claim. If I say that Rembrandt was born in 1606, that claim is epistemically objective. If I say that Rembrandt was the best Dutch painter ever, that is, as they say, a matter of “subjective opinion”; it is epistemically subjective.

But also there is an ontological sense of the subjective/objective distinction. In that sense, subjective entities only exist when they are experienced by a human or animal subject. Ontologically objective entities exist independently of any experience. So pains, tickles, itches, suspicions, and impressions are ontologically subjective; while mountains, molecules, and tectonic plates are ontologically objective. Part of the importance of this distinction, for this discussion, is that mental phenomena can be ontologically subjective but still admit of a science that is epistemically objective. You can have an epistemically objective science of consciousness even though it is an ontologically subjective phenomenon. Ben Libet was practicing such an epistemically objective science; so are a wide variety of scientists ranging, for example, from Antonio Damasio to Oliver Sacks.

This distinction underlies another distinction—between those features of the world that exist independently of any human attitudes and those whose existence requires such attitudes. I describe this as the difference between those features that are observer-independent and those that are observer-relative. So, ontologically objective features like mountains and tectonic plates have an existence that is observer-independent; but marriage, property, money, and articles in The New York Review of Books have an observer-relative existence. Something is an item of money or a text in an intellectual journal only relative to the attitudes people take toward it. Money and articles are not intrinsic to the physics of the phenomena in question.

Why are these distinctions important? In the case of consciousness we have a domain that is ontologically subjective, but whose existence is observer-independent. So we need to find an observer-independent explanation of an observer-independent phenomenon. Why? Because all observer-relative phenomena are created by consciousness. It is only money because we think it is money. But the attitudes we use to create the observer-relative phenomena are not themselves observer-relative. Our explanation of consciousness cannot appeal to anything that is observer-relative—otherwise the explanation would be circular. Observer-relative phenomena are created by consciousness, and so cannot be used to explain consciousness.

The question then arises: What about information itself? Is its existence observer-independent or observer-relative? There are different sorts of information, or if you like, different senses of “information.” In one sense, I have information that George Washington was the first president of the United States. The existence of that information is observer-independent; I have that information regardless of what anybody thinks. It is a mental state of mine, which while it is normally unconscious can readily become conscious. Any standard textbook on American history will contain the same information. What the textbook contains, however, is observer-relative. It is only relative to interpreters that the marks on the page encode that information. With the exception of our mental thoughts—conscious or potentially conscious—all information is observer-relative. And in fact, except for giving examples of actual conscious states, all of the examples that Tononi and Koch give of information systems—computers, smart phones, digital cameras, and the Web, for example—are observer-relative.

We cannot explain consciousness by referring to observer-relative information because observer-relative information presupposes consciousness already. What about the mathematical theory of information? Will that come to the rescue? Once again, it seems to me that all such cases of “information” are observer-relative. The reason for the ubiquitousness of information in the world is not that information is a pervasive force like gravity, but that information is in the eye of the beholder, and beholders can attach information to anything they want, provided that it meets certain causal conditions. Remember, observer relativity does not imply arbitrariness, it does not imply epistemic subjectivity.

An example prominently discussed by Tononi will make this clear. He considers the case of a photodiode that turns on when the light is on and off when the light is off. So the photodiode contains two states and has minimal bits of information. Is the photodiode conscious? Tononi tells us, and Koch is committed to the same view, that yes, the photodiode is conscious. It has a minimal amount of consciousness, one bit to be exact. But now, what fact about it makes it conscious? Where does its subjectivity come from? Well, it contains the information that the light is either on or off. But the objection to that is: the information only exists relative to a conscious observer. The photodiode knows nothing about light being on or off, it just responds differentially to photon emissions. It is exactly like a mercury thermometer that expands or contracts in a way that we can use to measure the temperature in the room. The mercury in the glass knows nothing about temperature or anything else; it just expands or contracts in a way that we can use to gain information.

Same with the photodiode. The idea that the photodiode is conscious, even a tiny bit conscious, just in virtue of matching a luminance in the environment, does not seem to be worth serious consideration. I have the greatest admiration for Tononi and Koch but the idea that a photodiode becomes conscious because we can use it to get information does not seem up to their usual standards.

A favorite example in the literature is the rings in a tree stump. They contain information about the age of the tree. But what fact about them makes them information? The answer is that there is a correlation between the annual rings on the tree stump and the cycle of the seasons, and the different phases of the tree’s growth, and therefore we can use the rings to get information about the tree. The correlation is just a brute fact; it becomes information only when a conscious interpreter decides to treat the tree rings as information about the history of the tree. In short, you cannot explain consciousness by referring to observer-relative information, because the information in question requires consciousness. Information is only information relative to some consciousness that assigns the informational status.

Well, why could not the brute facts that enable us to assign informational interpretations themselves be conscious? Why are they not sufficient for consciousness? The mercury expands and contracts. The photodiode goes on or off. The tree gets another ring with each passing year. Is that supposed to be enough for consciousness? As long as we have the notion of “information” in our explanation, it might look as if we are explaining something, because, after all, there does seem to be a connection between consciousness and observer-independent information.

There is no doubt some information in every conscious state in the ordinary content sense of information. Even if I just have a pain, I have information, for example that it hurts and that I am injured. But once you recognize that all the cases given by Koch and Tononi are forms of information relative to an observer, then it seems to me that their approach is incoherent. The matching relations themselves are not information until a conscious agent treats them as such. But that treatment cannot itself explain consciousness because it requires consciousness. It is just an example of consciousness at work.

7. There are many other interesting parts of Koch’s book that I have not had the space to discuss, and as always Koch’s discussions are engaging and informative. I would not wish my misgivings to detract from the real merits of his book. But the primary intellectual ambitions of the book—namely to offer a model for explaining consciousness and to suggest a solution to the problem of free will and determinism— do not seem to me successful.”

http://virgil.gr/wp-content/uploads/2013/04/searle.pdf
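Searle's tree-ring point can be caricatured in a few lines of code (a toy sketch of my own, not from the paper; all names are hypothetical): the ring count is a brute physical fact, and "age" only appears once an interpreter supplies a rule linking rings to years.

```python
# Toy sketch (hypothetical names): the stump presents only a brute fact,
# a ring count. "Age in years" exists only relative to an interpretive rule
# that an observer supplies, e.g. "one ring per growing season".
ring_count = 87  # the brute physical fact

def interpret_as_age(rings, rings_per_year=1):
    """Observer-supplied rule; change the rule and the 'information' changes."""
    return rings / rings_per_year

print(interpret_as_age(ring_count))     # 87.0 under the usual rule
print(interpret_as_age(ring_count, 2))  # 43.5 under a different rule
```

The correlation between rings and seasons is real either way; what varies with the observer is which facts get treated as information about which others.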
 
The first question that comes to mind for me in Searle's paper concerns his statement that:

To explore these ideas two distinctions must be made clear. The first is between two senses of the objective and subjective distinction. This famous distinction is ambiguous between an epistemic sense (where “epistemic” means having to do with knowledge) and an ontological sense (where “ontological” means having to do with existence).

@Pharoah has read more of Searle than I have, so perhaps he can help me work through that statement. I would agree with Searle at this point on the recognition that both subjective and objective aspects are involved in, and coexist in, what we humans work out about the nature of both epistemology and ontology. As I see it, this is the result of our combining the naturally given subjective limitations on our understanding of all that is 'real' (and our recognition of those limitations) with an understanding that the natural world as a whole, and we ourselves who evolved and exist within it, also possess objective qualities. Were we not physically embodied, it is doubtful that we could conceive of the objective reality of the world we live in. So on the face of it I agree with Searle's statement in that extract, but I wonder what work he wants it to do in supporting the theory of consciousness he prefers. My first question is: does my restatement of what Searle writes seem to be an accurate representation of his meaning or not? And my next question is: does his statement raise any differences of opinion in this company?

Beyond that, I wonder if @Soupie and/or @Pharoah consider Searle to be a dualist, based on his distinction between the coexisting objective and subjective aspects of embodied consciousness and mind.

I have some difficulty making sense of another claim by Searle, that

". . . you cannot explain consciousness by referring to observer-relative information, because the information in question requires consciousness. Information is only information relative to some consciousness that assigns the informational status."

How do each of you make sense of that claim (if you do)? Is Searle ignoring, or unable to see, the ways (numerous ways, I think) in which consciousness depends upon biological information executed in the body and brain -- something we can understand through various scientific and evolutionary analyses and deal with epistemically in our efforts to understand the relationship of such information to consciousness and mind?
 
@Constance
That article by Searle: I have read it.
I thought he had, somehow, got hold of my text and copied it—word for word! But it appears that he has just expressed exactly what I think about IIT and happened to use the same analogy. (read my draft article on information regarding tree rings, and compare with Searle on tree rings)
As you know, I found it difficult to take IIT seriously and the same goes for Koch—it is incoherent. I find neither particularly difficult to critique and unsurprisingly, Searle makes sense to me.
He says, "...once you recognize that all the cases given by Koch and Tononi are forms of information relative to an observer, then it seems to me that their approach is incoherent. The matching relations themselves are not information until a conscious agent treats them as such. "
I am in full agreement with the idea of observer-independence and observer-relativity regarding information.
It appears Searle is on the same wavelength as me when it comes to information. Curiously, he is a naive realist (or perhaps was and is no longer). I would love to chat to him.
 
Searle has his entire lectures at Berkeley on podcast. Very engaging and informative.
He is not a dualist. He is a 'rare' breed: a naive realist, although that cannot sit comfortably with him.

Your second query. Let me put it like this:
Information cannot exist without an agent interpreting it. Therefore, you cannot have information 'being consciousness' as an explanation for consciousness 'being information'... or something like that. Alternatively,
consciousness is required to make the world informational: therefore you cannot have information ontologically equivalent to consciousness.
At base IIT is incoherent. It is a panpsychist theory and, as Searle says, panpsychism is a non-starter. The greatest puzzle to me is how Chalmers comes to endorse IIT... so I must be missing something.

Your first query. I'm not sure how to answer. I did find this Searle passage a bit of a mind bender and need to reread. He is laying the ground for his critique. I have quoted Searle expressing the same, in my rejected paper.
 
... Noted is the fact that "an actual mechanism for consciousness" is a long way off. Per Velmans there will never be a mechanism for consciousness ...
Claiming it is a "fact that an actual mechanism for consciousness is a long way off" doesn't appear to be well supported. Maybe you missed the video I posted where corresponding regions of the brain have been mapped. I also attempted to draw attention to the role of the thalamocortical system in particular. These findings have essentially zoomed us in on the material structures directly associated with conscious experience. They are the "mechanisms" from which consciousness appears to arise, and this has been proven sufficiently by science to the extent that it can be proven. What is still some ways off is how to replicate these mechanisms via engineering rather than baby making.
 
These findings have essentially zoomed us in on the material structures directly associated with conscious experience. They are the "mechanisms" from which consciousness appears to arise, and this has been proven sufficiently by science to the extent that it can be proven.
When you suggest that these are the mechanisms "directly related" to consciousness, you are correct. However, when you say they are the mechanisms from which consciousness arises, you go too far. They firmly remain neural correlates of consciousness, nothing more.

We have no physical theories/models which can begin to approach the mind-body problem (the hard problem). That is, a physical theory which might offer a mechanistic explanation.

The only physical (natural) theory that I'm aware of is IIT; the idea that neurons causally interacting (integrated) in a particular way "give rise" to conscious experience. (However, no participants here care for the theory. :) Regardless, it's the only physical/natural theory I'm aware of.)
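For what a crude version of "integration" might look like numerically, here is a toy sketch (my own illustration, emphatically not Tononi's Φ): the mutual information between two binary units, which is zero when the units behave independently and maximal when their states are bound together.

```python
from math import log2

# Toy sketch only: NOT Tononi's phi, just mutual information between two
# binary "units" as a crude stand-in for "how integrated" the pair is.
def mutual_information(joint):
    # joint[a][b] = P(unit1 = a, unit2 = b)
    p1 = [sum(row) for row in joint]            # marginal of unit 1
    p2 = [sum(col) for col in zip(*joint)]      # marginal of unit 2
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            if joint[a][b] > 0:
                mi += joint[a][b] * log2(joint[a][b] / (p1[a] * p2[b]))
    return mi

independent = [[0.25, 0.25], [0.25, 0.25]]  # units ignore each other
coupled     = [[0.5, 0.0], [0.0, 0.5]]      # units always agree

print(mutual_information(independent))  # 0.0 bits: no integration
print(mutual_information(coupled))      # 1.0 bits: fully integrated
```

Actual IIT computes something far more elaborate (over all partitions of a system's cause-effect structure), but the contrast above is the basic intuition the theory starts from.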

For the record, I don't believe that consciousness "arises" from or is "secreted" by the brain/body. I think consciousness is the neurophysiological processes of the body/brain. I think consciousness is a particular kind of information embodied by particular kinds of physical systems.
 
@ufology

I assume you are familiar with Stephen Wolfram. He recently made a very interesting, in-depth blog post about his thoughts on the fundamental nature of reality. Since you expressed (skeptical) interest in QST, perhaps Wolfram's thoughts might provide a different perspective.

What Is Spacetime, Really?—Stephen Wolfram Blog

"... Space as a Network

So could this be what space is made of? In traditional physics—and General Relativity—one doesn’t think of space as being “made of” anything. One just thinks of space as a mathematical construct that serves as a kind of backdrop, in which there’s a continuous range of possible positions at which things can be placed.

But do we in fact know that space is continuous like this? In the early days of quantum mechanics, it was actually assumed that space would be quantized like everything else. But it wasn’t clear how this could fit in with Special Relativity, and there was no obvious evidence of discreteness. By the time I started doing physics in the 1970s, nobody really talked about discreteness of space anymore, and it was experimentally known that there wasn’t discreteness down to about 10^-18 meters (1/1000 the radius of a proton, or 1 attometer). Forty years—and several tens of billions of dollars’ worth of particle accelerators—later there’s still no discreteness in space that’s been seen, and the limit is about 10^-22 meters (or 100 yoctometers).

Still, there’s long been a suspicion that something has to be quantized about space down at the Planck length of about 10^-34 meters. But when people have thought about this—and discussed spin networks or loop quantum gravity or whatever—they’ve tended to assume that whatever happens there has to be deeply connected to the formalism of quantum mechanics, and to the notion of quantum amplitudes for things.

But what if space—perhaps at something like the Planck scale—is just a plain old network, with no explicit quantum amplitudes or anything? It doesn’t sound so impressive or mysterious—but it certainly takes a lot less information to specify such a network: you just have to say which nodes are connected to which other ones.

But how could this be what space is made of? First of all, how could the apparent continuity of space on larger scales emerge? Actually, that’s not very difficult: it can just be a consequence of having lots of nodes and connections. It’s a bit like what happens in a fluid, like water. On a small scale, there are a bunch of discrete molecules bouncing around. But the large-scale effect of all these molecules is to produce what seems to us like a continuous fluid.

It so happens that I studied this phenomenon a lot in the mid-1980s—as part of my efforts to understand the origins of apparent randomness in fluid turbulence. And in particular I showed that even when the underlying “molecules” are cells in a simple cellular automaton, it’s possible to get large-scale behavior that exactly follows the standard differential equations of fluid flow.



So when I started thinking about the possibility that underneath space there might be a network, I imagined that perhaps the same methods might be used—and that it might actually be possible to derive Einstein’s Equations of General Relativity from something much lower level.

Maybe There’s Nothing But Space

But, OK, if space is a network, what about all the stuff that’s in space? What about all the electrons, and quarks and photons, and so on? In the usual formulation of physics, space is a backdrop, on top of which all the particles, or strings, or whatever, exist. But that gets pretty complicated. And there’s a simpler possibility: maybe in some sense everything in the universe is just “made of space”.

As it happens, in his later years, Einstein was quite enamored of this idea. He thought that perhaps particles, like electrons, could be associated with something like black holes that contain nothing but space. But within the formalism of General Relativity, Einstein could never get this to work, and the idea was largely dropped. ..."
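Wolfram's molecules-to-fluid point (discrete micro-rules producing apparently continuous macro-behavior) can be sketched with a far simpler toy than his lattice-gas models. The following is an illustrative discrete-diffusion rule of my own, not his actual cellular automaton:

```python
# Toy sketch: a 1D array of discrete cells, each repeatedly replaced by the
# mean of itself and its two neighbours (a crude discrete diffusion rule).
# A sharp step in the initial data gets smoothed until adjacent cells differ
# only slightly: discreteness below, apparent continuity above.
N = 200
cells = [1.0 if i < N // 2 else 0.0 for i in range(N)]  # sharp discontinuity

def step(c):
    n = len(c)
    # periodic boundary: each cell averages with its neighbours
    return [(c[(i - 1) % n] + c[i] + c[(i + 1) % n]) / 3.0 for i in range(n)]

for _ in range(100):
    cells = step(cells)

max_jump = max(abs(cells[i + 1] - cells[i]) for i in range(N - 1))
print(max_jump)  # far below the initial jump of 1.0; the profile looks smooth
```

The averaging rule also conserves the total, the discrete analogue of the conservation laws that make real molecular dynamics reduce to fluid equations at large scale.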
 
Information cannot exist without an agent interpreting it. Therefore, you cannot have information 'being consciousness' as an explanation for consciousness 'being information'... or something like that. Alternatively, consciousness is required to make the world informational: therefore you cannot have information ontologically equivalent to consciousness.
Yes, there is certainly something here. Not a mystery per se, but a problem.

Douglas Hofstadter has written two highly regarded books on the phenomenon of self-referential systems. I think self-aware systems are self-referential systems.

"Hofstadter seeks to remedy this problem in I Am a Strange Loop by focusing and expounding on the central message of Gödel, Escher, Bach. He demonstrates how the properties of self-referential systems, demonstrated most famously in Gödel's incompleteness theorems, can be used to describe the unique properties of minds.[2]"

I recommend his book "GEB" and "I am a Strange Loop."
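A minimal concrete instance of the self-reference Hofstadter builds on is a quine, a program whose output is exactly its own source code:

```python
# A classic Python quine: the string s is a template for the whole program,
# and printing (s % s) reproduces the source exactly, including s itself.
# (%r inserts the repr of s; %% becomes a literal %.)
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Nothing mysterious is going on; the loop closes because the representation of the system (the string) is itself part of the system it represents, which is the structural trick that Gödel's construction and, on Hofstadter's account, self-aware minds both exploit.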

I took this quote from Searle:

But also there is an ontological sense of the subjective/objective distinction. In that sense, subjective entities only exist when they are experienced by a human or animal subject. Ontologically objective entities exist independently of any experience. So pains, tickles, itches, suspicions, and impressions are ontologically subjective; while mountains, molecules, and tectonic plates are ontologically objective.
There's that confusing notion again that pops up in Consciousness studies so often!

The notion that experiences are experienced! The idea that experiences are observed in the same fashion that a sunset is observed is a confusion. There is no observer! I've noted this in Searle's arguments before (and asked @Pharoah about his own view before).

Killing the Observer | Naturalism.org

The "observing agent" is the organism/system, not some mental (supernatural?) observer. When a physical system "makes sense" of its physical environment, the "sense" is the phenomenon we know as consciousness (and, I submit, information).

For what it's worth, here are two thorough papers on the topic of information. (In particular, highlighting the important difference between computation and information processing.) The first seems to provide a deep explanation of the various concepts of information. I will read both, and return to the discussion.

http://www.umsl.edu/~piccininig/Computation_vs_Information_Processing.pdf

Information processing, computation, and cognition
 
When you suggest that these are the mechanisms "directly related" to consciousness, you are correct. However, when you say they are the mechanisms from which consciousness arises, you go too far. They firmly remain neural correlates of consciousness, nothing more.
You say that as if it's insignificant. It's like saying that the electromagnet is merely the electronic correlate of a magnetic field. Both are true. Both are critical.
We have no physical theories/models which can begin to approach the mind-body problem (the hard problem). That is, a physical theory which might offer a mechanistic explanation.
The Hard Problem is only relevant to the extent of illustrating the challenge of being able to know, with the same certainty that we have about our own personal consciousness, whether or not something else also has something like it. In other words, not knowing what it is like to be some other being with consciousness in no way detracts from knowing the situation that gives rise to consciousness. They're two separate issues, which is why I pointed out way back when it was first mentioned that, from that perspective, it isn't coherent. Searle says pretty much the same thing when he says that we can study ontological subjectivity from an epistemological perspective. We don't necessarily have to "know what it's like" in order to create another one like it. Our bodies already do it automatically when we reproduce, and that is an entirely physical process. No non-physical mystical ( whatever the case may be ) extra ingredient required.
For the record, I don't believe that consciousness "arises" from or is "secreted" by the brain/body. I think consciousness is the neurophysiological processes of the body/brain. I think consciousness is a particular kind of information embodied by particular kinds of physical systems.
The idea that consciousness is information only substitutes one abstract concept label for another without getting us any further down the road regarding how either comes into being, and if anything consciousness is less about the information and more about the experience of the information. But I'm still curious: if you're assuming that consciousness is something that is simply "embodied" but not a product of that body, then how do you figure it gets "embodied" in the first place?
 
The only physical (natural) theory that I'm aware of is IIT; the idea that neurons causally interacting (integrated) in a particular way "give rise" to conscious experience. (However, no participants here care for the theory. :) Regardless, it's the only physical/natural theory I'm aware of.)

That's a hypothesis in process so far. To reach the level of what we can call a scientific theory (implying wide agreement among various scientific and mind specialists about its potential validity) will require increased knowledge of how neurons arise and develop in biological organisms, and will also require, of course, knowledge of how that which neurons enable in the functioning of organisms produces consciousness. I'm not at all averse to what you characterize as "the idea that neurons causally interacting (integrated) in a particular way 'give rise'" to increasingly unified brain processes. As I see it, they 'give rise' to increasingly integrated sense perception, physical facility, self-awareness at the level of affectivity, and self-control in organisms and animals. In other words, evolving neurons and neural nets in various organisms and animals amount to a mostly prereflective understanding of how to maneuver in, survive in, and even thrive in their environments. Reflective consciousness and mind constitute further steps in the evolution of consciousness in humans and other 'higher' animals, steps built out of personal lived experience in the world (and, at sub- and un-conscious levels, the lived experience of our evolutionary forebears).

Consciousness as a complex of stored experiential memory, maintained in waking consciousness, in the personal subconscious, and in the collective unconscious, plays a significant role in the kinds of feelings and experiences we have and in how we sort them out and conceptualize them. That's why I said, when you first introduced Tononi's project, that his greatest challenge would be in demonstrating that IIT can also account for the subconscious and unconscious layers of 'information' that affect us in waking consciousness and also in dreaming consciousness.


For the record, I don't believe that consciousness "arises" from or is "secreted" by the brain/body. I think consciousness is the neurophysiological processes of the body/brain. I think consciousness is a particular kind of information embodied by particular kinds of physical systems.

"Arises from," like "secreted by," are usages that need to be placed within quotes if not stricken from our discourse about informational and neurological attempts to account for consciousness. Those terms are too vague, too inspecific, about the processes of exchange between neural nets and consciousness to be taken on faith. When you write that you presently think that "consciousness is a particular kind of information embodied by particular kinds of physical systems," I acknowledge the existence of what you and others consider to be a workable theory of consciousness, but most of the work that needs to be accomplished to validate that theory remains to be done by neuroscientists and especially biological neuroscientists.

Your second sentence raises further questions:

"I think consciousness is a particular kind of information embodied by particular kinds of physical systems." First, how to differentiate between or among "different kinds of 'information'" (?); and second, how to identify the different physical circuits and networks connecting the "particular kinds of physical systems" carrying information that produces consciousness (?). It seems to me that an investigation of that sort would also need to provide an evolutionary survey of a broad range of animal brains from which our species' brains evolved.
 
We don't necessarily have to "know what it's like" in order to create another one like it.
I agree. But that's not the point I'm making.

Finding the correlates of consciousness does not give us a physical (cause and effect) explanation of how consciousness might be caused by the physical structures/processes/mechanisms correlated with it.

Sure, we can hope that as we narrow the correlates down further and further, an understanding of the cause-effect relationship will become obvious. However, currently, we have no good theories about how conscious, phenomenal experience might be physically "caused." Importantly, as pointed out by Velmans and Chalmers, there are philosophical reasons to believe that a physical cause-effect model of consciousness cannot be obtained.

However, that doesn't mean we won't be able to create physical systems that are conscious.

The idea that consciousness is information only substitutes one abstract concept label for another without getting us any further down the road regarding how either comes into being...
Yes and no.

Yes, I agree that both "consciousness" and "information" are abstract, objective (or perhaps, inter-subjective) concepts we have to capture the phenomenon that is our experiences.

However, the notion that conceptualizing consciousness as a type of information doesn't get us further down the road is wrong.

In my opinion, as a dual-aspect monist, the concept of information is a tool that can act as a bridge between the subjective and objective poles of reality. Information is neither purely objective nor subjective, but both, just like reality.

...and if anything consciousness is less about the information and more about the experience of the information. But I'm still curious. If you're assuming that consciousness is something that is simply "embodied" but not a product of that body, then how do you figure it gets "embodied" in the first place?
Consciousness is intentional information. To be conscious is to be conscious of something, even if that means being conscious of being conscious (see self-referential systems above). And it seems that being conscious of something entails feeling like something.

How does consciousness become embodied in the first place? On my view, [our] consciousness is neurophysiological processes. As these neurophysiological processes developed over evolutionary time and likewise as these neurophysiological processes develop within the lifetime of the organism, consciousness develops with it. They are two sides of the same coin.
 
I agree. But that's not the point I'm making.

Finding the correlates of consciousness does not give us a physical (cause and effect) ...
Sure it does. In humans, the brain-body system is the "physical cause" of the effect ( consciousness ). From a scientific standpoint, it's as obvious as it can be. It's just as obvious as an electromagnet is the physical cause of the electromagnetic effect. Or at least to me it is. Not sure why you're not in agreement there.
... explanation of how consciousness might be caused by the physical structures/processes/mechanisms correlated with it.
Hmm ... so you seem to be differentiating between the "what" and the "how" as qualifiers. If you accept the what, then answering the how question requires an "It does it by ( insert process here )." type answer. With the brain-body system, that answer is the domain of physicians, who can describe at some extended length the processes by which the brain-body system functions. It's complex, but again, since we create ourselves naturally, and we understand how that's accomplished, all the way down to the molecular level, we know it's an entirely physical process, and we're a lot closer to being able to engineer it than ever before. So again, I don't think we're all that far away from figuring it out.
Sure, we can hope that as we narrow the correlates down further and further, an understanding of the cause-effect relationship will become obvious. However, currently, we have no good theories about how conscious, phenomenal experience might be physically "caused."
Again, yes we do. We even know the parts of the brain responsible for turning consciousness on and off. Again, have a look at the thalamocortical system. You seem to want to simply hand-wave this information away as irrelevant when it's not. I'll grant that we may not get the physical nature of consciousness from an ontological perspective pinned down any more than we have the physical nature of a magnetic field ontologically pinned down. At some point we have no choice but to simply accept that if we do something a certain way, certain things happen, and for practical reasons, that's all we really need to know. If we go past that, we step off the ledge into the land of metaphysics, which is really cool, but for the sake of this discussion, seemingly irrelevant.
Importantly, as pointed out by Velmans and Chalmers, there are philosophical reasons to believe that a physical cause-effect model of consciousness cannot be obtained.
Sure it can. We already have it. There are plenty of models showing the physical cause ( the brain-body system ) giving rise to consciousness. We can even watch it happening live during brain scan experiments. We can even manipulate the various experiential phenomena associated with consciousness remotely via magnetic fields.
However, that doesn't mean we won't be able to create physical systems that are conscious.
It seems to me you're conflating the metaphysical and ontological with the epistemological. In other words, you're over-thinking it. We already know the cause and the physical workings of that cause; we just don't know for sure how to measure the effect of the cause in an objective fashion. Using our electromagnet analogy: it took quite a while before we figured that all out, but at the very start we had a way of detecting magnetism with magnetic powder. With that tool we had a direct 1:1 correlation. It's a scientific fact that today nobody would seriously bother to argue over.

On the other hand, with consciousness, we're able to detect regions ( volumes of space ) that seem to correlate 1:1 with consciousness, but it's not quite as cut and dried as that. It's not really a 1:1 correlation. It's a correlation between physical processes that indicate consciousness, but that in and of themselves aren't consciousness. It's like being able to detect the electricity and metal in an electromagnet when it becomes active, but not being able to detect the field itself. Perhaps this is the sticky part you are referring to. If so, then we are on common ground.

I agree that both "consciousness" and "information" are abstract, objective (or perhaps, inter-subjective) concepts we have to capture the phenomenon that is our experiences. However, the notion that conceptualizing consciousness as a type of information doesn't get us further down the road is wrong. In my opinion, as a dual-aspect monist, the concept of information is a tool that can act as a bridge between the subjective and objective poles of reality. Information is neither purely objective nor subjective, but both, just like reality.
I think Searle would disagree. So would I. Information is entirely subjective because it is conceptual in nature. All information requires a subjective interpreter. Compact disks and memory chips are objectively real, but without a subjective interpreter, the encoding is simply tiny configurations of pits and peaks or gates and substrate. Even when accessed by a system capable of decoding the physical patterns, without a subjective interpreter, all there is is electrical signals, photons, and other physical elements devoid of meaning, that is until a subjective interpreter enters the picture and interprets the pixels in a meaningful way. Then we have information.
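The point about pits, gates, and interpreters can be put in a toy example (my own sketch, not anyone's argument verbatim): one and the same physical bit pattern yields different "information" under different interpretive schemes, and none at all without one.

```python
# Toy sketch: the same two stored bytes read under three observer-supplied
# schemes. The physical pattern is identical in each case; only the
# interpretation differs.
raw = bytes([72, 105])  # the brute physical pattern

as_text    = raw.decode("ascii")         # scheme A: ASCII characters
as_numbers = list(raw)                   # scheme B: two small integers
as_uint16  = int.from_bytes(raw, "big")  # scheme C: one 16-bit integer

print(as_text)     # Hi
print(as_numbers)  # [72, 105]
print(as_uint16)   # 18537
```

Nothing in the bytes themselves selects among these readings; the selection is supplied by whatever system does the decoding, which is the sense in which the informational status is observer-relative.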
Consciousness is intentional information. To be conscious is to be conscious of something, even if that means being conscious of being conscious (see self-referential systems above). And it seems that being conscious of something entails feeling like something.
OK. I could quibble, but I won't.
How does consciousness become embodied in the first place? On my view, [our] consciousness is neurophysiological processes. As these neurophysiological processes developed over evolutionary time and likewise as these neurophysiological processes develop within the lifetime of the organism, consciousness develops with it. They are two sides of the same coin.
I guess you'd have to clarify what you mean by "process" there. Are you suggesting that the neurophysiological process of transferring chemicals between synapses is consciousness? I have a hard time with that. It's sort of like saying that consciousness is really the clockwork orange, only the cogs and gears are really tiny and there are so many of them that the complexity just adds up to consciousness. I don't think so. I think all those cogs and gears don't simply turn for their own sake, but rather, like billions of little windings and cores, generate something else; something outside themselves analogous to a magnetic field, perhaps even a very specific type of EM field that we call consciousness. After all, we can measure brain waves and know something sort of like that is actually taking place. But we haven't got it really pinned down yet.
 