Consciousness and the Paranormal — Part 8


The comment section of the article is also good:

micha berger says:
April 21, 2016 at 3:31 pm

Did I misunderstand, or is this a modern form of Kant's original arguments proving the difference between phenomenological reality and what's "really out there"?


Henrik Nielsen says:
April 21, 2016 at 5:54 pm

Micha Berger: no, you didn't misunderstand, but unfortunately today's scientists tend to be utterly ignorant of philosophy.


Chuck Bennett says:
April 21, 2016 at 5:30 pm

Without using the word, Hoffman seems to be talking about abstraction, the idea of hiding complexity behind a simple interface. This is a common strategy, for example, in computer science.

The mind adapted the most evolutionarily useful abstractions of reality given the senses we had.

It's not that they're illusions, which has a pejorative sense to it, that we're being duped. It's that the most adaptive abstractions of reality were selected for and serve their purpose well, just as do the desktop file icons for files on the computer.

smcder: @Soupie, is this last statement above the correct interpretation of Hoffman's theory?
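As an aside, here is a minimal Python sketch of the kind of abstraction Chuck describes, where the caller sees only a simple icon-like handle; the class name and file path are invented purely for illustration:

```python
# A toy version of "hiding complexity behind a simple interface".
# The class and file path are invented for illustration only.

class SimpleFileIcon:
    """The 'desktop icon': a minimal handle the user interacts with."""

    def __init__(self, path):
        self._path = path   # all the messy detail stays behind this handle

    def read_text(self):
        # Under the hood: system calls, buffering, block devices, caches...
        # none of which the caller needs to see or know about.
        with open(self._path, "r", encoding="utf-8") as f:
            return f.read()

# Usage (hypothetical file): the interface is useful precisely because it is
# NOT a faithful picture of what the filesystem is really doing.
# icon = SimpleFileIcon("notes.txt")
# print(icon.read_text())
```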
 
More comments:

Milton says:
April 22, 2016 at 6:50 am

I've surprised myself by concluding for the first time, having read this article, that I might agree with those who criticise string theory and its sisters for being a pointless exercise (being unsusceptible to replication, testing or falsification).

Whether Prof Hoffman is right or wrong (or, as I suspect, falling into a quasi-solipsistic philosophical trap), it seems ever more likely to me that we're making no real progress in understanding the fundamental structures of the universe because they exist in more dimensions than our crude lumpy chunks of chemistry are capable of processing.

I actually admire the clever and ingenious thinking that has gone into string theory, M-theory, quantum supergravity and the rest, but I seriously wonder whether, even when the equations add up and seem to agree with reality, we have attained true understanding—or merely described what happens. We may get a What; but that's not the same as a Why.

I'm feeling bad about such a negative view, and hope it's just the result of stupidity on my part. But better brains by far than mine have banged on the quantum cosmological door for 50 years now, and in truth, when you get right down to it—got nowhere.

Jaime says:
April 22, 2016 at 7:14 am

The brain IS hot. Brains are large and hot; I honestly don't think that quantum effects are important… I believe that a sufficiently large and complex classical neuron network will eventually qualify as 'conscious'.

Hoffman's theory seems to me a very interesting 'effective' or 'emergent' theory of consciousness, but it is certainly not more fundamental than elementary particles, or strings or whatever physical object turns out to be most fundamental.

When confronted with the Chinese room experiment, I've always answered with a solution in line with Hoffman's theory. The room as a whole is a conscious agent that understands Chinese.

  • Jan says:
    April 22, 2016 at 8:04 am
    What he says is irrelevant. Insects, birds and humans all see reality differently. But their representations are as good as anything out there. For all intents and purposes our perception of reality can be banked on so claiming it is an illusion is just so much hippy talk. It's not an illusion if an asteroid hits or if I take a bullet to the back of my head. Things happen all the time that I never observed. This branch of science is a load of crap.

  • ed says:
    April 22, 2016 at 8:30 am
    what a load of bollocks


smcder: I included Jan's and Ed's comments for fun, because this is a forum, not a journal after all ... but Jan does raise an interesting question ... and it may be that the rest of reality is very much "like" the reality we can see and understand, even if it would look very, very different to us as seen through another kind of mind that could see those aspects we can't ... does that make sense @Soupie?
 
Jake Becker says:
April 22, 2016 at 9:21 am

Micha: I believe to some extent, which is a bit abhorrent, but for an intriguing related work that doesn't quite indulge as much in that end of philosophy, I would recommend "The Ego Tunnel" by Thomas Metzinger. I do appreciate what Hoffman has to say about the "need to know"; that's a whole other turn I'd like to think about.
 
Having another look at this:

Interface theory of perception can overcome the rationality fetish

looking at Hoffman's model and assumptions:

Unfortunately, to get any interesting dynamics out of this model, the authors have to introduce cognitive costs. In particular, since the truth strategy requires more detailed perception, it is always second to act (unless the two truths are competing against each other, in which case the order is random) and also sustains a small fitness penalty for the larger brain required for this perception. The authors solve the replicator dynamics for this model, and notice that truth does not go extinct only if the simple strategies’ boundary is badly placed (either below ~33 or above ~77) and the cost per extra bit of information is low; in some of these cases truth drives simple to extinction, and in others they co-exist. However, note that even when the cost per bit of information is zero, truth still has a penalty in acting second, so the results are not surprising.
 
Of course, interface has the higher expected fitness and it is not surprising that it will always out-compete the critical realist strategy, and will also beat the naive realist strategy depending on how much extra truthful perception costs.

This shows that the fitness is more important than ‘truth’ to the agent, and if perception is expensive then the agent will tune its perceptive coarse-graining to reflect the fitness distribution (something that depends on how the agent can interact with the environment, not just on the external environment), not the amount of resources (a property of only the external environment). Although it is interesting, it is not surprising. In particular, the agent still acts rationally, myopically, and selfishly (although there is no social dilemma here) with respect to the objective fitness. The interface theory overturns the rationality-fetish belief that our perception is always tuned to accurately reflect the external world. Instead, Mark, Marion, and Hoffman (2010) show that it is tuned to accurately reflect the interaction between agent and external world.

Marcel, Thomas, and I have extended beyond this to show that sometimes the tuning reflects not just the agent’s but society’s interaction with the world. Even without a penalty, in certain settings agents will evolve misrepresentations of the world that tell them incorrect fitness-information. What’s even more mind-blowing is that these incorrect assessments of objective fitness information actually help agents overcome their selfish tendencies and promote the social good. This happens despite the fact that the agents are acting completely rationally on what Hoffman would call their perceptions and what I call the subjective experience.
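To make the quoted setup a bit more concrete, here is a rough Python sketch of two-strategy replicator dynamics in which a "truth" perceiver pays a cost per extra bit of perception. It is a toy simplification with made-up numbers, not the authors' actual model:

```python
# Toy discrete-time replicator dynamics for two perceptual strategies.
# This is an illustrative simplification, not the model from the quoted paper:
#   "truth"     - perceives extra detail and pays a cost per extra bit
#   "interface" - perceives only fitness-relevant categories, no extra cost

BASE_FITNESS = 1.0
EXTRA_BITS   = 3       # extra perceptual detail carried by the truth strategy (made up)
COST_PER_BIT = 0.02    # fitness penalty per extra bit (made up)

w_truth     = BASE_FITNESS - COST_PER_BIT * EXTRA_BITS
w_interface = BASE_FITNESS

x = 0.5                # initial share of truth perceivers
for _ in range(200):
    mean_fitness = x * w_truth + (1 - x) * w_interface
    x = x * w_truth / mean_fitness   # replicator update: grow in proportion to relative fitness

print(f"share of 'truth' perceivers after 200 generations: {x:.6f}")
# With any positive per-bit cost the truth strategy is driven toward extinction;
# with COST_PER_BIT = 0 the frequencies do not move, which is the quoted
# passage's point that the result is "not surprising".
```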
 
smcder: ... but Jan does raise an interesting question ... it may be that the rest of reality is very much "like" the reality we can see and understand, even if it would look very, very different to us as seen through another kind of mind that could see those aspects we can't ... does that make sense @Soupie?
But what Jan is saying is no different than what Chuck Bennett is saying.

Also, it doesn't matter how veridical or abstract our perceptions of reality turn out to be. The important point to me—as it relates to the hard problem—is that our perceptions of reality are a subset of reality.

I'll say it again: our perceptions of reality are a subset of reality.

To me, this is utterly profound!

Attached is a PDF contrasting the views expressed and argued for here, and additionally an attempt by me to express my approach in a different way.
 

Attachments

  • Approaches to Consciousness.pdf (16.2 KB)

Yes our perceptions of reality are a subset of reality. What is your sense of what the rest of the set contains?
 

I'm definitely not a dualist according to this diagram as I don't see matter and consciousness in separate circles.

In neutral monism traditionally the substrate has aspects of both but is neither, but here it seems we start with a neutral substrate then derive consciousness ... then derive matter ... is that correct?

Neutral Monism: The fundamental substrate is neither consciousness nor matter. That is, consciousness is derivative of this neutral substrate, and matter is derivative of consciousness (i.e., matter is our perceptual representation of the neutral substrate).

The last is your view? It seems to contrast with definitions of neutral monism as here:

In the philosophy of mind, neutral monism is the view that the mental and the physical are two ways of organizing or describing the same elements, which are themselves "neutral", that is, neither physical nor mental. This view denies that the mental and the physical are two fundamentally different things.

in that you seem to be saying that consciousness is derived from the neutral substrate, which leads to the question: how is consciousness derived from the neutral substrate? And then matter is derivative of consciousness. When do these things occur? When does consciousness derive from the neutral substrate, and is there a triggering event for the derivation? The same questions go for matter: when does consciousness derive matter? It would have to come when there are perceptions ... that is, matter is our perceptual representation of the neutral substrate out of which consciousness has been derived ... ?

How do we know this is the case, rather than case 2, which it seems could appear exactly the same way to us? How do we know that matter is how we see the neutral substrate out of which consciousness is derived, rather than that matter is how we see matter, out of which consciousness is derived?

 
Interface theory of perception can overcome the rationality fetish

comments section

This is a solid observation across living organisms (it applies to vegetables as well): bees’ visual reception extends to ultra-violet frequencies because flowers use them to advertise their presence; we (humans) don’t rely on this signal and don’t see UV light. The number of examples could continue forever, but it’s perfectly obvious: the sensory systems of different organisms are tuned to their own specific needs.

I think it all boils down to the observation that it doesn’t make evolutionary sense to have a “simple” critical realist strategy like the one described here: what is selected for is the fitness value of perception, and since the connection between truth and fitness is not always straightforward, some “wrong” perceptions will be selected for (namely, the wrong perceptions that happen to be useful heuristics or approximations). How could it be anything else?


Artem's response:

I don’t think you are fully embracing the interface theory yet. The examples you give can still be consistent with critical realism; this orthodox view does not rule out “wrong” perceptions (else the cube at the start of my post would have caused people to abandon the theory long ago), but its proponents argue, as you do, that they are useful approximations. The interface theory, instead, says things are useful but aren’t even approximations.

My results with Marcel take this even further! We show not only that perceptions need not resemble a reality that doesn’t correlate with fitness (as in this post); sometimes the interface doesn’t even resemble or approximate individual fitness! Instead, in a social dilemma, the individual’s interface can serve the society’s interest and be objectively irrational for the individual that holds it.

but Artem comments on Hoffman's presentation:

(unfortunately, I think that everything after 13m28s mark is content-less speculation and I don’t recommend watching past that point):
 
A more extensive critique of Hoffman's theory is here:

Kooky history of the quantum mind: reviving realism

All that Hoffman has shown (from his brief explanation; I will have to look at his publications since I haven’t been aware of his work before) is that agents tune only to the fitness-channel, and not the truth. This is not surprising to biologists, although it might be surprising to LessWrong uber-rationalist types; it just says that when penalized for looking at excess information the agent tunes in only to the fitness effects.

This is a little trivial. We’ve extended beyond this by showing that even without a penalty, in certain settings agents will evolve misrepresentations of the world that tell them incorrect fitness-information. What’s even more mind-blowing is that these incorrect assessments of objective fitness information actually help agents overcome their selfish tendencies and promote the social good. This happens despite the fact that the agents are acting completely rationally on what Hoffman would call their perceptions and what I call the subjective experience. Furthermore, we can perturb rationality with some psychological effects like quasi-magical thinking and show some results about it.

Finally, everything in the video after 13m30s is utter nonsense. It is about as insightful as simply renaming a Hilbert space as a “mind space”, arbitrarily picking two parts of it (or more if you have more agents), decomposing all operators into the action they have on those parts, calling one operator “perception” and the other “action”, and discretizing time (as you would from thinking in terms of unitaries instead of Hamiltonians). Such an approach would actually be more general and mathematically precise than Hoffman’s, but just like Hoffman’s it would give absolutely no insight into anything. However, instead of just making fun of his stuff, I will point out a little bit of the specific nonsense that should set off red flags.

[1] He doesn’t understand what falsifiable means. You can very clearly see this when he says in passing that “the Church-Turing thesis is falsifiable”. The CT-thesis is actually a quintessential example of something that isn’t falsifiable. To falsify the CT-thesis, you would need to show that some problem (this is well defined mathematically, not Hoffman-atically) is not computable on a Turing machine. Unfortunately, any finite slice of any function is computable on a Turing machine (i.e. if you take any problem, even an uncomputable one, but only test a finite number of instances of that problem, then there will exist a Turing machine that gives you the right answer on those instances). For something to be falsifiable, you have to be able to specify a finite number of experiments that can falsify the theory (for example, “the sun won’t rise eventually” is not falsifiable, but “the sun won’t rise one morning in the next 2 weeks” is falsifiable). If you had a potential “super-Turing” machine candidate, you would only be able to test it on a finite number of instances, and so only check a finite part of a problem.

[2] His model is not falsifiable (no matter how many times he says that word); in fact, it isn’t even a theory, it is a framework. In order for it to be a theory, he would have to explain how to calculate these arbitrary transition functions that he introduces. Without that, I can put anything I want there, and since he allows for the space to be continuous there aren’t even computability constraints (unlike what he claims, although if we restrict to uniform discrete state spaces then it will be Turing-complete).

[3] He says false mathematical statements. A Markov kernel is not the most general type of channel, in fact Markov kernels can’t model quantum channels. For a discussion on how to model quantum channels aimed at a non-mathematical audience, see my recent commentary (Kaznatcheev & Shultz, 2013).

[4] As stated in (2), his model is too under-specified to actually be tested. In particular, even on theoretical grounds it can’t be shown to capture or not capture (in an interesting way) even non-relativistic QM. Since he allows arbitrarily large state-spaces and doesn’t compare the size of his state space to the size of the QM he embeds, you can always embed a quantum model in an exponentially large classical model. This only breaks down if you define a notion of locality or tensoring of spaces (which he doesn’t). Once that is in place, you can show that Hoffman’s model won’t capture QM because of (3): Markov kernels don’t capture quantum channels (this is effectively the essence of Bell’s theorem and later refinements).

[5] Finally, and the biggest disappointment, is that he straight up lies. When he puts those two equations side by side, he says we can derive how the parameters relate. He then finds a parameter corresponding to the speed of light… except he is working in a non-relativistic theory (as he admits at 33:23, and as you can recognize from the equations), and hence the speed of light never enters any of the equations, so he can’t possibly say what those equations impose on it.

There are a number of other issues (like how he considers a special case but then draws a general conclusion from it, although there are plenty of other special cases expressible in his model that violate it), but it isn’t worth further typing effort from me.
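A small Python illustration of the finite-slice argument in point [1] above (a sketch added here for illustration, not anything from Artem's post; the program names and oracle answers are invented):

```python
# Illustration of the "finite slice" point in [1]: any finite set of observed
# input/output pairs -- even ones sampled from an uncomputable function such as
# the halting problem -- is reproduced by a trivial lookup table, which is
# itself computable. The entries below are hypothetical observations.

observed_slice = {
    "program_A": True,    # the alleged super-Turing oracle said: halts
    "program_B": False,   # said: does not halt
    "program_C": True,
}

def lookup_table_machine(instance):
    """An ordinary computable function that agrees with the oracle on every
    instance that has actually been tested."""
    return observed_slice[instance]

# Every finite experiment we can run agrees with some Turing machine,
# so no finite experiment can falsify the Church-Turing thesis.
assert all(lookup_table_machine(k) == v for k, v in observed_slice.items())
```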
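And a rough numerical companion to the Markov-kernel point in [3] and [4] (again just an illustrative sketch): the singlet-state CHSH value computed below reaches 2*sqrt(2), whereas correlations generated by classical Markov kernels over a shared hidden variable are bounded by 2.

```python
import numpy as np

# CHSH check: singlet-state correlations reach 2*sqrt(2), beyond the bound of 2
# obeyed by any local classical model built from Markov kernels over a shared
# hidden variable. The angles are the standard optimal choices.

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)

def spin(theta):
    """Spin observable along an axis at angle theta in the x-z plane."""
    return np.cos(theta) * sz + np.sin(theta) * sx

# Singlet state |psi> = (|01> - |10>) / sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlation <psi| spin(a) (x) spin(b) |psi>; for the singlet this is -cos(a - b)."""
    return np.real(psi.conj() @ np.kron(spin(a), spin(b)) @ psi)

a1, a2 = 0.0, np.pi / 2          # first observer's two measurement angles
b1, b2 = np.pi / 4, -np.pi / 4   # second observer's two measurement angles

S = E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2)
print(f"|S| = {abs(S):.4f}  (classical/Markov-kernel limit: 2, Tsirelson bound: {2*np.sqrt(2):.4f})")
```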

That's pretty heavy criticism ...
 

I'll say it again: our perceptions of reality are a subset of reality.

To me, this is utterly profound!

What was your sense of things prior to this?

 
Yes our perceptions of reality are a subset of reality. What is your sense of what the rest of the set contains?
I think it's pretty wide open.

I'm definitely not a dualist according to this diagram as I don't see matter and consciousness in separate circles.
The separate circles indicate that they don't derive from one another. And the fact that they are not within a larger circle indicates that they are fundamental and do not derive from any deeper substrate.

How does your view differ from that?

Neutral Monism: The fundamental substrate is neither consciousness nor matter. That is, consciousness is derivative of this neutral substrate, and matter is derivative of consciousness (i.e., matter is our perceptual representation of the neutral substrate).
I'm assuming the last diagram is your view? It seems to contrast with definitions of neutral monism as here:

In the philosophy of mind, neutral monism is the view that the mental and the physical are two ways of organizing or describing the same elements, which are themselves "neutral", that is, neither physical nor mental. This view denies that the mental and the physical are two fundamentally different things.

in that you seem to be saying that consciousness is derived from the neutral substrate (how is consciousness derived from the neutral substrate?) and that matter then is derivative of consciousness - that it is our perceptual representation of the neutral substrate out of which consciousness has been derived ... ?
I would say that the neutral substrate is organized in such a way that consciousness "emerges," and this consciousness is organized in such a way that matter "emerges," i.e., our perception of the neutral substrate.

I put emerges in quotes because consciousness and matter on this view are not "two fundamentally different things" but emerge from the organization of the fundamental "neutral" substrate.

How do we know this is the case, rather than case 2, which it seems could appear exactly the same way? How do we know that matter is how we see the neutral substrate out of which consciousness is derived, rather than that matter is how we see matter, out of which consciousness is derived?
The hard problem.
 
Randy Sarafan's "Simple Bots" are a fascinating continuation of Mark Tilden's BEAM robotics of the 1980s, when we had enough techno-junk and know-how to spawn simple solar-powered robots ... culminating in sophisticated walking robots made from altered IC chips that could outperform the best technology of the day. Sarafan makes use of more up-to-date technology, particularly the servo motor, ubiquitous among the hobby crowd, but the principles are similar ... and I think equally or more artistic. This is an update on the found objects of early 20th century art - a la kinetic sculpture.

Untitled Document

Simple Bots

 
I think it's pretty wide open.


The separate circles indicate that they don't derive from one another. And the fact that they are not within a larger circle indicates that they are fundamental and do not derive from any deeper substrate.

How does your view differ from that?


I would say that the neutral substrate is organized in such a way that consciousness "emerges," and this consciousness is organized in such a way that matter "emerges."

I put emerges in quotes because consciousness and matter on this view are not "two fundamentally different things" but emerge from the organization of the fundamental "neutral" substrate.


The hard problem.

I sure wish you could be more specific! lol That's my main frustration with emergence.

So what organizes the neutral substrate? What triggering event? Or is it simply self-organizing and is there any point in asking when? Although you present a sequence here: neutral substrate, then consciousness, then matter, do you really mean it happens in this sequence? And if so, can you provide a time line?

How do you respond to Artem's critiques above?

As to its being pretty wide open:

A counter-argument (please don't say this is now my "view", OK?) would be that any take on reality comes from a mind/brain/black box that is based on "fitness" to the environment ... so we can be pretty sure a chimpanzee's take on the world is relatively similar to ours and that a bat or some deep-sea creature

[image: deep-sea creature]

... could be pretty different, but still biological ... so that they tune in to the same sorts of things, the same energies that we do - at different frequencies or whatever ... or would you say there are cosmic whales out there feasting on quantum plankton? A fanciful example, but what I'm getting at is something like this: we might on the one hand think the people around us lead very different lives, but really they are made up of the same few kinds of things in various proportions; similarly, anything that could have a take on reality would have something of this same kind of mix ... so that we could, at least theoretically, translate these experiences into something we could understand, like Geordi's visor in Star Trek?
 

The separate circles indicate that they don't derive from one another. And the fact that they are not within a larger circle indicates that they are fundamental and do not derive from any deeper substrate.

How does your view differ from that?

Not sure I envision it that way - I'm not stuck on the idea of one thing or two things ... or even seventeen, or on the idea that some things are more fundamental than others - that there is at bottom some undifferentiated field of something ... because we can always imagine such a field riding on the back of yet another turtle.

When I think of consciousness as fundamental I think of it, crudely, alongside gravity, particles, the matteriness of matter, the fundamental forces - something you can't make go away and still have this universe. I think of it in these terms because, for example, organisms that form legs and bones do so in response to gravity; it seems to me there may be something similar there for brains/minds to organize around - a kernel, a seed, a bit of sand ... to condense experience from, etc. I'm not sure about POV as fundamental - that's not the right expression; it's more that POV is just a basic part of how our universe works - Chalmers' recent work on scrutability would have better terminology. These would be the basic blocks I'd put a universe together with - below this, the level of abstraction seems to me to lead to confusion, and it seems rather arbitrary to insist there must be one something below everything else ... this is why I think we should try not to think in "physical terms", by which I mean little bits of things or force fields, things that are tangible to us - though we could ultimately apply that to everything, even if we're not able to actually do so. That, I think, is where you got the idea that I am a dualist - we can think productively of matter in those terms (in some cases), but I am saying I think maybe we can't think of consciousness productively that way.
 

"the hard problem" - the problem is, this approach doesn't get rid of the hard problem - I posted this question above ... the physicalist has a hard problem in explaining consciousness, the idealist has the hard problem of explaining the physical - Hoffman himself says he has to derive QM from consciousness and if he doesn't, he will have failed - has he done this?
 