
Consciousness and the Paranormal — Part 9

Discussion in 'General Freewheeling Chit-Chat' started by Gene Steinberg, Feb 23, 2017.



  2. smcder

    smcder Paranormal Adept

    Commentary on the Interface Theory

    Are icons sense data? - PubMed - NCBI

    and then HSP replies: Probing the interface theory of perception: Reply to commentaries
    (about 2/3 of the way down the page) ...

    Does this entail that perceptual icons are located in a private mental space? ITP itself does not answer that question, because ITP is not committed to any particular ontology (e.g., physicalist, dualist, or idealist). If one augments ITP with, say, Seth Lloyd’s ontology of qubits and gates, and then adopts, say, Tononi’s (Oizumi, Albantakis, & Tononi, 2014) integrated information theory of consciousness, in which the amount and kind of conscious experience depends on the amount and kind of integrated information in a system, then one might get an answer about where perceptual icons are located that physicalists might find congenial, even though the language of space-time and physical objects has been replaced by the language of qubits and quantum gates. We are not endorsing this ontology, but simply pointing to it as a possibility. We happen to be pursuing a different ontology (e.g., Hoffman & Prakash, 2014). But the key insight of ITP—that our perceptions are almost surely tuned to fitness rather than to objective reality—can be cashed out with many different theories of what that objective reality might be.
     
    Last edited: May 18, 2017
  3. Constance

    Constance Paranormal Adept

    No, I haven't found it online. The link might still be preserved in searchable pages of this thread, depending on how long ago I first posted it.

    Probably most humans are psychobehaviorally dysfunctional in some way or ways to some degree or other at various times in their lives. What does that prove?

    I must admit I've lost the thread of this discussion from long absence, so don't feel obligated to respond.


    Huh? (once again, I'm at a loss to follow the conversation, so never mind).


    I suppose I should read the material at this link before commenting again, if I can persuade myself to get into the ITP again.
     
    smcder likes this.
  4. Constance

    Constance Paranormal Adept

    You seem to link your comments to the three chapters of the Depraz et al. book sampled at Google. It would be good if one of us could find a link to the whole text online ...
     
  5. Constance

    Constance Paranormal Adept

    ^Re Naomi Eilan, "Perceptual Intentionality. Attention and Consciousness": no, I haven't, and I would very much like to find and read it.

    ETA, linking to the author's name provides this bibliographical information:

    Eilan, N. (1998). Perceptual Intentionality. Attention and Consciousness. In A. O’Hear (Ed.), Contemporary Issues in the Philosophy of Mind (Royal Institute of Philosophy Supplements, pp. 181-202). Cambridge: Cambridge University Press. doi:10.1017/CBO9780511563744.011

    Access to the chapter is behind a paywall.
     
    Last edited: May 19, 2017
    smcder likes this.
  6. smcder

    smcder Paranormal Adept

    Yes, she appears to be one of these fancy, high dollar philosophers ... abstracts, abstracts everywhere and not a one to link. What happened to the old days when Cynics were just lying all over the streets? ;-)

    Maybe I can get it through the library.
     
  7. smcder

    smcder Paranormal Adept

    I sent her a message through Academia.edu to see if it might be made available there.
     
  8. smcder

    smcder Paranormal Adept

    The comments and Hoffman et al.'s responses helped me a lot with CR and ITP.

    I would like to see some comment on the mathematical formalism of Conscious Agents - I do have a very specific concern there.

    It was helpful to see that Hoffman said there was no specific ontological commitment for his theory - he offered one that he thought would make the physicalists happy - but in another comment he referred to the 2014 paper (the one on time, which I think is the one in which CAs were further described) for a suggested ontology. If so, then as far as I can figure, the ontology IS CAs ... i.e. the world is made up of CAs, but CAs are mathematical formalisms ... and so the question is WHAT are mathematical formalisms ... ;-) (enter this loop anywhere)
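    For reference, here is a sketch of the formalism in question, reconstructed from memory of Definition 1 in the 2014 "Objects of consciousness" paper - treat the details as approximate and check them against the paper itself:

    ```latex
    % A conscious agent (after Hoffman & Prakash 2014, reconstructed from memory):
    \[
      C = \big((X,\mathcal{X}),\,(G,\mathcal{G}),\,P,\,D,\,A,\,N\big)
    \]
    % (X, \mathcal{X}) : measurable space of conscious experiences
    % (G, \mathcal{G}) : measurable space of possible actions
    % (W, \mathcal{W}) : the world, also a measurable space
    \[
      P : W \times \mathcal{X} \to [0,1], \qquad
      D : X \times \mathcal{G} \to [0,1], \qquad
      A : G \times \mathcal{W} \to [0,1]
    \]
    % P, D, A are Markovian kernels (perception, decision, action);
    % N counts the perception-decision-action cycles.
    ```

    Since W can itself be taken to be (a system of) other conscious agents, the perceive-decide-act loop closes on itself - which is, I take it, the formal version of "enter this loop anywhere."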
     
  9. smcder

    smcder Paranormal Adept

    Reverse engineering the world: a commentary on Hoffman, Singh, and Prakash, “The interface theory of perception”

    "World War II gave practical urgency to two previously-obscure ‘reverse engineering’ problems: (1) given a signal of unknown provenance, how does one determine its source? and (2) given a device of unknown manufacture, how does one determine its intended function or behavior? An important wartime constraint on solutions to these problems was that they must be non-destructive; in the case of the second problem, this constraint sometimes took the form of probing the system without causing it to blow up in one’s face. As any engineer could perform only a finite number of manipulations and measurements and could record their outcomes only at finite resolution, either of these problems can be represented by the abstract problem of determining, by a finite number of manipulations and observations, both the complete set of states and the complete set of allowed state transitions of an abstract finite-state machine (FSM). The abstract problem of fully characterizing an FSM with a finite sequence of non-destructive operations is the classical ‘system identification’ problem. Ashby (1956), Moore (1956), and others proved that the classical system identification problem is unsolvable: while finite sequences of operations can establish lower limits on the number of degrees of freedom and hence the potential behavioral complexity of a device, no finite sequence of operations can establish an upper limit on the number of degrees of freedom or the potential behavioral complexity of a device. The very next manipulation of any device may result in completely unexpected behavior, regardless of one’s previous experience of manipulating it.

    All organisms are, at all times, clearly in the position of a reverse engineer: all organisms face a local environment of unknown complexity, about which they can store in memory at most a finite amount of information. The very next interaction of any organism with its environment may result in an arbitrarily large surprise. This predicament is rendered more severe by a time-varying environment, and more severe still by competing organisms. Any organism’s limited memory therefore contains, at best, only an approximate model of its environment, one that may be proven grossly inadequate at any moment."
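    A toy illustration of that system-identification point (my own sketch, not from Fields' paper; the machines are made up): two finite-state machines agree on every output over a five-step probe, then diverge on the very next manipulation, so the finite experiment could not have put an upper bound on the device's complexity.

    ```python
    # Toy sketch (not from Fields' commentary): a finite probe can never rule out
    # hidden states that only show themselves on the *next* manipulation.

    def make_fsm(n_states, surprise_at):
        """A hypothetical finite-state machine: it outputs 0 until it has counted
        `surprise_at` inputs, then outputs 1 forever (if it has enough states)."""
        def run(inputs):
            state, outputs = 0, []
            for _ in inputs:
                outputs.append(1 if state >= surprise_at else 0)
                state = min(state + 1, n_states - 1)
            return outputs
        return run

    simple = make_fsm(n_states=2, surprise_at=99)   # never misbehaves
    sneaky = make_fsm(n_states=6, surprise_at=5)    # misbehaves on the 6th input

    probe = [0] * 5                                    # a finite experiment: five manipulations
    print(simple(probe) == sneaky(probe))              # True  -> indistinguishable so far
    print(simple(probe + [0]) == sneaky(probe + [0]))  # False -> the very next step surprises
    ```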


    Gibs auf!

    Es war sehr früh am Morgen, die Straßen rein und leer, ich ging zum Bahnhof. Als Ich eine Turmuhr mit meiner Uhr verglich, sah ich, daß es schon viel später war, als ich geglaubt hatte, ich mußte mich sehr beeilen, der Schrecken über diese Entdeckung ließ mich im Weg unsicher werden, ich kannte mich in dieser Stadt noch nicht sehr gut aus, glücklicherweise war ein Schutzmann in der Nähe, ich lief zu ihm und fragte ihn atemlos nach dem Weg. Er lächelte und sagte:

    »Von mir willst du den Weg erfahren?«

    »Ja«, sagte ich, »da ich ihn selbst nicht finden kann.«

    »Gibs auf, gibs auf«, sagte er und wandte sich mit einem großen Schwunge ab, so wie Leute, die mit ihrem Lachen allein sein wollen.

    It was very early in the morning, the streets clean and empty, I was walking to the train station. As I compared a tower clock with my watch, I saw that it was already much later than I had thought, I had to hurry; the shock of this discovery made me unsure of my way, I didn't yet know my way around this city very well, luckily there was a policeman nearby, I ran to him and breathlessly asked him the way.

    He smiled and said:

    "You want to know the way from me?"

    "Yes," I said, "since I cannot find it myself."

    "Give it up, give it up," he said, and turned away with a great sweep, like people who want to be alone with their laughter.
     
    Constance and Soupie like this.
  10. smcder

    smcder Paranormal Adept

    Reverse engineering the world: a commentary on Hoffman, Singh, and Prakash, “The interface theory of perception”

    What is evolution hiding?

    "HSP’s paper, on the other hand, is controversial. It claims that biological evolution hides the truth from living organisms, and it presents impressive computational experiments to back up that claim. This just feels wrong. Surely we know something about the way the world works. We found the Higgs boson, after all.

    This is a case where, it seems to me, a computer science perspective really is useful. A ‘high-level’ programming language like java hides the truth very effectively–a java programmer can be completely ignorant of the hardware, the display and memory managers, the device drivers, the operating system, and every other component or property of the end-user’s computer system except its java compiler and still design and build an extraordinarily useful piece of software. High-level languages like java are not less useful because they hide all of this information, they are more useful. How can they hide the truth and still be more useful? They can do this because they abstract out useful and complex behaviors of the underlying hardware. Providing ‘sin(x)’ as a predefined function in a programming language, much less providing something like ‘print’ that has to deal with dozens of manufacturer-specific device drivers, requires not just a huge abstraction of what the hardware is doing, but a huge abstraction that is also general: it works not just for one physical system, but for many physical systems that could, in principle, share no structural components or causal processes whatsoever. A successful programming language does not just hide the truth, it abstracts, generalizes, summarizes, and then relabels the truth in a way that increases functionality and minimizes effort."

    A "high-level" programming language like English also hides the truth very effectively!

    "They can do this because they abstract out useful and complex behaviors of the underlying hardware. Providing ‘sin(x)’ as a predefined function in a programming language, much less providing something like ‘print’ that has to deal with dozens of manufacturer-specific device drivers, requires not just a huge abstraction of what the hardware is doing, but a huge abstraction that is also general: it works not just for one physical system, but for many physical systems that could, in principle, share no structural components or causal processes whatsoever."

    This is where coding has been going for a long time. The next step is what Stephen Wolfram describes for his system, which he says will be able to take a description in natural language and write the code for it - so programming becomes more and more conceptual.

    Block programming languages like Scratch are another good example - block-based environments have been around since the late '90s, and Scratch itself dates from 2007.
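    Here's the abstraction point as a toy code sketch (my own example; the device classes are invented): the high-level routine knows only write() and sin(), and works unchanged over backends that share no internal structure at all.

    ```python
    # Toy sketch of abstraction hiding the "hardware": the caller is usefully
    # ignorant of everything below the interface it programs against.
    import math

    class ScreenDevice:
        """Pretend display driver - one imaginary backend."""
        def write(self, text):
            print(f"[screen] {text}")

    class LogFileDevice:
        """Pretend disk driver - a completely different imaginary backend."""
        def __init__(self, path):
            self.path = path
        def write(self, text):
            with open(self.path, "a") as f:
                f.write(text + "\n")

    def report_sine(device, x):
        # The high-level "program": it only knows write() and sin(), and is
        # entirely ignorant of what implements them.
        device.write(f"sin({x}) = {math.sin(x):.4f}")

    report_sine(ScreenDevice(), 1.0)
    report_sine(LogFileDevice("sine.log"), 1.0)
    ```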

    But the bigger picture is our attitude:

    "What is Eve-olution Hiding?" - asks Chris Fields of Sonoma, California.

    "And that, according to HSP, is precisely what evolution does. Evolution optimizes fitness, and fitness is just another word for (nothing left to lose) efficient functionality. This is a bold statement, and it suggests a bold hypothesis: we should expect ‘higher’ organisms, like ‘high-level’ programming languages, to encode less of the truth about the ‘hardware’ of the world, and to do so in a way that is more useful than the ways that ‘lower’ organisms do it. This sounds paradoxical, but it is not: we are surrounded by, and our culture and economy are increasingly driven by, devices that implement exactly this principle. We should, moreover, expect organisms to be organized hierarchically as information processors, with virtual machines that ‘know more’ about the hardware of the world closer to the bottom of the hierarchy and virtual machines that ‘know less’ about the hardware of the world closer to the top. We should expect cellular metabolism, for example, to encode lower-level information about nutritional chemistry than organismal metabolism. We should expect metabolism to encode lower-level information about the physical structure of the world than cognition. And we should expect the limbic system to encode lower-level information about the affordances of the world than the cortex.

    These are all testable predictions, and they are all at least prima facie plausible."
     
    Soupie likes this.
  11. smcder

    smcder Paranormal Adept

    [Attached image: rps20170519_064508.jpg]

    Soupie likes this.
  12. smcder

    smcder Paranormal Adept

    "Why its old man Searle!"

    "That's right and I'd of gotten away with it too, if it hadn't been for you meddling kids!"

    [Attached sketch: sketch-1495194791755.png]

    Co-starring David Chalmers as Shaggy.
     
    Soupie likes this.
  13. Soupie

    Soupie Paranormal Adept

    It's not so much "free will" that I was after but more so what role feelings, perceptions, and conceptions play in behavior.

    Physicalists want to say that all that's needed to explain behavior is neural processes. If they're right, then consciousness (what it's like) is epiphenomenal.

    However, it seems that consciousness (what it's like) - which includes feelings, perceptions, and conceptions - plays a central role in behavior.

    So what gives? What gives imo is the notion/feeling that the mind and body are ontologically distinct. I argue that they are ontologically equal, but perceptually distinct.

    The point I'm making is that the mind—like the body—is constituted of many processes. Typically this is hard for us to see, but can become clear to us when our own or someone else's mind begins to "malfunction."

    The problem of overdetermination is a problem for dualists as well. If all behavior can be explained via physical processes, why and how is there a non-physical mind which seems to play a role in guiding behavior?

    The answer that both physicalists and dualists might consider is that the body is "merely" how the mind appears to itself when it is perceived via the senses. (Note that this is distinct from introspection.)
     
  14. smcder

    smcder Paranormal Adept

    Conscious Realists want to say that all that's needed to explain behavior is conscious agents. If they're right, then neural processes (matter) are epiphenomenal.

    However, it seems that neurons (matter) - which have rest mass and extension in space - play a central role in behavior.

    So what gives? What gives imo is the notion/feeling that the mind and body are ontologically distinct. I argue that they are ontologically equal, but perceptually distinct.

    The point I'm making is that the body—like the mind—is constituted of many processes. Typically this is hard for us to see, but can become clear to us when our own or someone else's body begins to "malfunction."

    The problem of overdetermination is a problem for conscious realists as well. If all behavior can be explained via conscious agents, why and how is there a physical body which seems to play a role in guiding behavior?

    One might consider that the mind is "merely" how the body appears to itself when it is perceived via the senses.



     
    Soupie likes this.
  15. smcder

    smcder Paranormal Adept

    http://www.affective-science.org/pubs/2006/Barrett2006kinds.pdf

    This is Barrett's paper on whether emotions are natural kinds ... we looked at it a while back.

    from the conclusion

    If the science of emotion is to proceed, then we must evaluate the empirical status of the natural-kind view and treat alternative models seriously, even if they do not match commonsense or deeply held beliefs. Doing so may expose the road to a new and more successful scientific paradigm with which to understand the nature of emotion. An alternative paradigm need not deny the existence of emotions, but might deny emotions any explanatory power. A new paradigm would not deny the importance of evolutionarily preserved responses, but might deny emotions any privileged status as innate neural circuits or modules. A new paradigm should never deny the important research findings of prior decades. Rather, it is a requirement that such research be reinterpreted within the newer framework if that framework is to be viable.
     
    Last edited: May 20, 2017 at 6:28 PM
  16. Soupie

    Soupie Paranormal Adept

    Yes, and the response to this is that there really are X - mind-independent processes - but that our phenomenal representation of them as neuronal processes is not veridical.

    Just how much of reality our perception of "neural processes" really captures is not known.

    But the logic holds up. Your example above is actually a great way of showing this.

    Edit: I would replace "conscious agents" with consciousness, being, or pure experience.
     
  17. smcder

    smcder Paranormal Adept

    Objects of consciousness

    In the process of perception, a conscious agent interacts with the world and, in consequence, has conscious experiences.

    In the process of decision, a conscious agent chooses what actions to take based on the conscious experiences it has.


    Does Hoffman show how the CA has conscious experiences in consequence of interacting with the world? Or how actions are chosen on the basis of conscious experience? Without that, this could be read as an epiphenomenalist account: in consequence of interacting with the world, the CA has (passive) conscious experiences - a conscious agent chooses (active) based on (passive) the conscious experiences it has - there is still a gap here between the experiencing and the choosing.

    In terms of mental causation - can we actually see in our own experience where this happens? Is there a solid chain of evidence? Even in a very deliberate act - learning something new, practicing a skill before it becomes automatic - my experience seems to me to have gaps so that it is hard to find the exact point at which one could find mental causation. So on that basis I would, in my personal experience, amend your statement:

    However, it seems that consciousness (what it's like) - which includes feelings, perceptions, and conceptions - plays a central role in behavior.


    To clarify what that "central role" is ...

    It seems (to me) that what it's like - which includes feelings, perceptions, and conceptions - is present (at times) in behavior but never seems to go all the way down, in terms of a one-to-one correlation with my behavior: phenomenality is constantly morphing, a blooming, buzzing confusion that is closely related to my actions but never couples directly with them. This doesn't take away from my sense of the importance of that experience - but that is because my sense of self and meaning, etc. can only be rooted in an awareness of it - nor does it take away from my sense of freedom to act - but I still can't see into that moment when my experience becomes action, no matter how thin (in part because phenomenal experience isn't instantaneous), and that's the crux of causality: that there is no space between cause and effect. The concerns over epiphenomenality need no more space than this to persist.

    CR, then, doesn't get us any further with these particular concerns about mental causation just because it posits Conscious Agents (not consciousness and not phenomenal consciousness) as the root of reality.
     
    Last edited: May 20, 2017 at 6:26 PM
    Soupie likes this.
  18. smcder

    smcder Paranormal Adept

    And the response to that is ...

    [Attached image: upload_2017-5-20_12-29-8.png]

    Where physicalism appears as a beautiful young lady and CR appears as the crone - once you've seen it both ways, you can shift back and forth between the two; conscious agents are no more veridical than neurons.

    And I think that's the reason that it might not be very surprising that:

    We show that one particular object, the quantum free particle, has a wave function that is identical in form to the harmonic functions that characterize the asymptotic dynamics of conscious agents; particles are vibrations not of strings but of interacting conscious agents.
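    For anyone who wants the comparison spelled out: the free-particle wave function being alluded to is the standard plane wave of textbook quantum mechanics (the correspondence with conscious-agent dynamics is Hoffman & Prakash's claim, not something shown here):

    ```latex
    % Free particle in one dimension (standard QM):
    \[
      \psi(x,t) = A\, e^{\,i(kx - \omega t)}, \qquad
      p = \hbar k, \qquad E = \hbar\omega = \frac{\hbar^2 k^2}{2m}
    \]
    % i.e. a harmonic function of x and t - the functional form Hoffman & Prakash
    % say also characterizes the asymptotic dynamics of interacting conscious agents.
    ```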
     
    Last edited: May 20, 2017 at 6:47 PM
  19. Soupie

    Soupie Paranormal Adept

    But only if one is willing to grant consciousness (feeling or "what it's like") as fundamental. Which most physicalists are not willing to do, wanting instead for consciousness to be something that emerges from neural processes.

    Once one considers that feeling may be one-and-the-same as being, then one can flip back and forth.

    However if one insists that feeling emerges from being, then one is faced with the HP.
     
  20. smcder

    smcder Paranormal Adept

    Edit: I would replace "conscious agents" with consciousness, being, or pure experience.

    I'm not sure it makes sense to say that "consciousness, being or pure experience" is all that's required to explain behavior. Or even to talk about pure experience - is there such a thing as "pure behavior"? Consciousness, being and experience are defined in terms of other concepts - I think you need something like Conscious Agents, and as soon as you do, you have an abstraction that is going to get modeled and will then look a lot like conscious agents or neurons (mathematically). There's just a basic structure that has to be there to account for reality - I think that's what Hoffman's theory may show us - except that's where physics already is: trying to offer the simplest abstract model of reality.
     