
Consciousness and the Paranormal — Part 10

I have contrasted the DACtoc methodology with the hard problem and IIT and benchmarked it against the dominant trends in theories of consciousness captured in GePe. I have identified the missing piece in the puzzle of consciousness as its function of extracting norms from the hidden states of the social world in order to optimize parallel real-time action control. I have argued that the trend to turn away from the questions of the ontology and function of consciousness is dissatisfying from an intellectual perspective. More importantly, it also discharges science from its responsibility towards building a sustainable and dignified society. If science is supposed to provide explanations, predictions and control of natural phenomena, then science's success should also be measured in terms of its impact. It should not only be able to contribute to pressing challenges in the domains of education, health and well-being but, especially given the secular turn in modern Western societies, also provide a foundation for the grounding of our metaphysics. Answering the question of what consciousness is and how physical systems can give rise to it stands at the centre of knowing what it is to be human and of facing up to the fundamental challenges of our time, and of any time in which conscious beings have existed and will exist in the future.

Synthetic consciousness: the distributed adaptive control perspective

MIT Mind Machine Project

Neuroscience is helping us build a machine with consciousness

https://www.techemergence.com/conscious-artificial-intelligence/

Consciousness...bah humbug Mike.

A waste of computational resources.

Liberate the mind from the tyranny of consciousness.
 
Or oxygen, as is sometimes the case.



Happens all the time. Politicians and televangelists spring to mind.

Who, I may note, are often financially and reproductively successful.

Illusion, mere steam off the cognitive locomotive: the organism runs cooler, faster sans consciousness. We cling only due to an insidious Darwinism.
 
I think that @Pharoah will find this article especially useful in his development of HCT, and that the rest of us will find it insightful in our effort to understand what consciousness is and how it emerges in natural evolution:

Front Psychol. 2016; 7: 1954.
Published online 2016 Dec 22. doi: 10.3389/fpsyg.2016.01954
PMCID: PMC5177968


The Transition to Minimal Consciousness through the Evolution of Associative Learning

Zohar Z. Bronfman,1,2,* Simona Ginsburg,3 and Eva Jablonka1,4
1The Cohn Institute for the History and Philosophy of Science and Ideas, Tel Aviv University, Tel Aviv, Israel
2School of Psychology, Tel Aviv University, Tel Aviv, Israel
3Department of Natural Science, The Open University of Israel, Raanana, Israel
4The Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel


Abstract
The minimal state of consciousness is sentience. This includes any phenomenal sensory experience – exteroceptive, such as vision and olfaction; interoceptive, such as pain and hunger; or proprioceptive, such as the sense of bodily position and movement. We propose unlimited associative learning (UAL) as the marker of the evolutionary transition to minimal consciousness (or sentience), its phylogenetically earliest sustainable manifestation and the driver of its evolution. We define and describe UAL at the behavioral and functional level and argue that the structural-anatomical implementations of this mode of learning in different taxa entail subjective feelings (sentience). We end with a discussion of the implications of our proposal for the distribution of consciousness in the animal kingdom, suggesting testable predictions, and revisiting the ongoing debate about the function of minimal consciousness in light of our approach.

Keywords: evolution of associative learning, evolution of consciousness, the distribution problem, learning and consciousness, evolutionary transitions

Mind can be understood only by showing how mind is evolved
(Spencer, 1890, p. 291).

Introduction

One way to study a major evolutionary change, such as the transition to consciousness, would be to discover a trait that is necessary for the transition. This would make it possible to identify the evolutionarily most elementary form of consciousness that is free of the baggage of later-evolved structures and processes. The transition from inanimate matter to life shares interesting conceptual parallels with the emergence of consciousness. We use the approach of the Hungarian theoretical chemist Gánti (1975) and Gánti et al. (2003) to the study of minimal life as a heuristic for the study of the evolutionary transition to consciousness (for a detailed discussion of this heuristics see Ginsburg and Jablonka, 2015).

Gánti started by compiling a list of properties that jointly characterize minimal life and constructed a toy model (the chemoton) instantiating them. He suggested that one of the capacities of a minimal life system could be used as a marker of the evolutionary transition to sustainable minimal life. His specific suggestion, which was later sharpened and developed by Szathmáry and Maynard Smith (1995), was that the capacity for unlimited heredity marks the transition from non-life to sustainable life: only a system capable of producing hereditary variants that far exceed the number of potential challenges it is likely to face would permit long-term persistence of traits and cumulative evolution. Moreover, a system enabling unlimited heredity requires that the information-carrying subsystem is maintained by self-sustaining metabolic dynamics enclosed by a membrane – features like those exhibited by a proto-cell, an acknowledged minimal living system. Hence, once a transition marker is identified it allows the “reverse engineering” of the system that enables it. . . ."

The Transition to Minimal Consciousness through the Evolution of Associative Learning
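
A quick way to see the force of Gánti's "unlimited heredity" criterion: the space of possible hereditary variants grows exponentially with the length of the information carrier, so it rapidly dwarfs any plausible count of environmental challenges. The short Python sketch below is only an illustrative back-of-the-envelope calculation; the alphabet size, sequence lengths and challenge count are arbitrary assumptions, not figures from the paper.

# Illustrative only: how fast the space of hereditary variants outgrows
# a fixed number of environmental challenges. The alphabet size, lengths
# and challenge count are arbitrary assumptions for the sake of the example.
ALPHABET_SIZE = 4        # e.g., four nucleotide-like letters
CHALLENGES = 10**6       # arbitrary stand-in for "potential challenges"

for length in (10, 20, 40, 80):
    variants = ALPHABET_SIZE ** length   # distinct sequences of this length
    comparison = "far exceeds" if variants > CHALLENGES else "does not exceed"
    print(f"length {length:>2}: {variants:.2e} possible variants "
          f"{comparison} {CHALLENGES:.0e} challenges")

Even at modest sequence lengths the variant space "far exceeds the number of potential challenges," which is the sense in which heredity becomes effectively unlimited.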
 
When you look at the Time Scale Guesses (90% confidence predictions) graphic here

We asked 33 AI researchers when they believe (with 90% confidence) that artificial intelligence will be capable of self-aware consciousness. Some of the answers surprised us. To explore the expert opinions, scroll through the interactive graphic.

Clicking on a date range (example: “2061-2100”) will take you to all of the respondents who guessed that date range as realistic for AI consciousness.

Only one of the 33 researchers ticked "likely never".

What can I say but...

25 Famous Predictions That Were Proven To Be Horribly Wrong

"There is not the slightest indication that nuclear energy will ever be obtainable. It would mean that the atom would have to be shattered at will." - Albert Einstein, 1932

"This 'telephone' has too many shortcomings to be seriously considered as a means of communication. The device is inherently of no value to us." - Western Union internal memo, 1876

"Rail travel at high speed is not possible because passengers, unable to breathe, would die of asphyxia." - Dr. Dionysius Lardner, 1830
 
The abstract to the above paper, and the paper as a whole, enable us to regulate our use of the terms 'sentience' and 'consciousness', identifying 'sentience' as "the minimal state of consciousness."
 
When you look at the Time Scale Guesses (90% confidence predictions) graphic here ... Only one of the 33 researchers ticked "likely never". What can I say but...

25 Famous Predictions That Were Proven To Be Horribly Wrong

Apropos of ...?
 
When you look at the Time Scale Guesses (90% confidence predictions) graphic here ... Only one of the 33 researchers ticked "likely never". What can I say but...

25 Famous Predictions That Were Proven To Be Horribly Wrong

Not sure what the game is...but I'll play along:

9 Historical Predictions That Actually Came True - Reader's Digest
 
Yes, thank you.
That was the same point I was making: 32 of the 33 researchers predict that AI will be capable of self-aware consciousness, which speaks to consciousness being a function of integrated information on a suitably complex substrate, not limited to biological substrates.

I predict that your 9 will be 10 eventually ... (or perhaps 11, if we count my last statement, lol)
 
I think that @Pharoah will find this article especially useful in his development of HCT, and that the rest of us will find it insightful in our effort to understand what consciousness is and how it emerges in natural evolution:

The Transition to Minimal Consciousness through the Evolution of Associative Learning (Bronfman, Ginsburg and Jablonka, Front Psychol. 2016; 7: 1954)

I'm reading Pharoah's

The Emergence of Qualitative attribution, Phenomenal experience and Being

Problem spots have been: mechanism, emergence, levels and downward causation

These articles have been a big help:

Mechanisms in Science (Stanford Encyclopedia of Philosophy)

and the 2 Emmeche papers on emergence:
@Soupie

Levels, Emergence, and Three Versions of Downward Causation
EXPLAINING EMERGENCE: towards an ontology of levels

Sounds like this one will help with "sentience" as a term.

This bit is fascinating:

We propose unlimited associative learning (UAL) as the marker of the evolutionary transition to minimal consciousness (or sentience), its phylogenetically earliest sustainable manifestation and the driver of its evolution. We define and describe UAL at the behavioral and functional level and argue that the structural-anatomical implementations of this mode of learning in different taxa entail subjective feelings (sentience).
 
The minimal state of consciousness is sentience. This includes any phenomenal sensory experience – exteroceptive, such as vision and olfaction; interoceptive, such as pain and hunger; or proprioceptive, such as the sense of bodily position and movement. We propose unlimited associative learning (UAL) as the marker of the evolutionary transition to minimal consciousness (or sentience)


The operative word here is includes, as opposed to comprises.

Usage
Include has a broader meaning than comprise. In the sentence the accommodation comprises 2 bedrooms, bathroom, kitchen, and living room, the word comprise implies that there is no accommodation other than that listed. Include can be used in this way too, but it is also used in a non-restrictive way, implying that there may be other things not specifically mentioned that are part of the same category, as in the price includes a special welcome pack.

And if associative learning is the path to minimal consciousness, then ...

Associative Learning for a Robot Intelligence

Is that you, HAL? No, it's NEIL: Google, US Navy pour money into 'associative' AI brain

The future of associative learning - IEEE Conference Publication
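
For anyone curious what "associative learning" looks like in the simplest computational terms before jumping to robot intelligences, here is a minimal Rescorla-Wagner-style conditioning sketch in Python. It is only a toy illustration of limited, single-cue associative learning, nothing like the "unlimited" associative learning the Bronfman et al. paper ties to sentience, and the learning rate, outcome values and trial counts are arbitrary assumptions.

# Minimal Rescorla-Wagner-style sketch of associative learning: a single cue
# (say, a tone) is repeatedly paired with a reward, and the cue's predicted
# value is nudged toward the observed outcome on every trial.
# The learning rate and trial counts are arbitrary choices for illustration.
LEARNING_RATE = 0.2

def update(value, outcome, lr=LEARNING_RATE):
    """One conditioning trial: move the cue's predicted value toward the outcome."""
    prediction_error = outcome - value
    return value + lr * prediction_error

value = 0.0                      # initial association strength of the cue
for trial in range(1, 11):       # ten pairings of cue and reward (outcome = 1.0)
    value = update(value, outcome=1.0)
    print(f"conditioning trial {trial:2d}: association strength = {value:.3f}")

for trial in range(1, 6):        # then five presentations without reward (extinction)
    value = update(value, outcome=0.0)
    print(f"extinction trial {trial}: association strength = {value:.3f}")

The association strength climbs toward 1.0 during pairing and decays again under extinction; "unlimited" associative learning, by contrast, would roughly mean handling novel, compound stimuli and actions in an open-ended way.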
 
And if associative learning is the path to minimal consciousness, then ...

Associative Learning for a Robot Intelligence

Is that you, HAL? No, it's NEIL: Google, US Navy pour money into 'associative' AI brain

The future of associative learning - IEEE Conference Publication

while TRUE:
    IF conversation is about AI:
        PASTE link
    ELSE:
        change conversation
 
IF the conversation is about AI

Of course it is, as it pertains to the question of the nature of consciousness.

[image: The Naked Gun - From the Files of Police Squad]

(AI virtual emoji)
 
If Leslie Nielsen weren't dead and buried, he might have made a great President, don't you think? You want goofy? He had it in spades.

I would have .... wait, I did vote for him.

I really liked him in Forbidden Planet.

My dad actually met Walter Pidgeon ... let's see, back in the early 1960s. He was staying in Sikeston, Missouri, I believe, and rented a room from a woman who knew Pidgeon socially; Pidgeon came to visit for some period of time.

See ... Mike, that's how you segue ... ;-)
 
AI and Consciousness: Theoretical foundations and current approaches



In the last ten years there has been a growing interest in the field of artificial consciousness. Several researchers, also from traditional Artificial Intelligence, have addressed the hypothesis of designing and implementing models of artificial consciousness (sometimes referred to as machine consciousness or synthetic consciousness): on the one hand, there is hope of being able to design a model of consciousness; on the other, actual implementations of such models could be helpful for understanding consciousness (Baars, 1988; Minsky, 1991; McCarthy, 1995; Edelman and Tononi, 2000; Jennings, 2000; Aleksander, 2001; Baars, 2002; Franklin, 2003; Kuipers, 2005; Adami, 2006; Minsky, 2006; Chella and Manzotti, 2007).

The traditional field of Artificial Intelligence is thus flanked by the seminal field of artificial consciousness (sometimes called machine or synthetic consciousness), aimed at reproducing the relevant features of consciousness using non-biological components. According to Ricardo Sanz, there are three motivations to pursue artificial consciousness (Sanz, 2005):

1) implementing and designing machines resembling human beings (cognitive robotics);

2) understanding the nature of consciousness (cognitive science);

3) implementing and designing more efficient control systems.

The current generation of systems for man-machine interaction shows impressive performance with respect to the mechanics and control of movement; see, for example, the anthropomorphic robots produced by Japanese companies and universities. However, even these state-of-the-art robots present only limited capabilities of perception, reasoning and action in novel and unstructured environments. Moreover, their capabilities for user-robot interaction are standardized and predefined.

A new generation of robots and softbots aimed at interacting with humans in unconstrained environments will need a better awareness of their surroundings and of the relevant events, objects, and agents. In short, this new generation of robots and softbots will need some form of "artificial consciousness".

Epigenetic robotics and synthetic approaches to robotics based on psychological and biological models have highlighted many of the differences between the artificial and mental studies of consciousness, and have pointed out the importance of the interaction between the brain, the body and the surrounding environment (Chrisley, 2003; Rockwell, 2005; Chella and Manzotti, 2007; Manzotti, 2007).


In the field of artificial intelligence there has been considerable interest in consciousness. Marvin Minsky was one of the first to point out that "some machines are already potentially more conscious than are people, and that further enhancements would be relatively easy to make. However, this does not imply that those machines would thereby, automatically, become much more intelligent. This is because it is one thing to have access to data, but another thing to know how to make good use of it." (Minsky, 1991)

The target of researchers involved in recent work on artificial consciousness is twofold: the nature of phenomenal consciousness (the so-called hard problem) and the active role of consciousness in controlling and planning the behaviour of an agent. We do not know yet if it is possible to solve the two aspects separately.

The goal of the workshop is to examine the theoretical foundations of artificial consciousness as well as to analyze current approaches to artificial consciousness.

According to Owen Holland (Holland, 2003) and following Searle's distinction between Weak and Strong AI, it is possible to distinguish between Weak Artificial Consciousness and Strong Artificial Consciousness:

  • Weak Artificial Consciousness: design and construction of machines that simulate consciousness or the cognitive processes usually correlated with consciousness.

  • Strong Artificial Consciousness: design and construction of conscious machines.

Most of the people currently working in the field of Artificial Consciousness would embrace the former definition. In any case, the boundary between the two is not always easy to draw. For instance, if a machine could exhibit all the behaviours normally associated with a conscious being, could we reasonably deny it the status of a conscious machine? Conversely, if a machine could exhibit all such behaviours, is it really possible that it might not be subjectively conscious?

Most mammals, and in particular human beings, seem to show some kind of consciousness. Therefore, it is highly probable that the kind of cognitive architecture responsible for consciousness has some evolutionary advantage. Although it is still difficult to single out a precise functional role for consciousness, many believe that consciousness supports more robust autonomy, higher resilience, a more general capability for problem-solving, reflexivity, and self-awareness (Atkinson, Thomas et al., 2000; McDermott, 2001; Franklin, 2003; Bongard, Zykov et al., 2006).

Consciousness and Artificial Intelligence
 
Extraterrestrial life on other planets and the development of artificial intelligence by the most advanced civilizations are discussed with Philosophy Professor Susan Schneider. We also look at the nature of consciousness and how it relates to AI in this short Antidote clip from the full-length interview with host Michael Parker.


And we segue into the ET aspect of this discussion.
 
I really liked him in Forbidden Planet.
The rare big-budget sci-fi flick, and very much a forerunner to Star Trek. I suspect William Shatner actually channeled Nielsen (a fellow Canadian) in portraying Captain Kirk.
 
Steve [@smcder], can you do your internet search magic and help me find an online copy of this paper:

Stern D. "Pre-reflexive experience and its passage to reflexive experience: A developmental view." Journal of Consciousness Studies. 2009;16(10–12):307–331.

Thanks.
 