
Dr. John Alexander — September 14, 2014

Those traits have evolved in the long evolution of embodied consciousness, demonstrating 'affectivity' even in primitive species (see Panksepp). They are not abstract concepts that can be expressed in algorithms. Asimov already confronted the problem of robots so 'intelligent' that they think for themselves out of a base of limited experience and judgment that is nothing like ours. That is why Bill Joy and other pioneers of AI fled the field as it developed.
Why we have compassion is an interesting topic; to my way of thinking it evolved because of our deep social structure. We exist as a species because we are social animals, and our particular social groupings require intelligence, cooperation, and compassion within our tribes.

Especially when we go to war with other tribes.

Now, if we're smart, we'll hardwire such things into our artificial progeny.

Given that we're already allowing drones to autonomously identify targets and kill them without human intervention, I don't think we're that smart.
 
I've seen this question before, and have always thought that:
#1 It seemed to take 3.8 billion years for life to randomly evolve intelligence, so not only has it not really been necessary for the continuation of life, it hasn't really been advantageous before now.
#2 It seemed to really take off when protohumans developed social structures and sexual selection that started to select for intelligence -- i.e., we helped put the pressure on ourselves to evolve it.



Actually, no.

From the mighty Wikipedia:


In short, meaning is a sociological or semantic layer, and is subject to speculative and subjective interpretation. Information theory (which happens to be my background) is a mathematical subject, and doesn't generally like to cross into such domains.

Actually this is also demonstrably not true, at least in the universal sense. We as humans may operate this way, but I highly doubt plants assign meaning to tracking the seasons; they just respond to stimuli and adjust what they do.

And if experience could be shared by non-local resonance, then in 3.8 billion years my sense is that nature would have found a way to exploit it by now, and we'd all be walking around with a sixth sense, one that would have let me avoid the speeding ticket I just got.

And rabbits would be a hell of a lot more plentiful.

Here is the link to the Mitchell paper I've quoted from, "A Dyadic Theory of Consciousness":

Dyadic Model Part 1

And here is the link to the paper by Jaak Panksepp that I referred to parenthetically, "The emergence of primary anoetic consciousness in episodic memory":

Frontiers | The Emergence of Primary Anoetic Consciousness in Episodic Memory | Frontiers in Behavioral Neuroscience

They are the two best recent papers I've read approaching consciousness from scientific perspectives (specifically quantum information theory and entanglement in the first case, and biological physiology, psychology, and neuroscience in the second). I and another poster, a philosopher of mind, posted these for discussion in the C&P (Consciousness and the Paranormal, Part 2) thread. I hope you will have time to read them; I'd be interested in your responses. Also, are you familiar with the Integrated Information Theory of consciousness devised by Giulio Tononi? We've discussed that theory with limited understanding, and your background in information theory would help us out a lot in that thread. Hope you will join us. Here's the link to the current page, from which you can backtrack if needed:

Consciousness and the Paranormal — Part 2 | Page 37 | The Paracast Community Forums
 
Those traits have evolved in the long evolution of embodied consciousness, demonstrating 'affectivity' even in primitive species (see Panksepp). They are not abstract concepts that can be expressed in algorithms. Asimov already confronted the problem of robots so 'intelligent' that they think for themselves out of a base of limited experience and judgment that is nothing like ours. That is why Bill Joy and other pioneers of AI fled the field as it developed.

I would of course beg to differ, and the key word in your counterpoint is "expressed". For example, while it remains uncertain whether or not an AI (or anyone, for that matter) actually possesses compassion, I can readily imagine a situation where an AI expresses the compassion of its creator by determining whether someone is suffering from any of a number of measurable conditions and then performing tasks expressly designed to treat and reduce that suffering.

Now, assuming that the machine is also intelligent and capable of complex self-programming, who are we to say that at some point it might not come to feel compassion as well? Evolving while expressing compassion, and seeking to understand why it is doing so, would seem to lead inevitably toward a condition whereby an intelligent being becomes personally invested in its work. And isn't that where the feeling of compassion comes from, some sort of personal investment?
 
Some of the optimism expressed here about what we are capable of is simply outstanding. Well, except for you, ufology. You're being a real Debbie Downer on all this imaginative positive thinking.
Maybe you need to re-read my posts. I've provided some rather optimistic counterpoint to the negativity surrounding the evolution of AI technology.
 
Here is the link to the Mitchell paper I've quoted from, "A Dyadic Theory of Consciousness":

Dyadic Model Part 1

And here is the link to the paper by Jaak Panksepp that I referred to parenthetically, "The emergence of primary anoetic consciousness in episodic memory":

Frontiers | The Emergence of Primary Anoetic Consciousness in Episodic Memory | Frontiers in Behavioral Neuroscience
Man, this cat needs to learn how to just say what he means.

The primary gist of the underlying concept, I think, is here:
"In other words, we do not experience the brain mechanisms of learning and memory, only their results."​

Groovy. The brain's the hardware, our mind is the running app(s), and there's an OS down there between the meat and the mind.

There's some lower level processes that allow the mind (likely read-only) access to what the OS is doing, the rest is abstracted away.
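The hardware/OS/app analogy can be sketched literally. This is purely illustrative (all class and field names here are invented, and no claim is made about how brains actually work): the "mind" layer can only read the results the "OS" layer chooses to expose, never the underlying mechanism.

```python
class Hardware:
    """The 'meat': raw state, never directly visible to the app layer."""
    def __init__(self):
        self._synapses = {"fear": 0.9, "reward": 0.2}

class OS:
    """Low-level processes: mediates all access, exposes results only."""
    def __init__(self, hw):
        self._hw = hw
    def report(self, channel):
        # The app sees only the *result* of processing, not the mechanism.
        return "high" if self._hw._synapses[channel] > 0.5 else "low"

class Mind:
    """The running app: read-only access to what the OS exposes."""
    def __init__(self, os):
        self._os = os
    def introspect(self, channel):
        return f"{channel} feels {self._os.report(channel)}"

mind = Mind(OS(Hardware()))
print(mind.introspect("fear"))  # -> fear feels high
```

The point of the sketch is the abstraction boundary: `Mind` has no code path to `Hardware._synapses`, just as (on this view) we experience the results of learning and memory, not their mechanisms.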

This bit is cool, I didn't know that:
"For instance, research on emotion imitation (Hennenlotter et al., 2009) showed that denervation of muscles necessary to the facial expression of emotion leads to changes in central circuitry of emotion."​

But this bit:
"The difference between anoetic self-experience and semantic self-concept can be illustrated by an example of an elderly woman in later stages of Alzheimer’s dementia described by Klein (2013a) who kept her anoetic self-experience intact but lost her episodic self. The women experienced a variety of memory problems typically associated with late stages of dementia (e.g., loss of personal recollections, difficulties in object naming, word finding difficulties, temporal disorientation, etc.). In contrast, interviewing revealed that she maintained a sense of herself as an entity, albeit one beset by confusion."​

Can be far, far more simply explained by stating that there is a difference between memory and a sense of self. They can be impaired or enhanced separately.

Again, my view is our mind is a series of subsystems that have accreted in billions 'n billions of years of random evolutionary advantages, particularly accelerated in the past few dozen million years say.

It's how nature happened to create us.

And likely to be how we cobble together a general-purpose AI... out of specialized artificial cognition subsystems.

"Reflective Consciousness" is the realm of the "big mind" of Buddhism. It's the me that watches me. In fact, as Hofstadter demonstrates, there's no limit to this recursion. There can be a me that watches me watching me. And so on, turtles all the way down, baby.
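That unbounded recursion is easy to render as code; a toy sketch (the function name is invented for illustration):

```python
def watcher(depth):
    """'Me watching me watching me...': recursion with no built-in limit."""
    if depth == 0:
        return "me"
    return f"me watching ({watcher(depth - 1)})"

print(watcher(2))  # -> me watching (me watching (me))
```

Each added level of depth wraps the previous observer in another observer, which is the point: nothing in the structure itself stops the regress.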

They lost me with the "Episodic Memory and Autonoetic Consciousness" stuff. I mean, it seems pretty simple to think about recalling memories from your long-term storage subsystem, and then thinking about how you feel about them -- "episodic" memory. Not sure what "semantic" memory is, except that I understand what both "semantic" and "memory" mean, so I'll suppose it means "the meaning of memory" or perhaps how different memories are linked?
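For what it's worth, "semantic memory" in the literature usually means general knowledge of facts and word meanings, detached from the episode in which it was learned, while episodic memory is bound to a specific time, place, and feeling. The distinction maps naturally onto two data structures; this is only an illustrative sketch, and all the field names are invented:

```python
from datetime import date

# Episodic memory: events bound to a particular time, place, and feeling.
episodic = [
    {"when": date(2014, 7, 4), "where": "the beach",
     "what": "fireworks", "feeling": "warmth"},
]

# Semantic memory: context-free facts and links between concepts.
# You know what fireworks *are* without recalling any particular display.
semantic = {
    "fireworks": {"is_a": "explosive display",
                  "associated": ["celebration", "loud noises"]},
}

# An episodic recollection can draw on semantic knowledge to interpret itself.
memory = episodic[0]
print(semantic[memory["what"]]["is_a"])  # -> explosive display
```

This also suggests why the two can be impaired separately, as in the Alzheimer's case the paper describes: they are different stores with different structures.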

I'll skip the "warmth of remembrance" stuff because I couldn't make heads or tails of the point, except that some memories give you good feelings and some give you not so good feelings, and these engage different parts of the brain.

So I'll skip to the conclusion.

"unknowing consciousness, namely anoetic consciousness, allows various primordial affective feelings, and the related affective information processing of learning and memory mechanisms"​

Groovy. There's hooks in the OS to allow consciousness to recall memory and think/feel about it. I'm with you there.

"In contrast to noetic consciousness, autonoetic consciousness refers to the reflective capacity to mentally represent a continuing existence one that is embedded in specific episodic contexts and associated with remembered experiences with affective quality – from “warmth and intimacy” to “dread and alienation.”"​

So we have a continuity of sense of self built-in that helps us feel things about ourselves and our place in our story. Still with you.

"As a self-generative, self-knowing state, attention in autonoetic consciousness can thus be directed to memories of the past... It is accompanied with a sense of personal agency; that is, the belief that I am the cause of my thoughts and actions, a sense of personal ownership; that is, the feeling that my thoughts and acts belong to me, and the ability to think about time as an unfolding of personal happenings centered about the self"​

K. We live in our own context, and even though I was a completely different person when I was 5, I still think of that person as me.

"This form of mentalizing, surely most highly developed in humans, is heavily mediated by medial temporal lobe (hippocampal) and frontal lobe evolution and microstructure"​

And all this stuff happens in the brain. What I completely and fundamentally fail to understand is how talking about what happens in our cognition tells us much about how it emerges. So maybe I missed the point, and I certainly missed anything about quantum information theory and entanglement -- there's been nothing I've seen stating anything except there might be quantum events that are relevant in understanding neurotransmitter uptake.

Oh, and sorry, I skipped Mitchell, mostly because I can't stand the guy after reading paragraph 2.

"Energy and information are basic attributes in nature. Information is defined as mere patterns of energy."​

No. You can choose to interpret information as entropy, in other words as how systems are ordered. And anything that gets ordered in the universe can be interpreted as 'energy' after Einstein, but that's like describing my Buick as its mass-energy-equivalent entropic state. Or: a whole lotta hand-waving going on.
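"Information as entropy" is concrete: Shannon's measure counts the average number of bits needed to describe an outcome, and it is a purely mathematical quantity. A minimal sketch:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits: H = -sum(p * log2(p)) over outcomes."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin is maximally "disordered": 1 bit per toss.
print(shannon_entropy([0.5, 0.5]))  # -> 1.0
# A loaded coin is more ordered, so it carries less information per toss.
print(shannon_entropy([0.9, 0.1]))
```

Note there is no "meaning" anywhere in the formula, which is exactly the point being made here: the mathematics quantifies order, not significance.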

There may also be a fundamental limit on the amount of information that can be carried by a given amount of energy:
Information converted to energy - physicsworld.com

That is quite different from the base energy state of the universe (the background). In other words, there is quite a bit of energy in the universe that doesn't carry information.

Oh, and black holes may consume information for lunch, yet contain energy. It's a pattern of energy with zero information.
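The physicsworld experiment linked above is related to Landauer's principle, which puts an actual number on the link between information and energy: erasing one bit of information must dissipate at least k_B · T · ln 2 of energy. A quick calculation:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)

def landauer_limit(temp_kelvin):
    """Minimum energy in joules to erase one bit: E = k_B * T * ln 2."""
    return K_B * temp_kelvin * math.log(2)

# At room temperature (~300 K), erasing a bit costs at least ~2.9e-21 J.
# Tiny, but nonzero: information processing has an irreducible energy price.
print(landauer_limit(300.0))
```

This supports the narrower claim being made here: energy and information are related by physical bounds, which is a long way from saying they are "dyadically coupled from the origin of the universe."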

"Therefore energy and information may be viewed as dyadically coupled from the origin of the universe."​
I disagree for reasons above.

"The organization of energy is the basis of all existence;"​
Sure. We'd be dead as a doornail without being structured.

"... and information is the basis of all knowing."​
And this is where we go sideways again. The author is confusing "information" in the mathematical sense, and epistemology. That's a big damn gap to jump.

"Our universe is an evolving universe which has self organized both matter and information, and displays both existence and knowing. “Knowing” is used in a general sense of apprehending and utilizing information."
I don't even know where to begin with this one. The universe's self-organization is temporary and localized at best, and the universe in general is tending toward thermodynamic disorder, not order. That's essentially why time flows in one direction. And just because we happen to display knowing doesn't mean the universe does.

The only way this sentence makes any sense to me is if you ran the big bang backward: the most highly organized state the universe has ever achieved was right at the beginning.
 
I would of course beg to differ, and the key word in your counterpoint is "expressed". For example, while it remains uncertain whether or not an AI (or anyone, for that matter) actually possesses compassion, I can readily imagine a situation where an AI expresses the compassion of its creator by determining whether someone is suffering from any of a number of measurable conditions and then performing tasks expressly designed to treat and reduce that suffering.

Now, assuming that the machine is also intelligent and capable of complex self-programming, who are we to say that at some point it might not come to feel compassion as well? Evolving while expressing compassion, and seeking to understand why it is doing so, would seem to lead inevitably toward a condition whereby an intelligent being becomes personally invested in its work. And isn't that where the feeling of compassion comes from, some sort of personal investment?
My logic is simple.

We feel compassion. Therefore we should be able to simulate an intelligence that could also feel compassion. I'm of the opinion that there's no magic there; we're an emergent process of our meat.

There's no reason why we couldn't reasonably accurately simulate the human mind, right down to the atomic level, and let 'er rip. Given enough processing power, of course.

There's no reason to think that it wouldn't become intelligent if you let it run long enough, and gave it stimuli. It's a cool experiment.

Blue Brain Project - Wikipedia, the free encyclopedia
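For a sense of what "simulate and let 'er rip" looks like at the very smallest scale, here is a minimal leaky integrate-and-fire neuron, the kind of simplified unit that neural simulation work builds on. (Blue Brain itself uses vastly more detailed compartmental models; this is only a toy sketch of the idea, with illustrative parameter values.)

```python
def simulate_lif(input_current, steps=100, dt=1.0,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Integrate membrane voltage; record a spike when it crosses threshold."""
    v = v_rest
    spikes = []
    for t in range(steps):
        # Leaky integration: voltage decays toward rest, driven by input.
        dv = (-(v - v_rest) + input_current) * (dt / tau)
        v += dv
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset  # fire and reset
    return spikes

# A steady suprathreshold input fires repeatedly; a weak one never fires.
print(len(simulate_lif(1.5)))
print(len(simulate_lif(0.5)))  # -> 0
```

Scale that unit up by tens of billions, wire the units together, feed the network stimuli, and you have the outline of the experiment being described, with the hard part being the wiring and the processing power.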
 
I prefer the term synthetic intelligence (SI) over AI; it's a subtle but, in my opinion, significant distinction.

There has been some speculation (I've already posted the quotes) that SI is the logical evolutionary destination of intelligence, and that biological intellect is but a fleeting step along the path.
Personally, I suspect this is true; it's a logical expression of the survival directive we see in all facets of life.
If so, it's more likely than not that ET(s) are transbiological in nature.

I also think any SI will carry the "flavour" of its biological progenitors, either by design or by absorption, just as any child absorbs the language and cultural values of its progenitors.

In this scenario "they" would have no more interest in "us" than you would in the factory workers who built your new car.
 
At least you have a smaller chance of dying if it's a taser rather than a gun. Consider how that might have reduced some of the well-publicized problems. Maybe "stand your ground" laws should be modified to eliminate guns and use tasers out of the home.

Don't know about the relationship question for someone holding stock in a company. If that's the case, millions of people might have relationships with Apple, Google and other large companies.
 
Don't know about the relationship question for someone holding stock in a company. If that's the case, millions of people might have relationships with Apple, Google and other large companies.

That's a great comparison, Gene, although I will have to disagree with your suggestion that tasers would be a more appropriate alternative to guns in the "Stand Your Ground" context. Both Pennsylvania and Florida have "Stand Your Ground" laws which are often used for self-defense purposes in a much less publicized fashion than the Trayvon Martin affair. Your suggestion is a novel one, especially given the non-lethal harm caused by tasers. However, most "Stand Your Ground" situations require the use of guns, given the gun-related threat being imposed upon the defender.
 
The real question is whether Stand Your Ground laws are protecting people or causing more people to be injured or killed. It's a big question.

This may be a question that goes unanswered for a long time, much like the existence of UFOs. It's unfortunate to think that cases such as Trayvon Martin's, being more recent and more mainstream than any recent UFO sighting, make an answer to the "Stand Your Ground" question more ascertainable than a definitive answer to the UFO question. Then again, an answer to the UFO question would have a decidedly greater impact than one to the "Stand Your Ground" question.

Everyone will have a different opinion on the "Stand Your Ground" question, usually based on their social, cultural, and political backgrounds, and I think that diversity is a great catalyst for debate and for finding the right answer. My hope is that the forums, of which I am a new member, will provide that debate and help me develop my own answer to the UFO question.
 
Oh yeah, you'll get that @AlienEsq, and a whole lot more, especially as you sink into those traditionally polarizing discussions like gun control, climate change, UFOs, and whether or not Bigfoot is a shapeshifter. Don't even get started on who made crop circles…

As for the Taser piece, he sounded like a bit of an apologist for the Taser as a prime example of non-lethal force, but tell that to all the families who lost loved ones to electric jolts fired by those who serve and protect. That's not a slag on the police, BTW; it's a righteous job, especially when done righteously, and few have the courage to do it. But as a tool it does not seem to be used properly just yet, though it's certainly better than shooting bullets. Cameras on officers recording everything will be a very interesting and welcome addition to how police and citizens interact with each other. That's a paradigm shifter right there.
 
. . .meaning is a sociological or semantic layer, and is subject to speculative and subjective interpretation. Information theory (happens to be my background) is a mathematical subject, and doesn't generally like to cross into such domains.

Information theory is currently being applied to many other domains and disciplines in which it might or might not be appropriate or adequate. The interdisciplinary field of consciousness studies is one such field, which is why we have been discussing information theory on and off for months in the C&P thread. Tononi's IIT is a recent example that has received some attention in consciousness studies but not much response yet, which is why I thought you might be interested in the thread and could help us interpret Tononi's system. But from this post and your subsequent one (re the Panksepp paper), it now seems to me that you might not be interested in the questions and issues discussed in that thread.

You also replied to this quote from Edgar Mitchell:

At very simple levels of living matter, behaviors such as the search for food, mating opportunities, predator avoidance, etc. require that information from the environment be perceived and given meaning. And since information does not carry within the signal, but is just a pattern of energy to be interpreted, assigning a meaning is an evolved, learned behavior. Learning is precisely the activity of giving meaning to information and retaining the meaning for future use. Non-local resonance allows experience to be shared.

Your reply:

Actually this is also demonstrably not true, at least in the universal sense. We as humans may operate this way, but I highly doubt plants assign meaning to tracking the seasons; they just respond to stimuli and adjust what they do.

Unfortunately, citing 'stimulus-response' theory (originating in mid-20th C. Behaviorism) doesn't answer the question [beyond conditioning of certain responses in rats and dogs] 'what is the stimulus' [or stimuli] in nature to a given behavior in a natural organism and 'what is the organism's response'? Or the question 'what is the meaning of that response?' in terms of the evolution of species {and of consciousness}. Those are the questions Panksepp and his colleagues in the biological sciences (including neuroscience) attempt to find answers for. We all want to understand nature, and most of us understand that nature, life, consciousness, and mind require more than mathematics to account for them.

You continued:

And if experience could be shared by non-local resonance, then in 3.8BY my sense is that nature would have found a way to exploit this by now, and we'd all be walking around with a sixth sense that allowed me to avoid the speeding ticket I just got.

Half (or maybe more) of the quantum physicists in the world recognize non-locality and also universal entanglement of quantum information operating in nature and mind. We'll have to wait and see how it all shakes out in another hundred years of theory and experiment.
 
Man, this cat needs to learn how to just say what he means.

The primary gist of the underlying concept, I think, is here:
"In other words, we do not experience the brain mechanisms of learning and memory, only their results."​

Groovy. The brain's the hardware, our mind is the running app(s), and there's an OS down there between the meat and the mind.

There's some lower level processes that allow the mind (likely read-only) access to what the OS is doing, the rest is abstracted away.

I know that makes sense to computationalists, but there are many other specialists (psychologists, philosophers, cognitive neuroscientists, biologists, ethologists, and more) for whom it does not.


This bit is cool, I didn't know that:
"For instance, research on emotion imitation (Hennenlotter et al., 2009) showed that denervation of muscles necessary to the facial expression of emotion leads to changes in central circuitry of emotion."​

But this bit:
"The difference between anoetic self-experience and semantic self-concept can be illustrated by an example of an elderly woman in later stages of Alzheimer’s dementia described by Klein (2013a) who kept her anoetic self-experience intact but lost her episodic self. The women experienced a variety of memory problems typically associated with late stages of dementia (e.g., loss of personal recollections, difficulties in object naming, word finding difficulties, temporal disorientation, etc.). In contrast, interviewing revealed that she maintained a sense of herself as an entity, albeit one beset by confusion."​

Can be far, far more simply explained by stating that there is a difference between memory and a sense of self. They can be impaired or enhanced separately.

Yes, there is a distinction to be made between memory and sense of self. Memory has long been considered to be necessary for the construction of a sense of 'self'. The marvel Panksepp points out is that in the loss, the absence, of memory, through stroke or other brain damage, an individual consciousness can still maintain its sense of self. We see this also in cases of amnesia, when an individual wakes up from an accident of some sort with no remembrance at all of his or her name, identity, origin, or past life, yet still functions in the world to which he or she had formerly been accustomed. What does this signify concerning the nature of consciousness and selfhood, personhood, personality?

Again, my view is our mind is a series of subsystems that have accreted in billions 'n billions of years of random evolutionary advantages, particularly accelerated in the past few dozen million years say.

An accretion is not an integrated system. An embodied consciousness is a complex integrated system, integrated within itself and also integrated with its environment. The difference is elaborated in Varela and Thompson's research into self-organizing dissipative systems from the single cell to the human being. This paper is an overview of their approach, referred to as neurophenomenology:

http://brainimaging.waisman.wisc.edu/~lutz/ET&AL&DC.Neuropheno_intro_2004.pdf

It's late and I'm going to stop here, but will continue tomorrow.
 