Substrate-independent minds


Hm, I'm not convinced transhumans would be bored, especially if there does turn out to be (intelligent) life elsewhere in the universe. But it is an interesting question we can essentially capture in this way: Is God bored?


I once was able to ask Ray Kurzweil this very question directly via email. I found a printed copy of the email, and I will scan and upload it here in the future.

It might essentially come down to a resource grab; the transhuman with the most resources -- whose mind was embodied in the majority of the material of the universe -- would be able to overpower all other transhumans by brute force. So once one transhuman had control of over 51% of the material universe, they could simply overpower the remaining 49%.

However, like real warfare, it's not just about power and resources but about strategy and intelligence as well. Some transhumans may be less powerful than others, but also smarter. Yes, this is essentially speculation about a war between gods. There might be an intelligence arms race.

There's also the potential that the universe is infinite and that transhumans or AI could spread infinitely. They may even overlap. For example, it's conceivable that this has already occurred in our universe, and one entity has expanded its mind to the point where it is embodied by the entire universe. However, this entity may allow other entities to form within it - for example, us.


Ants are surely adaptive, no disagreements there. They may even have a level of "active" neural adaptability too. But would we really argue that ants are more adaptable than humans?


I would argue that. You could call it passive adaptation or active adaptation, or dumb adaptation and teleological adaptation. Either way, it's still natural.

Of course, this all assumes that life and mind are not supernatural. We can't say they aren't for sure just yet.

"I'm not convinced transhumans would be bored, especially if there does turn out to be (intelligent) life elsewhere in the universe."

Can you say more about this? (Boredom ~ alien life)

Did you find the email from Kurzweil?
 
There will be early adopters, late adopters, and luddites.

Cyborgs and prosthetics will indeed be the way to go. People will use them for all the same reasons we use them today:

Plastic surgery, glasses, cognitive enhancing medications, video games for pleasure, and social media to stay informed of the social doings of others.

Transhumanist technologies will provide the same functions but be more integrated with our bodies. They will sell themselves to a large portion of the human population.


Have you seen what the Major from Ghost in the Shell looks like? Sold.


I think communication and empathy are considered to be feminine qualities — rightly or wrongly — and I think Western culture, at least, can be said to have become more feminine in this sense.

To a much smaller degree, I think we are seeing the hive mind at work right now with social media and "equality." People want the right to "feel comfortable" even at the expense of the liberty of others, and they use social media to coordinate and "present" their message like never before. The hive mind could be quite oppressive to some and quite comforting to others.

I think the Borg do live among us... I think almost all of us are more hive-mindish than we might believe.

If the transition to transhumanism is gradual — but who knows — I think the hive mind will emerge with built-in checks and balances.

I personally have no interest in being directly connected to the mind of any other person. Perhaps I'll need to be assimilated? In order to upgrade to iCyberBrain 5.1, mental independence won't be an option. Scary.


As our merging with our technology accelerates, I think there will be many unpredictable results. There will be many failures. There may be one failure too many; a failure or combination of failures that results in the extinction of humans or worse: the destruction of Life or of the earth itself.

Maybe some instance of new tech will result in a transhuman who is less than human or lacking consciousness because of some missing ingredient — empathy, emotion, or some unknown quantum connection. We might recognize the failure and avoid it moving forward — and learn a bit about what makes us human in the process — or the resulting entity will destroy us all.

Of course, these are potential futures regardless of transhumanism.


You seem to believe that a move to transhumanism and/or a hivemind would result in a decrease in adaptability. Is that accurate?

I'm not sure that's the case. I'm probably missing something. Yes, immortal transhumans would no longer need to replicate and thus would no longer have the benefit of DNA recombination or DNA mutation — though they could do these things if they chose, mind you — but they would have other, perhaps more robust, ways to adapt.

Is it conceivable that many or all transhumans might succumb to a powerful, non-organic virus? Of course. However, the fact that we here in this discussion are aware of this leads me to believe they would be aware of it as well. Yes, transhumans would be vulnerable to viruses and hackers, etc. In this regard, diversity will always have a place.

On the other hand, you've pointed out yourself how adaptable hiveminds such as ants have been here on Earth.


I looked back over this thread but haven't been able to determine what VS means. Could someone explain?

There will be early adopters, late adopters, and luddites.

I was asking more about how Transhumanism is and will be sold. Researchers/developers have to sell someone on funding as well as selling to consumers. I understand what you mean by "sell themselves," but Apple has a healthy advertising budget ... and part of getting people to buy stuff is to change the way they look at themselves.

We can look to the past, but we can also ask, "What's the same and what's different about Transhumanism?" In this case we are changing who and what people are. Biophobia could be the cornerstone of an advertising campaign. We have a well-developed market for perfect bodies, for youth, and for perfect hygiene - the market that buys all the personal care and anti-aging and nutritional supplements includes almost everyone, as does the market for shiny new cars ... let's just put them together. That's my initial thought if someone came to me to sell the idea.

On the other hand - look at the hygiene products for cars - rows and rows of wax and cleaner and spray, etc ... I suspect there will be a secondary market for Transhuman cyborg parts ... I really don't think it will work to give you a perfect self-repairing body for a one-time fee and then you never go in for maintenance ... not initially ... what the "market" will become over time, if that even makes sense, is another matter ... but initially, if this starts in the near future, it seems to me it will be thought of as a product with a life cycle, in need of maintenance, etc ... at the very least, software upgrades will need to be applied.

That's an interesting question ... what becomes of the market?

Selling the hive mind might be a moot point ... ever increasing connectivity may take care of it ... and once enough of your friends upload and say "the water's great ... come on in!" you probably will. On the other hand late adopters and luddites might stay back long enough to see some real negatives and organize against it.

Have you seen what the Major from Ghost in the Shell looks like? Sold.

I watched this series with my kid several years ago ... fortunately I look more like Batou.

I think communication and empathy are considered to be feminine qualities — rightly or wrongly — and I think Western culture, at least, can be said to have become more feminine in this sense.

I need to find the Lewis quote so you can see it in context.

As our merging with our technology accelerates, I think there will be many unpredictable results. There will be many failures. There may be one failure too many; a failure or combination of failures that results in the extinction of humans or worse: the destruction of Life or of the earth itself.

Maybe some instance of new tech will result in a transhuman who is less than human or lacking consciousness because of some missing ingredient — empathy, emotion, or some unknown quantum connection. We might recognize the failure and avoid it moving forward — and learn a bit about what makes us human in the process — or the resulting entity will destroy us all.

Of course, these are potential futures regardless of transhumanism.

So do you see Transhumanism as inevitable? It seems to me there is a lot of certainty on this thread, and in the West generally, that we will live in an increasingly technologically sophisticated society, that progress in this sense is at least linear ... or that we will undergo an apocalypse - this sense is rooted in recent history (the last 300 years or so, coinciding with fossil fuels). But that's not been the case historically - civilizations have distinct life cycles (Oswald Spengler) followed by collapse over a period of time, with a return to baseline (agrarian) society, and then the cycle starts over. What makes or could make this time different, do you think?

You seem to believe that a move to transhumanism and/or a hivemind would result in a decrease in adaptability. Is that accurate?

Could a hivemind and Trans Sapiens co-exist? Is there room for both strategies and some kind of complex competition/cooperation relationship that increases adaptability for both?
 
I didn't know this is a recent finding. I thought it was almost a given. I seem to have heard some time ago that one of those jumping insects (I think it was one of those lime-green, wedge-shaped ones - planthoppers?) built up some kind of torque in their legs using a similar methodology.
 
Constance said:
What would a totalized hive mind operating on an artificial computer substrate be able to 'adapt to'? Apparently not to changing conditions in the physical, natural world since, as Tononi and Koch have also now recognized, a computer intelligence would be capable of almost no experience of the world.
I was able to finish the article "Consciousness: Here, There but Not Everywhere." While Tononi and Koch do unequivocally state that, according to their interpretation of IIT, digital brain simulations would not have experiences, they importantly do not rule out artificial intelligence, i.e., conscious non-biological systems.

http://arxiv.org/ftp/arxiv/papers/1405/1405.7089.pdf

Note 14 at the end of the paper:

"In the extreme case, any digital computer running software can ultimately be mimicked by a Turing Machine with a large state transition matrix, a moving head that writes and erases, and a very, very long memory tape – in that case, causal power would reside in the moving head that follows one out of a few instructions at a time.

On the other hand, there is no reason why a hardware level, neuromorphic model of the human brain system that does not rely on software running on a digital computer, could not approximate, one day, our level of consciousness (Schmuker, Pfeil et al. 2014)."
So essentially, according to its authors, the current iteration of IIT does not contradict the concept of a substrate-independent mind.

(There are many more interesting nuggets offered in the paper. More about those in the C&P thread later.)
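The reduction described in note 14 — any digital computer mimicked by a head following one rule at a time over a long tape — is easy to make concrete. Here is a toy sketch of my own (a binary-increment machine; none of the names or rules come from the paper), just to show what "a moving head with a state transition matrix" amounts to:

```python
# Toy Turing machine: a head reads one symbol at a time from a tape
# and follows a rule table. This binary-increment example is purely
# illustrative, not drawn from the Tononi-Koch paper.

def run_turing_machine(tape, rules, state="start", pos=0, max_steps=1000):
    """rules maps (state, symbol) -> (new_state, write_symbol, move)."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(pos, "_")  # "_" is the blank symbol
        state, write, move = rules[(state, symbol)]
        tape[pos] = write
        pos += {"R": 1, "L": -1}[move]
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Increment a binary number: scan right to the end, then carry leftward.
rules = {
    ("start", "0"): ("start", "0", "R"),
    ("start", "1"): ("start", "1", "R"),
    ("start", "_"): ("carry", "_", "L"),
    ("carry", "1"): ("carry", "0", "L"),
    ("carry", "0"): ("halt", "1", "R"),
    ("carry", "_"): ("halt", "1", "R"),
}

print(run_turing_machine("1011", rules))  # prints "1100"
```

The authors' point is that in such a machine, whatever "causal power" exists resides in that single moving head, no matter how sophisticated the software being mimicked.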
 
I was able to finish the article "Consciousness: Here, There but Not Everywhere." While Tononi and Koch do unequivocally state that according to their interpretation of IIT digital brain simulations would not have experiences ...
Well, not quite "unequivocally":

"Finally, what about a computer whose software simulates in detail not just our behavior, but even the biophysics of synapses, axons, neurons and so on, of the relevant portion of the human brain (Markram 2006)? Could such a digital simulacrum ever be conscious? Functionalism again would say yes, even more forcefully. For in this case all the relevant interactions within our brain, not just our input-output behavior, would have been replicated faithfully. Why should we not grant to this simulacrum the same consciousness we grant to a fellow human?" ( Page 8 - http://arxiv.org/ftp/arxiv/papers/1405/1405.7089.pdf )

Although they go on to say that according to the principles of IIT the above would not be justified, that point is certainly debatable, especially considering their premise and their assumptions about existing computer architecture. They are obviously not hardware engineers, programmers, or even up to speed on the latest developments in computer modeling of the brain. All we need to do is look at the neuromorphic systems being developed by the Human Brain Project to see that the objections of IIT are weak indeed.
 
Well, not quite "unequivocally":

"Finally, what about a computer whose software simulates in detail not just our behavior, but even the biophysics of synapses, axons, neurons and so on, of the relevant portion of the human brain (Markram 2006)? Could such a digital simulacrum ever be conscious? Functionalism again would say yes, even more forcefully. For in this case all the relevant interactions within our brain, not just our input-output behavior, would have been replicated faithfully. Why should we not grant to this simulacrum the same consciousness we grant to a fellow human?" ( Page 8 - http://arxiv.org/ftp/arxiv/papers/1405/1405.7089.pdf )

Although they go on to say that according to the principles of IIT the above would not be justified, that point is certainly debatable, especially considering their premise and their assumptions about existing computer architecture. They are obviously not hardware engineers, programmers, or even up to speed on the latest developments in computer modeling of the brain. All we need to do is look at the neuromorphic systems being developed by the Human Brain Project to see that the objections of IIT are weak indeed.
The authors unequivocally state that their interpretation of IIT rules out the possibility that digital simulations of brains can have experiences; however, this tells us little about the reality of whether digital simulations of brains can have experiences.

What I gather from their model is that subjective experience is synonymous with integrated information. Why integrated information should have -- or be? -- the property of "what it's like," I don't know. They seem to take it as a given, like mass. It's just a fundamental aspect of our universe. Why integrated information could only be produced by certain physical systems, again, I don't know.

I would tend to believe that the ends would justify the means. Thus, if a digital system could produce integrated information, then it too should have/be the property of "what it is like."

The authors also say this in the paper:

[T]he relevant level for human consciousness is likely to be neurons at the scale of 100 millisecond rather than molecules at the nanosecond scale.
I'm not sure how to interpret this (or really any of this). Are they implying that integrated information can only be produced by hardware of a certain scale and operating speed? Perhaps for human-like consciousness, but it wouldn't make sense to put such parameters on integrated information across the board.

I need to re-read the paper and the one referenced in the notes.
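For what it's worth, the flavor of "integrated information" can be shown with a toy calculation. The sketch below is my own drastic simplification, not Tononi's actual Φ algorithm: for a two-node network whose nodes swap states each tick, it compares how much the current state predicts the next state when the system is taken as a whole versus node by node (all function names are mine, and a uniform prior over states is assumed).

```python
from itertools import product
from math import log2

# Toy 2-node binary network: each node copies the *other* node's
# previous state (a simple "swap" dynamic). Illustrative only.
def step(state):
    a, b = state
    return (b, a)

states = list(product([0, 1], repeat=2))  # all four joint states

def mutual_info(pairs):
    """Mutual information (bits) of equiprobable (x, y) samples."""
    n = len(pairs)
    px, py, pxy = {}, {}, {}
    for x, y in pairs:
        px[x] = px.get(x, 0) + 1 / n
        py[y] = py.get(y, 0) + 1 / n
        pxy[(x, y)] = pxy.get((x, y), 0) + 1 / n
    return sum(p * log2(p / (px[x] * py[y])) for (x, y), p in pxy.items())

# How much the whole system's state predicts its next state ...
whole = mutual_info([(s, step(s)) for s in states])

# ... versus each node considered in isolation.
parts = sum(
    mutual_info([(s[i], step(s)[i]) for s in states]) for i in range(2)
)

# Crude "integration" score: information the whole carries beyond its parts.
phi_like = whole - parts
print(whole, parts, phi_like)  # prints 2.0 0.0 2.0
```

Here each node alone appears to carry zero information about its own future (its next state depends entirely on the other node), yet the whole system carries two full bits — the predictive information only exists at the level of the integrated system. Real IIT defines Φ over the minimum-information partition with far more machinery, but this is the basic intuition.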
 
The authors unequivocally state that their interpretation of IIT rules out the possibility that digital simulations of brains can have experiences ...
Perhaps we're differing a little on our interpretation of "unequivocally". The word "unequivocal" allows for no doubt or misinterpretation ( Encarta ). However, in the quote I provided the citation for ( above ), they planted doubt first by suggesting a computational version of the brain should have consciousness, and secondly by giving rather flimsy reasons for why IIT wouldn't justify that assumption. Also, when one says that someone has stated something, especially unequivocally, the proper form is to use a direct quote and a citation. I didn't find any string of text where they actually say, "IIT rules out the possibility that digital simulations of brains can have experiences." But if you can supply an exact quote and page number, then that would be helpful. I just plugged in the word "experience" into the search tool and went through all the results. It's possible I missed it.
 
I didn't know this is a recent finding. I thought it was almost a given. I seem to have heard some time ago that one of those jumping insects (I think it was one of those lime-green, wedge-shaped ones - planthoppers?) built up some kind of torque in their legs using a similar methodology.

Well, it's not unequivocal ... here is a direct quote:

"To the best of my knowledge, it's the first demonstration of functioning gears in any animal," said study researcher Malcolm Burrows, an emeritus professor of neurobiology at the University of Cambridge in the United Kingdom.

Excluding of course the ones in my head.
 
Perhaps we're differing a little on our interpretation of "unequivocally". The word "unequivocal" allows for no doubt or misinterpretation ( Encarta ). However, in the quote I provided the citation for ( above ), they planted doubt first by suggesting a computational version of the brain should have consciousness, and secondly by giving rather flimsy reasons for why IIT wouldn't justify that assumption. Also, when one says that someone has stated something, especially unequivocally, the proper form is to use a direct quote and a citation. I didn't find any string of text where they actually say, "IIT rules out the possibility that digital simulations of brains can have experiences." But if you can supply an exact quote and page number, then that would be helpful. I just plugged in the word "experience" into the search tool and went through all the results. It's possible I missed it.
It's also on page 8, in the same section as your longer quote, but at the end:

Therefore, just like a computer simulation of a giant star will not bend space time around the machine, a simulation of our conscious brain will not have consciousness.

However, as noted above, the authors unequivocally do not rule out the possibility that artificial, physical brains might have human-like consciousness.

Thus artificial digital brains are out, artificial physical brains are in, according to the authors of IIT.
 
So the next question is what is meant by 'artificial neurons'?


Berger has set his sights on building artificial neural cells, initially to act as a cortical prosthesis for individuals who have lost brain cells to neurological diseases such as Alzheimer’s. But eventually, his lab’s efforts may usher in a new era in biologically inspired computing and information processing.

The USC team has built circuits that model 100 neurons; their goal is to construct a 10,000-neuron chip model for implantation in primate hippocampus.

The Max Planck Institute in Germany is another center of research on neural-silicon hybrids. Recently, RA Kaul and P. Fromherz from the Institute and NI Syed from the University of Calgary reported in Physical Review Letters on direct interfacing between a silicon chip and a biological excitatory synapse. The team constructed a silicon-neuron hybrid circuit by culturing a presynaptic nerve cell atop a capacitor and transistor gate and a postsynaptic nerve cell atop a second transistor gate.

Neural Silicon Hybrid Chips, artificial neurons, Ted Berger, University of Southern California (USC), cortical prosthesis, biomimetic microelectronics

Neil Fraser: Hardware: Artificial Neuron
 
I didn't know this is a recent finding. I thought it was almost a given. I seem to have heard some time ago that one of those jumping insects (I think it was one of those lime-green, wedge-shaped ones - planthoppers?) built up some kind of torque in their legs using a similar methodology.

You might also be thinking of the Ripapartamus ... it has gears on either side of its jaws.
 
So the next question is what is meant by 'artificial neurons'?

Exactly. If you read the section carefully, the premise for their conclusion is that human brains and computational systems would have to be identical in physical structure, and it uses a straw-man argument as an example. Therefore neither reason is a coherent argument against the possibility of consciousness (experience) arising from a computational system. Their only valid point is that computational systems are different from human brains. So while it is reasonable to assume that, because of those differences, the experiences possessed by a computational system would probably be different, it's not reasonable to assume that they simply could not exist at all. We simply don't know enough either way to draw such a conclusion.
 
So the next question is what is meant by 'artificial neurons'?

Exactly. If you read the section carefully, the premise for their conclusion is that human brains and computational systems would have to be identical in physical structure, and it uses a straw-man argument as an example. Therefore neither reason is a coherent argument against the possibility of consciousness (experience) arising from a computational system. Their only valid point is that computational systems are different from human brains. So while it is reasonable to assume that, because of those differences, the experiences possessed by a computational system would probably be different, it's not reasonable to assume that they simply could not exist at all. We simply don't know enough either way to draw such a conclusion.

Hey, I just asked the next logical question following Soupie's summary of what Tononi-Koch presented. There was no subtext in my question, so why are you addressing this response to me?
 