Substrate-independent minds

I find the item interesting for a number of reasons, but reading between the lines, I get the feeling that many at the top of the social/economic food chain fear all of these changes.
The main reason is a possible loss of social control for them.

I could expand on this but you get the idea.
 
I have a question for mike: on the first page of this thread, which I've just started reading, you quote Paul Davies as having said "I think it very likely – in fact inevitable – that biological intelligence is only a transitory phenomenon, a fleeting phase in the evolution of intelligence in the universe." Do you have a citation for that quote? Thanks.
 
I also have a question and a comment for Kim323 based on this paragraph from one of his posts on page 1 of this thread:

"And then there's the whole "problem," and that's precisely the word researchers into the nature of consciousness call the mind/body, just-what-is-consciousness study: is consciousness merely the product of neurochemical processes, the firing of neurons, an "epiphenomenon" of physical processes in the brain only, or is there a component also of something apart from those physical processes, too? Some of this was covered in the consciousness thread a week or so ago, and I recommended some good books to read. It's a fascinating subject."

My question is: where can I find "the consciousness thread" you referred to?

My comment is to say that I agree with your questions in the quoted paragraph. Re: your last question there, "is there a component also of something apart from those physical processes too?," I'd say (and sense that you might agree) that there's quite a lot that exists in consciousness apart from neurochemical processes: primarily the whole embodiment of consciousness in and as biological existence (the innumerable ways in which we interact with the world through the body, through consciousness in and of the body), and also the subliminal levels of our consciousness that influence and inform what we call 'waking' consciousness, including the collective unconscious and the personal subconscious.
 
 
I have a question for mike: on the first page of this thread, which I've just started reading, you quote Paul Davies as having said "I think it very likely – in fact inevitable – that biological intelligence is only a transitory phenomenon, a fleeting phase in the evolution of intelligence in the universe." Do you have a citation for that quote? Thanks.

Paul Davies, a British-born theoretical physicist, cosmologist, astrobiologist and Director of the Beyond Center for Fundamental Concepts in Science and Co-Director of the Cosmology Initiative at Arizona State University, says in his new book The Eerie Silence that any aliens exploring the universe will be AI-empowered machines. Not only are machines better able to endure extended exposure to the conditions of space, but they have the potential to develop intelligence far beyond the capacity of the human brain.
"I think it very likely – in fact inevitable – that biological intelligence is only a transitory phenomenon, a fleeting phase in the evolution of the universe," Davies writes. "If we ever encounter extraterrestrial intelligence, I believe it is overwhelmingly likely to be post-biological in nature."
"Biological Intelligence is a Fleeting Phase in the Evolution of the Universe" (Weekend Feature)
 
I'm on the verge of downloading a Nook book by physicist Roger Penrose: "The Emperor's New Mind". I take it Sir Roger's premise is that strong AI, the notion that conventional computer hardware can be made self-aware with adequate processing power and the right algorithms, is basically flawed. Since Penrose's mind is one of the greatest on the scene today, I am curious to know why he thinks this. Perhaps I will even understand why he thinks this after reading the book.
 
The brains of two rats on different continents have been made to act in tandem. When the first, in Brazil, uses its whiskers to choose between two stimuli, an implant records its brain activity and signals to a similar device in the brain of a rat in the United States. The US rat then usually makes the same choice on the same task.

Miguel Nicolelis, a neuroscientist at Duke University in Durham, North Carolina, says that this system allows one rat to use the senses of another, incorporating information from its far-away partner into its own representation of the world. “It’s not telepathy. It’s not the Borg,” he says. “But we created a new central nervous system made of two brains.”


Mind-Meld Unites two Rats | Mind-Computer
 
And now for the next step

The interface was achieved at 94.0±3.0% accuracy, with a time delay of 1.59±1.07 sec from the thought-initiation to the creation of the tail movement. Our results demonstrate the feasibility of a computer-mediated BBI that links central neural functions between two biological entities.

PLOS ONE: Non-Invasive Brain-to-Brain Interface (BBI): Establishing Functional Links between Two Brains

A human brain linked to a rat's brain and taking control of its motor functions...

Interspecies telepathy: human thoughts make rat move - tech - 03 April 2013 - New Scientist

Interspecies telepathy, proof of concept
 
We are rapidly growing more intimate with our technology. Computers started out as large remote machines in air-conditioned rooms tended by white-coated technicians. Subsequently they moved onto our desks, then under our arms, and now in our pockets. Soon, we’ll routinely put them inside our bodies and brains. Ultimately we will become more nonbiological than biological.
The compelling benefits in overcoming profound diseases and disabilities will keep these technologies on a rapid course, but medical applications represent only the early adoption phase. As the technologies become established, there will be no barriers to using them for the expansion of human potential. In my view, expanding our potential is precisely the primary distinction of our species.
Moreover, all of the underlying technologies are accelerating. The power of computation has grown at a double exponential rate for all of the past century, and will continue to do so well into this century through the power of three-dimensional computing. Communication bandwidths and the pace of brain reverse-engineering are also quickening. Meanwhile, according to my models, the size of technology is shrinking at a rate of 5.6 per linear dimension per decade, which will make nanotechnology ubiquitous during the 2020s.
By the end of this decade, computing will disappear as a separate technology that we need to carry with us. We’ll routinely have high-resolution images encompassing the entire visual field written directly to our retinas from our eyeglasses and contact lenses (the Department of Defense is already using technology along these lines from Microvision, a company based in Bothell, Washington). We’ll have very-high-speed wireless connection to the Internet at all times. The electronics for all of this will be embedded in our clothing. Circa 2010, these very personal computers will enable us to meet with each other in full-immersion, visual-auditory, virtual-reality environments as well as augment our vision with location- and time-specific information at all times.

Human Body Version 2.0
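To make the quoted rates concrete, here is a small back-of-envelope sketch (my own illustration, not from Kurzweil's article). The 5.6x-per-linear-dimension-per-decade figure is taken at face value from the excerpt above; the 100 nm starting feature size is an arbitrary assumption chosen purely to make the numbers tangible.

```python
# Back-of-envelope projection of the shrinkage rate quoted above.
# Assumption: the quoted factor of 5.6 per linear dimension per decade holds;
# the 100 nm starting size is hypothetical, chosen only for illustration.

LINEAR_SHRINK_PER_DECADE = 5.6  # factor quoted in the Kurzweil excerpt


def feature_size_after(decades: float, start_nm: float = 100.0) -> float:
    """Project a linear feature size forward, assuming the quoted rate holds."""
    return start_nm / (LINEAR_SHRINK_PER_DECADE ** decades)


if __name__ == "__main__":
    for decades in (1, 2, 3):
        linear = LINEAR_SHRINK_PER_DECADE ** decades
        volume = linear ** 3  # volume scales with the cube of the linear factor
        print(f"{decades} decade(s): {linear:,.0f}x smaller per dimension, "
              f"{volume:,.0f}x smaller by volume, "
              f"~{feature_size_after(decades):.3g} nm from a 100 nm start")
```

Run as-is, this prints roughly 5.6x, 31x, and 176x linear shrinkage over one, two, and three decades, which illustrates why, on the excerpt's own numbers, nanometre-scale features would arrive within a decade or two of a (hypothetical) 100 nm starting point.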

The most important application of circa-2030 nanobots will be to literally expand our minds. We’re limited today to a mere hundred trillion interneuronal connections; we will be able to augment these by adding virtual connections via nanobot communication. This will provide us with the opportunity to vastly expand our pattern recognition abilities, memories, and overall thinking capacity as well as directly interface with powerful forms of nonbiological intelligence.

It’s important to note that once nonbiological intelligence gets a foothold in our brains (a threshold we’ve already passed), it will grow exponentially, as is the accelerating nature of information-based technologies. A one-inch cube of nanotube circuitry (which is already working at smaller scales in laboratories) will be at least a million times more powerful than the human brain. By 2040, the nonbiological portion of our intelligence will be far more powerful than the biological portion. It will, however, still be part of the human-machine civilization, having been derived from human intelligence, i.e., created by humans (or machines created by humans) and based at least in part on the reverse-engineering of the human nervous system.

Stephen Hawking recently commented in the German magazine Focus that computer intelligence will surpass that of humans within a few decades. He advocated that we “develop as quickly as possible technologies that make possible a direct connection between brain and computer, so that artificial brains contribute to human intelligence rather than opposing it.” Hawking can take comfort that the development program he is recommending is well under way.
 
To The Point: Another variation on the "is Google dumbing down our memory" talking point

Getting to Know Our Mind-Reading Smartphone Apps - To the Point on KCRW

I've noticed this myself, and it's a double-edged sword.

I have friends you could call movie buffs: they see an actor and can rattle off all the parts they've ever played.

Me, I use IMDB. If I see a face and have one of those "where do I know him/her from?" moments, I just look up the show I'm watching, find the actor and track back.

But while the amount of info we hold in our heads drops, the amount we have access to has grown in a massive way.

I reconcile this with a functionality rationale.

IMDB, as an example, will give me the answer I seek close enough to 100 percent of the time; wracking my memory for the result often leads to simply saying I don't know. I've seen the face, but I'll be buggered if I can recall where.

So yes, at a micro level we are dumbing down, but at a macro level we actually have access to more information, with a higher degree of accuracy in the data.
 
I've noticed this myself, and it's a double-edged sword ...
If I understand you correctly, I was just having a similar conversation the other day after being out at a get-together with some relatives. My uncle, who is very down to earth, mentioned that he has no interest in things seemingly supernatural or mysterious and would prefer to acquire knowledge about the environment he is familiar with, learning the labels and functions of everyday things that have practical value. In contrast, I find that storing information in my head about things that are already known is like practicing for a game of trivia. I skim past all the stuff we've already figured out just to get to the cutting edge. The rest I can look up on Wikipedia or Google or Britannica or whatever. I wonder if this reflects a sort of paradigm shift between the generations, where youth are moving away from valuing what we know toward valuing how we think?
 
If I understand you correctly, I was just having a similar conversation the other day after being out at a get-together with some relatives. My uncle, who is very down to earth, mentioned that he has no interest in things seemingly supernatural or mysterious and would prefer to acquire knowledge about the environment he is familiar with, learning the labels and functions of everyday things that have practical value. In contrast, I find that storing information in my head about things that are already known is like practicing for a game of trivia. I skim past all the stuff we've already figured out just to get to the cutting edge. The rest I can look up on Wikipedia or Google or Britannica or whatever. I wonder if this reflects a sort of paradigm shift between the generations, where youth are moving away from valuing what we know toward valuing how we think?

I've always had a really good memory myself (though to be honest, age is starting to dull it),
so I've always done well on trivia night at the pub.

But yes, there has been a shift predating, I think, even the internets.

I remember when small portable calculators hit the market, eventually becoming those credit-card-sized ones they would give away for free if you had a roll of film developed.

Using one in a maths exam was considered cheating and you were not allowed to take one into the exam; years later this changed and they were allowed.
The thinking was that the calculator just did the legwork; knowing how to make the calculation was what mattered, not memorising your times table.

I think the proof-of-concept research on the artificial hippocampus (or "Chippocampus", as they refer to it), combined with WiFi, will see us eventually storing our memories "in the cloud" one day.

Scientists develop 'brain chip'
A "brain chip" could be used to replace the "memory centre" in patients affected by strokes, epilepsy or Alzheimer's disease, it has been claimed.

US scientists say a silicon chip could be used to replace the hippocampus, where the storage of memories is coordinated.

I suspect this is where we are destined to go as an evolutionary process and that what we are seeing with the internets is but a step along that path.

A grander question is, is this just happenstance or part of a purposeful plan on the part of agents unknown ;)

And again the double-edged sword comes into play.

On the one hand, not storing our memories in our brains seems like an ability lost.
But on the other hand, we would be able to store and access more, and it would not degrade like biological memory, thanks to the kind of parity-check subroutines used in computer systems today to ensure the integrity of data.
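For anyone unfamiliar with the idea, here is a minimal sketch of the kind of integrity check that post alludes to: a single even-parity bit attached to each byte, which is enough to detect any single-bit flip on read-back. The function names and the "wedding day" payload are my own, chosen purely for illustration; real storage systems use stronger checksums and error-correcting codes, but the principle is the same.

```python
# Minimal even-parity integrity check (illustrative sketch, not a real storage system).

def parity_bit(byte: int) -> int:
    """Return 0 if the byte has an even number of 1 bits, else 1."""
    return bin(byte & 0xFF).count("1") % 2


def store_with_parity(data: bytes) -> list[tuple[int, int]]:
    """Attach a parity bit to every byte before 'storing' it."""
    return [(b, parity_bit(b)) for b in data]


def verify(stored: list[tuple[int, int]]) -> bool:
    """Recompute parity on read-back; any single-bit flip is detected."""
    return all(parity_bit(b) == p for b, p in stored)


if __name__ == "__main__":
    stored = store_with_parity(b"wedding day")
    print(verify(stored))  # True: data read back intact

    # Flip one bit of the first stored byte to simulate corruption.
    corrupted = [(stored[0][0] ^ 0b0000_0100, stored[0][1])] + stored[1:]
    print(verify(corrupted))  # False: the flipped bit is caught
```

Parity alone only detects errors; a cloud memory store would pair something like this with redundancy so that corrupted data could also be repaired.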
 
I remember when small portable calculators hit the market, eventually becoming those credit-card-sized ones they would give away for free if you had a roll of film developed. Using one in a maths exam was considered cheating and you were not allowed to take one into the exam; years later this changed and they were allowed. The thinking was that the calculator just did the legwork; knowing how to make the calculation was what mattered, not memorising your times table.
That is a really good example many of us middle-aged people are familiar with. Personally, I think there is an advantage to memorizing basic math ( like times tables ), but beyond what's needed to count one's change and do our taxes, it gets pointless to have to do longhand math.
On the one hand, not storing our memories in our brains seems like an ability lost. But on the other hand, we would be able to store and access more, and it would not degrade like biological memory, thanks to the kind of parity-check subroutines used in computer systems today to ensure the integrity of data.
I won't be going for brain implants any time soon. Not when I can look it up almost as quickly as I need it on a PC.

Plus I don't think we're actually losing memory power so much as deciding more selectively what memories matter to what we're actually doing. In a way, discarding everything that counts as relatively useless trivia leaves more room for what is important ( assuming it doesn't all get filled up by video game trivia and Second Life identities ).
 
I would do it, especially if it were recording all my sensory input to the cloud. Let's take the wedding day example:

In ages past an artist might have painted a portrait of such an event, then photography gave us albums of pictures to recall the day.
Now we video such events

But imagine being able to load the experience and relive it as a perfect simulation.

Obviously there are negative implications here too

People might choose to spend too much time reliving past experiences at the expense of gaining new ones.

But imagine swapping memfiles for your wedding anniversary, experiencing what it was like for your partner and vice versa?

Or sharing it with your children.
 
I would do it, especially if it were recording all my sensory input to the cloud. Let's take the wedding day example: In ages past an artist might have painted a portrait of such an event, then photography gave us albums of pictures to recall the day. Now we video such events. But imagine being able to load the experience and relive it as a perfect simulation. Obviously there are negative implications here too. People might choose to spend too much time reliving past experiences at the expense of gaining new ones. But imagine swapping memfiles for your wedding anniversary, experiencing what it was like for your partner and vice versa? Or sharing it with your children.
Once that level of technology is available at zero risk with 100% integration and upgradeable without the need for additional operations or external power supplies, then maybe, if they haven't bio-engineered something even better, I'd probably sign up. Your wedding day example also brings up some interesting possibilities, especially when divorce time rolls around. It would be so much easier just to delete the memories we don't want and Photoshop your ex out of all the important pictures ;) ( assuming your trial version hasn't expired ).
 
Found a good (imo) article today


(i) The human brain is a machine.
(ii) We will have the capacity to emulate this machine (before long).
(iii) If we emulate this machine, there will be AI.
—————-
(iv) Absent defeaters, there will be AI (before long).

The first premise is suggested by what we know of biology (and indeed by what we know of physics). Every organ of the body appears to be a machine: that is, a complex system comprised of law-governed parts interacting in a law-governed way. The brain is no exception. The second premise follows from the claims that microphysical processes can be simulated arbitrarily closely and that any machine can be emulated by simulating microphysical processes arbitrarily closely.

It is also suggested by the progress of science and technology more generally: we are gradually increasing our understanding of biological machines and increasing our capacity to simulate them, and there do not seem to be limits to progress here. The third premise follows from the definitional claim that if we emulate the brain this will replicate approximate patterns of human behaviour, along with the claim that such replication will result in AI. The conclusion follows from the premises along with the definitional claim that absent defeaters, systems will manifest their relevant capacities.
One might resist the argument in various ways. One could argue that the brain is more than a machine; one could argue that we will never have the capacity to emulate it; and one could argue that emulating it need not produce AI. Various existing forms of resistance to AI take each of these forms. For example, J.R. Lucas (1961) has argued that for reasons tied to Gödel's theorem, humans are more sophisticated than any machine. Hubert Dreyfus (1972) and Roger Penrose (1994) have argued that human cognitive activity can never be emulated by any computational machine. John Searle (1980) and Ned Block (1981) have argued that even if we can emulate the human brain, it does not follow that the emulation itself has a mind or is intelligent.

I have argued elsewhere that all of these objections fail.

But for present purposes, we can set many of them to one side. To reply to the Lucas, Penrose, and Dreyfus objections, we can note that nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain.




http://consc.net/papers/singularity.pdf

Papers on AI and Computation (David Chalmers)

David Chalmers

He makes imo a good point here

Another argument for premise 1 is the evolutionary argument, which runs as follows.
(i) Evolution produced human-level intelligence.
(ii) If evolution produced human-level intelligence, then we can produce AI (before long).
—————-
(iii) Absent defeaters, there will be AI (before long).
Here, the thought is that since evolution produced human-level intelligence, this sort of intelligence is not entirely unattainable. Furthermore, evolution operates without requiring any antecedent intelligence or forethought. If evolution can produce something in this unintelligent manner, then in principle humans should be able to produce it much faster, by using our intelligence.
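Both quoted arguments have the same simple logical shape: a short chain of conditionals discharged by modus ponens. A minimal formalization makes that explicit (the proposition names below are my own labels, not Chalmers's notation), and it also makes clear that anyone resisting the conclusion has to deny one of the premises rather than the inference itself.

```lean
-- Minimal sketch of the two arguments quoted above, with my own proposition labels.
variable (BrainIsMachine CanEmulateBrain EvolutionProducedIntelligence AI : Prop)

-- Emulation argument: (i) the brain is a machine; (ii) if it is a machine, we can
-- emulate it; (iii) if we can emulate it, there will be AI; hence (iv) there will
-- be AI (defeaters are idealized away here).
example (h1 : BrainIsMachine)
        (h2 : BrainIsMachine → CanEmulateBrain)
        (h3 : CanEmulateBrain → AI) : AI :=
  h3 (h2 h1)

-- Evolutionary argument: (i) evolution produced human-level intelligence;
-- (ii) if evolution produced it, then we can produce AI; hence (iii) there will be AI.
example (h1 : EvolutionProducedIntelligence)
        (h2 : EvolutionProducedIntelligence → AI) : AI :=
  h2 h1
```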
 