

Substrate-independent minds


Using one in a maths exam was considered cheating, and you were not allowed to take one into the exam; years later this changed and they were allowed.
The thinking was that the calculator just did the legwork: knowing how to make the calculation was what mattered, not memorising your times table.


Haven't read any further yet, Mike, so don't know what any replies say. My take on the above is abject failure: knowing how to make a calculation on a calculator is of no use in everyday life, where the result matters there and then.

Let me give an example of a muppet fail.

Tesco overcharged me for some bedding; the discrepancy was £8.70. A manager was called to the help desk, an organ grinder, not a £7-an-hour monkey. He agreed with me about being overcharged. Easy: just give me £8.70.

Before he could refund it he had to do some paperwork, and at this point it became clear he was a calculator-generation kid. He tried to do the sums in his head, fidgeted and faffed about, and eventually said 'I will just go and get a calculator', to which I replied: look, mate, deduct £10.00 and add £1.30. It took a minute to sink in, but he got the simplicity in the end, and credit to him, he was suitably embarrassed. Using a calculator in education is not doing applicable everyday maths.
A calculator eliminates simple problem-solving skills, as in the above example, and kids never move on to more advanced mental gymnastics. The educational system failed him, even though his 'qualifications' give him a head start over most of his contemporaries.
As an employer, I find it a serious problem.

This is an everyday occurrence, in one form or another, these days.
 
That is a really good example many of us middle-aged people are familiar with. Personally, I think there is an advantage to memorizing basic math ( like times tables ), but beyond what's needed to count one's change and do our taxes, it gets pointless to have to do longhand math.

I won't be going for brain implants any time soon. Not when I can look it up almost as quickly as I need it on a PC.

Plus I don't think we're actually losing memory power so much as deciding more selectively what memories matter to what we're actually doing. In a way, discarding everything that counts as relatively useless trivia leaves more room for what is important ( assuming it doesn't all get filled up by video game trivia and Second Life identities ).


I have always used a little trick on people when asked to do a sum in passing, you know, when you're in company and they blurt out something like 'what's x times x' while they are thinking about something else.

An example:

89 x 23

As soon as they ask, my brain reverses the problem: 23 x 89.

Then I will say 'you what?', and they repeat the question, which I pretend to think about for a second before answering 2,047, because I had broken the problem down and reconstructed the answer as he repeated himself.

Like this:

23 x 89

20 x 89: 10 x 90 = 900 [89 rounded up to 90, add a nought], x 2 = 1800, then correct the rounding: 1800 - 20 = 1780

3 x 80 = 240 [3 x 8, add a nought]

3 x 9 = 27

240 + 27 = 267

1780 + 267 = 1980 + 67 = 2047

That's mental gymnastics, and they just think 'wow, it took him half a second', whereas it actually took around three, four, or five seconds depending on the steps needed to easily deconstruct the problem. It might have been 89 x 23 - 147; that would be a four-second problem now [I'm getting old]. If I hadn't finished the calculation by the time they repeated themselves, I would say something like 'oh, right' to give myself time to add the broken-down problem back up. You can see in their face and eyes what they're thinking, but I tricked them.
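The decomposition above can be checked mechanically. Here's a tiny Python sketch of exactly those mental steps (just a verification of the trick, nothing more):

```python
# 23 x 89, broken down the same way as in the post above.
tens_part = 20 * 90 - 20      # 20 x 89: round 89 up to 90, then subtract the over-count
units_part = 3 * 80 + 3 * 9   # 3 x 89 = 240 + 27 = 267
total = tens_part + units_part

print(total)     # 2047
print(23 * 89)   # 2047, the direct product agrees
```

The round-then-correct step is the whole trick: 20 x 90 is trivial, and the over-count you introduced (20 x 1) is trivial to subtract back out.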
 
Found a good (imo) article today ...

Excellent stuff there. As you are aware, we've been touching on a lot of this stuff over on the Philosophy, Science, and the Unexplained thread as well, particularly the take that Chalmers has on the so-called "hard problem of consciousness". This problem has led to some rather trying back and forth exchanges over there, and I have to hand it to @smcder for retaining his cool. By the time I start feeling the pressure, the other guy has usually blown their top, but instead we've both just backed off and put it on the back burner ( the Trickster has not gotten the better of us ! ). Now, thanks to your links I ran across Chalmers' own website where a number of his original papers can be found. I'm currently working through Facing Up To The Problem of Consciousness, and I've run smack-dab into the same issues there again ( as was expected ).

With respect to the topic of this thread ( Substrate Independent Minds ), the totality of the information I've run across so far forms the picture that while minds are materially separated from the substrate, they are not independent of it, and this has forced me to re-evaluate my position on AI and consciousness in the context of digital processing. Whereas I used to think that consciousness was simply a matter of the right programming and sufficient processing power, I'm no longer sure we can take the old approach of modelling algorithms and programming them to run on a computational device, and expect consciousness to emerge. In fact I'd even go a step further and say I'm almost sure that if it happens as a result of that approach, it will be purely by accident.
 
I'm interested in what others here mean by 'information' and in finding out what that term means to physicists, biologists, philosophers of science, and computer scientists working on AI. I just ran a quick search on the terms 'information' and 'data' and the first item on the list was the attempt at definition linked below, which might be a useful opening to discussing what I think is a significant difference in meaning that could be explored in this thread.

Data vs Information - Difference and Comparison | Diffen
 
I'm interested in what others here mean by 'information' and in finding out what that term means to physicists, biologists, philosophers of science, and computer scientists working on AI. I just ran a quick search on the terms 'information' and 'data' and the first item on the list was the attempt at definition linked below, which might be a useful opening to discussing what I think is a significant difference in meaning that could be explored in this thread.

Data vs Information - Difference and Comparison | Diffen

There is enough evidence to suggest that data and information are essentially the same thing, with information also being synonymous with thoughts, ideas, and concepts. For example, in computing, a bit is defined as the smallest unit of information handled by a computer ( Encarta Computer Dictionary ). Yet a single bit doesn't really tell us much about its relationship to other bits of information, and I would submit that it's that relationship that the distinctions in the definitions you linked us to were getting at. Taking these factors into consideration, the distinctions are more like this:

----------------- Information ---------------------------
-------- Data ---------|-- Thoughts Ideas Concepts ------
Units of communication | Organized units of communication
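One way to make that table concrete is a short Python sketch: the same characters count as data (bare units of communication) until they are organized by a schema, at which point they function as information. The field names below are hypothetical, chosen purely for illustration:

```python
# Data: raw units of communication with no inherent meaning.
data = "19,87,3"

# Information: the same units organized into a meaningful structure.
fields = data.split(",")
information = {
    "sensor_id": int(fields[0]),       # hypothetical field names,
    "temperature_c": int(fields[1]),   # chosen only to illustrate
    "alarm_level": int(fields[2]),     # the organizing schema
}

print(information["temperature_c"])  # 87
```

Nothing about the string "19,87,3" changed; only the organizing relationship was added, which is the distinction the table is drawing.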
 
Using one in a maths exam was considered cheating, and you were not allowed to take one into the exam; years later this changed and they were allowed.
The thinking was that the calculator just did the legwork: knowing how to make the calculation was what mattered, not memorising your times table.


Haven't read any further yet, Mike, so don't know what any replies say. My take on the above is abject failure: knowing how to make a calculation on a calculator is of no use in everyday life, where the result matters there and then.

Let me give an example of a muppet fail.

Tesco overcharged me for some bedding; the discrepancy was £8.70. A manager was called to the help desk, an organ grinder, not a £7-an-hour monkey. He agreed with me about being overcharged. Easy: just give me £8.70.

Before he could refund it he had to do some paperwork, and at this point it became clear he was a calculator-generation kid. He tried to do the sums in his head, fidgeted and faffed about, and eventually said 'I will just go and get a calculator', to which I replied: look, mate, deduct £10.00 and add £1.30. It took a minute to sink in, but he got the simplicity in the end, and credit to him, he was suitably embarrassed. Using a calculator in education is not doing applicable everyday maths.
A calculator eliminates simple problem-solving skills, as in the above example, and kids never move on to more advanced mental gymnastics. The educational system failed him, even though his 'qualifications' give him a head start over most of his contemporaries.
As an employer, I find it a serious problem.

This is an everyday occurrence, in one form or another, these days.

I have found this to be a problem with young engineers who you would think would be able to do the math needed in their heads... well, most cannot.
When I did my acoustic engineering exams a calculator was not allowed, as it was, as you say, seen as cheating. But I agree it is more than that: dependence on it stunts the basic problem-solving skills needed to think through solutions to system problems we have to solve on the fly, and very quickly, as a qualified engineer later on.

Not saying my math is exceptional, but at least I can do the sums in my head; as you pointed out, many cashiers and managers cannot.

I teach sound engineering from time to time, and what I have found is that many kids just do not have the basic practical problem-solving skills needed to be really good at it. There are one or two in a class who really shine and get the concepts as fast as I can teach them... and here is the point: these kids, on closer examination, always have high math and English skills.

When I say kids I mean 18- to 20-year-old students... to me they are kids :p

The first class on the first day when I teach these courses (I do two a year) is the "Theory of Sound", and I don't sugar-coat it... they get the math because you cannot understand the physics behind it if I don't teach it, but the second reason I hit them with the theory right away is simply to sort the wheat from the chaff. I can tell who is going to do well by the look in the students' eyes: most eyes glaze over, but some do not, and they are the ones who will stick it out and do well in the long run.

What frightens me is that the theory I am teaching, I was taught as a thirteen-year-old high school student in science class... so what has happened?

It is true that to learn to mix music you do not have to have great math skills, as you can do it by ear; in fact I know a number of very good front-of-house mixers like this... But to be a good systems engineer you need to have at least average math skills.

More to the point, you need exceptional problem-solving skills and a mind like an iron trap, and the sad fact is that out of a class of 20 or 30 students I may find one or two, if I am lucky, who will cut it in the long run.

Sorry for the rant.
 
It is true that to learn to mix music you do not have to have great math skills, as you can do it by ear; in fact I know a number of very good front-of-house mixers like this... But to be a good systems engineer you need to have at least average math skills.

That's an interesting statement, and it supports the phenomenological understanding of consciousness, which I think cannot be made commensurable with closed computational theories about consciousness. How much do we do "by ear" in our ongoing physical integration with what we sense (and connect with, process, integrate at many levels of our being) in our ongoing experience of the physical environment -- that which comes to us from outside ourselves and affects us 'body and soul . . . and mind'?
 
Excellent stuff there. As you are aware, we've been touching on a lot of this stuff over on the Philosophy, Science, and the Unexplained thread as well, particularly the take that Chalmers has on the so-called "hard problem of consciousness". This problem has led to some rather trying back and forth exchanges over there, and I have to hand it to @smcder for retaining his cool. By the time I start feeling the pressure, the other guy has usually blown their top, but instead we've both just backed off and put it on the back burner ( the Trickster has not gotten the better of us ! ). Now, thanks to your links I ran across Chalmers' own website where a number of his original papers can be found. I'm currently working through Facing Up To The Problem of Consciousness, and I've run smack-dab into the same issues there again ( as was expected ).

With respect to the topic of this thread ( Substrate Independent Minds ), the totality of the information I've run across so far forms the picture that while minds are materially separated from the substrate, they are not independent of it, and this has forced me to re-evaluate my position on AI and consciousness in the context of digital processing. Whereas I used to think that consciousness was simply a matter of the right programming and sufficient processing power, I'm no longer sure we can take the old approach of modelling algorithms and programming them to run on a computational device, and expect consciousness to emerge. In fact I'd even go a step further and say I'm almost sure that if it happens as a result of that approach, it will be purely by accident.

Chalmers touches on that in his paper

Perhaps the most important remaining form of resistance is the claim that the brain is not a mechanical system at all, or at least that nonmechanical processes play a role in its functioning that cannot be emulated. This view is most naturally combined with a sort of Cartesian dualism holding that some aspects of mentality (such as consciousness) are nonphysical and nevertheless play a substantial role in affecting brain processes and behavior.


If there are nonphysical processes like this, it might be that they could nevertheless be emulated or artificially created, but this is not obvious. If these processes cannot be emulated or artificially created, then it may be that human-level AI is impossible. Although I am sympathetic with some forms of dualism about consciousness, I do not think that there is much evidence for the strong form of Cartesian dualism that this objection requires.

The weight of evidence to date suggests that the brain is mechanical, and I think that even if consciousness plays a causal role in generating behavior, there is not much reason to think that its role is not emulable.


I tend to agree with this myself. I think it's more likely that all of me resides an inch or so behind my eyeballs, that I am essentially a biological machine, and that like any machine it can be emulated; if evolution can produce this device, so too can we.
Much in the same way, evolution produced an ability to fly in some bioforms, an ability we can now duplicate.
 
There is enough evidence to suggest that data and information are essentially the same thing, with information also being synonymous with thoughts, ideas, and concepts. For example, in computing, a bit is defined as the smallest unit of information handled by a computer ( Encarta Computer Dictionary ). Yet a single bit doesn't really tell us much about its relationship to other bits of information, and I would submit that it's that relationship that the distinctions in the definitions you linked us to were getting at. Taking these factors into consideration, the distinctions are more like this:

----------------- Information ---------------------------
-------- Data ---------|-- Thoughts Ideas Concepts ------
Units of communication | Organized units of communication

Here's another way of looking at it: Data vs. information - ID2100 - Rethinking Information Systems and Technology
 
Along the same line, we'd do well to consult this paper and the two works discussed in it:

RFID: Human Agency and Meaning in Information-Intensive Environments

“RFID: Human Agency and Meaning in Information-Intensive Environments”, Theory, Culture & Society March/May 2009 26: 47-72.

N. Katherine Hayles

Abstract

"RFID tags, small microchips no bigger than grains of rice, are currently being embedded in product labels, clothing, credit cards, and the environment, among other sites. Activated by the appropriate receiver, they transmit information ranging from product information such as manufacturing date, delivery route, and location where the item was purchased to (in the case of credit cards) the name, address, and credit history of the person holding the card. Active RFIDs have the capacity to transmit data without having to be activated by a receiver; they can be linked with embedded sensors to allow continuous monitoring of environmental conditions, applications that interest both environmental groups and the US military. The amount of information accessible through and generated by RFIDs is so huge that it may well overwhelm all existing data sources and become, from the viewpoint of human time limitations, essentially infinite. What to make of these technologies will be interrogated through two contemporary fictions, David Mitchell's Cloud Atlas and Philip K. Dick's Ubik. Cloud Atlas focuses on epistemological questions — who knows what about whom, in a futuristic society where all citizens wear embedded RFID tags and are subject to constant surveillance. Resistance takes the form not so much of evasion (tactical moves in a complex political situation) but rather of a struggle to transmit information to present and future stakeholders in a world on the brink of catastrophe. Ubik, by contrast, focuses on deeper ontological questions about the nature of reality itself. Both texts point to the necessity to reconceptualize information as ethical action embedded in contexts and not merely as a quantitative measure of probabilities."

I'm going to order a copy of Ubik and hope some posters in this thread might want to discuss the ideas developed in that novel by Philip Dick and also in David Mitchell's Cloud Atlas.
 
Excellent stuff there. As you are aware, we've been touching on a lot of this stuff over on the Philosophy, Science, and the Unexplained thread as well, particularly the take that Chalmers has on the so-called "hard problem of consciousness". This problem has led to some rather trying back and forth exchanges over there, and I have to hand it to @smcder for retaining his cool. By the time I start feeling the pressure, the other guy has usually blown their top, but instead we've both just backed off and put it on the back burner ( the Trickster has not gotten the better of us ! ). Now, thanks to your links I ran across Chalmers' own website where a number of his original papers can be found. I'm currently working through Facing Up To The Problem of Consciousness, and I've run smack-dab into the same issues there again ( as was expected ).

With respect to the topic of this thread ( Substrate Independent Minds ), the totality of the information I've run across so far forms the picture that while minds are materially separated from the substrate, they are not independent of it, and this has forced me to re-evaluate my position on AI and consciousness in the context of digital processing. Whereas I used to think that consciousness was simply a matter of the right programming and sufficient processing power, I'm no longer sure we can take the old approach of modelling algorithms and programming them to run on a computational device, and expect consciousness to emerge. In fact I'd even go a step further and say I'm almost sure that if it happens as a result of that approach, it will be purely by accident.

I worry about you sometimes, man. I posted Chalmers' site several times on the other thread and specifically referred to Facing Up To The Hard Problem of Consciousness and organizational invariance more than once - and I thought you had read this material already. Glad to see we are all on the same page now.
 
Chalmers touches on that in his paper

Perhaps the most important remaining form of resistance is the claim that the brain is not a mechanical system at all, or at least that nonmechanical processes play a role in its functioning that cannot be emulated. This view is most naturally combined with a sort of Cartesian dualism holding that some aspects of mentality (such as consciousness) are nonphysical and nevertheless play a substantial role in affecting brain processes and behavior.


If there are nonphysical processes like this, it might be that they could nevertheless be emulated or artificially created, but this is not obvious. If these processes cannot be emulated or artificially created, then it may be that human-level AI is impossible. Although I am sympathetic with some forms of dualism about consciousness, I do not think that there is much evidence for the strong form of Cartesian dualism that this objection requires.

The weight of evidence to date suggests that the brain is mechanical, and I think that even if consciousness plays a causal role in generating behavior, there is not much reason to think that its role is not emulable.


I tend to agree with this myself. I think it's more likely that all of me resides an inch or so behind my eyeballs, that I am essentially a biological machine, and that like any machine it can be emulated; if evolution can produce this device, so too can we.
Much in the same way, evolution produced an ability to fly in some bioforms, an ability we can now duplicate.

Well, we do fly but not in the same way as most "bioforms" . . .

My own version of the Turing test is a little more stringent - in order to acknowledge strong AI and admit it as a part of my daily life, I require that a machine be capable of getting drunk on an appropriate substance and removing its outer covering and embarrassing itself in front of its peers by emitting a stream of random bits - or at least in front of another machine with input receptacles primarily of the opposite gender to its own.
 
Well, we do fly but not in the same way as most "bioforms" . . .

.

Quite right, we emulate the process. SI won't be human consciousness; it will be an emulation of it.

But in addition to emulating it, we've also improved on it: we fly higher, faster, and with greater payloads than any biological flight mechanism...
 
It would be funny if it happened and we didn't notice it.


The internet is a new lifeform that shows the first signs of intelligence. So says brain scientist and serial entrepreneur Jeff Stibel.
He argues that the physical wiring of the internet is much like a rudimentary brain and some of the actions and interactions that take place on it are similar to the processes that we see in the brain.
 
Well, we do fly but not in the same way as most "bioforms" . . .

My own version of the Turing test is a little more stringent - in order to acknowledge strong AI and admit it as a part of my daily life, I require that a machine be capable of getting drunk on an appropriate substance and removing its outer covering and embarrassing itself in front of its peers by emitting a stream of random bits - or at least in front of another machine with input receptacles primarily of the opposite gender to its own.

And I think this is fair - because it's what I require of all the human intelligences I am in close contact with . . . ;-)
 
Reminds me of the funny story about the firm that hired a robot, only to regret it come the office Xmas party.
It was staggering around leaking oil and feeling up the word processors.
 
Reminds me of the funny story about the firm that hired a robot, only to regret it come the office Xmas party.
It was staggering around leaking oil and feeling up the word processors.

ROFL . . . "Robot and Frank" (2012, with Frank Langella) is a nice little film about artificial intelligence. It actually leaves central questions about the robot's consciousness (and/or ego) unanswered, so it was very interesting to me, as opposed to films in which robots are basically just human. Worth a look if you like films.
 
Excellent stuff there. As you are aware, we've been touching on a lot of this stuff over on the Philosophy, Science, and the Unexplained thread as well, particularly the take that Chalmers has on the so-called "hard problem of consciousness". This problem has led to some rather trying back and forth exchanges over there, and I have to hand it to @smcder for retaining his cool. By the time I start feeling the pressure, the other guy has usually blown their top, but instead we've both just backed off and put it on the back burner ( the Trickster has not gotten the better of us ! ). Now, thanks to your links I ran across Chalmers' own website where a number of his original papers can be found. I'm currently working through Facing Up To The Problem of Consciousness, and I've run smack-dab into the same issues there again ( as was expected ).

With respect to the topic of this thread ( Substrate Independent Minds ), the totality of the information I've run across so far forms the picture that while minds are materially separated from the substrate, they are not independent of it, and this has forced me to re-evaluate my position on AI and consciousness in the context of digital processing. Whereas I used to think that consciousness was simply a matter of the right programming and sufficient processing power, I'm no longer sure we can take the old approach of modelling algorithms and programming them to run on a computational device, and expect consciousness to emerge. In fact I'd even go a step further and say I'm almost sure that if it happens as a result of that approach, it will be purely by accident.

Wonderful. I'm impressed by your flexibility and swift turn-around and not surprised that it was reading Chalmers that did it. :)
 
Extracts from review comments at amazon concerning Philip K. Dick's novel Ubik:

Glen Runciter runs a lucrative business—deploying his teams of anti-psychics to corporate clients who want privacy and security from psychic spies. But when he and his top team are ambushed by a rival, he is gravely injured and placed in “half-life,” a dreamlike state of suspended animation. Soon, though, the surviving members of the team begin experiencing some strange phenomena, such as Runciter’s face appearing on coins and the world seeming to move backward in time. As consumables deteriorate and technology gets ever more primitive, the group needs to find out what is causing the shifts and what a mysterious product called Ubik has to do with it all.

Chip works for Glen Runciter's anti-psi security agency, which hires out its talents to block telepathic snooping and paranormal dirty tricks. When its special team tackles a big job on the Moon, something goes terribly wrong. Runciter is killed, it seems--but messages from him now appear on toilet walls, traffic tickets, or product labels. Meanwhile, fragments of reality are timeslipping into past versions: Joe Chip's beloved stereo system reverts to a hand-cranked 78 player with bamboo needles. Why does Runciter's face appear on U.S. coins? Why the repeated ads for a hard-to-find universal panacea called Ubik ("safe when taken as directed")?
 
Wonderful. I'm impressed by your flexibility and swift turn-around and not surprised that it was reading Chalmers that did it. :)

Thanks Constance. It takes a little experience with me to understand what I'm about. When it comes to Chalmers' role in forming my present view, I still have the same issues with the formulation and presentation of the "hard problem" as I did before. However, the part that remains valid, IMO, is that Chalmers succeeds in impressing upon us that our subjective experience is a real situation, and not something to be written off as trivial. When combined with the other facets of our discussion, the result seems to be that consciousness depends on more than signal processing alone. It may well be that the specific way signals are processed is the key. This means it may still be possible to create consciousness, but it will probably take more than linear processing to do the job. I'm not sure if you're interested in discussing that any further, but I'm glad you're talking to me again :) .
 