The future of science is AI.

swatcher

I think the future of science is in the hands of Artificial Intelligence. The problem with humans is that our memory is too limited and our life span is too short. It's impossible for one scientist to be current in all fields of science, but I think that's exactly what we need to make major breakthroughs at this point.

I wouldn't be surprised if we actually already had all the pieces for understanding gravity in place, but nobody is able to make the right connections due to the vast amount of seemingly contradictory data.

AI would be the "vessel" that holds all the pieces of scientific information and would try to analyze, apply, and make logical connections through interdisciplinary knowledge. It's like the internet itself, but smart :-)

It would be like all scientists continuously sharing information, talking to each other, peer-reviewing (ok, there would need to be two AI machines) and proposing new theories.

What do you think?
 
True AGI (Artificial General Intelligence) would be more than just a way to analyze and sift through data, although that would surely be one of its abilities.

Eventually, once an AI system is capable of human-level intelligence, it should quickly surpass it: while the human brain is unchanging, the rate of technological advancement is growing exponentially, so if X level of hardware is required to equal a human brain, within a year or two the hardware will be 2X or more powerful.

Even if integrated circuits end up tapped out some day soon, another technology paradigm will replace them (photonic computing, quantum computing, etc.).
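
As a back-of-the-envelope illustration of that scaling argument (a toy calculation that assumes a steady 18-month doubling period, which is itself just an assumption):

```python
# Toy Moore's-law arithmetic: years until hardware reaches N times
# some baseline "human-equivalent" level, given a fixed doubling
# period. The 18-month doubling time is an assumption, not a fact.
import math

def years_to_multiple(multiple, doubling_years=1.5):
    """Years until capacity grows by `multiple` under steady doubling."""
    return math.log2(multiple) * doubling_years

print(years_to_multiple(2))     # ~1.5 years to reach 2X
print(years_to_multiple(1000))  # ~15 years to reach 1000X
```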

Add to that the ability to modify and hone its own programming and an AI system with sufficient access to the future net could grow so far past us so quickly that it might be impossible to understand or decipher what it's thinking. There really is no clear way of knowing what would happen since it has never happened before in history.

I would imagine that a machine with hundreds or thousands of times the raw processing power of man and without the burden of our human limitations will allow super science to progress incredibly fast. It would have no trouble adjusting to ideas of quantum strangeness or paradoxical dilemmas that tend to hold us back. It may invent new mathematics and devise theories and test those theories all on its own and report back to us with concrete plans of how to implement those ideas into engineering projects.

Imagine saying to a computer, "Hey, we need a material that has these 10 properties, so how do we make it?" and it crunches away and says, "Well, you need to do X and Y and Z." Or we ask it to figure out a way to make us immortal through gene therapy. Since it has full knowledge of the human genome and enormous data from full-body scans of humans, it can simulate endless ways of tweaking DNA and how those tweaks would affect us.
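
In miniature, that kind of "crunching" is a search over candidates constrained by required properties. A deliberately crude sketch of the idea (all names and numbers here are invented, and real materials discovery would involve simulation, not a lookup table):

```python
# Hypothetical property-constrained screening of candidate materials.
# Purely illustrative: the candidates and property scores are made up.
candidates = {
    "alloy_A": {"strength": 9, "conductivity": 3, "density": 2},
    "alloy_B": {"strength": 7, "conductivity": 8, "density": 4},
}

def meets(props, requirements):
    """True if every required property meets its minimum value."""
    return all(props.get(name, 0) >= minimum
               for name, minimum in requirements.items())

wanted = {"strength": 8, "conductivity": 2}
matches = [name for name, props in candidates.items() if meets(props, wanted)]
print(matches)  # ['alloy_A']
```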

True AI will be a game changer like humans have never seen before. Imagine if the smartest minds on Earth were all 5-year-olds and they could pose questions to 1,000 of the world's top scientists all sitting in a room together. Sort of reminds me of the human/ET analogies. The 5-year-olds (us) would not have the faintest clue as to what the scientists were talking about unless one of them "spoke down" to that level in kiddy talk.

Hopefully I will be alive to see that day happen.
 
Good point, a few thoughts:

1) This is already happening, even without "AI". See Craig Venter's brute-force approach to decoding the human genome.

2) "AI" as a concept has problems. There is no computer that doesn't rely on programming which in turn is founded on a formal logic system. Goedel proved that any sufficiently robust formal system is inherently incomplete -- there are always theorums that cannot be derived from the axioms of the system. This problem may not doom "AI", but suggests that any computer as intelligent as a human will need to somehow transcend its internal system of rules the way humans can.

3) Following on the 2nd point, the rules governing the behavior of an artificial scientist might limit what such a scientist is capable of discovering. On the other hand, the absence of irrational human biases and assumptions may free the robo-scientist to discover some amazing things right under our nose.
 
Really good comments, and as far out as some of this sounds, I think it is dead on. A few months ago a robot made its own hypotheses, ran the tests, and actually independently contributed to new science. This is just the beginning. And if you believe anything that Kurzweil says, then an AI at a high level might even develop its own type of consciousness. Whatever that would mean in reality, I'm not exactly sure. Should we be fearful or thankful? Overlords or servants?

But AI and other technologies are set to explode the knowledge base we currently have. And perhaps offer grand solutions and enlightening insight to the mysteries that confound scientists. Maybe it could even shed some light on the UFO issue.
 

I don't think incompleteness will prevent AI development, since there is more than one way to skin a cat (bottom-up emergence, etc.), but I suspect even a top-down formal approach could be achieved.

This essay addresses some of that:

http://www.sdsc.edu/~jeff/Godel_vs_AI.html

From what I've read, several competing approaches from both directions are pretty far along, but hampered mainly by money and time at this point, not FLOPS and memory limitations.

I have to question whether top-down AI will eventually prove too complicated and cumbersome to produce AI, whereas Jeff Hawkins of Numenta has done some really cool work with neocortex emulation. Perhaps a combination of both is what's needed to jump-start the process.
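
For flavor, Numenta's neocortex-inspired models lean heavily on sparse distributed representations (SDRs), where the similarity of two patterns is simply the overlap of their active bits. A toy sketch of the concept (my own illustration, not Numenta's code):

```python
# Toy sparse distributed representations: each pattern is a set of
# active bit positions, and similarity is the size of the overlap.
cat = {3, 17, 42, 101, 150}
dog = {3, 17, 55, 101, 200}
car = {8, 90, 121, 300, 512}

def overlap(a, b):
    """Number of active bits two SDRs share."""
    return len(a & b)

print(overlap(cat, dog))  # 3 -> semantically similar
print(overlap(cat, car))  # 0 -> unrelated
```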

The creepy thing is that once machine AI gets pretty damn far along... how will we ever really know if it has "transcended" or not, since it will surely pick up traits and characteristics that we would look for in human consciousness, but that is not conclusive proof. When you ask a machine "are you conscious?" and it says "yes, of course"... how do you really know for sure? From the AI's viewpoint it would seem that organic life is this mysterious "other world" that can never be explored.

All this reminds me of Star Trek: The Motion Picture... really loved that movie for the themes it explored... and the hot Indian bald chick.
 
This is a really incredible documentary entitled 'In Its Image', about artificial neural nets.
I highly recommend you watch it; it's fairly sobering how fast the tech is developing for this.

http://video.google.com/videoplay?docid=-6464697696665901632&hl=en
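
For anyone unfamiliar with the basic mechanics behind artificial neural nets, here is a minimal sketch of a single artificial neuron (the classic perceptron learning logical OR; purely illustrative, and not the networks discussed in the film):

```python
# Minimal perceptron: one artificial neuron learning logical OR by
# nudging its weights in proportion to its error on each example.
inputs  = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets = [0, 1, 1, 1]

w = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in zip(inputs, targets):
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out               # perceptron learning rule
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        bias += rate * err

print(w, bias)  # a weight/bias combination that classifies OR correctly
```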


Amazing stuff..

Here are a few articles relating to Thaler and his work:

http://www.prweb.com/releases/2004/09/prweb159636.htm

http://www.nanotech.biz/i.php?id=2002_02_15

I have to wonder what the limitations are to his approach and why we haven't seen it scaled up to supercomputer farms yet (or maybe it has been and it's just not been widely reported). It's actually sort of hard to find out a lot about the guy, and a lot of what he has done is under NDA, so it's all hush-hush.
 
I've stopped short of viewing the videos just yet, but thought this might be interesting.

From DDApe:

I would imagine that a machine with hundreds or thousands of times the raw processing power of man and without the burden of our human limitations will allow super science to progress incredibly fast. It would have no trouble adjusting to ideas of quantum strangeness or paradoxical dilemmas that tend to hold us back. It may invent new mathematics and devise theories and test those theories all on its own and report back to us with concrete plans of how to implement those ideas into engineering projects.

Though Richard Phillips (interviewed recently on Coast by Ian Punnett) has written a work of fiction, he says it is based on work he did in computing for Lawrence Livermore Labs. He adds that his readers often assume the more fantastic technology mentioned in his book is pure fiction, and they are wrong; it's the less fantastic parts that are invented.

The Second Ship, a UFO conspiracy thriller by Richard Phillips

The guy has a rather arrogant laugh and he grew up near Roswell, but his credentials must be legit, because he seems unafraid of claiming how much his conjecture, grounded in his work at Livermore, has a basis in "probable" fact. He worked for private industry, and though he is restrained by a security oath, no three-letter agency got to vet his book prior to publication.

Aside from his belief that much of what we are close to implementing in terms of nanotechnology came from alien sources, he may be right that recent scientific thinking seems alien to us because of the huge strides we have made in too short a period of time. Do what you will with that.

While he said he would not call it AI as yet, we have, at someone's suggestion, been letting computers vet alien hardware through back-engineering, and computers have made great strides when left to their own devices. If that ain't AI in some form, I'm not sure what is.

He did say that we are extremely close to the point where a computer is obsolete by the time it is conceived. Also that within five years, nanotechnological breakthroughs will be sufficient to cure disease and even correct genetic abnormalities.

I've read many articles that support what he says. Forgive me if I've repeated anything the videos support. I'll watch them now.

Phillips grew up near Roswell. He believes we will never hear about the help we've received through alien intervention until our technology matches that of aliens.

Yeah, that's where I said, "Uh oh, this guy's whacked," but I have a rather circular argument going on in my head over the issue of societies collapsing in the face of truly advanced civilizations. It has something to do with our being damned if we do and damned if we don't hear about them, I guess.

Anyway, I offer the book suggestion because despite his beliefs, he may have an inside track to what we really have now instead of what we hope to have in the future.
 

Interesting notion, DamnDirtyApe, I hadn't even considered the limitations of his discourse. The NDA research would likely be using the Creativity Machine to run possible war scenarios/outcome possibilities, etc., IMHO. He also talks a bit about using TCM to damage information infrastructures.
The Creativity Machine will render all of us human artists obsolete. Poi, here's a synopsis for ya. Check this out from 2:20 onward...
 

Absolutely, AGI will have a major impact and not only on Science :)

We currently do not have anything close to a mathematical theory of mind, and it probably won't come anytime soon, but that does not necessarily need to be a showstopper.
I'm a big fan of what Ben Goertzel called the 'integrative' approach to AI, that is, to combine the bits of math which are available *today* (probabilistic decision theory, for example), things that we have learnt from cognitive science and ongoing attempts to reverse-engineer the human brain, plus tricks from the field of narrow AIs (the somewhat 'invisible' and underappreciated AIs we have had 'on the market' for a while).
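
Since probabilistic decision theory is named as one of those available pieces, its core fits in a few lines: choose the action with the highest expected utility. A generic sketch (not OpenCog's actual machinery, and the numbers are hypothetical):

```python
# Expected-utility maximization, the kernel of probabilistic decision
# theory: weight each outcome's utility by its probability and pick
# the action with the highest total. All numbers are made up.
def expected_utility(action, model):
    """Sum of P(outcome | action) * utility(outcome)."""
    return sum(p * u for p, u in model[action])

model = {
    "run_experiment":   [(0.7, 10), (0.3, -5)],  # (probability, utility)
    "gather_more_data": [(1.0, 2)],
}

best = max(model, key=lambda a: expected_utility(a, model))
print(best)  # 'run_experiment' (expected utility 5.5 vs 2.0)
```
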
The link is the video of his presentation at the Singularity Summit 08, with more information on OpenCog, the AGI framework he's developing:

http://singinst.org/media/singularitysummit2008/bengoertzel


From the same summit and somewhat related, a presentation by Intel's Justin Rattner on how Intel is planning to keep Moore's law going for a while :) Be sure to catch the bit about programmable matter.

http://singinst.org/media/singularitysummit2008/justinrattner
 
Thanks, foundryman. I've been arguing with myself since I viewed the video about whether or not Phillips was actually talking about existing AI when he mentioned Livermore's having employed "raw neural networks." I don't see a way around it, however. But if he's being honest, whatever we already have in the works has, to some degree, surpassed what Thaler is describing.

Phillips' thrust is that it is happening just as fast as Thaler says it is. Phillips suggests the inherent danger in the technology as well as what it is already capable of.

He also mentions something else too fantastic for me to take seriously, but it had to do with gamma-ray bursts consisting of light that traveled far faster than Einstein predicts. He says that, from this, there is technology in the works to try to send communications that will never be intercepted.

I guess what I appreciated was the ideas, no matter how fantastic, a lot like how much this thread intrigues me. Good stuff here.
 
A week ago, I'd probably have agreed with you. But we are still a long way off from an AI that could compete with a human on every level. AIs are like savants: they can do one or two things really well, but they lack the full package, and are limited by their programming.

Now, perhaps if they programmed in a fractal-like way of processing information, they might get somewhere closer to a proper AI. If, like a fractal, it processed information, then repeated the process, but not quite the same the second time, and so on, you could have a genuinely self-realised being in its own right.
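
As a toy version of that idea (my own sketch, using the logistic map purely as a stand-in for "reprocess the output, slightly differently each pass"):

```python
# Iterated, self-feeding processing: each step reprocesses the previous
# output and perturbs it slightly, so no two passes are identical.
import random

x = 0.4
r = 3.9                               # chaotic regime of the logistic map
for step in range(5):
    x = r * x * (1 - x)               # reprocess the previous output
    x += random.uniform(-0.01, 0.01)  # "not quite the same" each time
    x = min(max(x, 0.0), 1.0)         # keep the state in [0, 1]
    print(step, round(x, 4))
```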

But AI does not necessarily mean better than human. It could be radically different; it could even be childlike. It may never be able to do things as well as humans can.

So, why have I gone from a position of saying "the machines are coming! the humans are done for!" to a more cautious approach to AI and its capability? This link made me stop a moment and think:

http://www.computing.dcu.ie/~humphrys/newsci.html
 
Yeah, well, robots. I've never had much faith in them for their mechanics. In that realm, I'm totally bereft of any possible understanding. :D

Intel, just perfect the digital radio, please. I want one.
 