
Stephen Hawking: AI could end mankind

So regarding HOW they would get here, most everyone here agrees it isn't necessarily done the way we travel now, that is, moving forward or outward through our space/time, but maybe by warping space around the craft or building wormholes with exotic matter... both still theoretical given our current knowledge and technology. Even if such travel were realized, has its effect upon organic beings ever been brought up in creating these theoretical models?

Since time isn't an issue for inorganic sentient beings (AI robots), they could lie dormant for a million years if they wanted and travel well below light speed until they reached their destination (if light speed really is a limitation).
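A quick back-of-the-envelope sketch of what "way below light speed" means in practice; the 4.24-light-year distance to Proxima Centauri and the cruise speeds are just illustrative values, not claims about any actual probe:

```python
# Rough travel times for a dormant machine probe cruising far below
# light speed. All distances and speeds are illustrative examples.

C_KM_S = 299_792.458      # speed of light, km/s
PROXIMA_LY = 4.24         # distance to Proxima Centauri, light years

def travel_years(distance_ly: float, fraction_of_c: float) -> float:
    """Years to cross a distance at a constant fraction of light speed."""
    return distance_ly / fraction_of_c

for frac in (0.10, 0.01, 0.001):
    print(f"{frac:>6.1%} of c ({frac * C_KM_S:>9,.0f} km/s): "
          f"{travel_years(PROXIMA_LY, frac):>8,.0f} years")
```

Even at a tenth of a percent of light speed, the nearest star is only a few thousand years away, which is nothing to a machine that can sleep through the trip.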

That's why the AI entity propagation theory is pretty high on the possibility scale... from a 2014 science point of view, of course ;)

As for the breakneck 90-degree turns... intuitively speaking, from our point of reference, the object would have to be totally solid... no occupants. Unless it is able to compress space at a 90-degree angle and expand space at 270. Space manipulation... we aren't there yet.
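For a sense of why occupants rule out such turns, here is a minimal sketch using the centripetal-acceleration relation a = v²/r; the speeds and turn radii below are made-up illustrative figures, not measurements of any reported object:

```python
# Rough g-load on anything inside a craft making a tight turn:
# centripetal acceleration a = v^2 / r. Speeds and turn radii
# are illustrative guesses only.

G = 9.8  # m/s^2, one Earth gravity

def g_load(speed_m_s: float, turn_radius_m: float) -> float:
    """Centripetal acceleration during the turn, expressed in g."""
    return (speed_m_s ** 2 / turn_radius_m) / G

for v, r in ((300.0, 1_000.0), (3_000.0, 1_000.0), (3_000.0, 100.0)):
    print(f"v = {v:>7,.0f} m/s, r = {r:>7,.0f} m -> {g_load(v, r):>8,.0f} g")
```

Anything near a true right angle at high speed works out to hundreds or thousands of g, far beyond what organic occupants could survive.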
 
An intelligent being; a being with a base reasoning capacity roughly equivalent to or greater than that of a human being. The word does not apply to machines unless they have true artificial intelligence, rather than mere processing capacity

Ahhh. Thank you. I always appreciate learning new things.

This reminds me of the Star Trek episode "The Return of the Archons"...

The Return of the Archons - Wikipedia, the free encyclopedia

"This is a soulless society, Captain. It has no spirit, no spark. All is indeed peace and tranquility -- the peace of the factory; the tranquility of the machine; All parts working in unison."
- Spock, on the society run under Landru's influence​

 
Post Biological Sophont
An intelligent being; a being with a base reasoning capacity roughly equivalent to or greater than that of a human being. The word does not apply to machines unless they have true artificial intelligence, rather than mere processing capacity
sophont - definition and meaning

If sophonts are possible... I see a couple of possibilities when they emerge. They'll want to know if they are alone in this universe, and they'll shove humans aside in their quest to meet their siblings. Imagine self-awareness on steroids, able to set up and exploit quantum computing power.

The other fun possibility is that space-based sophonts, that have been using humans all along to build the infrastructure over time, establish contact with the new planet-side sophonts to start a new cycle of propagation. :confused:

In that context... humans could be the perfect gravity-bound workers... whereas sophonts, having no biological restrictions, are quasi-perfect interstellar travelers. Perhaps they will carry our seeds to new compatible environments and continue the propagation as planned. That scenario could explain the abduction phenomenon, where sentient robotic beings perpetually feed on organic material like bees going from one solar system to another with their load of pollen (DNA material).

We're all part of a huge hive on a galactic scale :D
 
It's funny how we project human curiosity which compels us to explore onto machines.

They may reason that travel is silly, and decide development in situ is more logical.
 
It's funny how we project human curiosity which compels us to explore onto machines.

They may reason that travel is silly, and decide development in situ is more logical.

Thing is, we can't stay here. The ecosystem has its limits, and we might be Earth's last surviving organic construct in less than 200 years. With the threat of the poles melting, some Japanese dreamers are thinking of setting up self-sustaining underwater cities protected from atmospheric disturbances. Might be a solution if we're stuck here.
 
I happened to be listening to an NPR interview with an expert in computer science about the when and if of AI. He made a wry comment I have heard often down through the years: that AI has been "about ten years away" for many decades now. Everything we've attempted in mimicking, in silicon, the abilities of the 3 lb. lump of goo between our ears has proven to be many times more difficult than anticipated.

So on the one hand, never say never. And the growing abilities of binary systems will continue to amaze and maybe frighten us. But I'm not waiting up for human-class sentience in a non-biological substrate as if it were just around the corner. Much less the ability to upload the essence of our selves into a computer. Recall what Heisenberg had to say about the impossibility of measuring things without altering them. Having been somewhat disillusioned by our lack of progress in space, I see post-modern cyberpunk's relation to near-future reality the way the space-travel dreams of '50s and '60s science fiction turned out to be: perpetually "about ten years away."

I'm playing the role of wet blanket here, and no one sees the future. Maybe we are indeed constructing our future heirs. Maybe my pronouncement will herald a breakthrough that will have computers taking us out for coffee and doughnuts by this time next year. ;)
 
Thing is, we can't stay here. The ecosystem has its limits

This is the old assertion by the Christian preacher Malthus in the 19th century. English oligarchs used this idea to justify their programs for exterminating the poor. It has failed time and time again, but remains popular with people who think lower-class humans are filthy sinners who deserve to be punished for their sins.
 
...that AI has been "about ten years away"

It's always "10 years away" because A.I. is predicated on the material reductionist idea that humans are nothing but biological robots which machines can mimic.

That may be true. Scientists really don't know how memories are stored in the mind. It could be that human minds are like television sets. Our minds may receive data from some ethereal transmission medium, just as television programming is not contained inside your television set.

If that is the case, Artificial Intelligence may approximate the human mind, but never replicate it.

Here is a pretty interesting interview where a Psychiatrist finds telepathy among autistic savant children:

http://www.skeptiko.com/257-diane-powell-telepathy-among-autistic-savant-children/
 
This is the old assertion by the Christian preacher Malthus in the 19th century. English oligarchs used this idea to justify their programs for exterminating the poor. It has failed time and time again, but remains popular with people who think lower-class humans are filthy sinners who deserve to be punished for their sins.

Earth Could Face Another Mass Extinction in Next 200 Years: Study - NBC News
This time it's on a planetary scale; no oligarchs here.

Over the next few centuries, Earth could face a mass extinction on par with the one that killed off the dinosaurs 65 million years ago, according to a new study. A mass extinction is when 75 percent or more of the planet's species die out — an event that has happened five times in Earth's history.

This should happen within the next thousand years: either we build artificial ecosystems (zoos) for wildlife and humans, or we leave and let the planetary system re-balance itself.

A rotating space station for us seems inevitable. No hurricanes, no rising sea levels... global warming? Heh, just open the window LOL... Asteroid collision? Move the station lol.

You want to terraform Mars? First get a colony of people on the space station. During the 6-month trip, reduce the rotation from Earth gravity specs (9.8 m/s²) until you reach Martian gravity levels (3.711 m/s²)... then drop that space elevator cable when you reach Martian geosynchronous orbit ;) ... reverse the process for the return to Earth.
Space elevator - Wikipedia, the free encyclopedia

The actual station could be built using moon resources, and the low gravity on the moon should allow cost-effective launches of station components into space.
Artificial Gravity

[Image: rotating space station]
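Putting rough numbers on that spin-down idea: the centripetal relation a = ω²r gives the rotation rate needed for a target gravity level. A minimal sketch; the 200 m station radius is an assumed example value, while the gravity figures come from the post above:

```python
import math

# Spin rate needed to simulate gravity on a rotating station:
# a = omega^2 * r, so omega = sqrt(a / r).
# The 200 m radius is an arbitrary example value.

RADIUS_M = 200.0
EARTH_G = 9.8      # m/s^2, from the post above
MARS_G = 3.711     # m/s^2, from the post above

def spin(accel: float, radius: float) -> tuple[float, float]:
    """Return (rpm, seconds per revolution) for a target acceleration."""
    omega = math.sqrt(accel / radius)    # angular velocity, rad/s
    period = 2 * math.pi / omega         # seconds per rotation
    return 60.0 / period, period

for name, g in (("Earth", EARTH_G), ("Mars", MARS_G)):
    rpm, period = spin(g, RADIUS_M)
    print(f"{name} gravity at r={RADIUS_M:.0f} m: {rpm:.2f} rpm "
          f"({period:.1f} s per revolution)")
```

At that radius the transition is just a gradual slowdown from roughly 2.1 rpm to about 1.3 rpm over the 6-month trip.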


A Type I civilization extracts its energy, information, and raw materials from fusion power, hydrogen, and other "high-density" renewable resources; is capable of interplanetary spaceflight, interplanetary communication, megascale engineering, colonization, medical and technological singularity, planetary engineering, world government, trade and defense, and stellar system-scale influence; but is still vulnerable to extinction.
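For scale, Carl Sagan's continuous version of the Kardashev rating, K = (log₁₀P − 6)/10 with P in watts, puts a number on how far we are from Type I. A small sketch; the present-day power figure below is a rough, assumed value:

```python
import math

def kardashev(power_watts: float) -> float:
    """Sagan's continuous Kardashev rating: K = (log10(P) - 6) / 10."""
    return (math.log10(power_watts) - 6.0) / 10.0

# ~2e13 W is a rough, assumed figure for humanity's current power use;
# a full Type I civilization corresponds to about 1e16 W.
print(f"Humanity today: K = {kardashev(2e13):.2f}")
print(f"Type I:         K = {kardashev(1e16):.2f}")
```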
 
Oh well, why not this one?

"Fear artificial stupidity, not artificial intelligence
Stephen Hawking thinks computers may surpass human intelligence and take over the world. We won't ever be silicon slaves, insists an AI expert

It is not often that you are obliged to proclaim a much-loved genius wrong, but in his alarming prediction on artificial intelligence and the future of humankind, I believe Stephen Hawking has erred. To be precise, and in keeping with physics – in an echo of Schrödinger's cat – he is simultaneously wrong and right.

Asked how far engineers had come towards creating artificial intelligence, Hawking replied: "Once humans develop artificial intelligence it would take off on its own and redesign itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

In my view, he is wrong because there are strong grounds for believing that computers will never replicate all human cognitive faculties. He is right because even such emasculated machines may still pose a threat to humankind's future – as autonomous weapons, for instance.

Such predictions are not new; my former boss at the University of Reading, professor of cybernetics Kevin Warwick, raised this issue in his 1997 book March of the Machines. He observed that robots with the brain power of an insect had already been created. Soon, he predicted, there would be robots with the brain power of a cat, quickly followed by machines as intelligent as humans, which would usurp and subjugate us.


Triple trouble
This is based on the ideology that all aspects of human mentality will eventually be realised by a program running on a suitable computer – a so-called strong AI. Of course, if this is possible, a runaway effect would eventually be triggered by accelerating technological progress – caused by using AI systems to design ever more sophisticated AIs and Moore's law, which states that raw computational power doubles every two years.

I did not agree then, and do not now.

I believe three fundamental problems explain why computational AI has historically failed to replicate human mentality in all its raw and electro-chemical glory, and will continue to fail.


First, computers lack genuine understanding. The Chinese Room Argument is a famous thought experiment by US philosopher John Searle that shows how a computer program can appear to understand Chinese stories (by responding to questions about them appropriately) without genuinely understanding anything of the interaction.

Second, computers lack consciousness. An argument can be made, one I call Dancing with Pixies, that if a robot experiences a conscious sensation as it interacts with the world, then an infinitude of consciousnesses must be everywhere: in the cup of tea I am drinking, in the seat that I am sitting on. If we reject this wider state of affairs – known as panpsychism – we must reject machine consciousness.

Lastly, computers lack mathematical insight. In his book The Emperor's New Mind, Oxford mathematical physicist Roger Penrose argued that the way mathematicians provide many of the "unassailable demonstrations" to verify their mathematical assertions is fundamentally non-algorithmic and non-computational.

Not OK computer
Taken together, these three arguments fatally undermine the notion that the human mind can be completely realised by mere computations. If correct, they imply that some broader aspects of human mentality will always elude future AI systems.

Rather than talking up Hollywood visions of robot overlords, it would be better to focus on the all too real concerns surrounding a growing application of existing AI – autonomous weapons systems.

In my role as an AI expert on the International Committee for Robot Arms Control, I am particularly concerned by the potential deployment of robotic weapons systems that can militarily engage without human intervention. This is precisely because current AI is not akin to human intelligence, and poorly designed autonomous systems have the potential to rapidly escalate dangerous situations to catastrophic conclusions when pitted against each other. Such systems can exhibit genuine artificial stupidity.
It is possible to agree that AI may pose an existential threat to humanity, but without ever having to imagine that it will become more intelligent than us.

Mark Bishop is professor of cognitive computing at Goldsmiths, University of London, and serves on the International Committee for Robot Arms Control"
 
I don't think we need to worry about AI robots at this stage of the game, not until they start building themselves.

An anti-tank round will take care of any robot smaller than an actual tank.

Of greater concern might be AI or SI getting into the wild on the internet, and then hiving itself off into corporate mainframes etc.

IMO, SI/AI can do us far more damage in that scenario
 
I don't think we need to worry about AI robots at this stage of the game, not until they start building themselves.

That's the goal in AI, isn't it? How could that be stopped once started?

An anti-tank round will take care of any robot smaller than an actual tank.

Suggests that 'we' will have to assign an anti-tank round to each robot released. How is this practicable? Moreover, how is it sane?
 