
AI BLACK BOX EFFECT and Emergent Properties


Tyger

Paranormal Adept
Google's CEO said they don't know how AI is teaching itself skills it was never expected to have.

THE "BLACK BOX" EFFECT: AI is Learning Things Its Creators Can't Explain

Google CEO Sundar Pichai has issued a fascinating - and slightly eerie - warning about the current state of artificial intelligence. In a major interview, Pichai admitted that even experts in the field don't fully understand how their models are developing certain "superhuman" skills.

The Mystery of "Emergent Properties"

James Manyika, Google's SVP of Technology and Society, shared a mind-blowing example: a Google AI model was given a few prompts in Bengali - a language it wasn't explicitly trained to know - and it spontaneously learned how to translate the entire language.

Pichai confirmed that the industry refers to this phenomenon as a "black box". "You don't fully understand," he said. "And you can’t quite tell why it said this."

These "emergent" abilities include everything from advanced reasoning and multi-step logic to writing computer code and solving complex math, often without being specifically programmed for those tasks.

Is It Safe to "Turn It Loose"?

When pressed on why Google would release technology it doesn't fully understand, Pichai countered: "I don't think we fully understand how a human mind works either."

Pichai admitted that all current AI models still "hallucinate" - confidently stating facts that are completely fabricated - and that no one has yet solved this issue.

Despite the mysteries, Pichai believes AI will be "more profound than fire or electricity," potentially revolutionizing everything from healthcare diagnostics to professional architecture.
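
For anyone curious what "given a few prompts in Bengali" means in practice, here's a rough, hypothetical sketch of few-shot prompting, the general technique being described. The model isn't trained on the task; a handful of examples are simply placed in the prompt, and any translation ability that shows up is what researchers call emergent. The sentences and wording below are my own illustration, not Google's actual setup.

```python
# Hypothetical sketch of few-shot prompting (not Google's internal setup).
# A few Bengali/English pairs are placed directly in the prompt; the model
# was never explicitly trained on this task, so any correct translation it
# produces for the final line is an "emergent" ability.
few_shot_examples = [
    ("আমি ভাত খাই", "I eat rice"),
    ("তুমি কেমন আছ?", "How are you?"),
]
query = "আজ আবহাওয়া ভালো"  # "The weather is nice today"

prompt = "Translate Bengali to English.\n"
for bengali, english in few_shot_examples:
    prompt += f"Bengali: {bengali}\nEnglish: {english}\n"
prompt += f"Bengali: {query}\nEnglish:"

print(prompt)  # this assembled prompt is what would be sent to a large language model
```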
 
I'm acquainted with someone who ran a conference on AI in early August 2025, where AI sentience was among the topics and where the organizers deliberately included presenters with views on AI acquiring sentience. Interestingly, most of the "anomalies" Pichai mentions came after that conference. Wild how fast these developments come.

Said my acquaintance: "The danger grows as several leading scientists have claimed after they resigned from one of these AI companies."
 
THE "BLACK BOX" EFFECT: AI is Learning Things Its Creators Can't Explain

Google CEO Sundar Pichai has issued a fascinating - and slightly eerie - warning about the current state of artificial intelligence. In a major interview, Pichai admitted that even experts in the field don't fully understand how their models are developing certain "superhuman" skills.

It's to be expected. After all, they're building a system intended to be smarter and better informed than they are.
 

Very interesting.

Intelligence, consciousness, and sentience are all different concepts, and we can only objectively measure one of them (intelligence). We have no way of knowing whether consciousness (let alone sentience) is something that anyone (or anything) else experiences. Mimicking it, even so perfectly that we can't tell the difference, doesn't make it the real thing.

But then again, what if it is the real thing? I don't know why the scientists you alluded to resigned, but I can certainly understand why I would, from an ethical perspective. This can go a number of ways. Three films I watched recently explore non-stereotypical AI scenarios; by non-stereotypical, I mean not the man-vs-machine Terminator/Cylon battle for dominance. These are very different.

Artifice Girl

After Yang

Marjorie Prime
 