

Isaac Asimov Memorial Debate: Is the Universe a Simulation?


Ok, what’s the circumstantial evidence that leads you to think it’s plausible?

The Asimov debate covers some of that, but off the top of my head here are some thoughts on the idea.

1. The first assumption is that such technology is possible, and that is based on our increasing ability to model virtual universes with the technology we have now, and an extrapolation of the rate of progress for supercomputing, which suggests that eventually a system will be developed that is capable of modeling environments down to the molecular level that follow algorithms modeled on the way nature behaves.


If this is possible ( and it seems reasonable to me to believe it is ), then it's just a matter of time for us. However, why should we think we'd be the first creatures in an infinite original universe to develop the capacity? Maybe we are, but given the odds, it doesn't seem likely, and therefore if it's already been done, then it's likely been done many times by many races, in which case the likelihood that we're in one of those constructs could in theory be greater than the likelihood of us being in the topmost layer ( whatever that is ).


This is sort of similar to an argument made by Nick Bostrom: Are You Living in a Simulation?

2. A computational construct seems to offer plausible explanations for curious phenomena in physics like the "cosmic speed limit", "spooky action at a distance" and "particle wave duality".

3. The fundamental forces of nature are associated with particles, but even particles are abstract ideas about whatever actually composes what we think of as material reality, and nobody knows how those particles or strings ( or whatever the case may be ) are imparted with the properties ( forces ) associated with them.

However as they are imparted with the forces associated with them, then logically, something is doing the "imparting". In a computational model, the imparting consists of rules of the associated algorithms. From the perspective of those within the construct, those algorithms are transparent and seemingly imparted in some mysterious way. The computational construct offers a non-mysterious way to explain that situation, at least for universes within such constructs.

4. Anecdotes where people report what seem to be other realities of some sort suggest multiple universes rather than only one. Other universes fit well with the computational construct model.
 
The Asimov debate covers some of that, but off the top of my head here are some thoughts on the idea.

1. The first assumption is that such technology is possible, and that is based on our increasing ability to model virtual universes with the technology we have now, and an extrapolation of the rate of progress for supercomputing, which suggests that eventually a system will be developed that is capable of modeling environments down to the molecular level that follow algorithms modeled on the way nature behaves.


If this is possible ( and it seems reasonable to me to believe it is ), then it's just a matter of time for us. However, why should we think we'd be the first creatures in an infinite original universe to develop the capacity? Maybe we are, but given the odds, it doesn't seem likely, and therefore if it's already been done, then it's likely been done many times by many races, in which case the likelihood that we're in one of those constructs could in theory be greater than the likelihood of us being in the topmost layer ( whatever that is ).

It is possible. It's just improbable.

An aspect of Turing machines is that they can simulate each other; i.e. one Turing machine can emulate any other Turing machine, as long as the simulating machine is Turing-complete.

The hitch is that you need a larger Turing machine to simulate a smaller one. So anything simulating a virtual universe would by necessity need more computational capacity than the universe it was emulating, and would itself be universe-sized. See, the universe itself and all its physical processes can be thought of as computation steps. So, no computer that existed in this universe could accurately model this universe.
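The universal-machine point can be made concrete with a toy interpreter. This is a minimal sketch of my own ( the machine and its names are invented for illustration ); notice that the simulator necessarily burns at least one of its own steps for every step of the machine it emulates, which is the overhead argument in miniature:

```python
def run_tm(transitions, tape, state="run", blank="_"):
    """Simulate a single-tape Turing machine and count its steps.
    The simulator spends at least one step of its own per step of
    the machine it emulates: simulation is never free."""
    cells, head, steps = dict(enumerate(tape)), 0, 0
    while state != "halt":
        symbol = cells.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
        steps += 1
    return "".join(cells[i] for i in sorted(cells)).strip(blank), steps

# A toy machine that inverts every bit, then halts at the first blank.
flip = {
    ("run", "0"): ("run", "1", "R"),
    ("run", "1"): ("run", "0", "R"),
    ("run", "_"): ("halt", "_", "R"),
}
out, steps = run_tm(flip, "1011")  # out == "0100", steps == 5
```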

Now, it's possible some larger universe could have a computer in it that would simulate this one. But why would it?

You raise an interesting point, however. If the 'outside' universe were infinite, it would be possible to simulate a finite/bounded universe, of which this is one. Our universe is not infinite.


This is sort of similar to an argument made by Nick Bostrom: Are You Living in a Simulation?

2. A computational construct seems to offer plausible explanations for curious phenomena in physics like the "cosmic speed limit", "spooky action at a distance" and "particle wave duality".

Not really. Many of the fundamental forces 'froze out' of the unified plasma that existed after the big bang. If the universe were virtual, there would actually be no need for those things to exist at all. It's more likely that they would be abstracted and simplistic, because the universe appears to be extremely complex, even when we're not paying attention to it. Why?

If you look at what made 3-D games possible, for example, it's things like only modelling what the user is looking at that make the simulation possible. It's a cheat, a hack: only model what the user sees, and fake the rest. One might try to make an argument that this is wave/particle duality in action, except the thing that collapses the wave/particle superposition doesn't have to be conscious.
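To illustrate the "only model what the user sees" hack, here's a minimal sketch ( the function names and the hash-based generator are my own invention, not any real engine's API ) of on-demand world generation, the same trick game engines use with frustum culling and chunk loading:

```python
import functools
import hashlib

@functools.lru_cache(maxsize=None)
def chunk(x, y):
    """Deterministically 'generate' a region of the world, but only
    when something observes it. Unobserved regions cost nothing."""
    return hashlib.sha256(f"{x},{y}".encode()).hexdigest()[:8]

# Nothing is computed until someone looks, and looking twice at the
# same coordinates always yields the same region (a consistent world).
visible = [(0, 0), (0, 1)]
world = {pos: chunk(*pos) for pos in visible}
```

The determinism matters: repeated observation has to agree, otherwise the inhabitants would notice the fakery.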


3. The fundamental forces of nature are associated with particles, but even particles are abstract ideas about whatever actually composes what we think of as material reality, and nobody knows how those particles or strings ( or whatever the case may be ) are imparted with the properties ( forces ) associated with them.
I'm not seeing how this is an argument for anything except "we don't know how the universe works yet."


However as they are imparted with the forces associated with them, then logically, something is doing the "imparting". In a computational model, the imparting consists of rules of the associated algorithms. From the perspective of those within the construct, those algorithms are transparent and seemingly imparted in some mysterious way. The computational construct offers a non-mysterious way to explain that situation, at least for universes within such constructs.
This is causality, and is actually an interesting problem. It's one that the information theory of physics deals with rather neatly. However, this is still a model for us to look at the universe as, not as what the universe is itself.


4. Anecdotes where people report what seem to be other realities of some sort suggest multiple universes rather than only one. Other universes fit well with the computational construct model.
Agreed on this one. However, the multiverse model may also be a good explanation.
 
It is possible. It's just improbable.
I gave what seems to be a good reason to think exactly the opposite: To quote:

"Why should we think we'd be the first creatures in an infinite original universe to develop the capacity [ to develop complex computational constructs ] ? Maybe we are, but given the odds, it doesn't seem likely, and therefore if it's already been done, then it's likely been done many times by many races, in which case the likelihood that we're in one of those constructs could in theory be greater than the likelihood of us being in the topmost layer ( whatever that is )."

In other words, to assume that it's not probable requires us to assume that we're among the first ( or the only ) advanced races in the entire universe ever to conceive of the idea and begin working on the technology to realize it. I don't think that is a reasonable assumption. Even if this is the topmost layer, given the age and size of our universe, the probability that there are beings far more advanced than we are seems astronomically high. So I think it's exactly backward from what you're suggesting.

In other words it is possible no other beings in the entire universe have ever conceived of creating technology to model universes, but it doesn't seem likely that we're among the most advanced ( or only advanced ) beings in the entire universe. But maybe I'm missing some sort of reasoning there that you can help clarify by explaining why you think we are?



An aspect of Turing machines is that they can simulate each other; i.e. one Turing machine can emulate any other Turing machine, as long as the simulating machine is Turing-complete.

The hitch is that you need a larger Turing machine to simulate a smaller one. So anything simulating a virtual universe would by necessity need more computational capacity than the universe it was emulating, and would itself be universe-sized. See, the universe itself and all its physical processes can be thought of as computation steps. So, no computer that existed in this universe could accurately model this universe.

Now, it's possible some larger universe could have a computer in it that would simulate this one. But why would it?

You raise an interesting point, however. If the 'outside' universe were infinite, it would be possible to simulate a finite/bounded universe, of which this is one. Our universe is not infinite.
Right. Which brings up the concept of multiverses, which can be seen as independent programs. We might be instance 7 of Universe 3.0.exe.
Not really. Many of the fundamental forces 'froze out' of the unified plasma that existed after the big bang. If the universe were virtual, there would actually be no need for those things to exist at all. It's more likely that they would be abstracted and simplistic, because the universe appears to be extremely complex, even when we're not paying attention to it. Why?
"Freezing out" doesn't explain how the forces of nature came into being or why they'd be mapped onto their associated particles. This is a problem that nobody knows the answer to. For all intent and purpose, particles are forces and not particles that have "forces", but even that doesn't resolve anything.

Also what we find is that complexity arises out of simplicity through iterations e.g. fractals. Put the algorithms into action and what happens? You get a universe that seems to begin from nothing ( zero iterations of all algorithms ) to the sudden existence of the results of running the program ( the sudden existence of the universe out of nothing ) that would then go through a period of rapid expansion because the initial processing demands would be low, and so on, seemingly paralleling what we'd expect to see.
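The complexity-from-iteration point fits in a few lines of code. This sketch uses the logistic map rather than a fractal, but the principle is the same: a one-line rule, iterated, produces behavior that never settles down:

```python
def logistic(x, r=4.0):
    """One line of algebra: the logistic map."""
    return r * x * (1 - x)

# Iterating the simple rule produces an erratic orbit that never
# settles down: complexity out of simplicity.
x, orbit = 0.2, []
for _ in range(10):
    x = logistic(x)
    orbit.append(x)
```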

If you look at what made 3-D games possible, for example, it's things like only modelling what the user is looking at that make the simulation possible. It's a cheat, a hack: only model what the user sees, and fake the rest. One might try to make an argument that this is wave/particle duality in action, except the thing that collapses the wave/particle superposition doesn't have to be conscious.
Right. But our systems are also in their infancy. What sort of power will they have in a thousand years? I don't know, but I wouldn't bet that modelling a universe would be beyond their capacity.
I'm not seeing how this is an argument for anything except "we don't know how the universe works yet."
Right. I did say it was circumstantial and suggestive rather than conclusive.
This is causality, and is actually an interesting problem. It's one that the information theory of physics deals with rather neatly. However, this is still a model for us to look at the universe as, not as what the universe is itself.
That depends on which universe we're talking about. The computational model doesn't solve the biggest problem ( the turtles all the way down problem ), but it could solve the immediate one, and that would be a huge step if it were the case.
Agreed on this one. However, the multiverse model may also be a good explanation.
The concept of multiverses fits neatly into the computational model ( as indicated above ). Why should a near infinitely powerful system be confined to running only one construct? Running a few in parallel to study certain differences might be advantageous to running them serially.
 
I gave what seems to be a good reason to think exactly the opposite: To quote:

"Why should we think we'd be the first creatures in an infinite original universe to develop the capacity [ to develop complex computational constructs ] ? Maybe we are, but given the odds, it doesn't seem likely, and therefore if it's already been done, then it's likely been done many times by many races, in which case the likelihood that we're in one of those constructs could in theory be greater than the likelihood of us being in the topmost layer ( whatever that is )."

In other words, to assume that it's not probable requires us to assume that we're among the first ( or the only ) advanced races in the entire universe ever to conceive of the idea and begin working on the technology to realize it. I don't think that is a reasonable assumption. Even if this is the topmost layer, given the age and size of our universe, the probability that there are beings far more advanced than we are seems astronomically high. So I think it's exactly backward from what you're suggesting.

In other words it is possible no other beings in the entire universe have ever conceived of creating technology to model universes, but it doesn't seem likely that we're among the most advanced ( or only advanced ) beings in the entire universe. But maybe I'm missing some sort of reasoning there that you can help clarify by explaining why you think we are?

Because it would be very hard, you'd have a very hard time getting information out of it, and it would take a long time.

By hard I mean hard. Computation doesn't come for free. You couldn't model our universe even if you converted each atom of the earth into a computation engine - there are fundamental limits to computation.

You couldn't model our universe even if you converted every atom in the universe to a computation engine. You'd have to model a smaller universe in the universe.

It's actually a very interesting problem.
Limits of computation - Wikipedia

These calculations were actually referenced in Kurzweil's fun Singularity is Near which I wholeheartedly recommend.

    • In The Singularity is Near, Ray Kurzweil cites the calculations of Seth Lloyd that a universal-scale computer is capable of 10^90 operations per second. The mass of the universe can be estimated at 3 × 10^52 kilograms. If all matter in the universe was turned into a black hole it would have a lifetime of 2.8 × 10^139 seconds before evaporating due to Hawking radiation. During that lifetime such a universal-scale black hole computer would perform 2.8 × 10^229 operations.[9]
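As a quick sanity check on the quoted figures, the total operation count is just the rate times the lifetime:

```python
ops_per_sec = 1e90    # Lloyd's bound for a universe-scale computer
lifetime = 2.8e139    # seconds before the black hole evaporates
total_ops = ops_per_sec * lifetime
# total_ops comes out around 2.8e229, matching the quoted figure
```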
But let's say you took Jupiter and converted it into a computer so you could simulate a small, simple universe. You would do this by essentially compressing it into a black hole.

The problem is now how do you program it, and how do you get information out of it. Programming it is maybe possible by collapsing Jupiter in such a way as to create the starting state. Getting information out of a black hole is maybe possible if Hawking is right, but the information transfer would also be bounded. See, it would rely on quantum entanglement in the Hawking radiation given off by the black hole, which also makes it slowly evaporate.

This would not allow for a lot of information to exit the computer. It also means that you might not be able to ask it anything once it's running. You could perhaps get it to answer a question, but you couldn't ask it a new one once it's running, and the answers might not come at all (due to the halting problem) and even if they did, it would be simple.
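For a back-of-envelope number on "it would take a long time": the standard Hawking evaporation estimate t ≈ 5120·π·G²·M³/(ħ·c⁴) applied to a Jupiter-mass black hole. The constants here are from memory, so treat it as order-of-magnitude only:

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s
c = 2.998e8        # speed of light, m/s
M = 1.898e27       # mass of Jupiter, kg

# Hawking evaporation time of a Schwarzschild black hole of mass M.
t_evap = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
# Roughly 6e65 seconds, versus ~4e17 seconds since the big bang:
# the Jupiter-computer would trickle out its answer very slowly.
```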

Right. Which brings up the concept of multiverses, which can be seen as independent programs. We might be instance 7 of Universe 3.0.exe.

This would multiply the problem above, yes?


"Freezing out" doesn't explain how the forces of nature came into being or why they'd be mapped onto their associated particles. This is a problem that nobody knows the answer to. For all intent and purpose, particles are forces and not particles that have "forces", but even that doesn't resolve anything.
Agreed. The universe is truly weird.

Also what we find is that complexity arises out of simplicity through iterations e.g. fractals. Put the algorithms into action and what happens? You get a universe that seems to begin from nothing ( zero iterations of all algorithms ) to the sudden existence of the results of running the program ( the sudden existence of the universe out of nothing ) that would then go through a period of rapid expansion because the initial processing demands would be low, and so on, seemingly paralleling what we'd expect to see.

You're right that you can start with a simple state and use it to drive a complex state. Fractals and strange attractors are great examples of that.


Right. But our systems are also in their infancy. What sort of power will they have in a thousand years? I don't know, but I wouldn't bet that modelling a universe would be beyond their capacity.

Computation has limits. If I were investigating this, I'd look for approximations and examples of compression. That's how we model complex stuff - we decide what we don't care about, and we don't bother computing it. We also get rid of most non-random information - this is what compression is.
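The "get rid of most non-random information" point is easy to demo: structured data compresses dramatically, random data doesn't. The exact byte counts depend on the zlib version, so the comments are approximate:

```python
import os
import zlib

structured = b"abcd" * 250      # 1000 bytes of repeating pattern
noise = os.urandom(1000)        # 1000 bytes of randomness

# Structure compresses away; randomness is already incompressible.
small = len(zlib.compress(structured))   # a couple dozen bytes
large = len(zlib.compress(noise))        # about 1000 bytes, or more
```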


Right. I did say it was circumstantial and suggestive rather than conclusive.
It feels like we're developing a belief system here, to be honest.


That depends on which universe we're talking about. The computational model doesn't solve the biggest problem ( the turtles all the way down problem ), but it could solve the immediate one, and that would be huge step if it were the case.
The concept of multiverses fits neatly into the computational model ( as indicated above ). Why should a near infinitely powerful system be confined to running only one construct? Running a few in parallel to study certain differences might be advantageous to running them serially.

A near infinitely powerful computer would require a nearly infinitely large universe to run it in, and consequently nearly infinite consciousness to construct it - our universe is finite and bounded. It's possible there are universes that aren't.

But even in the multiverse model, those universes are all finite and bounded.

It's an interesting question.
 
Because it would be very hard, you'd have a very hard time getting information out of it, and it would take a long time.
Difficulty is a relative concept. Only a hundred years ago, the high tech stuff we have today may have been conceived of in some fashion, but it was considered unattainable. 200 years ago it's doubtful most of it was even conceived of, and before that, if it was thought of at all, it was purely speculation and philosophy in the minds of very few. Even today, the workings of a lot of tech are a mystery to the end user. So a thousand years from now? I wouldn't bet against technology being developed that hasn't even been imagined yet. And that's only in the next thousand years. What about two or three thousand?
By hard I mean hard. Computation doesn't come for free. You couldn't model our universe even if you converted each atom of the earth into a computation engine - there are fundamental limits to computation. You couldn't model our universe even if you converted every atom in the universe to a computation engine. You'd have to model a smaller universe in the universe.
Why couldn't our observable universe be a "smaller universe"? We only have a limited range of observation. So for all we know, somewhere out there in the blackness beyond is where the limits of the construct are.
It's actually a very interesting problem.
Limits of computation - Wikipedia

These calculations were actually referenced in Kurzweil's fun Singularity is Near which I wholeheartedly recommend.
Ya. That's up my alley alright.
But let's say you took Jupiter and converted it into a computer so you could simulate a small, simple universe. You would do this by essentially compressing it into a black hole.

The problem is now how do you program it, and how do you get information out of it. Programming it is maybe possible by collapsing Jupiter in such a way as to create the starting state. Getting information out of a black hole is maybe possible if Hawking is right, but the information transfer would also be bounded. See, it would rely on quantum entanglement in the Hawking radiation given off by the black hole, which also makes it slowly evaporate.

This would not allow for a lot of information to exit the computer. It also means that you might not be able to ask it anything once it's running. You could perhaps get it to answer a question, but you couldn't ask it a new one once it's running, and the answers might not come at all (due to the halting problem) and even if they did, it would be simple.

This would multiply the problem above, yes?
It was first believed that an optical disc couldn't hold enough data to store a VHS movie. Technological limitations have a long history of being overcome by new technology and ways of doing things. Beyond saying that, I'm no computer scientist, so I don't have the answers. I can only perform logical analysis on a surface level.

So for example if purely photonic quantum computing is a real possibility, then we're actually dealing with beyond light-speed calculations involving superposition and entanglement. That would be orders of magnitude more powerful than the systems we have now, hypothetically capable of instantaneous calculations, or even more strangely, if some of the stuff I've read out there is true, like particles in the present affecting particles in the past, then hypothetically computation can happen before the question is even posed.


So yes, it's a mind-boggling problem for us now, but we already have these conceptual workarounds at our own primitive stage of development. Maybe they're not unlike the way atoms were first hypothesized by the ancient Greeks. So I remain of the opinion that in another thousand years there will be solutions barely imaginable to most people now. And if that's the case, then it seems unlikely that we'd be the first species in the universe as a whole to come up with them, and therefore the probability we were talking about would again be the reverse of your assumption.
It feels like we're developing a belief system here, to be honest.
There's a difference between developing beliefs about possibilities based on reason, and believing something is actually the way some weakly substantiated theory or another suggests that it could be. The computational construct theory is a personal favorite because it seems better than most ancient mythology, and as seen in the Asimov debate, it is being taken seriously by bright modern minds. Cosmologists don't seem to have a better theory either. But I'm not ready to sign up with Matrixism just yet: Matrixism: The path of the One, The Matrix religion
A near infinitely powerful computer would require a nearly infinitely large universe to run it in, and consequently nearly infinite consciousness to construct it - our universe is finite and bounded. It's possible there are universes that aren't.
But even in the multiverse model, those universes are all finite and bounded.
It's an interesting question.
Indeed. My two problems with the computational cosmological model are the ideas of memory storage and consciousness. It's one thing for a super-system to be able to run a set of instructions from moment to moment, and another to store the results of everything on some "cloud". At best I'd suggest that only a few choice facets would be practical to store in memory no matter how powerful the processor itself would be. The other is consciousness. We don't know how that works, so even if a computational model of the human brain can be constructed ( as they're doing as we speak ), it's a leap in logic to assume that it would be conscious. But who knows how that line of inquiry will unfold over the next thousand years? An AI with consciousness? Is it really that far fetched?
 
I guess what I'm struggling with is: how is this different from saying God made the universe?

We're talking about beings that can't exist in this universe, and yet powerful enough to create it. Powerful enough to create a 13.8 billion-year simulation with sufficient complexity to create general intelligence, that can itself create civilizations.

That's a good functional approximation of God, isn't it?
 
Constraints on the Universe as a numerical Simulation: https://www.researchgate.net/profil...on-the-Universe-as-a-Numerical-Simulation.pdf

Notes: This is the paper Zohreh refers to in the Asimov Debate.
This is an interesting approach, no doubt. I think it's basically saying that there should be an underlying uniformity (rotational symmetry) in cosmic rays if we're a simulation.

I'm not sure what this means:
unimproved Wilson fermion discretization
So I'm now reading this:
Fermion doubling - Wikipedia
 
Ah, OK, I think I'm grokking it now.

With the current developments in HPC and in algorithms it is now possible to simulate Quantum Chromodynamics (QCD), the fundamental force in nature that gives rise to the strong nuclear force among protons and neutrons, and to nuclei and their interactions. These simulations are currently performed in femto-sized universes where the space-time continuum is replaced by a lattice, whose spatial and temporal sizes are of the order of several femto-meters or fermis (1 fm = 10^−15 m), and whose lattice spacings (discretization or pixelation) are fractions of fermis [1]. This endeavor, generically referred to as lattice gauge theory, or more specifically lattice QCD, is currently leading to new insights into the nature of matter [2]. Within the next decade, with the anticipated deployment of exascale computing resources, it is expected that the nuclear forces will be determined from QCD, refining and extending their current determinations from experiment, enabling predictions for processes in extreme environments, or of exotic forms of matter, not accessible to laboratory experiments. Given the significant resources invested in determining the quantum fluctuations of the fundamental fields which permeate our universe, and in calculating nuclei from first principles (for recent works, see Refs. [4–6]), it stands to reason that future simulation efforts will continue to extend to ever-smaller pixelations and ever-larger volumes of space-time, from the femto-scale to the atomic scale, and ultimately to macroscopic scales. If there are sufficient HPC resources available, then future scientists will likely make the effort to perform complete simulations of molecules, cells, humans and even beyond

Constraints on the Universe as a Numerical Simulation, via ResearchGate [accessed Sep 21, 2017].

The bolding is mine. I'll keep reading. I'm not sure the author has considered fundamental limits on computation, I'll check.
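The "lattice spacings (discretization or pixelation)" idea from the excerpt can be sketched in one dimension: replace a continuous second derivative with a finite difference on a grid, and watch the error shrink as the pixelation gets finer. This is a generic numerical illustration of my own, not lattice QCD itself:

```python
import math

def laplacian_error(n):
    """Finite-difference second derivative of sin(x) on an n-point
    lattice over [0, pi], compared to the exact value -sin(x)."""
    a = math.pi / (n - 1)              # lattice spacing: the "pixelation"
    f = [math.sin(i * a) for i in range(n)]
    mid = n // 2
    approx = (f[mid - 1] - 2 * f[mid] + f[mid + 1]) / a**2
    return abs(approx - (-math.sin(mid * a)))

coarse, fine = laplacian_error(11), laplacian_error(101)
# A 10x finer lattice cuts the error by roughly 100x (error ~ a^2).
```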
 
Oops, no, I think the author missed it.

There are, of course, many caveats to this extrapolation. Foremost among them is the assumption that an effective Moore's Law will continue into the future, which requires technological and algorithmic developments to continue as they have for the past 40 years. Related to this is the possible existence of the technological singularity [23, 24], which could alter the curve in unpredictable ways.

Constraints on the Universe as a Numerical Simulation, via ResearchGate [accessed Sep 21, 2017].

Moore's law has its limits.
 
And... bingo, here it is:
That is, early simulations use the computationally "cheapest" discretizations with no improvement.

I wish they had a comp sci guy involved in this. Look for approximations and compression.
 
I guess what I'm struggling with is: how is this different from saying God made the universe?
The difference between a computational construct and "God made the universe" is that in the event that it turns out that the universe we're in is a computational construct, those who see the architect as God have deified a universe creator, whereas those who simply recognize the existence of the construct see it as another facet of the universe to learn about.
We're talking about beings that can't exist in this universe, and yet powerful enough to create it. Powerful enough to create a 13.8 billion-year simulation with sufficient complexity to create general intelligence, that can itself create civilizations.
That's a good functional approximation of God, isn't it?
Only if one chooses to deify it. Otherwise it's subject to the same analysis as anything else, and in this example it only boils down to sheer scale and relative power. Are these sufficient reasons to deify something? Maybe for some it is. Not for me.
 
The difference between a computational construct and "God made the universe" is that in the event that it turns out that the universe we're in is a computational construct, those who see the architect as God have deified a universe creator, whereas those who simply recognize the existence of the construct see it as another facet of the universe to learn about.
Only if one chooses to deify it. Otherwise it's subject to the same analysis as anything else, and in this example it only boils down to sheer scale and relative power. Are these sufficient reasons to deify something? Maybe for some it is. Not for me.

What I'm talking about is logical equivalence.

I once wrote a paper in one of my philosophy classes arguing that god could not exist in this universe, and could never look at it.

It went roughly as follows:

Assertion: Quantum mechanics is a thing.
Assertion: QM says that light behaves simultaneously as a wave and a particle as long as you don't measure it.
Assertion: Once you measure light, it collapses the wavefunction such that it behaves as either a wave or a particle.
Assertion: God exists in this universe.
Assertion: God is omniscient.
Assertion: God exists for all time.
Deduction: All wave/particle superposition would collapse from the moment of the big bang until now, because God would be looking at all things for all time.
Deduction: We would therefore exist in a Newtonian universe.
Assertion: We don't live in a Newtonian universe.
Deduction: Therefore, either God does not exist in this universe, or God is not omniscient, or God does not exist at this time.
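For fun, the core of the syllogism can be machine-checked by brute force over truth assignments. This formalization is my own, with g = God exists in this universe, o = omniscient, t = exists for all time, and newt = Newtonian universe:

```python
from itertools import product

# Premise of the argument: (g and o and t) implies newt, because
# constant observation would collapse every superposition.
implies = lambda p, q: (not p) or q

# Look for a counterexample: all three God-assertions true, the
# premise satisfied, yet the universe non-Newtonian.
counterexamples = [
    (g, o, t)
    for g, o, t, newt in product([True, False], repeat=4)
    if implies(g and o and t, newt) and not newt and g and o and t
]
# counterexamples == []: given a non-Newtonian universe, at least
# one of the three assertions about God has to go.
```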

I think I got a B with a comment about being a smartass.
 
Deduction: Therefore, either God does not exist in this universe, or God is not omniscient, or God does not exist at this time.

IMHO, it might be that in order to give human creatures some real freedom of choice at some level, an omniscient, omnipotent Creator would fashion the created realm with a measure of what seems to us like stochasticity: not something out of the Creator's control, but rather a way of giving creatures a measure of freedom to operate within the created realm.

These kinds of questions are great, and even assuming non-theism, there is a LOT to account for. Just one example. Why would a universe that was actually and really based on, derived from, and emergent out of inert, non-living, non-sentient substances ever produce entities that actually think the universe was the creation of an omnipotent Creator?

You may conclude such people are naïve and mistaken, but the thing is, someone like James Clerk Maxwell, who is recognized as near equal to Newton, had no problem with theism. John Polkinghorne is another modern example. Why would an absolutely inert universe ever be tilted in such a way to ever produce the idea of karma, pantheism, polytheism, deism or theism? I think that question is just as demanding as any other, but YMMV.

Congrats on your grade! I once got an A on a paper where I argued that there is no such thing as altruistic generosity, which I would not agree with today, hehe.
 
... Why would an absolutely inert universe ever be tilted in such a way to ever produce the idea of karma, pantheism, polytheism, deism or theism? I think that question is just as demanding as any other, but YMMV ...

"Why"-type questions like the above are like loaded questions in the sense that they presuppose intent, when it's possible there's no intent involved. For example, the answer could be that ideas like karma, pantheism, polytheism, deism or theism are simply byproducts of the way the slightly more evolved monkey-like creatures on this particular rock think. So the question isn't really as demanding as you might have first thought.

A more objective "why"-type question is: Why deify a universe creator ( or anything else for that matter )?

Some possible answers:

  1. I enjoy worshipping things.
  2. Getting other people to worship what I worship puts me in charge.
  3. If I'm in charge then I can give orders and punish people who don't worship.
  4. If I'm not in charge, I'd better worship or the believers will do something bad to me.
  5. It makes for an interesting super-hero type story and fan-club.
  6. It justifies my need to believe in magical powers.
  7. I like to annoy non-theists with nonsense.
  8. Because __________________ ( fill in the blank ).
 
Well, if our free will comes from quantum mechanics, then God's existence would take it away.
 
By hard I mean hard. Computation doesn't come for free. You couldn't model our universe even if you converted each atom of the earth into a computation engine - there are fundamental limits to computation.
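That point can be made quantitative. A rough back-of-envelope sketch, assuming the Bremermann bound ( roughly mc²/h bit-operations per second per kilogram of matter ) and a ballpark figure of ~10^80 particles in the observable universe — both assumptions of mine, not claims from the thread:

```python
# Back-of-envelope: even a planet-sized computer falls absurdly short
# of simulating the universe particle by particle.
C = 2.998e8          # speed of light, m/s
H = 6.626e-34        # Planck constant, J*s
EARTH_MASS = 5.97e24 # kg

# Bremermann's limit: maximum computation rate per kilogram, ~1.36e50 ops/s/kg.
bremermann_per_kg = C**2 / H

# Every atom of the Earth converted into a computation engine:
earth_ops_per_s = EARTH_MASS * bremermann_per_kg  # ~8.1e74 ops/s

# Naive requirement: ~1e80 particles, each updated once per Planck time (~1e43 Hz).
required_ops_per_s = 1e80 * 1e43

print(earth_ops_per_s < required_ops_per_s)  # True: short by ~48 orders of magnitude
```

The particle count and update rate are crude stand-ins, but the gap is so vast that refining them wouldn't change the verdict — which is exactly the "fundamental limits" point.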
The essential problem with a lot of this discussion is that it is being done from our perspective, our understanding of physics. So, yes, from our perspective it would be very difficult to simulate a universe like our own. That is evidence of nothing. It could be that our universe was intentionally designed in this manner to prevent the construction of infinitely deep universe simulations. To assume that any universe outside our own (the one running the simulation of us) functions on the same laws of physics is naive. As brought up early on in the video, were Mario to approach discerning what our universe is like based on the laws and observable constructs of his own world, he would be trudging down a very wrong path.

In regards to referring to our programmers as god(s), if this is truly a simulation, there isn’t anything to debate. They would be our god, our creator. All the other stuff, omniscience, their plans, benevolence, malevolence, answering prayers, or that they pay us any mind at all is just the material of our own varied beliefs that we have made up. We might not even be the main focus of the simulation. We might simply be the Koopas, patiently waiting for Mario to get to our level.
 
Well, if our free will comes from quantum mechanics, then God's existence would take it away.

I'm not sure I follow you exactly, so apologies if I misread you. I think you are saying that God's omniscience would of necessity collapse all wave functions in accord, and only in accord, with his will, eliminating everyone else's free will. If that's your intent, then that is a fair challenge, though it seems to me that a Creator could design even the quantum world in a way that includes a measure of human freedom. Most people are probably sure they have freedom of decision, but I don't think that human freedom necessarily militates against a theistic aspect to reality. Admittedly my opinion.

Well, today I got an email back about my second journal submission this summer, and the editor has legitimate "concerns" that I will have to deal with.
So, sayonara for some time.
 