It depends on which aspects you're trying to simulate accurately. If you consider the physically optimal implementation of any function (e.g. the optimal NAND gate), that system cannot be simulated in real time: the simulation will always be slower, pretty much by definition. Insofar as physics optimally implements itself, you cannot simulate reality in general without a massive performance hit (think about the recursive absurdity of the simulator simulating itself).
What you can try to do is simulate the most salient/important aspects of reality, but the more physically optimal these aspects become, the harder simulation of them will get. For instance, as our computers become faster and more efficient, fast simulation of our reality becomes progressively harder. That's without even considering the physical optimality of natural systems, of course, which I imagine is rather high.
Simulated time does not need to match real time.
Depends on the application. If you're playing a video game, you want real time. If you're trying to predict something, the simulation needs to run faster than real time. If you're trying to model a new technology or a recurrent situation, up to a few orders of magnitude of slowdown is quite acceptable.
> Depends on the application. If you're playing a video game, you want real time.
If the player is part of the simulation they won’t know.
I mean, sure, but the value proposition of a slow simulation containing slow minds is a little dubious, isn't it? It would be neither cheap nor useful, so why would it be made?
There's considerable value actually. You could simulate a whole economy and test economic theories if you could simulate a country full of minds, all with different interests and values. There's no need for this kind of simulation to be real-time.
That there's value in simulating lots of minds is the core premise of the simulation hypothesis.
We can't simply assert that this is valuable, we have to analyze it properly. Mass simulation will happen if the expected value of the simulation exceeds the opportunity cost of, well, everything else we could do with these resources instead.
First, the expected value... it is, frankly, questionable. Certainly it's great in the best case scenario where we stumble upon a good system and then put it in application, but how likely is that? I can see several cases where the entire value would be sunk:
* You will need to run not just one, but many simulations. However, the same argument that implies the simulation will be slower than reality also implies that it will be bigger than reality. The problem isn't merely that this simulation won't occur in real time, it's also that it's going to be physically enormous. Even running a single one would be quite the undertaking.
* It is quite possible that an economic system would work very well at first, only to collapse 50 years later for a variety of difficult-to-foresee reasons. If the simulation runs too slowly, we may have to commit to a system before the simulation can demonstrate its collapse. Or wait centuries, but then the initial conditions may have diverged far too much for the experiment to translate.
* We're not even guaranteed to glean any important insights from such a simulation. Civilization is a chaotic system that involves a ton of smart agents who constantly come up with paradigm-shifting inventions or new ways to bend and manipulate the system. It won't take much time before all you're doing is exploring some random alternate future, although since it runs slower than real time, it's actually some random alternate past. It's not useless, but it's middling.
* And then you have to implement changes. Very sensitive to the current conditions, which change faster than the simulator can keep up with. Good luck.
As for the second part of the equation, the opportunity cost: likely humongous. Insofar as simulation hardware is a general purpose substrate, there is a mind-boggling number of uses for it. If you can simulate minds faster than real time, sure, that's useful. If not, it's unclear to me how a poor imitation of a real thing would even crack the top 10 of things you'd want to do with this hardware.
Sorry for all the text. I think the whole debate can be summarized like this: the simulation hypothesis makes sense if simulation is cheap, it falls apart if it isn't. And I don't think simulation is going to be cheap (unless it's quite crude and approximate, in which case it won't tell us much about reality.)
> * You will need to run not just one, but many simulations. However, the same argument that implies the simulation will be slower than reality also implies that it will be bigger than reality.
You don't need to simulate reality, you only need to simulate the minds that perceive reality. This cuts computational costs by many orders of magnitude over a reality simulation. We don't have the understanding yet to be able to precisely quantify it, but just eliminating the dependency on quantum mechanics is huge.
> * It is quite possible that an economic system would work very well at first, only to collapse 50 years later for a variety of difficult-to-foresee reasons.
And climate models could diverge from reality 100 or 200 years later too, we still run those because reasonably accurate short to medium term information is better than none.
> (unless it's quite crude and approximate, in which case it won't tell us much about reality.)
Even our crude and approximate simulations of quantum chemistry, protein folding, weather, climate, orbital dynamics etc. tell us a lot.
> You don't need to simulate reality, you only need to simulate the minds that perceive reality. Thus cuts computational costs by many orders of magnitude over a reality simulation.
Ehhhhh... let's think about it.
First, that does not eliminate the speed bottleneck, which is determined by what you do decide to simulate accurately. It is... unclear whether human minds can be simulated efficiently. I would not be surprised if the best accurate simulated brain we can muster is ten times slower and ten times bigger than a real one. Or worse.
Second, surely you need to simulate more than minds. You need some kind of environment to put them in. But if you're giving them some pale ersatz of reality in which they cannot develop effective technology, because stuff stops working when they stop looking at it, I'd argue you're wasting them. Minds act very differently depending on their capabilities --- simulating minds without simulating reality is a bit like studying the behavior of birds with clipped wings.
> just eliminating the dependency on quantum mechanics is huge.
What makes you think it can be eliminated? I'm not saying minds require quantum mechanics in principle, but evolution doesn't try to make its designs robust to changes in the laws of physics. There's a sizable chance that simulation of a human brain fails if you ignore quantum effects, simply because that violates its calibration. You would therefore need to reimplement minds from first principles, but that's a whole other problem.
> And climate models could diverge from reality 100 or 200 years later too, we still run those because reasonably accurate short to medium term information is better than none.
These models run far, far, far faster than real time. You can't forecast a phenomenon using a simulation that's slower than the phenomenon itself.
Reminds me of an excellent conference talk I went to once - landslip modellers in BC, Canada spend a lot of time trying to prevent slips from damaging the railways that wind through the mountainous terrain.
There was a lot of research into statistical models of slips – in particular boulder movement and how they break up – which didn't really work.
It turned out that game engines with good dynamic object destruction are pretty good at modelling them, and they were able to get much better results using the game engine vs statistical models.
Also reminds me of how Plague Inc (a Miniclip game, pretty good if you haven't played) was very realistic at modelling disease outbreaks during ebola and covid times, prompting the makers to put out a statement. [0]
[0] - https://arstechnica.com/gaming/2020/01/plague-inc-maker-dont...
> There was a lot of research into statistical models of slips – in particular boulder movement and how they break up – which didn't really work.
You reminded me of a story from Harrington Emerson (The Twelve Principles of Efficiency).
As a railroad worker, he frequently saw rails flooded by rain. There are at least two solutions.
First, build an embankment, so the rails sit well above the water level, which would cost at least a few million dollars. And you'd still suffer all the problems with slips in the future.
Second, dig a trench near the rails (in most cases it's obvious where to dig) and just constantly pump water out of it, so all the rainwater gets pumped away. That would cost a few hundred dollars for the digging and less than a thousand dollars per year to maintain (100 years ago; now, surely, more). No slips at all, because when the ground is dry, no slip happens.
Guess which one management usually chose, and why? :)
BTW, greetings from Kyiv, Ukraine, a country now at war, and a city built entirely on hills, where over a hundred years were spent building a drainage system under the hills, so in recent decades slips are very rare.
“ Sweeney, Epic’s C.E.O., has the backstory of tech-founder lore—college dropout, headquarters in his parents’ basement, posture-ruining work ethic—and the stage presence of a spelling-bee contestant who’s dissociating.”
Okay. Made me laugh.
The simulation we're living in simulates reality down to the Planck length, so maybe that's the answer.
The Planck length is analogous to the pixel. Our parent universe has a much tinier Planck length, and hence it acts as an ultra high resolution monitor, on which it can fit all the pixels of our universe, as well as an abundance of parallel universe simulations, all displayed on the same monitor.
Like our parent simulators, we will need to increase the Planck length for the simulations we create (i.e., reduce their resolutions), so that we can fit an abundance of simulations in our universe and carry on this important tradition. The grand monitor above us will take on a fractal appearance, with very tiny monitors displayed within it.
The turtle on top of all of this — the tiniest possible monitor, rendered with the fewest conceivable pixels — our great-great-great granddaughter simulation — will be a blocky Minecraft simulation. The people there will marvel that so much complexity can arise from so few "parts". Perhaps their superintelligence will be able to peel back the layers of their parent simulations to find our Hacker News comments, and their superintelligence will report back to them our words of wisdom. To them I say, "It wasn't easy, kids! Eat your vegetables."
I hate that we're arrogant enough to make analogies like this.
You just don’t get the joke
What's the most intuitive explanation for why a Planck length exists?
To simplify equations that create ratios between distance, time and energy. That's literally it. Using Planck units is a math trick to remove a bunch of constants from a bunch of equations.
The Planck length doesn't really have any known physical importance, at least no more than a meter does. There are some things that happen around a Planck length (for example, current theories predict that black holes need to be bigger than a Planck length to exist, or alternatively the Planck length is the point where quantum uncertainty completely overtakes any other classical theory), but particles can "move" distances less than a Planck length (with lots of caveats, mainly because quantum uncertainty makes the notion of "moving" in the classical sense kinda weird and barely applicable, but still, they _can_).
A common belief is that the Planck length is kind of a "minimal distance", and people start thinking that it's the "pixel of the universe", but there's no actual theoretical framework saying that. It's just a common misconception.
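For what it's worth, the number itself is just dimensional analysis: the Planck length is the unique combination of ħ, G, and c with units of length. A quick sketch using standard CODATA values:

```python
# The Planck length is just the combination of constants with units of
# length: l_P = sqrt(hbar * G / c^3). Nothing in established physics
# singles it out as a "pixel size"; it's where dimensional analysis lands.
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light in vacuum, m/s

planck_length = math.sqrt(hbar * G / c**3)
print(planck_length)    # ~1.616e-35 m
```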
The Planck length is where quantum gravity overtakes nonrelativistic theory; everyday quantum stuff happens on a scale about 25 orders of magnitude larger. For comparison: the universe is about that much bigger than you.
I don't think the universe/reality bothers with whys. It just is, just like everything else; deal with it. But it does put an effective stop to those fantasies that reality is like an endless fractal.
Yes, but that doesn't necessarily mean fidelity levels are recursive.
At macro scales this reality behaves as if continuous. It's only at low fidelity levels, and specifically at points of interaction, that it becomes discrete.
So if we are in a simulation, it seems to be one of a reality where real computing would have been possible. Given we almost certainly won't have access to real computers, it's unlikely we'll be simulating our own reality anywhere near as detailed as this one.
i'm sure dark matter is a consequence of some efficiency-boosting lazy instantiation routine in the sim! ;-)
No, I think it's much more likely that dark matter is just a by-product of some very sloppy spaghetti coding.
We've already got Madden NFL 09, how more real do people want to get?
If you think back to Descartes and a lot of the philosophy about determinism and free will, they had quite a naive understanding of systems and determinism because their mental model of a complex system was a clockwork watch or a steam engine with lots of gears.
Because their argument was that if the universe is deterministic, everything seems a bit scary, because what's going to happen is already 'known' and so we have no free will.
But now we have a better understanding of how mind-bogglingly complex deterministic systems can be - broadly this is what Chaos Theory is all about - e.g. sensitive dependence on initial conditions - you can model how a magnetic pendulum moves between three magnets, but the tiniest infinitesimal change in the starting position makes a big difference to the eventual path. And we know that through various fundamental forces, every subatomic particle interacts with every other one in its light cone.
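Sensitive dependence is easy to demonstrate with any toy chaotic system. Here's a minimal sketch using the logistic map as a stand-in for the magnetic pendulum (which behaves analogously but is messier to code):

```python
# Sensitive dependence on initial conditions, illustrated with the
# logistic map x -> r*x*(1-x) at r=4, a standard toy chaotic system.
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.400000000)
b = logistic_orbit(0.400000001)  # starting point shifted by 1e-9

# The tiny initial gap roughly doubles each step, so after ~30-50 steps
# the two trajectories bear no resemblance to each other.
print(abs(a[-1] - b[-1]))
```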
And so if we assume that the universe is deterministic, we can see it would be impossible to build a computer to exploit that determinism and 'calculate ahead' to see what state the universe would be in at some future point. Two main reasons - (a) the computer would inevitably use more atoms than the atoms it was trying to simulate and (b) sensitive dependence on initial conditions.
And so even in a deterministic universe, the future is literally unknowable. So you can think of the universe as a computer that is calculating its own future, one picosecond at a time. We can't jump ahead. (we can, of course, use simplified models to give us good guesses for specific short term situations)
That means determinism is nothing to worry about, in terms of free will. (and anyway, how would the opposite - randomness - be any better for free will?)
(so what is Free Will? It's not really a concept that makes any sense - "I chose to do X but in completely identical conditions I could have been free to choose Y" - how do you even test that? But if you want a colloquial definition: broadly, I'd say we have something that you might choose to call "free will" because what we do next is (usually) proximately determined by processes inside our skulls. And yes, if you want to trace the state of our brains back in time, eventually all of it originated from outside our skulls, memories etc, but let's set all that historical complexity aside. What we do next is broadly determined by processes within us. As opposed to a rock rolling down a hill, which is just subject to external forces. I think that's the only thing Free Will can really mean)
I often wonder why can there not exist some sort of algorithmic shortcut or compression method that allows us to: a) store the state of our universe inside our universe; and/or b) emulate our universe at > realtime?
If you could store the state of the universe using less than 100% of the universe, then you could use the algorithm recursively on the result, over and over again, to compress it infinitely -- which suggests that it contains no information. And if you could, generally speaking, simulate the universe faster than real time, you could simulate the simulator faster than it runs, meaning that you could construct a machine that predicts its own output and then purposefully outputs the exact opposite, leading to a contradiction.
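The self-prediction contradiction is essentially a diagonalization argument, and it can be sketched in a few lines. Everything below (the `contrarian` machine, the toy predictor) is illustrative, not any real API:

```python
# Sketch of the self-prediction contradiction: hand any claimed
# predictor a "contrarian" machine that consults the predictor about
# its own output, then does the opposite.
def contrarian(predictor):
    """A machine that asks its predictor what it will do, then defies it."""
    prediction = predictor(contrarian, predictor)
    return not prediction  # always output the opposite of the prediction

def some_predictor(machine, predictor):
    # Whatever concrete guess a predictor commits to...
    return True

# ...is immediately falsified by the machine it was predicting:
assert contrarian(some_predictor) != some_predictor(contrarian, some_predictor)
```

The same trick works for any predictor you substitute in, which is why a universe-internal faster-than-real-time simulator leads to contradiction.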
Another argument is, imagine that you are building a machine that takes ingredients as an input on a tray, and outputs a baked cake. There's a nearly infinite number of ways to build such a machine. Some of them are faster than some others. Imagine the very fastest cake-baking machine that could physically exist. Any simulator will be slower than real time to simulate that particular machine -- because it is an optimal machine -- and any event that depends on the cakes it makes will have to wait, so it becomes a bottleneck of the system.
> That means determinism is nothing to worry about, in terms of free will. (and anyway, how would the opposite - randomness - be any better for free will?)
You've misunderstood the root of their concern, it isn't practical predictability but omniscient predestination.
I guess you're right, I had translated it into a modern analogous concern.
It's interesting to think about. Do the same arguments I've already made apply? So: if the universe is deterministic, then God only has omniscient predestination if he/they are able to model the whole universe in their ethereal mind as a hyper-detailed simulation. Because even God can't skip ahead in those calculations just because it's 'deterministic'. Something actually has to do the work of 'determining'.
Even if their supernatural powers allow them to skirt around the huge size of the simulation and the problem of sensitive dependence on initial conditions, it's still a very redundant exercise, because the simulation then becomes an exact copy of the universe. Let's say God can run this simulation faster than the original - then 'our' universe becomes a redundant laggard and the real action is happening inside the God's Mind Model one.
And you're still in the situation where nobody knows what our supposedly 'deterministic' brains are going to do until someone crunches the impossible universe-sized numbers to work it out. So still in this scenario we have something akin to Free Will, although I hold to my argument that Free Will as originally conceived is a meaningless, confused concept that sounds very reassuring to people but doesn't make any sense (I did X but in completely identical conditions I would have been free to do Y).
If you don't agree with this assertion that a deterministic system has to be actually determined to find out where it goes - if you posit that God could just magically 'jump' to some future state of the universe without running the model in between - then all bets are off, because even the supposed Free Will they imbued us with could also be subject to that same magical prediction process.
> Because even God can't skip ahead in those calculations just because it's 'deterministic'. Something actually has to do the work of 'determining'.
I think you haven't internalized what "omniscient" actually means. Your statement is probably true about non-omniscient beings, but I don't see how you can make that assertion about an omniscient state. More likely I suspect that you just don't believe omniscience is possible. There has been a LOT of theological discussion of this topic, most of which I am not familiar with.
I do believe in something like "omniscience", but base it in the ubiquity of math, not the omnipotence of an alleged creator. Mathematical structure exists independent of the human mind. When we calculate the result of a calculation or prove a new theorem, we are not creating something new, but uncovering pre-existing structure. This is why such truths can be independently discovered and verified. I think this mathematical structure is the fundamental nature of reality. Every possible variation of every possible simulation exists without needing any specific "reality" to be running that simulation inside it. (Though, for every finite simulation, there would be infinite other finite simulations that contain it.)
Comment was deleted :(
we can't even perfectly simulate a single hydrogen atom (perturbative theory would need infinitely many terms to be exact)
I don't know about "perfectly", but we can simulate hydrogen from QCD; we just estimate the path integral directly.
I am not sure what you mean by hydrogen.
QCD is about the strong forces, i.e. about the nucleus, which is a proton or a deuteron in the case of hydrogen.
However, nobody has ever succeeded in making a useful simulation of the proton or of any other hadron.
Such a simulation must be based on a small number of universal parameters, e.g. the masses of the quarks etc., and it must be able to compute useful physical quantities, e.g. the masses of the hadrons, the magnetic moments of the hadrons, the energies of their excited states and so on, with a precision comparable with that of the empirical measurements.
Nobody has so far succeeded in performing such a simulation.
On the other hand, if we assign to the proton the properties determined by empirical measurements (i.e. mass and magnetic moment), quantum electrodynamics can be used to compute with high precision various properties of the proton-electron system, i.e. of the hydrogen atom.
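To make that last point concrete: once you treat the proton as a given point particle, even the pre-QED Schrödinger/Bohr result already pins down hydrogen's gross energy levels, which QED then refines with tiny corrections (fine structure, Lamb shift). A back-of-the-envelope sketch, not a QED calculation:

```python
# Hydrogen's gross energy levels from the Bohr/Schrodinger model:
# E_n = -13.6 eV / n^2, taking the proton's measured properties as given.
RYDBERG_EV = 13.605693  # Rydberg energy in eV

def hydrogen_level(n):
    """Energy of the n-th level of hydrogen, in eV (gross structure only)."""
    return -RYDBERG_EV / n**2

# Energy of the Lyman-alpha transition (n=2 -> n=1):
print(hydrogen_level(2) - hydrogen_level(1))  # ~10.2 eV
```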
Not only that, we can't even simulate a single nucleon
This reads like an Epic Games PR piece. Good for them but there’s nothing new here for close followers of 3d tech.
Right.
The new thing, for people who don't follow this, is that UE5 has crawled most of the way out of the uncanny valley. They still can't do totally convincing humans in real time, but they are getting close.
you don't need to simulate everything in a big monolith with all the complexity of everybody in it; that does not scale well.
You only need to simulate your own perspective: what you can see and observe, and the interactions you make right now.
Same as video games just render in a certain radius around you.
That scales well per human. Heck, everybody around you might be an NPC and you would never know
You can still simulate it as a whole and have it scale:
- You can simulate whatever is not observed statistically and collapse to a discrete reality only when actually observed
- You can handwave away whatever is outside the observable universe since information (and thus effects) will never reach the observer
- Heck, you can even handwave away whatever has been observed and over time return to statistics thanks to the n-body problem, n>2; it would be somewhat incorrect, but no one would be able to check that it is wrong as long as it fits within a light cone
I think that it's actually more "perfect" than simulating every atom, in the sense that it matches quantum mechanics, foregoing the intuitive model of a fully deterministic Newtonian universe that would need to be simulated "perfectly" everywhere all at once down to Planck scale.
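The "collapse only when observed" idea is essentially lazy evaluation with memoization. This toy `LazyRegion` class is purely illustrative, a sketch of the scheme rather than a claim about how such a simulator would actually work:

```python
# Toy sketch of "simulate statistically, collapse on observation":
# a region carries only a statistical summary until something looks at
# it, at which point a concrete state is sampled once and cached, so
# repeated observations stay consistent.
import random

class LazyRegion:
    def __init__(self, mean, spread):
        self.mean, self.spread = mean, spread  # statistical description
        self._state = None                     # concrete state, if observed

    def observe(self):
        if self._state is None:
            # First observation: collapse the statistics to one outcome.
            self._state = random.gauss(self.mean, self.spread)
        return self._state                     # later observations agree

region = LazyRegion(mean=0.0, spread=1.0)
assert region.observe() == region.observe()  # consistent once observed
```

Unobserved regions cost only their summary statistics; the price is that heavily observed regions (say, full of measuring instruments) must all be materialized, which is the scaling objection raised below.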
Depending on capabilities within the simulation, that method scales unpredictably. If you can build supercomputers and run programs on them, these need to be simulated accurately at all times lest it becomes noticeable that they don't perform according to spec. Basically, the more machines you build, the larger the set of interactions that must be simulated becomes, and if your simulator's capacity is limited, then either things have to break from the perspective of the simulation, or the simulatees are effectively performing a denial of service attack on the simulator.
interesting perspective. So all entities in one big simulation to ensure everyone has the same clock/variables/... and then just simulate around the person?
How perfectly can the reverse light cone be simulated? That's something I think about more and more. It seems plausible and, moreover, every bit as interesting as exploring what's left in the whole rest of the universe.
The world is chaotic.
> In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time.
https://en.wikipedia.org/wiki/Chaos_theory
(The above applies equally to the past)
What would a universe look like where everything can be perfectly simulated in real time? I guess an empty universe would work. Or a universe that is a boring state machine.
Comment was deleted :(
How much time do you have? In real time, I'm not super optimistic. If you don't care about speed you're just limited by storage and maintenance.
Never perfectly enough. We can never know true knowledge/wisdom which is reserved for Gods. We can at most be lovers of knowledge/wisdom.
And what if those "gods" are just hypervisors/controllers/programmers of the simulation?
Not only that: if the simulation theory is true, then it's most likely that there are more levels of simulations, and we live in one of the many simulations within simulations. I've read somewhere that this could be more probable than the chance of living in an original first universe. Btw, this is esoteric, but NDEs usually speak of akashic records, whereby everything in this universe is being recorded and can be revisited.
If I fully simulate a car and its internals, I'm pretty sure I don't need to know about its "gods" wisdom/knowledge; I've just got to replicate how the system behaves.
Forget about it; unless quantum computers get here, we can't simulate deep stuff (from what I hear from my Denmark contacts, 2035)
At some point we'll build a perfect mimicry of the natural world.
The interesting part is what happens in the first fork.
"At some point we'll build a perfect mimicry of the natural world."
A perfect simulation of the natural world, would be a second natural world.
This assumes that the natural world isn't itself simulated, or it's not a second natural world, but an nth natural world.
Turtles all the way down.
https://qntm.org/responsibility
Wondering if this from GGP was a reference to that:
> The interesting part is what happens in the first fork.
Yes it does, but so far there are lots of indications for infinite complexity and 0 indications for it being a simulation.
Given we're in a universe that at macro scales appears to be continuous, and even continues to appear that way at micro scales until interacted with leaving persistent information around state changes (and switching back if the information is erased, like any good memory optimization might), we should probably seriously entertain that being in an emulated continuous universe with discrete side effects would mean real computing would be on the table for a foundational reality.
Can you point to any "indications of infinite complexity" that couldn't be run on a real computer?
"Can you point to any "indications of infinite complexity" that couldn't be run on a real computer?"
Have you ever seen an accurate long term weather forecast?
Aren't MATLAB, Simulink, and Simscape already doing this in much finer detail?
[dead]