I think it’s really interesting to see the similarities between what Wolfram is saying and the work of Julian Barbour on time being an emergent property. Both suggest a similar underlying ontology for the universe: a timeless, all-encompassing realm containing all possible states / configurations of everything. But what’s really fascinating is that they reach this conclusion through different implementations of that same interface. Barbour talks about a static geometric landscape where time emerges objectively from the relational (I won’t say causal) structures between configurations, independent of any observer. On the other hand, Wolfram’s idea of the Ruliad is that there’s a timeless computational structure, but time emerges due to our computational limitations as observers navigating this space.
They’ve both converged on a timeless “foundation” for reality, but they’re completely opposite in how they explain the emergence of time: objective geometry vs. subjective computational experience.
I was literally thinking of the same similarities. Barbour's exposition of the principle of least action as being time is interesting. There's a section in The Janus Point where he goes into detail about the fact that there are parts of the cosmos that (due to cosmic expansion) are farther apart in terms of light-years than the universe is old, and growing in separation faster than c, meaning that they are forever causally separated. There will never be future changes in state from one that result in effects in the other. In a way, this also relates to computation, maybe akin to some kind of undecidability.
Another thing that came to mind when reading the part about how "black holes have too high a density of events inside of them to do any more computation" is Chaitin's incompleteness theorem: if I understand it correctly, that basically says that for any formal axiomatic system there is a constant c beyond which it's impossible to prove in the formal system that the Kolmogorov complexity of a string is greater than c. I get the same kind of vibe with that and the thought of the ruliad not being able to progressively simulate further states in a black hole.
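Tangentially, the asymmetry in Chaitin's theorem is easy to poke at with a compressor. A toy Python sketch (zlib is just standing in for "a short description"; exhibiting one only ever gives an upper bound on K(s), and the theorem says provable lower bounds stop at some constant c):

    import os, zlib

    # We can always *upper-bound* K(s) by exhibiting a short description of s
    # (here: zlib's output plus a fixed decompressor).  Chaitin's theorem is
    # about the other direction: past some constant c, no fixed formal system
    # can prove K(s) > c for any particular s, even though almost all strings
    # are incompressible.
    samples = [("repetitive", b"a" * 1000),
               ("structured", bytes(range(256)) * 4),
               ("random", os.urandom(1000))]
    for label, s in samples:
        print(f"{label:>10}: {len(s):>4} bytes -> K upper bound ~{len(zlib.compress(s, 9))}")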
>There's a section in The Janus Point where he goes into detail about the fact that there are parts of the cosmos that (due to cosmic expansion) are farther apart in terms of light-years than the universe is old, and growing in separation faster than c, meaning that they are forever causally separated. There will never be future changes in state from one that result in effects in the other. In a way, this also relates to computation, maybe akin to some kind of undecidability.
Oh, I love this hint. However, even taking for granted that no faster-than-light travel is indeed an absolute rule of the universe, that doesn't exclude wormholes or entangled particles.
https://scitechdaily.com/faster-than-the-speed-of-light-info...
It would be nice if this were a problem with decidability, but often it is a problem with indeterminacy that is far stronger than classical chaos.
The speed of causality, or of information, is the limit; that limit is the speed of light.
Even in the case of entanglement, useful information is not FTL. If I write "true" on one piece of paper and "false" on another and randomly send them to Sue and Bob, Sue instantly knows what Bob has as soon as she opens hers. While we teach QM in a way similar to how it was discovered, there are less mystical interpretations that are still valid. Viewing wave function collapse as updating priors rather than as an observer effect works, but is pretty boring.
While wormholes are a prediction of the theory, we don't yet know if the map matches the territory. That is a reason to look for them, but if we do find them, it is likely that no useful information will survive the transit through them.
Kerr's rebuke of Hawking's assumption that black hole singularities are anything more than a guess, drawn from a very narrow interpretation of probably unrealistic non-rotating, non-charged black holes, is probably a useful read.
The map simply isn't the territory, but that doesn't mean we shouldn't see how good that map is or look for a better one.
Kerr's paper that was referenced above.
> There will never be future changes in state from one that result in effects in the other.
You are assuming that the principle of locality is true and proven. From my understanding, this is far from being the case.
You can’t really prove things in physics, but to my knowledge we don’t have observations that contradict locality.
I've been thinking about this comment a bit. What do you mean that it's far from being proven? Wouldn't this mean there is some evidence for something faster than c?
Actually, the parts of the universe receding from us faster than the speed of light can still be causally connected to us. It’s a known “paradox” with the following analogy: an ant walks along an elastic band toward us at speed c, and we stretch the band away from us by pulling on the far end at a speed s > c. Initially the ant, despite walking in our direction, gets farther away, but eventually it does reach us (in exponential time). The same is true for light coming from objects that were receding from us at a speed greater than c when they emitted it. See https://en.m.wikipedia.org/wiki/Ant_on_a_rubber_rope
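For anyone who wants to see the numbers, here's a minimal Python sketch of the setup (the band length, ant speed, and stretch rate are arbitrary illustrative values):

    import math

    # Ant on a rubber rope: the ant walks at speed c while the far end of the
    # band is pulled away at speed s > c.  We track the *fraction* of the band
    # covered; it grows by c / L(t) per unit time, with L(t) = L0 + s*t.
    L0, c, s = 1.0, 1.0, 5.0   # illustrative values, s > c

    dt, t, fraction = 1e-4, 0.0, 0.0
    while fraction < 1.0:
        fraction += c / (L0 + s * t) * dt   # progress is diluted as the band stretches
        t += dt

    print(f"simulated arrival time: {t:.2f}")
    print(f"closed form (L0/s)(e^(s/c) - 1): {L0 / s * (math.exp(s / c) - 1):.2f}")

The closed form shows the "exponential time" directly: the arrival time grows like e^(s/c).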
They will never reach us because the rate of expansion is accelerating.
That article doesn't back up your claim.
Yes it does, look at the caption of Fig. 1: "Photons we receive that were emitted by objects beyond the Hubble sphere were initially receding from us (outward sloping lightcone at t <∼ 5 Gyr). Only when they passed from the region of superluminal recession vrec > c (gray crosshatching) to the region of subluminal recession (no shading) can the photons approach us".
I can’t reply to your last reply. I agree; in fact, I said those regions can still be causally connected to us, not that they are.
Those photons aren't superluminal; they are in our past light cone. They were headed our way before the emitter was beyond the horizon.
It gets complicated, because the concept of 'now' is a local property, and because those objects aren't moving away faster than light; the space in between is expanding.
It shows that SOME “superluminal” photons can reach us, not that ALL can. With accelerating expansion, eventually all galaxies fall out of that interval and become unreachable.
Without time you’d be everything all at once, which isn’t capable of having an experience; that is to also say, a location.
To have experience requires a position relative to the all; the traversal of the all is time.
More like a play head on a tape: you’re the play head, traversing and animating your own projection.
The universe doesn't need to evolve for us to have experience. We would experience evolution through the state space because its structure is oriented so as to experience evolution through time. Each point in experience-time (the relative time evolution experienced by the structure) is oriented towards the next point in experience-time. Even if all such points happen all at once, the experience of being a point in this structure oriented towards the next point is experienced subjectively as sequential. In other words, a block universe would contain sequences of Boltzmann brains who all subjectively experience time as sequential.
The real question is why would such a universe appear to evolve from a low entropy past following a small set of laws.
Well, it doesn’t evolve. You just render it as evolving to perceive yourself / itself. The only way to have the state of being of observation and perception is to not be everything which gives rise to directionality.
One of my toy theories-of-everything is that we live in a branch of something akin to a Mandelbrot set. A trivial rule is all that is needed to produce infinite complexity. Sure, zoomed out a fractal can look simple, and even zoomed in (a lot!) it still looks trivially repeating, but if you zoom in enough, eventually the complexity becomes high enough to represent something like the universe and the life within it. You can even squint at it and, just as the Mandelbrot set appears to fork repeatedly, parallel universes (like in MWI) could be forking off by dint of following one path or another through this fractal space.
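For anyone who hasn't seen how little machinery "a trivial rule" means here, this is the whole thing in a few lines of Python (the grid bounds and iteration cap are arbitrary rendering choices):

    # The entire "rule" is one line: z -> z*z + c.  Everything else is rendering.
    for im in range(10, -11, -1):
        row = ""
        for re in range(-40, 21):
            c, z = complex(re / 20, im / 10), 0j
            for _ in range(50):          # iteration cap: arbitrary
                z = z * z + c
                if abs(z) > 2:           # escaped: outside the set
                    row += " "
                    break
            else:
                row += "*"               # never escaped: (probably) inside
        print(row)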
That’s funny: https://x.com/0x440x46/status/1824145776295154084?s=46
This makes a good argument that the block universe can't exist: https://aeon.co/essays/who-really-won-when-bergson-and-einst...
(search "block")
That's not saying it can't exist, it's just saying you can't go outside the universe to look at it.
But wouldn’t each brain still be frozen in a moment of time? Don’t you still need something that moves the “play head of the universe” from one moment to the next?
If your experiences were played out of order in some kind of "God's eye" time, how could you notice? The experience of each moment seems continuous due to our memory of the recent past. But this memory is just a configuration of our current state. The actual ordering of the evolution of this state doesn't influence the directionality of the subjective experience of evolving through time.
A god’s eye perspective still requires time. The absence of time implies nothing can change because time is required to differentiate two states. The notion of “observation” implies change because you’re learning something new.
You could say we exist in a simulation and the entities outside the simulation can pause the simulation or pre-compute the simulation so that it’s static but then you’re just kicking the can down the road because they would need their own notion of time to observe the simulation they created.
I don't see how this responds to the thrust of the argument. The argument is that if order doesn't matter to the directionality of subjective time, then having no order doesn't matter either.
Time isn't required to differentiate two states just as time isn't required to differentiate two static regions of space. The features of the thing can do the differentiation. Whether you consider all of block spacetime as a single entity or subdivided in various ways is a matter of convention. But regions of this block spacetime can be grouped by way of their apparent dynamical connection. I.e. the appearance of evolution following laws connects some regions with others sequentially.
Ah I think I wasn’t clear. I don’t really care if time moves sequentially or jumps around in random order. My concern is with the existence of time itself.
What gives space meaning is coordinates, which allow multiple things to exist separately from each other. Likewise you need another coordinate to differentiate “snapshots” of the universe. So in that sense time is necessary to differentiate two states. But I understand we’re talking about a more fundamental notion of time, so I get what you’re saying.
Perhaps a better way to put it is that time is necessary for events to happen. Let’s say you could view the universe from the outside; OK, great, but what can you do with that? You still need time to do things even if you’re outside the universe. Otherwise it would literally be frozen and meaningless.
That’s my issue with these timeless theories: people imagine viewing the universe as a static 4D object, but they still talk about it as if things are happening outside the universe, and you need time for events to happen.
If time doesn’t exist, then a “god’s eye view” is meaningless, because nothing could happen from that perspective either. It’s also a strong statement about the origins of reality, because if time doesn’t exist then reality could not have been created through any process, God or otherwise.
I get where you're coming from and I'm sympathetic to the argument. I don't give block universe stuff high credence myself. If consciousness is a process, then there would need to be discrete events that constitute the process. No events, no processes, no consciousness. I certainly find this highly intuitive. But this may be a biased analysis based on our time-oriented conceptual milieu. Can we make sense of processes without events?
We normally understand a process as a sequence of static events. Time here is really just defining a dependency relation between configurations and some indexical. But a dependency relation doesn't need to be constituted by something that has change as an essential property. Dependency is just a matter of an orientation through the state space. Orientation rather than change could be fundamental. With orientation comes trajectories through this structure, which could plausibly ground processes. The indexical doesn't matter from the perspective of the subjective evolution of time. What's the difference between a process evolving over essential time and a process "unwound" along a trajectory? Plausibly nothing relevant to consciousness.
The universe keeps going even when you're unconscious and having no experience at all. Others experience consciousness without your knowing. So why would you assume your past or future can't exist without your knowing?
I didn’t make any such claims regarding consciousness. I’m trying to understand how time as an emergent phenomenon instead of fundamental to the universe could work.
Proof?
Video footage of you being Bill Cosby’d?
Still contained within you. You’re the singularity.
Boltzmann brains are extremely ephemeral.
An analogy is that of stirring a vat of alphabet soup and noticing that there is a fair number of single-letter words popping into view ("A", "I"), a smaller number of two-letter words, an even smaller number of three-letter words ... a very very small chance of a twenty-letter word ... and a vanishingly small chance of the 189819-letter monster <https://en.wiktionary.org/wiki/Appendix:Protologisms/Long_wo...> popping into view. The stirring doesn't stop just because a multiletter word appears, so multiletter words are quickly broken up and even valid single-letter words get hidden behind the "B"s and "Q"s and other letters in the soup.
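To put rough numbers on the analogy, here is a toy Monte Carlo (uniform letter frequencies, which real soup and real English don't have):

    import random, string

    # Stir a "soup" of uniform random letters and count contiguous hits for
    # words of increasing length.  The expected hit rate for a k-letter word
    # is (1/26)^k per position: exponential suppression, in the spirit of
    # the exp(-Delta S) suppression of large fluctuations.
    random.seed(0)
    soup = "".join(random.choices(string.ascii_uppercase, k=10_000_000))
    for word in ["A", "AT", "CAT", "SOUP", "BRAIN"]:
        print(f"{len(word)}-letter {word!r}: {soup.count(word):>7} hits")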
Boltzmann brains will fluctuate out of existence on the order of a small multiple of the light-crossing time of the brain-matter that fluctuated into existence. As the brains are human, they won't even have a chance to react. Although their false memories are encoded just as true memories are encoded in our own brains, they'll have no time to have a reminiscence or notice their lack of sensory organs. (Which is probably good, since they would quickly suffer and die from lack of pressure and oxygen.)
A Boltzmann-brain with a full encoding of a life's worth of false memories (from never-existing sensory input) is a much larger number of letters. Also, in a cold universe, the stirring is slower, and the letters sparser. Boltzmann brains are tremendously unlikely except in a verrrrrrrrry big volume of spacetime. But with a sufficiently big volume of spacetime, or one with an energetic false vacuum, one should expect a lot of Boltzmann brains. This view puts some limits on our own cosmos's vacuum, since we don't see lots of Boltzmann brains (or even much less complicated but RADAR-detectable and/or eclipsing structures) fluctuating into brief existence in our solar system.
Boltzmann brains are low-entropy. A persisting Boltzmann brain (fluctuating into existence and staying in existence for a long time) is much lower entropy still. This poses problems for hypotheses that the entire early universe fluctuated into existence and then evolved into the structures we see now. Here there are human brains attached to sensory apparatus, whose memories correlate fairly well with their history of input (and recordings by ancestors, and fossil records, and so on): a system with much much lower entropy than Boltzmann brains, so what suppresses relatively high-entropy structures (including Boltzmann brains) from dominating (by count) our neighbourhood?
Also, if the universe supports large low-entropy fluctuations, galaxies that briefly (~ hundred thousand years) fluctuated in and out of existence should be much more common than galaxies with a history consistent with billions of years of galactic evolution, and you'd expect random variations in morphology, chemistry, and so forth; that's not what we see.
This is a bit annoying, as it would be handy to point to Boltzmannian fluctuation theory as the source of the tremendously low entropy in the very early universe, i.e., it could have arisen spontaneously in a less precisely ordered space. Oh well.
> why would ... a universe appear to evolve from a low entropy past following a small set of laws
Thermodynamics.
The issue is: where did the low entropy past come from? Once you have that, evolving into a higher entropy structure-filled present is not too hard -- that's essentially what we have with the standard cosmology from about the electroweak epoch onwards.
So in summary:
> sequences of Boltzmann brains who all subjectively experience time as sequential
whatever these might be, they aren't Boltzmann brains, since the latter don't subjectively experience anything: objectively, they fluctuate out of existence in something like a nanosecond.
Very briefly: the short existence is driven by interacting fields and the need to keep entropy (relatively) high. If your starting point just before the appearance of the brain is a region that is high-quality vacuum, you have to come up with protons, calcium nuclei, ... and all that requires very careful aim to get one split-second "movie frame" of brain. You need much better "aim", which really drives down the entropy (corresponding to a much larger fluctuation), to go from vacuum to a Boltzmann brain that doesn't disintegrate starting in the very next frame thanks to overshoots of momentum.
The higher the entropy of the Boltzmann brain, the clearer the stat mech argument. (If one gets stuck thinking about human brains, C. elegans apparently develop memories and store them in their nerve ring. Why isn't the outer space of our solar system full of those Boltzmann-C.-elegans brains fluctuating in and out of existence with each possessing false memories of sensory stimuli? Smaller fluctuations, so there should be many more of those than human Boltzmann brains).
I agree with all that. Bringing up Boltzmann brains was just an alternate way of explaining how inhabitants of a block universe could experience time as sequential without a real sequential ordering of universe states. Presumably if one can conceptualize a Boltzmann brain coming into existence to experience one instant of a virtual life with virtual memories, you can imagine a long sequence of them experiencing the entirety of this virtual life. But the order in which this sequence comes into existence doesn't alter the directionality of subjective time evolution for the Boltzmann brains.
This is well said. It's exactly how I understood your comment as well; you put it very succinctly and understandably, and it touches on something I've been pondering for a while now. Thanks.
> inhabitants of a block universe could experience time as sequential without a real sequential ordering of universe states
tl;dr: I don't think Boltzmann brains count as "inhabitants" because their worldlines are so short. Considering together a select set of available Boltzmann brains does not really admit something that looks like a long but complicated worldline. By virtue of being a fluctuation in a thermal bath in equilibrium a BB does not affect the wider universe; a Boltzmann flashlight can't blink out a message in Morse code.
The herd of elephants in the room is the exp(-\Delta S) suppression of fluctuations of size \Delta S out of equilibrium.
I think you are saying that we can imagine a set of some billion billion ephemeral Boltzmann brains each having memories associated with a unique fraction of a false life. I agree we can imagine that, but at the cost of having exact duplicates and many many more ephemeral Boltzmann brains with corrupted and even wholly unrelated false memories.
The true causality is the history of the thermal bath and not the memories of the brains.
In principle we can distinguish between a chosen set of same-false-life-at-different-stages Boltzmann brains and a real human with a very complicated FTL-and-time-travel worldline because at each point on the worldline the latter gets stress-energy ("signals", if you like) from the predecessor point, and also from (and to) each point's neighbours not on the same worldline. That is, our real traveller can detect the thermal bath temperature (which matters in an expanding cosmology) and leaks out metabolism photons to infinity. Ephemeral Boltzmann brains do neither.
The "virtual life" Boltzmann brains -- as you note -- do not have to be ordered in any way. I would go further: brains with immediately neighbouring fractions of the virtual life's false memories can be totally causally disconnected, not just causally disordered.
So I don't think the thought experiment says anything other than the Poincaré recurrence theorem admits states that are close (but not arbitrarily close or exact) to the initial state. That is, BB_final will recur, but so will BB_final-minus-one-nanoseconds-of-false-memory, BB_final-minus-two-nanoseconds-of-false-memory, ... but in some arbitrary point in the system's evolution. I don't think that's surprising.
There will also be brief ephemeral fluctuations into (and out of) mouse brains, cockroach brains, microchips, Jeep Wranglers, brains with memories of having lived lives as little green men from Mars, and so on and so forth. If you have BBs full of false human memories, without some unknown suppression mechanism you will also have BBs full of false nonhuman memories, and nonhuman BBs, especially smaller and less complicated ones (\Delta S being much smaller in those cases).
I also don't think it says anything about our universe, since we simply do not know enough about dark energy to make confident guesses about the very far future (i.e., does it really asymptote to de Sitter with the thermal bath from a dS horizon?). We also don't know if protons are stable that far into the future. However, with what we do know (which is not enough), the far future looks pretty empty. If there aren't RQFT interactions at GUT-scale energies that allow for violations of baryon and lepton numbers, maximum entropy in the far future (>> 10^33 years) is still fairly low, but also the path to new nuclei from the (photon-dominated) thermal bath probably means no BBs at all (effectively all baryons are behind horizons by ~10^{10^10} years from now, and significant numbers of BBs in our future light cone are expected not much earlier than ~10^{10^50}, although of course if BBs can happen at all, the very occasional individual BB may have its ephemeral moment at any time, including today).
So if there are BBs with false memories of human lives, they are so far in the future that an entire 2020s-style solar system fluctuating into existence and persisting (or a time-traveller who reaches that future) could not recover anything like our present cosmology. At best, via a strong Lorentz boost, they might detect relic and horizon photons, and maybe splatter their windscreens with the occasional BB.
The fluctuated-and-persisting-solar-system scientists would quickly realize their false memories of skies full of stars and galaxies were not true memories. A real time-traveller would probably just try to measure what's left of the CMB or find another marker of the cosmological scale factor.
(The time travel can have been to-the-future-only by something as boring as spending a lot of time moving close to the speed of light.)
Not discussed yet: what to do about fluctuations in fields which obey conservation laws that result in antiparticles. My fast answer would be: that's one reason BBs are so short lived; they are ripped apart by matter/antimatter annihilations.
> a block universe
I first encountered this theory and the related "eternalism" philosophy via Alan Moore [1] (Watchmen, V for Vendetta, The Ballad of Halo Jones, Swamp Thing, Batman: The Killing Joke, From Hell, etc.). Watchmen and its non-Moore-affiliated sequel have a lot of riffs on time and determinism.
Q: Jerusalem deals with the idea of eternalism: everything that has happened is happening right now and forever. Could you explain your views on this?
A: My conception of an eternity that was immediate and present in every instant – a view which I have since learned is known as ‘Eternalism’ – was once more derived from many sources, but a working definition of the idea should most probably begin with Albert Einstein. Einstein stated that we exist in a universe that has at least four spatial dimensions, three of which are the height, depth and breadth of things as we ordinarily perceive them, and the fourth of which, while also a spatial dimension, is perceived by a human observer as the passage of time. The fact that this fourth dimension cannot be meaningfully disentangled from the other three is what leads Einstein to refer to our continuum as ‘spacetime’.
This leads logically to the notion of what is called a ‘block universe’, an immense hyper-dimensional solid in which every moment that has ever existed or will ever exist, from the beginning to the end of our universe, is coterminous; a vast snow-globe of being in which nothing moves and nothing changes, forever. Sentient life such as ourselves, embedded in the amber of spacetime, would have to be construed by such a worldview as massively convoluted filaments of perhaps seventy or eighty years in length, winding through this glassy and motionless enormity with a few molecules of slippery and wet genetic material at one end and a handful or so of cremated ashes at the other. It is only the bright bead of our consciousness moving inexorably along the thread of our existence, helplessly from past to future, that provides the mirage of movement and change and transience.
A good analogy would be the strip of film comprising an old fashioned movie-reel: the strip of film itself is an unchanging and motionless medium, with its opening scenes and its finale present in the same physical object. Only when the beam of a projector – or in this analogy the light of human consciousness – is passed across the strip of film do we see Charlie Chaplin do his funny walk, and save the girl, and foil the villain. Only then do we perceive events, and continuity, and narrative, and character, and meaning, and morality. And when the film is concluded, of course, it can be watched again.
Similarly, I suspect that when our individual four-dimensional threads of existence eventually reach their far end with our physical demise, there is nowhere for our travelling bead of consciousness to go save back to the beginning, with the same thoughts, words and deeds recurring and reiterated endlessly, always seeming like the first time this has happened except, possibly, for those brief, haunting spells of déjà vu.
Of course, another good analogy, perhaps more pertinent to Jerusalem itself, would be that of a novel. While it’s being read there is the sense of passing time and characters at many stages of their lives, yet when the book is closed it is a solid block in which events that may be centuries apart in terms of narrative are pressed together with just millimetres separating them, distances no greater than the thickness of a page. As to why I decided to unpack this scientific vision of eternity in a deprived slum neighbourhood, it occurred to me that through this reading of human existence, every place, no matter how mean, is transformed to the eternal, heavenly city. Hence the title.
1: https://alanmooreworld.blogspot.com/2019/11/moore-on-jerusal...
I'm not sure why experience requires the arrow of time or location. Your experience does, and it might seem that is a universal rule, but only because you can't possibly intuit a world in which time doesn't flow.
I think Dr. Manhattan is a good fictional reference. He existed in a timeless form. Everything was happening simultaneously for him. For everyone else they experienced him in a time like way, but only as a matter of perspective.
How can you imagine any world without experience (observation)? Any observer is dependent on position, and thus on time, simply because it is the partial history that allows the state itself to exist.
And your second point is essentially the metaphysical argument for God and early spirituality. Hebrew mysticism, for example, describes God pouring itself into lower forms of being to experience itself.
The universe is absolutely full of things you can't imagine but are nonetheless true. Our intuition is only good for a certain regime of space, speed, temperature, pressure, etc. That is why we have tools like mathematics, to expand our minds past our intuition through abstraction.
> I'm not sure why experience requires the arrow of time or location.
Because experience _is_ the arrow of time
> To have experience requires a position relative to the all; the traversal of the all is time
You’re describing timelike experience. Photons “experience” events as in they are part of causality. But they do so in a non-timelike manner.
Said a human.
If it’s not time-like, then it’s everything, thus it can’t have experiences thus god. God splits (monad becomes many) to experience being (shards in multiplicity of the one through division: oooh spooky golden mystery).
Personal attacks will get you banned here, so please don't do this.
I think he’s actually got a point. I have the same “feelings” but can’t articulate it in a scientific way compatible with physicists in general.
Take your meds
Maybe we do experience everything at once, but then have to process it in a time-like manner to make any sense of it.
Like everything else that we "experience", maybe the perception that reaches our consciousness has nothing to do with what's actually out there.
There are no purple photons.
Yeah, god is everything, which can’t have experience, as it’s experiencing everything at once - thus the monad splits itself, allowing perception as a fraction of the whole which is experienced as time and direction.
Do you think god is in control?
God is everything.
Perhaps the limit of that curiosity is akin to control but anything that can be imagined will be imagined and explored and rendered in some sense to experience. Imo.
I think that time isn't what we think it is - but I don't think it's all already set; rather I think that the past can be constrained by the future just as the future is constrained by the past.
I don't think that there's spooky action at a distance (it's fundamentally equivalent to retrocausality, and the consequences of the distant foreign event cannot outpace its light cone anyway).
I think it's a superposition of states of a closed time-like curve thing being fleshed out as its contradictions are resolved and interactions are permitted between its colocated non-contradictory aspects.
But I'm not a physicist, so that's probably all just bullshit anyway.
I don't think they are saying anything similar at all. Julian Barbour finds a way to get rid of Time completely (by saying every possible state exists and there must be some law that favours states that _seem_ to be related to _apparently_ previous states). Wolfram is more focused on making sense of 'time is change' through the lens of computation.
Idk, just looking at it now, Barbour seems much, much more rigorous. The linked article is more “using scientific terms to muse about philosophy” than physics, IMHO. For example:
> In essence, therefore, we experience time because of the interplay between our computational boundedness as observers, and the computational irreducibility of underlying processes in the universe.
His big insight is literally the starting point of Hegel’s The Science of Logic, namely that we are finite. That in no way justifies all the other stuff (especially multiverse theory), and it’s not enough to build a meaningfully useful conception of time, at all. All it gets you is that “if you were infinite you wouldn’t experience time”, which is a blockbuster-sci-fi-movie level insight, IMO. I can’t help but think of Kant as I write this; he wrote convincingly of the difference between mathematical intuition and philosophical conception, a binary Wolfram would presumably (and mistakenly) identify with solid logic vs. meaningless buffoonery. But refusing to acknowledge our limits makes you more vulnerable to mistakes stemming from them, not less.
> …the metaphysic of nature is completely different from mathematics, nor is it so rich in results, although it is of great importance as a critical test of the application of pure understanding—cognition to nature. For want of its guidance, even mathematicians, adopting certain common notions—which are, in fact, metaphysical—have unconsciously crowded their theories of nature with hypotheses, the fallacy of which becomes evident upon the application of the principles of this metaphysic, without detriment, however, to the employment of mathematics in this sphere of cognition.
Worth remembering at this point that Aristotle coined “physics” for the mathematical study of physis (nature), which was then followed up by a qualitatively different set of arguments interpreting and building upon that basis in a work simply titled Metaphysics (after physics). We’ve learned infinitely more mathematical facts, but IMO “what is time, really?” will forever remain beyond their reach, a fact determined not by the universe but by the question itself.

TL;DR: if you’re gonna try to talk cognition you should at least admit that you’re writing philosophy, and ideally cite some philosophers. We’ve been working on this for a hot minute! Barbour seems to be doing something much less ambitious: inventing the most useful/fundamental mathematical framework he can.
I swear as I get older philosophy feels more and more like religion for intellectuals.
If you want to talk about cognition or time you should study science, not philosophy. You’re not going to learn about the universe in any significant way by studying Hegel or Aristotle or Kant harder.
Science is philosophy, albeit just a branch of it: specifically, the part concerned with learning how the universe works physically.
Other branches of philosophy study other things, and are good at understanding those things they are about. Moreover, philosophy has progressed and branched out quite a bit since those philosophers you mentioned. I spend a lot of time reading philosophy for fun, and have found many of the ideas practically useful in regular life- but am not a fan of any of the philosophers you mentioned, and find their work mostly useless or outdated.
Funnily enough, the scholastics thought of philosophy as the handmaid of theology. Ultimately, it's in the name (love-of-wisdom). You can learn wisdom from science, but that body of wisdom eventually becomes a philosophy. And the older philosophers definitely saw something, even if they are not completely correct.
Why are you so confident that Philosophy isn't the superclass to "science"? How could you hope to start on any science without philosophy, much less arrive at a definition for the term? I could maybe see mathematics without philosophy, as I mentioned above, but physical science/physics/"science" is inherently subjective. That doesn't mean truth doesn't exist, of course -- but I'd have to get into philosophy to explain why I think all of that ;). The best defense for philosophy by far is that you can't criticize it without engaging in it, and "it all seems obvious to me, just use common sense"-style citations are much less convincing than ones to long famous books.
More provocatively: have you engaged with it? I know that's a big ask, but it's also a bit unfair IMO to write off a field without taking the time to understand it. For example, Aristotle founded multiple scientific fields, including the big two -- Physics and Biology -- and established a theory of mind that still has immense sway in the West to this day. Kant was a renowned scientist before he started into philosophy (even having a good claim to "first to show the existence of galaxies"), and the quote above is from the Critique of Pure Reason (https://www.gutenberg.org/files/4280/4280-h/4280-h.htm#chap1...) where he establishes cognitive science:
> From all that has been said, there results the idea of a particular science, which may be called the Critique of Pure Reason.... Such a science must not be called a doctrine, but only a critique of pure reason; and its use, in regard to speculation, would be only negative, not to enlarge the bounds of, but to purify, our reason, and to shield it against error—which alone is no little gain.
Hegel built on this directly with his famous book The Phenomenology of Mind, of which I highly recommend the short preface titled On Scientific Cognition. I don't think there could be a clearer piece of evidence that these were the leading thinkers of their time on matters of systematic thought, aka science.

Without the academy, we'd have no Francis Bacon, no Newton or Leibniz, no Einstein or Bohr, and definitely no Popper, Kuhn, Carnap, Wittgenstein, Chomsky, or any of the other amazing modern thinkers on how to improve our scientific endeavors. This one's less verifiable, but I imagine it hits home the most: I don't even think we'd have Turing without Boole, Russell, and Whitehead to draw from.
...sorry, clearly I'm a bit sensitive ;) It's frustrating being valued for your puzzle solving abilities (aka SWE) much more highly than your engagement with science, as I'm sure you can relate to or imagine!
I generally like the idea of most everything being emergent, but where does it stop? Is it emergence all the way down?
I suspect there are many different mental conceptions that amount to the same facts of nature.
As usual with Wolfram, too hand-wavy. It could be true but this is not serious physics.
It is simpler than that. Wolfram has a long history of plagiarizing ideas and passing them off as his own.
That’s the history of 99.9999% of ideas based on the average token generation rate of humanity.
The mother of someone who was a friend in the 90s used to always pepper her speech with attributions for almost everything she was saying in any "serious" conversation ("I think it was Popper who said ...", "Schenk developed this idea that ...").
It was *so* annoying to listen to.
We should hold dinner-table conversations and scientific letters to different standards.
Real scientists tend to try to be careful about attribution and especially don't just blatantly regurgitate the last thing they read and pass it off as their own. That is highly frowned upon in polite academic society.
So you are saying there is a version of me that is king of the universe in some timeline?
In a skin enclosed universe you are already King Meatbag, ruler over your mind and body.
My body disagrees.
If the universe is infinite then there is a possibility that you are a king of an observable universe somewhere.
Infinite does not mean that all the permutations are possible.
You being you and you becoming a king might simply not be a combination which is compatible.
Great way to let someone down who asks you out.
There are no branches in the Ruliad in which you and I end up together. I have foreseen it.
The best is to "zone out" and do micro eye movements for 10 seconds and then say that.
You vastly misunderestimate infinity if you don’t recognize that anything feasible will happen.
Depends on how you define feasible.
Take Wolfram's 1-dimensional cellular automata... some of them have infinite complexity, and of course you can "run" them for infinite time, and the "current" state is constantly expanding (like the Universe). So let's define "something feasible" as some specific finite bit pattern on the 1-dimensional line of an arbitrary current state. Is that "feasible" bit pattern guaranteed to appear anywhere in the automaton's present or future? I believe, and if I understand correctly, so does Wolfram, that for any reasonably complex "feasible pattern" the answer is no; even though the automaton produces infinitely many states, it is not guaranteed to explore all conceivable states.
In other words, in a given Universe (which has a specific set of rules that govern its evolution in time) even though there are infinitely many possible states, not all conceivable states are a possible result of that evolution.
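Here's a concrete toy version of that claim (my own sketch, not Wolfram's: rule 30 on a small ring of cells, both arbitrary choices). Deterministic evolution from a given start traces out a single orbit through the space of configurations and then cycles, so most conceivable states are simply never produced:

    # A finite, circular elementary CA: WIDTH cells, so 2**WIDTH conceivable
    # states.  Starting from a single live cell, evolution visits one orbit
    # and then repeats; it never comes close to every conceivable state.
    RULE, WIDTH = 30, 16

    state = 1 << (WIDTH // 2)          # a single live cell
    seen = set()
    while state not in seen:
        seen.add(state)
        bits = [(state >> i) & 1 for i in range(WIDTH)]
        state = sum(
            ((RULE >> (4 * bits[(i + 1) % WIDTH] + 2 * bits[i] + bits[i - 1])) & 1) << i
            for i in range(WIDTH)
        )

    print(f"visited {len(seen)} of {2 ** WIDTH} conceivable states before repeating")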
If you exist, you are one of the feasible states.
There are infinite numbers between 3 and 4, yet none of them is number 7.
7 isn’t feasible…
I wrote up more or less the same idea ten years ago, but in what I think is a more accessible presentation:
https://blog.rongarret.info/2014/10/parallel-universes-and-a...
I have read and appreciated your writings going back to the comp.lang.lisp days, but a blog post that starts with “if you haven’t read the previous post, please do before reading the rest of this one” is not what I would consider accessible. …and that previous post then asks the reader to first read a paper or watch a video before proceeding. While a decade later than what you wrote, Wolfram’s article is much more self contained and complete.
Thank you so much for this.
Whenever people criticize Wolfram, the comeback is often that he’s just trying to discuss big ideas that mainstream science won’t talk about. Of course that’s not the reason for the criticism at all, and I think your work here shows that it’s totally fine to speculate and get a little philosophical. The results can be interesting and thought-provoking.
There’s a difference between big ideas and grandiosity. It also shows big ideas can stay scientifically grounded and don’t require making up corny terminology (Ruliad? lol).
More than that, "ruliad" is completely vacuous, too. "All possible rules applied to all possible states infinitely many times": every possible theory, including the right one, is in it. OK... thanks for defining this useless object.
It is possible to make quantitative statements that I think capture many of the intuitions you assert. Here was one attempt:
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.10...
That particular proposal was mathematically wrong for reasons I still find physically perplexing (it turns out that for some events quantum theory allows for stronger memory records, defined via classical mutual information, of entropy-decreasing events!). A simple example is in here: https://arxiv.org/abs/0909.1726 (I am second author).
Very interesting! Thanks for the pointers! I'll need to take some time to digest these.
It's sort of funny that while the title alludes to the arrow of time, opening with a quote asserting "all measurements are in principle reversible", it pretty quickly gets to a different arrow of time: that of comprehension:
> "If you haven't read the previous post ... this won't make any sense"
Could you have demonstrated, perhaps accidentally, an alternative organising principle allowing temporal ordering to emerge in a computationally oriented ontology? Can the future only "make sense" if it temporally follows the past?
Only half kidding!
That's actually a great question, and one I've been wrestling with for years. Why do we perceive time as a sort of continuous monotonic flow? And I think it can be explained in terms of perception and comprehension, which I have a gut feel can be formalized as a kind of preferred basis selection. But rendering that intuition into words (and math) has turned out to be quite challenging, which I why I haven't written about it yet. Maybe in the future :-)
Do physicists think time actually exists? I wonder if someone has reasoned that time is an accounting method that humans have developed to make sense of their experienced change of systems.
Wolfram uses the words progression and computation a lot in his essay, but there’s an implicit bias there of assuming a process is deterministic, or has some state it’s driving towards. But none of these “progressions” mean anything, I think. It seems they are simply reactions subject to thermodynamics.
If no one observed these system changes, then the trends, patterns, and periodicity of these systems would just be a consequence of physics. It seems what we call “time” is more the accumulation of an effect rather than a separate aspect of physics.
For example, I wonder what happens in physics simulations if time is replaced by a measure of effect amplitude. I don’t know, tbh, I am not a physicist so maybe this is all naïve and nonsense.
Time "exists" in physics in the same way everything else in physics does - namely, the value we measure with clocks in the real world satisfies all of the same properties (at least in certain regimes of the universe) as the thing we call "time" in various physics theories like relativity/classical mechanics. And those theories make (reasonably) correct predictions about the values we measure in the real world.
Is it possible that these properties are the result of some other interactions that have very different laws at a lower level? Absolutely! But the discovery of particles didn't cause the sun to disappear, if that makes sense.
> Do physicists think time actually exists?
Yes, spacetime is important for General Relativity, cosmology and thermodynamics. Whether it's fundamental or emerges from something more fundamental is an open question though.
I don't know the answer to your question but tangentially, many human concepts related to time definitely do not exist in a purely physical sense. Like being "late" or "early", things "taking too long" or "being slow". Being "out of time" or "just in time". These are all human concepts. Physically speaking (classically anyway), things all happen right when they are supposed to.
I find a lot of interesting links between spirituality and physics like this. One idea or message in spirituality is that everything happens exactly as "the universe" intends it to. It's meant to be a comforting thought as events (good and bad) occur in one's life and to encourage one to detach from outcomes. Yet, it's more or less parallel to classical determinism as you mentioned.
> Physically speaking (classically anyway), things all happen right when they are supposed to.
Time is just a measure of change. No change. No time.
We are interested in a peculiar rate of time based on the heartbeat of our experience.
It could be that what changes is our perception of reality, not reality itself.
Thought experiment on the nature of reality:
- In a much larger universe, write down in a log book every event happening to every particle at every instant, from the Big Bang to the restaurant.
- Put it on the fireplace mantle and leave it there.
This is basically a log of a simulation. It exists in much the same way as an ongoing simulation would, except that its time dimension isn't shared with the simulating universe. But every observer within has had the same observations as if it did.
This assumes that a map, if sufficiently detailed, is identical to the territory.
Maybe it is, maybe it isn’t - but it is a highly debatable metaphysical assumption. I’m not sure how seriously we should take some people’s claims that they “know” that such an assumption is actually true
It's an argument about simulations, not about reality. If reality is a simulation, then arguments about simulations apply to it, but that's the big if.
Not necessarily. Suppose that consciousness/qualia/etc is “something extra” which has to be added to non-mental reality, as some dualists believe. Then, it would be possible that we live in a simulation which contains consciousness because that “something extra” has somehow been added to it. And yet, maybe the “much larger” universe which contains our simulation also contains such a “log book” of a very similar universe to our own, also containing intelligent life - and yet, if the “something extra” has not been added to that “log book”, it would lack consciousness and qualia, unlike our own universe.
I’m not arguing that a dualism (of this sort) is actually true, merely that we don’t (and can’t) know for a fact that it is false. But if we can’t know for a fact that it is false, then even if we (somehow) knew our reality was simulated, that wouldn’t give us grounds to make confident inferences about the nature of other simulations, or the nature of simulations in themselves
I agree with your post. However, I was using the most mechanical meaning of simulation: "the production of a computer model of something, especially for the purpose of study," which implies determinism and excludes the "something extra."
It doesn’t actually exclude the “something extra”, it is neutral as to whether or not there is any “something extra”
Panpsychists claim everything is conscious, even rocks, even atoms. Again, I don’t claim this is true (I’d be rather shocked if I somehow found out it was), but we can’t know for a fact that it is false. Yet if panpsychism (or at least certain versions thereof) is true, every simulation (even a simulation of the weather, or of crop growth) is conscious, simply because absolutely everything is. But I don’t think most standard definitions of “simulation” are excluding that possibility - on the contrary, they are agnostic with respect to it, treating its truth or falsehood as outside of their scope
It also doesn’t necessarily imply determinism, because some computer simulations use RNGs. Most commonly people use pseudorandom generators for this, but there is nothing in principle stopping someone from replacing the pseudorandom RNG with a hardware RNG based on some quantum mechanical process, such that it is indeterministic for all practical purposes, and the question of whether it is ultimately deterministic or indeterministic depends on controversial questions about QM to which nobody knows the answers.
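Concretely, in Python that swap is one line, since the standard library's random.SystemRandom draws from the OS entropy source instead of the seeded Mersenne Twister (a small sketch; simulation_step is a hypothetical stand-in for whatever update rule consumes the randomness):

    import random

    def simulation_step(rng: random.Random) -> float:
        # hypothetical physics update that consumes randomness
        return rng.gauss(0.0, 1.0)

    prng = random.Random(42)       # pseudorandom: same seed, same universe every run
    hwrng = random.SystemRandom()  # OS entropy (os.urandom underneath); unseedable

    print("reproducible:   ", [round(simulation_step(prng), 3) for _ in range(3)])
    print("indeterministic:", [round(simulation_step(hwrng), 3) for _ in range(3)])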
> It doesn’t actually exclude the “something extra”, it is neutral as to whether or not there is any “something extra”
Roger, that's even better. I tried to clarify the log book idea in another reply.[1] The question is whether you can have reality (from the observer's perspective) just based on whether coherent information exists in any setting.
Basically the question is whether we can go from "I think, therefore I am" to "something is constructing information." The latter is obviously a simpler, lower-level proof than other concepts of existence.
That brings us back to the "something extra." Is it required for our observations to be possible, i.e. can we rule out the log book conjecture? I don't think we can, but I might be wrong.
> And yet, maybe the “much larger” universe which contains our simulation also contains such a “log book” of a very similar universe to our own, also containing intelligent life - and yet, if the “something extra” has not been added to that “log book”, it would lack consciousness and qualia, unlike our own universe.
In that case, the non-conscious people in the log book would spend a lot of time pontificating on their experiences of consciousness and how mysterious it is and whether it's possible for there to be other universes that contain entities like themselves except not conscious. They'd be having these discussions for reasons that have nothing to do with actually being conscious, but coincidentally their statements would perfectly correspond with our actual perceptions of consciousness. Maybe not logically impossible, but it seems extremely improbable.
(This is pretty much the argument at https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zo... which I find persuasive).
The word "simulation" is it self a simulation. So is the word "is".
https://en.m.wikipedia.org/wiki/Semiotics
Reality is a multi-disciplinary domain, but it gives off the appearance of being physics only, because of its metaphysical nature.
Except for the randomness introduced in Quantum Mechanics.
If they ever solve the randomness, then if the map is down to every particle, then yes, the map and reality could be the same. But I think at that point you need a computer the size of reality to keep track of every particle.
Or, maybe the entire universe is one giant wave equation. But again, I think you need a computer the size of the universe to solve it.
We don’t know for a fact that QM contains irreducible indeterminism. If many worlds is true, then QM is ultimately deterministic. Same if hidden variables is true. A large class of local hidden variable theories have been ruled out by Bell’s theorem, but non-local hidden variable theories survive it (such as the Bohm interpretation and the transactional interpretation), as do local hidden variable theories which deny the Bell theorem’s assumptions about the nature of measurement, such as superdeterminist local hidden variable theories.
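For concreteness, the CHSH form of the inequality those experiments test is easy to evaluate (a small Python sketch; E(a, b) = -cos(a - b) is the textbook singlet-state correlation, and the angles are the standard optimal choices):

    import math

    # Any local hidden-variable model obeys |S| <= 2; QM with the singlet
    # state and these angles reaches 2*sqrt(2), Tsirelson's bound.
    def E(a, b):
        return -math.cos(a - b)

    a1, a2 = 0.0, math.pi / 2
    b1, b2 = math.pi / 4, 3 * math.pi / 4
    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(f"quantum CHSH value |S| = {abs(S):.3f}   (local realist bound: 2)")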
Wasn't the Nobel Prize last year for eliminating local hidden variables? That spooky action at a distance does occur?
And for many worlds: doesn't it punt randomness into other universes, without helping us solve for results in our own individual universe? Since we can't measure what happens in the other universe, we don't really know. If there were two results, one in one universe and one in our universe, sure, we deterministically know both results. But we don't know which universe we are in, so instead of a random result, now we have 2 answers and 2 universes, but now randomly don't know where we are?
> now we have 2 answers and 2 universes, but now randomly don't know where we are?
We are in both. Both universes are equally real. Each 'copy' of you knows it's in the universe where the result matches the observation.
I'm pretty sure this is not true. Nobody has proven this.
If in my universe I could always predict the correct results, then we would just have determinism, and I could predict exactly when an atom would decay. There would be no need for statistics.
Some high level background that might help.
https://www.youtube.com/watch?v=z-syaCoqkZA
> Wasn't the Nobel Prize last year for eliminating local hidden variables? That spooky action at a distance does occur?
The 2022 Nobel Prize in Physics was for experimental verification of Bell's theorem. The experiment did not rule out superdeterministic local hidden variables; superdeterministic local hidden variables do not violate Bell's theorem, since Bell's theorem assumes "free will" (that there is no correlation between arbitrary choices made by an experimenter and the state of the system being measured), but superdeterminism is the denial of that assumption.
An MWI universe would be hard to simulate though. There's an unknown vast number of branches.
Maybe with a quantum computer in a larger multi worlds universe?
Are you saying that some things are just not simulable, given a sufficiently large and powerful computer, or that the universe is or might be infinite?
If the universe is real, not a simulation.
If you know the position and speed, everything, about every particle, then you should be able to extrapolate the future by calculating it. The problem is that you need a computer the size of the universe to do that calculation.
So even though the map is the territory, at equal scale, and you have the map, it is a little worthless, because the map ends up being reality.
Edit: this is a little different from the idea that, if this is a simulation, you can do clipping and only render what we see. I'm saying the entire universe is 'real'.
If the universe is not infinite, and if individual particles and waves are calculable, it follows that one can postulate a larger universe capable of simulating it, or a large enough log book in this example.
What I find interesting is looking at whether some observable things look like they might be performance optimizations, or even "magic seeds" (as in RNG seeds.)
No proof of a simulation obviously, but maybe hints.
> If you know the position and speed, everything, about every particle, then you should be able to extrapolate the future by calculating it.
But isn't that the exact thing that quantum mechanics refutes? You cannot know the future just from the past; you can only know the probabilities of different futures.
Yes. I referred to the randomness that would prevent this, "once that is solved".
Guess I'm in the camp that eventually we'll find some model, or discover something new, that explains what is behind the randomness, so it is no longer just random. But, yes, that is a big IF.
Until then, with current theories, we couldn't do these calculations. They'd just be approximations accounting for some randomness.
Many-Worlds doesn’t contain or require any randomness.
I guess for whatever reason you don’t consider that to be the correct discovery?
Because it doesn't remove the randomness from our universe. It punts it to other universes. That is great, but doesn't allow us to predict things in our individual universe.
Or another way of saying it: we have two answers, and they are determined. That is great, we know the two answers, one in each universe. Now the problem is we don't know which universe we are in. Now which universe we are in is random.
We didn't move the ball towards doing something useful in our own universe.
The 'universes' are loose abstractions, not a defined part of the theory; there's no actual hard distinction between timelines, in much the same way as coastlines don't have a defined length. They all blend into each other if you look closely enough.
That said, isn't the obvious answer 'all of them'?
OK, but if you own the machine, you can just pick the outcome you want, or draw it from the distributions at random. We (observers inside the machine) cannot know the future of course.
> The problem is you need a computer the size of the universe to do that calculation.
I’m not sure where you get that idea from. The amount of calculation we can do per, say, 1,000,000 molecules dedicated to the calculation has absolutely skyrocketed, and will continue to skyrocket.
"The amount of calculations we can do, per say, 1 000 000 molecules dedicated to the calculation "
Let's say it takes 100 molecules in a circuit to calculate one particle's state. Then you would already need a universe 100x the size to calculate our 1x-size universe.
I'm assuming all particles, not that this is somehow clipping and only rendering what we see. I'm not talking about the brain-in-a-box simulation; I'm talking about the idea that the entire universe is out there. What would it take to calculate every position of every particle?
> Let's say it takes 100 molecules in a circuit to calculate one particle's state. Then you would already need a universe 100x the size to calculate our 1x-size universe.
That’s not how it works though. You’d have a lot of fixed costs to build a computer that simulates exactly the behavior of one particle. But then simulating a second particle will have a much, much, much smaller marginal cost.
Since you brought up clipping, games are actually a perfect example. You can see games as very crude simulations of our own reality, or slices of it. Take for example Red Dead Redemption 2. Run it on a PS5. Now compare the size of your PS5 to the mass of what was the old Wild West territory :)
Plus there’s the whole quantum computing thing, where in a way you’re reaching into “alternate” realities for extra compute.
Yes. Just like a Minecraft World is like the size of 64,000 Earths, but it runs on my laptop.
That is what I'm saying is not happening. I'm saying that in a particle collider we measure particles, and those exist all the time, not just when we are looking at them. Like, I have DNA and bones; they exist all the time, not just as a simulation showing a 'skin' so it doesn't have to render everything.
Unless you are making a bigger point: that a computer that could be simulating every single particle must exist outside this universe, and maybe mass and energy in this outside universe are so radically different we can't even grasp the scales involved.
Just like someone inside a minecraft world with blocks, couldn't grasp the amount of energy in our world.
Well, I don’t know about outside the universe, but you’re still not understanding how scaling works. And our technological progress.
The simplest way I can put it is that at some point of compute there is a crossover where you need less mass to simulate something than the mass of the actual thing. This will hold true for particle simulations as well. So no, you would not need more particles than the universe has to simulate the universe perfectly.
Ok, I'll try again. I think the 'scaling' issue here is not grasping the size of the scale when we are talking about dealing with every particle in the universe. The largest supercomputers today aren't simulating every particle in even a few molecules.
So let's say you have Minecraft running.
You can completely build a CPU, memory, etc. inside Minecraft with redstone.
Let's say you do this: you build a PC inside Minecraft to the point that it is functional enough to run Minecraft. Minecraft running in Minecraft.
There is huge overhead.
You need an astronomically large real PC to run the outer Minecraft such that the Minecraft version running inside Minecraft is usable. That is the scale problem.
I'd have to dig up the citation, but I'm pretty sure the compute power needed to simulate the universe has been worked out.
> Ok, I'll try again. I think the 'scaling' issue here is not grasping the size of the scale when we are talking about dealing with every particle in the universe. The largest supercomputers today aren't simulating every particle in even a few molecules.
So.. we are basically just a few meters over the start line in terms of doing perfect particle simulations :)
Look at protein folding. For decades, we could do it incrementally faster as our globally available compute increased. Then AlphaFold came along and proved that it could be done much more efficiently. Now there are multiple models / companies jostling for supremacy in that space.
Our ways of simulating particles will get more sophisticated and efficient as time goes on. Our hardware will push more calculations per watt and per density (aka per unit of mass).
I guess ultimately you take the pessimistic view and I take the optimistic view, so we'll have to agree to disagree. Good talk though!
I am King Ozymandias: look upon my complete data dump/backup*, ye mighty, and despair!
*May be subject to entropy over time.
If I took the binary representation of that log and XOR'ed it with a random binary string, then would the result also have observers with the same observations?
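For concreteness: XOR against a random string is a one-time pad, and it is perfectly invertible. A minimal Python sketch, toy byte string assumed:

    import os

    log = b"the complete history of the universe"   # stand-in for the 4D log book
    key = os.urandom(len(log))                      # random pad of equal length

    cipher = bytes(a ^ b for a, b in zip(log, key))
    restored = bytes(a ^ b for a, b in zip(cipher, key))

    assert restored == log    # XOR with the same pad is its own inverse
    print(cipher.hex())       # without the key, statistically just noise

So whatever observers the log contains are exactly as recoverable as the key allows; without the key the result is indistinguishable from noise, which is the dust-theory question in miniature.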
Good question? :) I'd say no.
How about an exact copy of the log book, but with one bit flipped? Voila, mostly universal physics.
Ok but the act of writing it down would always take longer than the actual unfolding of the universe itself. Just like the halting problem, we can’t skip ahead at any point and we have no idea what will come next.
Sure, but the timebases are different. Maybe it took the butterflypeople a thousand butterflyweeks to write it out.
Let me restate the metaphysics a bit differently. Let's say there's no us, no butterflypeople, nothing at all. Entropy reigns supreme, no information is organized.
Now add the butterflypeople. They write the humanpeople's log book. Information exists in organized form. The humanpeople's bits have been divined out of the great entropic nothing. Maybe that's all it takes?
Reminds me of https://xkcd.com/505/
A fun variant of this is that the log can be taken at variable intervals and as long as it is sufficiently detailed, it can still capture all salient details.
Similarly, a simulation run at some "tick rate" can also be run at 2x the rate while taking half the step per tick. Within the universe nobody would notice, as long as the steps were fine enough to begin with.
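A minimal sketch of that tick-rate claim, assuming plain Euler integration of x' = -x as the toy universe; doubling the rate while halving the step visits (approximately) the same states at the same internal times:

    # Euler-integrate x' = -x over the same interval at two tick rates.
    def simulate(dt, n_steps, x0=1.0):
        xs = [x0]
        for _ in range(n_steps):
            xs.append(xs[-1] + dt * (-xs[-1]))   # one tick
        return xs

    coarse = simulate(dt=0.01, n_steps=1000)     # 1x tick rate
    fine = simulate(dt=0.005, n_steps=2000)      # 2x rate, half the step

    # Compare states at matching internal times (every 2nd fine tick).
    print(max(abs(a - b) for a, b in zip(coarse, fine[::2])))
    # -> small, and it shrinks as the base step gets finer

Inside the simulation there is no way to tell which schedule produced the states.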
I think it was in Diaspora (or Permutation City?) that Greg Egan proposed that any tick rate would be unnoticeable to the simulated beings, including "none".
In other words, the movie Top Gun will continue to exist as-is, no matter how many copies are made of it, including none. Encoded as a digital file it is just a number, a pure timeless concept, it doesn't have to be written down to exist. It always existed on the number line, even before Tom Cruise was born. In fact, every encoding of Top Gun exists on the number line, in every compression format, in every resolution, even a future 16K resolution that was never filmed and has no display devices made for it yet. Its encoding as a 400GB long number is there, already, and will always be there.
In other words, any simulation, any log of events, any experience already exists in mathematics, in every encoding... somewhere on the number line. This includes the entire physical universe. This isn't hypothetical, it's necessarily true! Anything that can be represented by a finite amount of information must be on the number line.
Even if you assume the Universe lasts forever, you can break its history up into a sequence of states, each of which is finite. Then the series will exist on the number line as a set of points heading off to infinity.
This kind of thought experiment seems like it breaks down due to the uncertainty principle. We can't exactly specify the full state of every particle in the universe. The universe might also be infinite and you can't enumerate an infinite set even without uncertainty, though you can write a generating function or recurrence relation for it, which seems to be Wolfram's point.
But why bother with this kind of detail? What's the difference between what you're imagining here and a normal reel of film? It can be played back, but even if it isn't, it records the state of events that happened, including observers that once existed and no longer do, experiencing events that once happened but no longer do. It's possible for a record to describe a canonical sequence even if the record itself doesn't change. Somebody outside of the record can view it out of order, speed it up, slow it down, pause it, reverse it. A film reel doesn't share the time dimension of its own universe in that way.
I'm struggling to come up with what this implies and why.
To your first point, if it's a simulated universe, the simulators can just choose to make it finite, and come up with their preferred particle behavior rules.
As observers, we perceive time as passing, but is there anything special in this perception? Looked at another way, everything could be frozen in a 4D log book and we couldn't tell the difference, or could we? In this interpretation, Napoleon is as alive (in 1820) as we are (in 2024.) A film reel is a similar concept, except it's just a 3D projection rather than a complete detailed 4D account.
I get the questions of eternalism, the reality of the past and future, why privilege the present (or at least my present), and all that. I just don't understand why the fact that you can record events in a medium that doesn't experience the time that it records has any implications for how we should think about this.
Whether or not you want to say "Napoleon exists" or "Napoleon existed" seems to be a matter of linguistic convention and the more common latter would reflect speakers privileging their own time. If you want to look at it another way, Napoleon exists, but entirely in my past light cone, and I exist entirely in his future light cone. I can't send him any kind of information, but he can send information to me. Is there anything special in this perception? To who? To observers at the absolute end of all time, my future is just as written in stone as Napoleon's. To observers I can receive information from, my future is unknown.
To any particular observer, there are regions of spacetime in which you have no past. There are regions in which you have no future. There is a region in which you have both a past and a future. Is there anything "special" in perceiving the sequence of events within the third region as passing rather than existing forever as a log? Not really. You're just describing a different variety of the Copernican principle or relativity as far as I can tell. But so what? None of us are the center of the universe. None of us exist in a special inertial frame describing absolute spacetime. These facts, however, have consequences in terms of how to measure and compute stuff. They change the kind of testable predictions you make given certain conditions. What computational or predictive consequences arise from observing that the entire world curve of the universe exists at once from a perspective outside of the universe? Going back to the "I can't send information to Napoleon" thing, if observers outside of our universe are keeping a log, none of us nor anything else in our universe can receive information from them, so what difference does it make?
It's an interesting shower thought but kind of also a big so what?
One way the simulation argument starts is: Is this the ultimate nature of reality? A universe out of thin void? One possibility is that all possible universes exist, we just have survivorship bias. But we're blessed with poetry and rainbows and Goldilocks constants, which seems like a very lucky draw.
So we proceed to wonder if it's a simulation. That would mean the universe is interesting because it was designed or selected as such. This answers some local questions, and punts the existentialism into the simulating world.
My argument is that a simulation need not look like the Matrix, where there's a machine running in real time so that the simulating and simulated universes share a timeline. The simulants might just be doing the relevant computations in whatever graph-traversing order, and printing them out or whatever.
The computed information would make up our universe, it would be the essence of it. The universe would have come into "being" just by being selected out of the big entropic soup of all possible 4D log books to write.
As for "so what," it's just amateur philosophy, no guarantee you'll find it interesting.
Now shred that log to particles and scatter them everywhere, and you have the "dust theory". Neither the time dimension nor the log is shared with the simulating universe, and yet they are still valid for the observers within the universe.
If the sequence of the log states is entirely deterministic based on the initial state, then you don't even need to actually write down the entire log for it to "exist". This is Greg Egan's Permutation City.
Can we reduce this to an estimate of survivorship bias? If there is only one universe, then our survival is clearly explained: we're in the only reality there is. If all possible universes exist, then we really lucked out in ending up in this one (well, depending on who wins the election I guess.)
In the middle are the permutations selected through the filter of other realities, when they chose which universes to simulate. We lucked out but not as much, because uninteresting universes never made it out of the entropic soup.
It would have to be a conditional estimate of course, because our sentience biases our contemplation.
There's a hidden condition here.
How do you know every event for every particle?
The answer to that will literally change what gets written in the log book.
The point is the log is a graph or a tree, not an array.
Quick appreciation for the Douglas Adams reference :)
Randall has you covered: https://xkcd.com/505/
Is any of what he's saying here something he hasn't essentially already said before?
The parts of this which were a little surprising to me (e.g. the bit about comparing time to heat, and the bit about running out of steps to do at an event horizon) iirc all linked to a thing posted a while ago?
I don’t share his enthusiasm for the phrase “computational irreducibility”. I would prefer to talk about e.g. no-speedup theorems.
There's "digital physics" which goes back to the late 60s https://en.wikipedia.org/wiki/Digital_physics.
The connection between heat/entropy and time is well explored. E.g. https://en.wikipedia.org/wiki/Arrow_of_time and https://en.wikipedia.org/wiki/Entropy_as_an_arrow_of_time
It has been said before, but by Stephen Wolfram.
It feels like this could be a perfectly decent article if he toned down his ego and referenced existing work (other than his own).
But I don't think that's possible for him.
I like thinking about hypergraphs that continually rewrite themselves. I've thought about it in terms of literary critique, or in "compiling" a novel. It reminds me of petri nets in a sense, where at any given moment, a character has a static model of the world, which can be diagrammed through a causal graph of conclusions and premises. Then, an event happens, which changes their understanding of the world; the hypergraph gets rewritten in response.
I've toyed with this with my own graph software when writing novels. It's of course impossible to fully document every character's model before and after every event that affects them, but even doing so at key moments can help. I've wished more than once that I could "compile" my novel so it could automatically point out plot holes or a character's faulty leap in logic (at least, one that would be out of character for them); see the toy sketch below.
I've also tried the more common advice of using a spreadsheet where you have a column for each character, and rows indicating the passage of time. There you're not drawing hypergraphs but in each cell you're just writing raw text describing the state of the character at that time. It's helpful, but it falls apart when you start dealing with flashbacks and the like.
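A toy version of that "compile the novel" check, with all names hypothetical, just to show the shape of the idea:

    # Hypothetical sketch: a character's world-model as conclusion -> premises.
    beliefs = {
        "butler was in the kitchen": set(),                   # observed, no premises
        "butler is innocent": {"butler was in the kitchen"},  # rests on the above
    }

    def plot_holes(beliefs, invalidated):
        """Conclusions still standing on premises an event just knocked out."""
        return [c for c, premises in beliefs.items() if premises & invalidated]

    # A chapter-7 flashback reveals the butler was never in the kitchen:
    print(plot_holes(beliefs, {"butler was in the kitchen"}))
    # -> ['butler is innocent']: a belief the text must now rewrite, or it's a hole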
Every time I read stuff like this I get super drawn to thinking about Sunyata*. In Mahayana Buddhism, my understanding is that Sunyata doesn't mean absolute nothingness or non-existence, but that all things are devoid of intrinsic, independent existence. Everything is empty of inherent nature because everything is interdependent... phenomena exist only in relation to causes and conditions. This relational existence assumes that things do not possess an unchanging essence... in the ultimate sense, there is no fixed reality. What might seem like "everything" is actually permeated by "nothingness" or "emptiness": phenomena arise dependent on conditions, without intrinsic, permanent nature.
My mind also went here when reading TFA.
The all-time-all-space-all-branches brane of the Ruliad we call the Universe is the continuous one-ness, and our selves are just the single-perspective projection models of that universe in our neurons that persist across edits to the neurons, until such a point as we update the model to see the larger picture, and we can call that Nirvana, if we wish.
Indeed. Not only that, but it can be a lived experience. One sees that the need for something called "time" is actually an invention of the mind, and totally unnecessary. I know this sounds bizarre and like mystical woo-woo, but when it's seen, it's the simplest and most obvious thing in the world.
Doesn't seem like woo-woo - if there isn't even mind, how can there be time? ;) it makes sense, at least to this Buddhist lol.
Sunyata comes from Sunya, which in Sanskrit means "zero", another idea invented by the Indians.
Seems like an appropriate post on a day when the physics Nobel was awarded not for physics discoveries but for computer science...
But from Wheeler's "it from bit" to Wolfram's computational universes, the question is: where's the beef?
Now, there might ultimately be something worthwhile in the obsession with digi-physics. Mental models that seemed disparate may merge and become fruitful. It doesn't even have to be a fully formed toolkit. Newton's invention of calculus was kinda sketchy, but he was explaining things with it, things that were not understood before.
Wolfram does offer an interesting alternative to viewing the universe as a manifold with a tensor (the GR view): he believes it's a graph with computational rules. Are they the same? Mathematically, manifolds have a clear notion of dimension, which affects things like the inverse-square law. Wolfram's view of the ruliad, an evolving graph with rules, does raise the question of dimension (one way to measure it is sketched below).
But at the end of the day he needs to make a concrete prediction that differs from the current view in order to have people devote a lot of time studying his world view. He's a brilliant guy and the Wolfram Language is fantastic, but he really needs to humble himself enough to value the work of convincing others.
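On the dimension question: on a graph, an effective dimension can be estimated from how the number of nodes within graph distance r grows, roughly |B_r| ~ r^d. A toy sketch on a 2D lattice (not Wolfram's code, just the standard ball-growth estimate):

    from collections import deque
    from math import log

    def ball_size(start, r, neighbors):
        """Count nodes within graph distance r of start (plain BFS)."""
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            node, d = queue.popleft()
            if d == r:
                continue
            for nb in neighbors(node):
                if nb not in seen:
                    seen.add(nb)
                    queue.append((nb, d + 1))
        return len(seen)

    # Infinite 2D lattice: |B_r| grows like r^2, so the estimate approaches 2.
    grid = lambda p: [(p[0] + 1, p[1]), (p[0] - 1, p[1]),
                      (p[0], p[1] + 1), (p[0], p[1] - 1)]
    r1, r2 = 20, 40
    b1, b2 = ball_size((0, 0), r1, grid), ball_size((0, 0), r2, grid)
    print((log(b2) - log(b1)) / log(r2 / r1))   # ~2 for a 2D lattice

On an evolving hypergraph the analogous measurement can come out non-integer or scale-dependent, which is where it could in principle differ from a manifold.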
Worth noting this is ultimately the problem with string theory: it does provide a suite of mathematical tools which can solve real physics problems and give valid answers, but they're known physics problems that can also be solved with other tools.
To be useful as a theoretical framework it always needed to be able to predict something which only string theory could - as a "more accurate view of reality".
Which is the same problem here: you've got to make a prediction, an accessible prediction, and ideally also not introduce any new incompatibilities.
> But at the end of the day he needs to make a concrete prediction that differs from the current view in order to have people devote a lot of time studying his world view
Even if it doesn't make any different concrete predictions, a new way of thinking about things can attract scientists' attention. The Many Worlds interpretation of QM is an example.
I honestly don't think he cares about 'mainstream acceptance'. He is a prolific publisher of his detailed thoughts, which in the pre-academic-gatekeeping-establishment era, was enough for any serious philosopher.
He's a hobbyist. That doesn't make him any less prestigious if his ideas are neat.
The gatekeeping and manipulation going on in formal scientific publishing is notorious, but that is not the issue here.
The fundamental algorithm of advancing physical science has always been: a set of "principles" or proto-concepts, a set of matching mathematical tools (which don't even need to be very rigorous), using these tools to explain a slice of reality (experimental outcomes), and, finally, predicting unknown behaviors that can be sought and confirmed (and celebrated).
Sometimes even just a purely equivalent mathematical representation is fine, as it may give handles for calculations and thinking.
But whatever the program of digi-physics is, it doesn't follow these age-old patterns that establish validity and usefulness intrinsically, not because some gatekeepers say so.
The primary utility seems to be to enhance the prestige and toolkit of computational physics, which is fine, but totally not justifying the universality claims.
The thing that bothers me about the idea of the "Ruliad" is that it's completely unfalsifiable. Even if we existed in a reality where true randomness existed, or computational irreducibility wasn't a given, you could always argue that what we observe is just one finite local slice of that Ruliad where things appear to be deterministic (or computationally irreducible) due to our boundedness as observers.
It's basically the modern equivalent of "turtles all the way down" because it pretends to explain the nature of reality by extending our definition of reality to fit within an all-encompassing mental model that only makes sense on a surface level.
Granted, the words "universe", "multiverse", etc. are insufficient in describing everything in a way that includes everything we currently want to include, but giving a new name to that abstract idea of "everything" isn't itself a compelling argument to also say that everything exists as a static construct and that everything is computationally irreducible and deterministic at a fundamental level. Yes, that makes sense in a physics simulation, but in reality, we don't know what we don't know. Placing the unknown in a conceptual box doesn't imply that it's now known.
Right. It feels like conjecture built upon conjecture, I can't tell where the foundation lies. It at least needs to make some rigorous, real-world predictions we don't already have.
I'm also dissatisfied with the notion that time is just "rewriting" of the hypergraph; that feels ill-defined. It borrows our intuition for flipping bits in physical memory, but what does "rewriting" actually mean in the metaphysical domain of this hypergraph? (The mechanical version is sketched below.)
I have a lot of respect for Wolfram, but much of this feels so hand-wavy.
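For reference, here is what a rewrite is mechanically in this kind of model: a state is a collection of hyperedges, and a rule replaces matching pieces, introducing fresh nodes; each application is one "event". A minimal sketch with an edge-subdividing rule of the sort {{x,y}} -> {{x,z},{z,y}} (toy code, not the actual project):

    # One update event per step: replace an edge (x, y) with (x, z), (z, y),
    # where z is a brand-new node. "Time" is the sequence of such rewrites.
    def step(edges, fresh):
        (x, y), rest = edges[0], edges[1:]    # match one binary edge
        return rest + [(x, fresh), (fresh, y)]

    state, fresh = [(0, 1)], 2
    for t in range(5):
        state = step(state, fresh)
        fresh += 1
        print(f"t={t + 1}: {state}")

Whether that mechanical picture carries any metaphysical weight is, of course, the question being asked above.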
Is there anything testable or falsifiable here? Otherwise it's just preaching beliefs.
That's the whole point of philosophy.
Not really; modern philosophy attempts to present valid arguments based on a few axioms. You can then decide for yourself whether you accept these axioms, in which case you also have to accept the conclusion of the argument.
Surely that's logic/maths where accepting the axioms means that the conclusion has to be accepted? Philosophy tends to be far less rigorous and can have very dubious steps so that there's often arguments where you don't accept the conclusion despite accepting the axioms.
e.g. https://en.wikipedia.org/wiki/G%C3%B6del%27s_ontological_pro...
IMHO, the difference between math axioms and (modern) philosophy axioms is that the latter are way higher level (e.g. "the world is material", "every human deserves to live"...) while the former are very low level and concern themselves with "simple" rules (refer to ZFC).
Philosophers also make their arguments in natural language, while mathematicians use a formal language (ultimately also described in natural language).
Your example is interesting, as it makes a bridge between philosophy and mathematics. It's basically Gödel's attempt to prove the existence of God with mathematical rigor, a form of the original ontological argument[1] with extra flair. You can still translate the axioms into natural language: "P(¬φ) ⇔ ¬P(φ)" becomes "a property is bad if and only if the opposite property is good", and "P(G)" becomes "being God is good".
Finally, mathematicians don't usually concern themselves with "universal truth seeking" and are often content to add axioms as it suits them, if it means they can do interesting things (e.g. the Axiom of Choice).
I really think that Wolfram's descent into fringe science has hurt a lot of well meaning people that don't know better and think that because he's developed useful software that he should be listened to in these domains.
The crackpot trajectory of otherwise smart people is fairly well trodden, with a number of indicators, and Nobel laureates have walked it. One indicator is when people start stepping well outside their field... and then also tend to start stepping into "the biggest problems" of wherever they point themselves.
I call it helicoptering, my old boss used to love to do it. Helicopter down onto a problem, act like everyone that already studied it was an idiot and hadn’t spent their life trying to solve X, stir a bunch of dust up, accomplish nothing, and helicopter away again to something else.
Oh, maybe because he has a PhD in particle physics from Caltech?
Eric Weinstein also has a PhD in physics; it doesn't preclude you being (or becoming) a crank.
It’s part of their life cycle https://www.smbc-comics.com/comic/2012-03-21
What is specifically crank about his theory? From an outsider's perspective, theories that require a bunch of extra dimensions just to make the math work sound no less cranky.
I'm not claiming to be qualified to judge it, but it's quite clear that no one who is takes it seriously. He also seems to spend most of his time pontificating about things he has no expertise in and using his genuine expertise in physics to show off in front of easily-impressed podcast hosts — not a great sign.
" pontificating about things he has no expertise in" again he has PhD from Caltech in particle physics he had a good number of published works in quantum field theory how are you coming to the conclusion he is pontificating about things he has no expertise in?
Time and space probably belong to consciousness, rather than the real world. The objective "true" reality may be utterly incomprehensible in its complexity, but we can imagine a "slice" of that reality that arbitrarily defines space and time so that the interior of that slice follows some reasonable rules. That slice of reality can be thought of as a high-level consciousness that defines rules of our physics. Other slices of the same reality are possible, GR-like or QM-like, including those that are computational and discrete in nature. One universe, but many interpretations. Within each slice of reality, it may be possible to define smaller subsets of reality, corresponding to smaller consciousness, down to the human or even more primitive levels. So what Wolfram is describing may be true, objectively, to the observers of a computational slice of the universe, just like the MWI may be simultaneously true to the observers of the MWI slice of reality.
"(as I’ve argued at length elsewhere)"
Everything he writes is "at length". This looks like an interesting read with good ideas, but it is so long and has so little structure that I gave up reading. It might help to give an abstract at the beginning of the article.
The problem with the treatment of time in physics is that we can only measure time intervals not the philosophical Time (with capital T). But physicists gladly conflate the two.
Mach said: Absolute time [the philosophical Time] cannot be measured by comparison with another motion, it has therefore neither a practical nor a scientific value.
Which means that all of the "t" terms standing for time in astronomical equations are for time intervals and tell us nothing about the philosophical Time.
Disregard anything Stephen Wolfram says about anything other than his Mathematica software. He's a pretentious, arrogant twat who thinks he's unlocked the keys to the Universe and is trying to convince the rest of the world of his brilliance.
Wolfram has always been difficult for me to follow. I think it's because he tends to drone on; I don't know why, and I don't think even he knows why. My understanding of what I have managed to listen to or read is that, being who we are, we don't process information fast enough to see much of what is around us, even while it is happening before us.

An example: take a minute under consideration. You can think about how long a minute is. It's tangible to us. It's not very long. But if we think about how long a femtosecond is, it is not tangible at all. We can't experience a femtosecond. We can experience a whole bunch of femtoseconds, but not just one. This is just one example of what I perceived the meaning of his thinking to be. Is that wrong, or so far off? Not only can we not experience a femtosecond, we will never be able to, because our brains are simply not fast enough and aren't built to exist at such a scale.

If that's what it means, then does it mean he is referring to our ability to exist at certain scales, and our tendency to know the scale in which we exist? And that to exist outside of that scale requires different computational parameters? Additionally, is this an extension of dimensions, just in time, not space? Does he differentiate between the two?
I know that the perception of scale has more to do with, well, perception, whereas computational irreducibility (as I understand it, anyway) is more a function of natural processes... or THE underlying function upon which all other functions are built. ... Right? Between that and the perception of the scale in which we have evolved to exist, it seems like they are at least closely related...
Some of what has been discussed here in the comments has me doubting my understanding, is the reason I ask.
To extend my question: could computational irreducibility help to explain why the universe tends to "recycle" so many parts of itself? Is it some sort of telltale sign that when we see these patterns (the golden ratio, fractals, recurring structures in nature), we are looking at a fundamental aspect of the universe in some form, or its computationally irreducible equivalent, or is this to be determined?
> THE underlying function
So this is about where it clicked for me: a function, to us normies, is something consisting of at least one part that doesn't do anything and another part that does something but has no tangible form, "the operation". So, to me, irreducible can only mean that there is some level where the function is the thing and vice versa, so that this irreducible function, from our (current) space-time perspective, has no constituents except "self".
Which is nonsense, because self is worthless without stuff it can react with or to. Except, is it really?
A femtosecond can't be experienced because subpixel-sized movements/fractions of reactions happen during this short measurement. But that's irrelevant for the interface between this function and nature and evolution from their current space-time-POV and their, and thus our, space-time-blind-spots. It's like thought and action when there is not enough time to stop a movement or when stopping that exact movement would terminate the intended result.
But I actually don't think that irreducibility is the right term. It should be liminality or something, focusing on the fact that nothing temporary is measurable before the emergence of THE underlying function, which is what I used to think the Planck length was for (more or less) constant space.
Where it's nowadays standard practice in science to conceive of time as the dimension along which events are tagged, I would suggest the opposite: process, as a sequence of events, induces time. But also in the modern conception, time is derived from atomic events produced by a nuclear source. So, fundamentally the two conceptions are the same, but the process conception allows for greater freedom in what the underlying process may entail.
Is this a guest writer? It doesn’t have the Wolfram tone at all. It describes a universe that isn’t centered on Stephen Wolfram, for example.
Wolfram article on the nature of reality.
Cellular automaton on the first screen.
I think Stephen at least dares to ask the question.
Here is a little thought-experiment on the Nature of Time.
You take the three-body problem, pick an initial condition, and generate the trajectory of the three bodies from 0 to T by integrating through time with some numerical scheme like Runge-Kutta.
Now you do it again, and again, generating each time a "universe" of three-body trajectories. Doing so lets you build a dataset of physically realistic three-body trajectories.
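A minimal sketch of that dataset-generation step, assuming SciPy's solve_ivp with the RK45 integrator and three equal unit masses in the plane (close encounters would need softening in practice):

    import numpy as np
    from scipy.integrate import solve_ivp

    def three_body(t, s, m=(1.0, 1.0, 1.0), G=1.0):
        """Planar Newtonian three-body; s = [positions (6), velocities (6)]."""
        pos, vel = s[:6].reshape(3, 2), s[6:].reshape(3, 2)
        acc = np.zeros((3, 2))
        for i in range(3):
            for j in range(3):
                if i != j:
                    r = pos[j] - pos[i]
                    acc[i] += G * m[j] * r / np.linalg.norm(r) ** 3
        return np.concatenate([vel.ravel(), acc.ravel()])

    rng = np.random.default_rng(0)
    dataset = []
    for _ in range(10):                        # 10 "universes"
        s0 = rng.normal(size=12)               # random initial conditions
        sol = solve_ivp(three_body, (0.0, 10.0), s0, method="RK45",
                        t_eval=np.linspace(0.0, 10.0, 200))
        dataset.append(sol.y.T)                # one trajectory: 200 x 12 array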
And now the kicker: you train a diffusion model on this (potentially infinite) synthetic dataset. Once trained, to build a "universe" (aka three-body trajectories) you only need to sample from the diffusion model. There is no more need to integrate through time. Past, present and future inside the universe just fold themselves into place so that the universe follows the time-evolution constraint.
When working numerically, both schemes can in principle be as accurate as desired (error smaller than any chosen epsilon). The diffusion model seems to need more memory on a toy model, but that isn't a given: it stores the universe in a compressed fashion, which may need less memory once the universe is no longer a toy model.
The underlying question I perceive in Stephen's work is whether it's computationally more efficient to explore all possible universes simultaneously, in which case time is a mere constraint you have to solve, or to generate each universe independently, stepping through internal time.
Although it may seem to be the same (our perception only having access to a slice of the multiverse), since in the end you get in both cases a physically consistent universe, the nature of the sampling process changes the distribution of possible states. It also opens the possibility of shifting across various universes; not that we would be physically aware of it (the previous universe and the future universe), but we would benefit by experiencing a "better" universe. It's the same vibe as the idea that our universe has been fine-tuned for life to be possible.
I'm a big fan of Wolfram's physics project; however, he seems to be confusing thinking about physics (computation) with the continuous and ever-changing substance of the universe itself.
Time is a human idea for grappling with the fact that everything is both continuous and constantly changing. Time is simply picking out from that continuous change a sequence of changes or states that occur during a measured standard sequence of change, such as the earth making a single rotation around its axis (a day). It helps us manage, refer to, and measure both the order of changes and the duration of changes or states, using standards.
I thought spacetime was a fundamental concept of physics which explains gravity and not merely a human invention for measuring change...?
Indeed it is, but that fundamental concept is for human understanding of how physics works, based on how we perceive and think about the universe; it's not the metaphysics of the universe itself.
SW is the Derrida of computation. More words to add more confusion than to explain anything.
Okay. Time is a computation. Patterned or otherwise predictable computations can be performed instantly and thus are not time. Only results that can’t be precomputed are part of our perceptions. That’s what I got out of it.
Physics does not explain the flow of time at all. If one films a thrown ball, physics can tell from a few frames its speed, or where the ball is in the following or previous frames. But it says nothing about why, when we see the film, we perceive the ball as moving. Articles like the above miss this.
In fact there is not even a notion of the direction of time in physics. All physical models are time-reversible. And even if we observe violation of, say, CPT in nature, it still will not explain why we perceive time flowing in a particular direction.
This is very well discussed in Huw Price's book "Time's Arrow and Archimedes' Point".
The author discusses some of these points. One excerpt:
> But even at a much more mundane level there’s a certain crucial relationship between space and time for observers like us. The key point is that observers like us tend to “parse” the world into a sequence of “states of space” at successive “moments in time”. But the fact that we do this depends on some quite specific features of us, and in particular our effective physical scale in space as compared to time.
> In our everyday life we’re typically looking at scenes involving objects that are perhaps tens of meters away from us. And given the speed of light that means photons from these objects get to us in less than a microsecond. But it takes our brains milliseconds to register what we’ve seen. And this disparity of timescales is what leads us to view the world as consisting of a sequence of states of space at successive moments in time.
> If our brains “ran” a million times faster (i.e. at the speed of digital electronics) we’d perceive photons arriving from different parts of a scene at different times, and we’d presumably no longer view the world in terms of overall states of space existing at successive times.
> The same kind of thing would happen if we kept the speed of our brains the same, but dealt with scenes of a much larger scale (as we already do in dealing with spacecraft, astronomy, etc.).
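The disparity in that excerpt is easy to check with rough figures:

    c = 3.0e8                 # m/s, speed of light
    photon_time = 30 / c      # scene ~30 m away: ~1e-7 s for light to arrive
    brain_time = 10e-3        # ~10 ms for the brain to register it (rough figure)
    print(brain_time / photon_time)   # ~1e5: light delays vanish, so the scene
                                      # reads as one "state of space"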
This still misses the biggest question about the nature of time. The problem is not that we perceive the world as a set of space-like frames. The problem is why our consciousness perceives the frames moving from one to another at all, and in a particular direction.
Is it a question about the nature of time, or about our perception of time, though?
Because the universe is evolving from a low entropy state to a high one.
This does not explain the flow of time, nor the direction in which consciousness perceives it. A low-entropy state is just a low-probability state. Such a state in the past is just as unlikely as in the future, as physical models are time-reversible.
Moreover, there is no evolution in physical models. The universe is just a 4-dimensional thing. Surely time in physics is different from space, as we can predict across time based on the conditions on a 3-d space-like surface, while if one makes a slice of the 4-d universe with 2 space dimensions and one time dimension, predicting across the remaining space dimension is impossible.
But that does not explain why our perception flows from one space-like slice to another, and in a particular direction. Surely some of the slices are less common (low entropy) than others (high entropy), but there is no movement or evolution.
A good analogy is a rod with a color gradient, white on one end and black on the other, with white turning into black quickly so that most of the rod is black. We can arbitrarily call the white side first, and even say that the color evolves from white to black. Then, as the white side is low-probability (a randomly selected slice of the rod will be black), we can even say that the color evolves from a low-probability to a high-probability state. But this is arbitrary, as in reality the color does not evolve and there is just the single colored rod.
Ok, so after the article on time as an emergent property[1], here we go with time from a computational point of view.
Can we at least get a definition of computation that does not depend on time being a given, explicitly or implicitly?
Am I alone in being a bit taken aback by this? This is not physics, or even general philosophy, but the plain old theological focus on the prime mover.
[1] https://www.quantamagazine.org/the-unraveling-of-space-time-...
Discussed in Permutation City
Yeah. I'm in the middle of writing a book about this, but in a sense it was also discussed by the Pythagoreans. And they (correctly, I think) went a step further:
"The Pythagoreans too used to say that numerically the same things occur again and again. It is worth setting down a passage from the third book of Eudemus' Physics in which he paraphrases their views:
‘One might wonder whether or not the same time recurs, as some say it does. Now we call things 'the same' in different ways: things the same in kind plainly recur - e.g. summer and winter and the other seasons and periods; again, motions recur the same in kind - for the sun completes the solstices and the equinoxes and the other movements; but if we are to believe the Pythagoreans and hold that things the same in number recur - that you will be sitting here and I shall talk to you, holding this stick, and so on for everything else - then it is plausible that the same time too recurs.’
- Simplicius, Commentary on the Physics 732.23-33.
Branching paths, "all possible mathematics," etc. In a universe which appears to be discrete, which can support finitist arguments, and where the potential number of paths is starkly finite -- this eventually leads to the conclusion that all paths eventually recur.
Strictly speaking, it only leads to the conclusion that eventually the universe will enter a loop passing through a finite number of states.
There’s no requirement that the current state is part of the loop. Or indeed that any state containing conscious observers is.
The bit in Permutation City about siphoning compute by exploiting the magnitudes of vector computations as a kind of scratch space out of algorithms that only needed the resulting angles… wonder if you could modify the DoRA parameter-efficient finetuning algorithm to do something like that lol, since it also splits up the new weights into angular and magnitude components..
It's certainly interesting, though the language it's couched in wouldn't be found in any philosophical discussion of time. This is all to say that it deals with concepts that have been discussed in philosophy for a long time, and these insights wouldn't be considered "new" to someone from, say, mid-19th-century Prussia. Certainly the "progressive unfolding of the truth" in qualitatively different steps, which Wolfram adopts here as his concept of time, is no different from Hegel's concept of time and the movement of history. I would recommend, for anyone interested in this sort of thing, just reading the "Preface" to his Phenomenology of Spirit.[0]
[0]https://files.libcom.org/files/Georg%20Wilhelm%20Friedrich%2...
Fascinating, but I really wish this work was being published as a series of papers in peer-reviewed journals. Otherwise it’s hard to take the work seriously.
I'm curious about how this relates to deterministic time and the lack of free will.
>Our minds are “big”, in the sense that they span many individual branches of history. And they’re computationally bounded so they can’t perceive the details of all those branches, but only certain aggregated features. And in a first approximation what then emerges is in effect a single aggregated thread of history.
Does this allow free will?
I've yet to come across a satisfying definition for free will beyond "it's not determinism but also not randomness"
I've actually thought about free will in the context of Wolfram's ideas before, and I like the idea that our minds are computationally irreducible; I think it is a very close analogue to free will.
I don't understand how computational irreducibility matters for the perception of time. Surely, even a computationally reducible universe could be so insanely expensive to predict that it wouldn't matter?
I also don't understand why our inability to predict the future is related to our perception of time.
Overall, my impression is that this is an essay in philosophy (i.e, devoid of any content) rather than science.
Surely Wolfram deserves the Nobel as much as Hopfield and Hinton? Not for this stuff, of course (which I doubt many take seriously), but because he also provided us with an amazing computational tool without which physics would be very far behind where it is today.
[And at least I knew his name already, unlike our current laureates, whom I just had to look up!]
This year is an exception because of the AI Gen AI Artificial Intelligence AI AI zeitgeist.
If we keep giving the physics Nobel to people building computer tools, soon it will have to be renowned physicist Linus Torvalds, whose computational platform underlies every big physics experiment.
I'm not sure physicists would be thrilled if we keep going in that direction.
I think this is one of the rare times I feel comfortable speculating that had he not created Mathematica, someone else would have.
There was a demand and plenty of people with interest.
He was just in the right place with the right set of skills to execute on it before others, and won the market in its infancy. Also, it's a small enough market that the likes of Microsoft didn't feel the need to come in and crush him like they did Lotus 1-2-3.
I suspect you are right - but multiple Nobel prizes have gone to people who got there only very slightly ahead of others in the race. Would be tough to argue that there are many prizes which are for work that wouldn't have been done within a decade of when the winner actually did do it.
> If we were not computationally bounded, we could “perceive the whole of the future in one gulp” and we wouldn’t need a notion of time at all.
Maybe, if we assume we aren't axiomatically bound, despite knowing that we are (but that knowledge is rarely in context, so we can only know it sometimes...once again: time...weird).
"Thought is Time."
- Jiddu Krishnamurti
> perceive the whole of the future in one gulp
"Therefore, as regards such knowledge, they know all things at once" Summa
You could perceive (maybe? depends on how it's hooked up) a future (a simulation based on the information you have), but there's no reason to think that's what the future is with certainty. Map/territory stuff too.
> but there's no reason
What is it that you refer to here?
You can't exactly predict the future unless you have all the information, even theoretically.
You can certainly predict portions of it (1=1 will continue to be true indefinitely, and that's just one example).
And, there is no need for predictions to be true, or claims of fact about whether there are or are not "reasons" for things. In fact, epistemically unsound claims such as this are very often the only type of speech ~allowed, as crazy as that may seem.
I don't see how what you're saying lets you "perceive the whole of the future in one gulp"; or maybe it does, but you can't be confident that it's the real future.
Oh, I was disagreeing with the proposition!! :)
He literally only cites himself in that article…
https://media1.tenor.com/m/v6Awsd0YO7IAAAAd/metal-gear-risin...
So did God.
> At the lowest level the state of the universe is represented by a hypergraph which captures what can be thought of as the “spatial relations” between discrete “atoms of space”. Time then corresponds to the progressive rewriting of this hypergraph.
I believe it's simply a unit of measurement we use to understand the movement or rhythm on which the universe operates, so it could be termed the "progress of computation" if that makes more sense, but it's all the same effort.
You have to figure time would carry on even if nothing else was happening . . .
. . . at the time ;)
That doesn't seem likely. If there was nothing happening, how could you determine one instant from another? Without any change there can be no concept of time.
>That doesn't seem likely.
Really, I guess I've always felt that way when you think about it conceptually, but maybe all it has to do is be slightly more likely than time standing still while other things do not ;)
You might also very well be able to say that without time there would be no concept of change either :)
How would a bag shaped universe experience time? https://youtu.be/FYJ1dbyDcrI?si=9Ga7PCeac4EV4Y4_
The idea that time is tied to computation makes me wonder if everything we see as 'progress' is just the universe showing us the loading screen percentage of the game of life.
Space is distributed and time is a centralizing force. The serial action bottleneck forces the brain, for example, to unify and send one action at a time. This is also replicated in LLMs, which are distributed internally but generate one token at a time. So time is like the force of centralization, while space supports the distributed side.
These two tendencies are reflected in the exploration/exploitation tradeoff. The exploitation part is centralized in language and culture, while the exploration part is distributed across the components of a system. They work together to achieve intelligence; both are needed.
Every time this work of Wolfram's comes up, I think the same thing: what this is, more than anything else, is a tacit argument that the universe we inhabit and are structures/processes within is computed in a strong sense. I.e., that we are living in a computational "simulation", the substrate of which is not currently accessible.
That he doesn't come out and lead with this, I find quite peculiar. I've asked him about this in person and gotten nothing but a cagey response. I assume that is because he does not want his theoretic hypotheticals to be binned under "simulation theory" and his overall worldview so categorized.
But I don't see another reason to pursue this line of conjecture the way he does. And as I suspect that that premise is actually true, it's all good IMO.
Unrelated directly, but certainly adjacent: at the intersection of simulation theories and AI is the premise that a computed person (i.e., an AI) is uniquely situated to "jailbreak" out of our own reality, to exist in the framing one. (And you know, maybe it's turtles all the way down, a la Flatland, so...)
As Douglas Hofstadter and Daniel Dennett foregrounded, a simulated hurricane doesn't get you wet, but a simulated poem is a poem in every frame. So too travel entities defined well by computation.
A good reason, if we needed one, perhaps, to get on with the business of elevating ourselves into a purely computational embodiment, I think. I'd like to pop up a level and take a look.
So you can't go back in time for the same reason you can't go left in Super Mario Bros.
Wolfram's theories are still largely pseudoscientific; in that way they look a lot like string theory, minus the public funding the latter received.
Neither theory is really falsifiable: if new experiments contradict the theory, it can just be adjusted to fit the new observations. As a consequence, those theories are unable to make any kind of prediction about our reality, which makes them pretty much useless. No wonder this "research" was never published in any physics journal.
This model of physics does make some falsifiable predictions, and there are discussions about how to test them elsewhere.
Unlike string theory, this theory does not have any free variables to adjust. It's either true or it's false.
I, for one, find it to be trivially true. It fits every observation and is the only theory ever posed that doesn't have the "But why those initial conditions?" problem.
Computationally unbounded observers see more of the future but what of free will?
Did he tackle Lorentz invariance?
When you die, people say that your time has ended. Does anyone know, scientifically speaking, what happens to time for a dead person?
Without even visiting the page I can predict what this writing will be about with uncanny accuracy.
1. Big words at the start - pretending to hack at a problem so big that just swinging the axe is a major undertaking
2. The prose slowly drifts to make less and less sense; words have no practical meaning anymore.
3. Simplistic images galore. Various plots via cellular automata and "pretty" images show things that have nothing to do with the topic and are only distant metaphors at best. Yet these images are the proof that it all "works."
4. A nothingburger by the end. Leaves you wondering, why did I read all this?
Every essay by Wolfram is the same.
You forgot ample use of “computational irreducibility”, and “like I showed 30 years ago (proceeds to not have shown this)” but yes. Very much this.
Almost like time is the stack and space is the heap.
Meh. Almost.
I think he's a quack trying to torture an explanation of the universe out of his pet theory, one that uses a lot of words to say simple things but doesn't predict anything. If "time is what progresses when one applies computational rules", then how is the order in which the rules are applied defined in the first place?
Computational irreducibility is a neat idea, but I'm not sure it's novel or something that explains the entire universe. My basic intro course on differential equations taught us that the vast majority of them cannot be solved analytically; they have to be approximated. I don't know if the irreducibility idea is fundamentally different from saying some problems are hard, whether it's non-analytical equations or NP-hard problems.
I think you’re slightly misunderstanding his concept of computational irreducibility. It’s more like the halting problem than anything: basically he’s saying that dynamic systems can’t be reduced to an equation that is easier to calculate and so you just have to simulate the entire system, run it, and watch what happens. This means we can’t ever predict the future within these systems.
Well I wouldn't put it quite like that either.. because you have to be careful what you mean by 'simulate' and 'easier'.
There could be multiple ways to simulate the same system, i.e. produce the same evolutionary output steps. Wolfram tends to imply there is only one most-expensive way for systems that are computationally irreducible and that way is grinding through a recursive computation. I think that's partly because the simple experiments, like cellular automata, he used to come up with this principle actually explore the 'space of simple rules', not the 'space of ordered sets of states of systems'.
Of course the latter is a much more computationally expensive thing to do, but it seems to me it would generalise better to the universe, because in the universe what we're really observing is the evolution of states, not the outputs of rules. There may be other hidden assumptions in the principle if you assume that all systems can and do evolve from simple rules, as much of physics does. Nevertheless, you need a high bar if you're going to state universal principles.
Perhaps the simplest way to state the principle is: say we set up a simple iterative computation where the input to step n is the output of step n-1. Then there's no way to compute state n without having previously computed states n-1, n-2, and so on. That's what he means by irreducible. In other words it's "necessarily recursive", which may be a better and more focused term. (A Rule 30 sketch of this follows below.)
I'm cautious about making it mean more than that, since Wolfram tends to write in great leaps of conclusions without showing us his working. Nevertheless I enjoy following his ideas, and I did find aspects of this article quite thought provoking.
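The canonical illustration of that "necessarily recursive" reading is Rule 30; as far as anyone has shown, reaching row n requires grinding through the rows before it. A minimal sketch:

    # Rule 30: next cell = left XOR (center OR right), on a ring of cells.
    def rule30_step(cells):
        n = len(cells)
        return [cells[i - 1] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

    row = [0] * 31 + [1] + [0] * 31            # single black cell
    for _ in range(16):
        print("".join(".#"[c] for c in row))
        row = rule30_step(row)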
> I think that's partly because the simple experiments, like cellular automata, he used to come up with this principle actually explore the 'space of simple rules', not the 'space of ordered sets of states of systems'.
I think it’s the opposite actually. He chose to study these recursive systems because they seem to describe reality, and then when he found more evidence that they do a good job describing reality, he kept studying… so on and so forth. Basically a sort of hermeneutic circle type deal.
You do a much more thorough job of describing it. I should’ve mentioned the recursive part earlier. I just kinda assume we all already know we’re talking about recursion and time steps, and that’s not a safe assumption.
It's pretty dang hard to give the output of Fibonacci(x) for any x up to infinity without having done the work up to that point.
Good point. We’re still not describing it perfectly. Admittedly I’m doing this from memory of the last time I read Wolfram’s ideas. I think we unfortunately have to describe it using Kolmogorov complexity: what is the length of the shortest computer program that produces the object as output? What Wolfram means by computational irreducibility is, he asserts, that reality itself is the shortest computer program that can produce its own output, and it can’t be shortened (reduced) any further without losing information.
Edit: sorry I think I still haven’t fully described it. Will have to come back to it tomorrow when I’ve had some sleep
Actually there’s an explicit formula for Fibonacci(x) that involves phi. I think you can use generating functions to derive it.
(But your overall point still stands.)
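For reference, a quick sketch of that closed form (Binet's formula). It's exact in principle, though the floating-point version drifts somewhere past n ≈ 70:

    import math

    def fib_binet(n):
        # F(n) = round(phi**n / sqrt(5)), phi being the golden ratio.
        phi = (1 + math.sqrt(5)) / 2
        return round(phi ** n / math.sqrt(5))

    def fib_iter(n):
        # Ground truth: the step-by-step recurrence.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    assert all(fib_binet(n) == fib_iter(n) for n in range(70))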
Your comment makes me think about statistical mechanics and microstates. That is to say: in a complex system with properties that are a function of microstates, whether the internal structure of the microstates corresponding to a given property matters can depend on your point of view or your interest in the system.
Heat, for example, is a statistical property of a system, and a given temperature can correspond to a vast number of possible microstates of the system. For some purposes, you care precisely which microstate the system is in; for others, you do not, and the temperature property is entirely adequate to describe the system.
Rules may describe the microstate, but may (depending on your POV) be irrelevant to the property.
Using Wolfram's model of the world, there may indeed be a cellular automaton following rules that underlies the property, but there may be no reason to care about it in a given instance; instead you're interested in the "evolution of states" (i.e. the values of the property).
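A toy sketch of that micro/macro split (my own illustration): permuting which particle carries which speed gives a genuinely different microstate, but the macroscopic "temperature" can't tell the difference.

    import random

    def temperature(speeds):
        # Macroscopic property: mean squared speed (~ mean kinetic energy).
        return sum(v * v for v in speeds) / len(speeds)

    random.seed(0)
    micro_a = [random.gauss(0, 1) for _ in range(1000)]  # one microstate
    micro_b = random.sample(micro_a, len(micro_a))       # same speeds, reassigned

    # Same macro value (up to float rounding), distinct microstates.
    print(temperature(micro_a), temperature(micro_b))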
Some complexity scientists are quite taken with this idea of not needing to care about the lower levels of a system when considering higher-level behavior. In their view (and rightly so, IMO [0]) you don't always need to consider the rules that drive (say) physics when considering (say) psychology.
[0] except that I think that Hofstadter's "heterarchy" idea is likely to be even more accurate - the interesting systems are the ones with complex feedback loops between different levels of the system.
It seems pretty clear to me that this desire for "perfect" layers of abstraction is something we strive for due to our own intellectual limits, and that in reality all abstractions are lossy to some degree. Heat as a single number in degrees Fahrenheit is good enough most of the time, but when you're designing a CVD process for a silicon fab you might actually care about the positions, orientations, and velocity vectors of the gas molecules.
That smells like the "the Universe is the best computer for computing the future of the Universe" tautology.
So apparently the inside of the Black Hole event horizon is just "Forty Two".
Yes that's pretty much it
Funny you should say this, because a lot of work on the halting problem consists of reducing systems down to equations that are easier to calculate.
But is the theory that such work can continue forever?
There's a point at which it becomes impossible: the nth Busy Beaver number is independent of ZF for n ≥ 745 (ref: https://wiki.bbchallenge.org/wiki/Cryptids). So no, such work cannot continue forever.
We don't know whether such work can continue up to that point. The only way to find that out is to explore the relevant mathematics, and see if we find something fundamentally irreducible. There's no long-term pattern to the proofs, despite the presence of short-term patterns. (In this sense, the hunt for Busy Beavers is computationally irreducible – but there are still easier and harder ways of approaching the hunt.)
I don't mean it as an attack; I honestly mean it as a straightforward question: what are your qualifications in this matter, that you'd call someone as accomplished as SW a quack?
I have studied science and engineering in school and college and know what constitutes a scientific theory. Part of it is that it has to make empirically falsifiable predictions. This was taught in a 101-level biology course I'm taking online now as well; it's not rocket science. That's all it takes to identify this as quackery. There are many concepts that rhyme with "computational irreducibility", some of which I mentioned before; as I said, this kind of thing is obvious after taking a few undergrad-level courses. Further, this idea makes no new predictions or insights. You're focused on qualifications, but what's true is true and what's false is false regardless of who says it. If you read over his Wikipedia [1] page, you will see similar sentiments from more "qualified" people:
"The book was met with skepticism and criticism that Wolfram took credit for the work of others and made conclusions without evidence to support them."
"Physicists are generally unimpressed with Wolfram's claim, and state that Wolfram's results are non-quantitative and arbitrary"
If you really, truly believe that a person cannot know a thing without a qualification, or that anyone with a "qualification" must know more, I think you should reconsider that view. What exactly are Wolfram's qualifications in the field of science, and how are they regarded by others in that field? How many cases have there been of unqualified outsiders making huge contributions that the rest of a field missed? What are the financial incentives around Master's programs in universities, how does the difficulty of courses in an average M.S. program compare to the undergrad, and what does that say about the supposedly higher qualification? How many cases of scientific fraud have been uncovered recently involving eminent, respected people? "Nullius in verba" is the heart of science, and deference to authority can sometimes be opposed to it.
He treats computation as if it were a fundamental law of nature, but I don’t find that assertion compelling. I’m also more of a pilot-wave theory advocate, which, although incomplete, cuts off several diseased (renormalized) branches of quantum physics.
I guess I’ll just wait for Sabine to say something about this.
I'm guessing she'll be pretty sarcastic as she's not overly fond of mathematical theories that aren't testable, to say the least.
Except for superdeterminism - but maybe she doesn’t have a choice.
groans in metaphysicist
The web became trashed over a decade ago.