astro-cosmo-q a day ago

As someone who works decently close to, but not in, this area, I am surprised to see this on the front page of HN. The paper authors do not use correct statistical practices (e.g. H_0 cannot be fixed “as a nuisance parameter” to remove a degeneracy with another parameter - nuisance parameters must be marginalized over!), and they fail to account for several effects in their model (e.g. stretch/color factors for each supernova must be varied) that are known to be necessary for robust inference of cosmological parameters from supernova data.
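
To make the marginalization point concrete with a toy example (nothing to do with the paper's actual likelihood, just a made-up degenerate Gaussian in Om and H0 with illustrative numbers): fixing the nuisance parameter at a point takes a slice of the likelihood instead of integrating it out, and artificially shrinks the error bar on the correlated parameter.

  import numpy as np

  # Toy degenerate likelihood in (Om, H0), centred on Om=0.3, H0=70, correlation 0.9.
  Om_grid = np.linspace(0.1, 0.5, 400)
  H0_grid = np.linspace(60.0, 80.0, 400)
  Om, H0 = np.meshgrid(Om_grid, H0_grid, indexing="ij")
  rho = 0.9
  z1, z2 = (Om - 0.3) / 0.05, (H0 - 70.0) / 3.0
  like = np.exp(-0.5 * (z1**2 - 2 * rho * z1 * z2 + z2**2) / (1 - rho**2))

  # Correct: marginalize over H0 (flat prior) by integrating it out.
  post_marg = np.trapz(like, H0_grid, axis=1)
  post_marg /= np.trapz(post_marg, Om_grid)

  # Incorrect shortcut: fix H0 at its best-fit value, i.e. take a slice.
  post_fixed = like[:, np.argmin(np.abs(H0_grid - 70.0))]
  post_fixed /= np.trapz(post_fixed, Om_grid)

  def sigma(p):
      mean = np.trapz(Om_grid * p, Om_grid)
      return np.sqrt(np.trapz((Om_grid - mean) ** 2 * p, Om_grid))

  print("sigma(Om), marginalized over H0:", round(sigma(post_marg), 4))  # ~0.05
  print("sigma(Om), H0 fixed:            ", round(sigma(post_fixed), 4)) # ~0.02, spuriously tight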

This is an honest question since I have seen this phenomenon occur a few times now with cosmology/astrophysics papers on HN: How did the original poster find this? And why has it gotten such interest/points? I sincerely hope it is simply a well-intentioned interest in our universe (which it greatly heartens me to see!) combined with naïveté (not meant pejoratively, just to refer to lacking context) wrt the technical nature of this work, but I am interested to hear your thoughts.

  • necubi a day ago

    I saw it floating around Twitter with some expansive commentary: "This could be an incredible revolution in Cosmology. The Dark Energy model of the universe, which won a Nobel Prize in 2011, may be completely wrong.", etc.

    For whatever reason, engineers really hate dark energy and will glom onto any fringe theory that appears to disprove it. Not to psychoanalyze too much, but it seems to be a topic where non-experts get to feel like they're smarter than those PhD cosmologists because they watched a Sabine Hossenfelder video.

    See literally any thread about dark energy (or dark matter, which elicits similar reactions) on HN.

  • shepardrtc 12 hours ago

    If this is the case then the paper is easily dismissed, correct? How is it being published if the flaws are immediate, obvious, and completely destructive to the idea being presented? I'm legitimately asking - I've published papers and reviewed them, and if I thought something was so thoroughly wrong, I certainly wouldn't give it the ok.

  • codethief 18 hours ago

    I can't speak to the technical nature of the work, as I don't work in the field, but the last-named author, likely the advisor, seems to be a respectable researcher.

    > I have served on the committee of the International Society on General Relativity and Gravitation 2017-2022, and am a past President of the Australasian Society for General Relativity and Gravitation and a past President of the New Zealand Institute of Physics. I served 8 years on the editorial board of Classical and Quantum Gravity, 2012-2019.

    http://www2.phys.canterbury.ac.nz/~dlw24/

    So while your criticism may well be justified, you make it sound like this is a fringe paper, which I don't think is the case. So instead of opening a meta discussion about what physics papers get posted & upvoted on HN and why (which is hardly novel), I'd be much more interested in how big the issues you're mentioning are and whether the results of the paper could be salvageable. I at least do think that Wiltshire's research is interesting and he has made good points[0] about the challenges of coarse-graining spacetime structures and where & why the underlying assumptions of LCDM might fail to hold.

    [0]: See e.g. the lecture notes on https://arxiv.org/abs/1311.3787

    • astro-cosmo-q 13 hours ago

      I am certainly not suggesting anything negative about the character or reputation of the authors of the work. I think that work on alternatives to the accepted concordance cosmology model (LCDM) should definitely be explored, and David Wiltshire et al. should pursue this if they deem it promising.

      As an aside: You may not find this compelling, which is understandable, but I will note that the vast majority of cosmologists (very conservatively 95%) do not question the FLRW aspect of the cosmological concordance model (which is what the paper here does away with in the alternative timescape cosmology), even if they question other parts of it (e.g. by considering dynamical dark energy, neutrino interactions, etc.). I agree that timescape is an interesting idea, but it seems like only a few people have been working on it for over a decade now - unless you have a very (and I think unfairly) dim view of professional cosmologists, if there were a strong case to be made for the timescape model based on the data, the greater community would have adopted it by now.

      Finally, and independent of the sociological points above, here is an answer to your question about specifics, beyond the statistical/modeling issues I mentioned above. Again, this is not exactly my area and I have not done the analysis myself, so I will refrain from strong opinions. However, my immediate reaction is that it is easy to fit one particular dataset (in this case part of the Pantheon+ supernovae sample) with a more complicated model than LCDM, but often these types of models fail when other cosmological datasets are included. Considering such joint constraints with multiple datasets is not an “extra ask” of alternatives to LCDM - to be taken seriously, any alternative model should hold up in a joint analysis of precision cosmological datasets (see, for example, the tests of a more seriously considered alternative model in the community, early dark energy [0]). These days, this is often the combination of Cosmic Microwave Background (CMB) anisotropies, galaxy Baryon Acoustic Oscillations (BAO), and one of several supernovae samples - see e.g. [1] for a somewhat pedagogical overview.

      The failure to fit several datasets is a common issue, e.g. with many MOND papers. In this case, the authors do not try to fit anything but a single (modified) supernova dataset. I would expect that if they included a fit to CMB/BAO data, there would be trouble for the timescape model, as this is exactly what was found in an analysis of timescape with another state-of-the-art supernova dataset [2]: there, the timescape model could not accommodate the supernova data once combined with CMB/BAO data.

      [0] https://arxiv.org/abs/2302.09032 [1] https://arxiv.org/pdf/2007.08991 [2] https://arxiv.org/pdf/2406.05048
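
      (To make the joint-constraints point concrete with a toy - not any real pipeline, and the "probes" below are synthetic stand-ins rather than actual SN/BAO/CMB data: with independent datasets the log-likelihoods simply add, so a model tuned to one noisy probe can still be strongly disfavoured once tighter probes enter the sum.)

        import numpy as np

        rng = np.random.default_rng(0)

        def toy_probe(n, noise):
            """Fake dataset y = x + noise, standing in for one cosmological probe."""
            x = np.linspace(0.1, 1.0, n)
            return x, x + rng.normal(0.0, noise, n), noise

        probes = {"SN-like": toy_probe(50, 0.10),
                  "BAO-like": toy_probe(10, 0.02),
                  "CMB-like": toy_probe(5, 0.005)}

        def lnL(slope, probe):
            x, y, sig = probe
            return -0.5 * np.sum(((y - slope * x) / sig) ** 2)

        def lnL_joint(slope):
            # Independent datasets -> their log-likelihoods add.
            return sum(lnL(slope, p) for p in probes.values())

        # A "model" (slope 1.05) that the noisy SN-like probe barely objects to
        # is heavily penalized once the tighter probes are included.
        for slope in (1.00, 1.05):
            parts = {k: round(lnL(slope, p), 1) for k, p in probes.items()}
            print(slope, parts, "joint:", round(lnL_joint(slope), 1))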

      • codethief 8 hours ago

        Thanks for elaborating!

        > However, my immediate reaction to this is that it is easy to fit one particular dataset (in this case part of the Pantheon+ supernovae sample) with a more complicated model than LCDM, but often these types of models fail when other cosmological datasets are included.

        This is what I was afraid of. Thanks for the links, in particular for [2]!

  • astro-cosmo-q 11 hours ago

    A parenthetical remark to clarify, since I was reading quickly: wrt the second point in parentheses about stretch factors, upon a second look it is not clear to me exactly what is happening, since they say the factors are fixed for each supernova but then also list the (global?) stretch and color factors in Table 1 (implying that they are varied).

  • marcosdumay a day ago

    That is probably the same phenomenon that makes the sketchiest nutrition-science papers appear in popular news.

    Sketchy things have the most interesting results. People who want entertaining news select for interesting results, and the sketchy ones get over-represented.

  • throwaway81523 a day ago

    It has been circulating on the intertubes. I saw it on a more general interest site earlier today, before seeing it on HN just now.

  • SpaceManNabs a day ago

    I have mentioned this in two comments, but HN gets really kooky when it comes to cosmology. A few years ago I saw a barrage of highly upvoted papers on MOND and de Broglie stuff.

    It really made me wonder what else gets posted here that is patently absurd but I don't have the prerequisite knowledge to filter it.

  • andrewflnr a day ago

    > well-intentioned interest in our universe

    I mean, probably. Though HN does have a taste for fringe theories, which might color your interpretation of "well-intentioned". And most of us aren't really qualified to assess the statistical rigor of astrophysics papers, myself certainly included.

  • mmooss a day ago

    > why has it gotten such interest/points?

    For context: The top comment on most HN stories, especially research, tries to completely discredit the OP, often by finding flaws, and especially in statistical methods.

    Everything has flaws. I think people are interested in what is valuable and possible. Shakespeare's work has many flaws, but that's not what people focus on.

    Also, while you aren't responsible for all those other top comments, why should I believe yours? Usually I just ignore these comments (but I appreciate your curiosity).

    • JoeAltmaier 16 hours ago

      Failures in statistical methods are not covered by 'everything has flaws'. Those are fatal, existential flaws. Something that is not likely true (though it is erroneously presented as likely to be true) is likely false.

      • mmooss 10 hours ago

        Working backwards ...

        > Something that is not likely true (though it is erroneously presented as likely to be true) is likely false.

        That doesn't make sense to me. Because something isn't proven here, that doesn't make it more likely to be false; it's just uncertain. Poor evidence is not evidence either way. To say it's false, it would need to be proven false, with good evidence. If I assert, 'the universe is expanding because Pluto is further away today than yesterday', my argument wouldn't support the claim but that doesn't logically imply that the universe is not expanding.

        > Failures in statistical methods are not covered by 'everything has flaws'. Those are fatal, existential flaws.

        Why are errors in statistical methods -- if they exist here: we have a hot take by a random, anonymous Internet commenter (using a new account) against scientists who spent a long time on this work, and put their names and reputations on it -- somehow more fatal than other errors?

        For example, some statistical errors lead to weaker results, but results nonetheless. Some lead to results with a somewhat different meaning. (Some lead to stronger results.)

        We need to deal with imperfect information all the time and find value in it, or we would have almost no information. I spent last week solving a problem with several routers interacting; I had some clear data, some unreliable information, and some black-hole uncertainty; I had to work with what I had and solve the problem. The idea that science is exempt from that is a fantasy of non-scientists, of the religion of science.

molticrystal a day ago

This paper argues that the Timescape model [0] provides a better fit than the cold dark matter model when examining Type Ia supernovae. According to the Timescape model, clocks run faster in voids, where the gravitational field is weaker, so significant differences in elapsed time exist between a galaxy floating in a void and one like the Milky Way. The Timescape model suggests that other models, which fail to account for these differences, lead to less accurate calculations and less plausible solutions.

[0] https://en.wikipedia.org/wiki/Inhomogeneous_cosmology?useski...

  • T-A a day ago

    > a better fit than the cold dark matter model

    than the Lambda Cold Dark Matter model.

    Lambda, i.e. the cosmological constant, a.k.a. dark energy, is what they do away with, not dark matter.

    • SpaceManNabs a day ago

      Thanks for saving me time in dismissing this paper, lol. Any time somebody wants to get rid of dark energy, I run into some garbage. Reminds me of the MOND nuts.

      Just reading the rest of the comment section is enough to help me verify that.

      For some reason, Hacker News always gets kooky when it comes to this stuff.

      • webdoodle a day ago

        Can't censor it, so you gaslight it?

      • andrewflnr a day ago

        I don't know, the evidence for dark energy has always seemed a lot sketchier than the evidence for dark matter. Dark matter has lots of interlocking lines of evidence. Isn't dark energy pretty much entirely based on various cosmic distance measures that all have huge stacks of assumptions embedded?

        • SpaceManNabs a day ago

          I agree. Until I see better evidence for Type Ia, WMAP, and cluster formation in another theory, I really want all the charlatans to be quiet. We don't know what dark energy is, but we have decent evidence to say it is there, and also decent theory.

          I am not saying this paper is made by a charlatan, btw. This type of work attracts those people though.

  • vlovich123 a day ago

    If clocks run slower in the presence of gravity, wouldn’t it stand to reason they run more quickly in a void where there’s less gravity? Or is the model saying that clocks run even faster in a void than Einstein’s theory predicts?

    • vecter a day ago

      Clocks run at "normal" speed (i.e. "1x" speed) in the absence of a gravitational field. The stronger the gravity, the slower they run (i.e. less than "1x" speed).
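
      (The standard weak-field relation, nothing specific to this paper: a clock sitting at Newtonian potential Φ, with Φ ≤ 0 near mass and closer to zero in a void, ticks at roughly

        d\tau/dt = \sqrt{1 + 2\Phi/c^2} \approx 1 + \Phi/c^2

      relative to a distant reference clock, so void clocks run slightly fast compared to clocks inside galaxies and clusters. The timescape claim, as I understand it, is that accumulated over the age of the universe this difference stops being negligible.)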

      • vlovich123 a day ago

        Right, so is the paper saying that Lambda CDM completely ignored clock differences due to heterogeneity in the universe's mass distribution, where isolated galaxies would experience less time slowing than galaxies near other galaxies, which would experience more time dilation?

        • raattgift 14 hours ago

          In the standard cosmology the Integrated Sachs-Wolfe effect captures the redshift/blueshift of light from distant sources (up to the Cosmic Microwave Background) as it traverses relatively dense regions and relative voids.

          https://en.wikipedia.org/wiki/Sachs%E2%80%93Wolfe_effect

          Note that in the next paragraph I depart significantly from the vocabulary that the Timescape programme proponents have been using for the past twenty years.

          ISW and comparable spectroscopy is easy enough to think about in terms of an accelerating cosmic expansion, i.e., relative voids are becoming spatially bigger with the expansion. It becomes much less intuitive how to fit the data if one instead keeps relative voids at roughly constant volume, implying that there is a significant false vacuum above the ground state and that in voids the false vacuum is slowly decaying to that state. (Outside the supervoids, near matter, this false vacuum decays much more slowly still.) Because "vacuum" in the voids isn't really vacuum, one is stuck with either a running function on the constant c (it gets faster with time from the formation of the CMB; this is because the false vacuum evolves towards a real vacuum) or adapting lightlike geodesics by imposing refraction (since the false vacuum is a medium).

          The usual terminology is reasonably captured in the first paragraph here at <https://en.wikipedia.org/wiki/Inhomogeneous_cosmology#Inhomo...> ("Inhomogeneous universe"). The following short section ("Perturbative approach") is what is done in the standard cosmology when one wants to do detailed studies of filamentary distributions and other structures that are lumpy at some (larrrrrge) length scale of interest: the perturbed homogeneous background is practically always the standard FLRW.
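
          (For concreteness, the "perturbed FLRW" here is usually written, in the conformal Newtonian gauge, as

            ds^2 = a^2(\eta)\left[-(1+2\Psi)\,d\eta^2 + (1-2\Phi)\,\delta_{ij}\,dx^i dx^j\right]

          with Ψ and Φ small potentials sourced by the density fluctuations; they coincide when anisotropic stress is negligible.)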

          The justification for perturbation theory on FLRW is that even though there are dense spots (notably most galaxies' central black holes), principles like the Birkhoff theorem capture the idea that as you get far enough away from a galaxy it behaves more and more like a small shell, and this happens at intragalactic scales for these SMBHs: gravitationally, even to its arms' structure, it makes practically no difference whether Andromeda's central bulge has a lot more stars/gas/dust or whether it has one, two, or six central SMBHs (at enough spatial separation that they're not mutually orbiting in a way that would generate gravitational radiation our observatories are sensitive to).

          The same idea applies to galaxies->galaxy clusters->filamentary structures: as you "zoom out" the density variations become less important: filaments are pretty sparse on average.

          The Timescape programme wants a sharper difference in matter sparseness between voids and filaments, and proposes that gravitational backreaction by the matter is responsible for generating that: the presence of matter steepens the density contrast over time (without the visible matter clearly becoming denser). I don't personally see how that's much different from a false-vacuum decay in the voids, conceptually. (ETA: well, it depends somewhat on how the Timescape void fraction evolves, but the local universe VF doesn't run void clocks fast enough, unless we do violence to the Copernican principle.)

          (Also ETA, mostly a note-to-self: I also don't understand how they capture the angular diameter turnover point in their dressed geometry <https://journals.aps.org/prd/abstract/10.1103/PhysRevD.80.12...> PDF available from institution at <https://ir.canterbury.ac.nz/items/36fe829a-0e7a-45d6-8db6-c2...> (cf <https://astronomy.stackexchange.com/questions/21006/understa...>.))

          Finally, I think the most important result of this latest Timescape paper is a reminder to everyone that supernova data are a mess. A good X-mas present would be a couple of readily visible Milky Way supernovae.


  • brotchie a day ago

    My pet theory is that our universe is run on some external computational substrate. A lot of the strangeness we see in quantum physics is a side effect of how that computation is executed efficiently.

    The inability to reconcile quantum field theory and general relativity is because gravity is a fundamentally different thing from matter: matter is an information system that's run to execute the laws of physics; gravity is a side effect of the underlying architecture being parallelized across many compute nodes.

    The speed of light limitation is a side effect of it taking a finite time for information to propagate in the underlying computational substrate.

    The top-level calculation the universe is running is constantly trying to balance computation efficiently among the compute nodes in the substrate: e.g. the universe is trying to maintain a constant complexity density across all compute nodes.

    Black holes act as complexity sinks, effectively "garbage collection." The matter that falls below the event horizon is effectively removed from the computation needs of the substrate. The cosmological constant can be explained by more compute power becoming available as more and more matter is consumed by black holes.

    This can be introduced into GR by adding a new scalar field whose distribution encodes "complexity density." e.g. some metric of complexity like counting micro-states, etc. This scalar field attempts to remain spatially uniform in order to best "smooth" computation across the computational substrate. If you apply this to a galaxy with a large central supermassive black hole, you end up with almost a point sink of complexity at the center, then a large area of high complexity in the accretion disk, and then a gradient of complexity away towards the edges of the galaxy. That is, the scalar field has strong gradients along the radius of the galaxy, and this gives rise to varying gravitational effects over the radius (very MOND-like).

    Some back of the napkin calculations show that adding this complexity density scalar field to GR does replicate observed rotation curves of galaxies. Would love to formalize this and run some numerical simulations.

    Would hope that fitting the free parameters of GR with this complexity density scalar field would yield some testable predictions that differ from current naive assumptions around dark matter and dark energy.
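
    (As a baseline for what "replicating rotation curves" has to beat - this is not the scalar-field model above, just a toy comparison of the Newtonian prediction for a bare baryonic mass against the standard "simple" MOND interpolation, with round illustrative numbers:)

      import numpy as np

      G   = 6.674e-11        # m^3 kg^-1 s^-2
      a0  = 1.2e-10          # MOND acceleration scale, m s^-2
      M   = 1e11 * 1.989e30  # ~1e11 solar masses of baryons, kg
      kpc = 3.086e19         # m

      for r_kpc in (5, 10, 20, 50, 100):
          r   = r_kpc * kpc
          g_N = G * M / r**2                              # Newtonian gravity
          g   = g_N / 2 + np.sqrt(g_N**2 / 4 + g_N * a0)  # "simple" MOND interpolating function
          v_N = np.sqrt(g_N * r) / 1e3                    # km/s, falls off as 1/sqrt(r)
          v_M = np.sqrt(g * r) / 1e3                      # km/s, flattens near (G*M*a0)**0.25
          print(f"r = {r_kpc:3d} kpc   v_Newton = {v_N:5.0f} km/s   v_MOND-like = {v_M:5.0f} km/s")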

    • fsloth a day ago

      ”External computation substrate” is a useful idea if it leads to falsifiable theories. As a ”theory of everything” it sucks, because it’s clearly not motivated by any specific maths or observations, but by the human need to map nature onto some comprehensible analogue, i.e. taking some simpler subset of nature and trying to pretend the rest of it is like that as well. Usually nature so far has become more incomprehensible the deeper we’ve looked at it.

      Newtonian mechanics & mechanical clocks being the hottest precision technique of the day led scientists at the time to view nature as clockwork. Now that we have computers, we think ”nature is like computers” because it’s an appealing analogue.

      But it’s a false analogue imo. Just as clocks are a thing enabled by nature (a subset, in every meaning of the word), computers too are a subset of nature. So yes, nature can think (with human brains) and nature can run computations (with CPUs loaded with programs), but that is also just a subset of nature.

      Now: games of the mind and helpful analogues rock. And asking ”how is nature analogous to a Turing machine” is interesting for sure. But just because a game is fun or an analogue appealing, one should not forget, in the philosophical sense, that one is playing with only a limited subset of a thing.

    • cma a day ago

      There's a Danny Hillis talk on this but I couldn't find it.

mutagen a day ago

So Vernor Vinge was on to something[0] with his 'Zones of Thought'...

[0] https://en.wikipedia.org/wiki/A_Fire_Upon_the_Deep#Setting

  • shepardrtc 12 hours ago

    Well his idea was that the laws of physics actually change depending on the distance from the galactic center. Far enough out, information can be transferred faster than the speed of light. Too close and life stops working. Which honestly seems much more fun than gravity slowing things down a bit.

PaulHoule a day ago

There's been a general problem in astronomy for a long time: it seems like there just hasn't been enough time for objects to develop.

The oldest version of this I know of can be seen in a diagram of the ways that large black holes could possibly form in this book:

https://en.wikipedia.org/wiki/Gravitation_(book)

which shows that as early as 1973 people knew they had no idea how supermassive black holes could possibly form. Lately these problems have intensified because Webb seems to see that all sorts of developments happened a lot more quickly than they should have, which leaves one wondering if the first billion years were really the first ten billion years. Could Timescape explain that?

  • api a day ago

    AFAIK one possible explanation for the black hole issues could be primordial black holes, which are also a candidate for at least a component of dark matter.

    • PaulHoule a day ago

      Yep. There is the idea that you could get little primordial black holes (that maybe weigh as much as a mountain and could be evaporating now) and the idea that you could get huge primordial black holes. Also the occasional strange idea that the universe might be cyclic (not too fashionable but can fill the hole left by inflation) and that black holes can survive the crunch.

      • numpy-thagoras a day ago

        Black holes can survive a Big Crunch scenario? That can go a long way to explaining many things. Can you please provide a paper with more references to this, and potentially one with an example mechanism?

      • api 18 hours ago

        I love the idea. It’s one of our current physics hypotheses I hope is true, because it means the universe would be full of tiny things the size of a hydrogen atom with the mass of asteroids.

        “The devil’s glitter?”
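
        (Rough numbers, using the Schwarzschild radius r_s = 2GM/c^2: a horizon about the size of a hydrogen atom indeed corresponds to a few times 10^16 kg, i.e. roughly a large-asteroid mass.)

          # Schwarzschild radius for a few masses, compared to the Bohr radius.
          G, c, bohr = 6.674e-11, 2.998e8, 5.29e-11   # SI units; Bohr radius in metres

          def r_s(m_kg):
              return 2 * G * m_kg / c**2

          for m in (1e12, 1e15, 3.6e16, 1e19):  # kg: roughly mountain- up to large-asteroid-mass
              print(f"M = {m:.1e} kg  ->  r_s = {r_s(m):.2e} m  ({r_s(m) / bohr:.2g} Bohr radii)")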

        BTW they would not suck up planets and stuff like in inaccurate sci-fi. One could be going really fast and pass right through the Earth and do little, maybe cause some seismic events, but we would never know unless we knew exactly what to look for. A tiny black hole would have a tiny event horizon.

        If you dropped one into a planet or star with a low enough velocity that it didn’t shoot out the other side it might do a lot of damage, then come to rest in the middle and slowly grow. I recall reading that one in Earth’s core would take possibly millions of years to do much since the radiation pressure caused by accretion around it would limit the rate of matter falling into it. Earth would eventually become an Earth mass black hole but it would not happen in any human lifetime, possibly not in the lifetime of the human race.

tigerlily a day ago

This is the opening salvo in cosmology's Battle of Trafalgar. Dave Wiltshire has lined up a set piece 20 years in the making that is going to obliterate Lambda CDM, MOND, and all the rest.

ajross a day ago

Webb is turning out to be one of the most impactful pieces of scientific apparatus of the last century or so. Not that it took all the relevant data, but that it was the final thing that broke open all the doors being held shut. We're watching a Kuhnian paradigm shift in astronomy unfold in real time.

  • epicureanideal a day ago

    I’ll be happy to see all the dark matter, dark energy stuff explained away.

    • SpaceManNabs a day ago

      We have a century's worth of evidence for dark matter and about 20 years worth of evidence for dark energy.

      Until an alternative theory stands up to scrutiny, maybe we shouldn't a priori dismiss things we don't understand?

haxiomic a day ago

A very compelling argument that the need for dark matter may be an artifact of an incorrect assumption about the universe: the extent to which it is homogeneous and large-scale structures can be ignored in calculations.

Dr Ridden, an author of this paper, has a great explainer video: https://www.youtube.com/watch?v=YhlPDvAdSMw

  • haxiomic a day ago

    Typo: Dark Energy*, not Dark Matter

api a day ago

Good Wikipedia article on these types of cosmologies, including timescape cosmology:

https://en.wikipedia.org/wiki/Inhomogeneous_cosmology

  • numpy-thagoras a day ago

    Many advancements in science have happened because we stopped for a second, and then looked to generalize our assumptions. Consider, e.g.:

    Euclidean geometry -> non-Euclidean geometry; classical analysis -> nonstandard analysis; linearity -> non-linearity; homogeneity -> inhomogeneity; flat spacetime -> curved spacetime; singular probabilities -> superposition.

    All of these were loosening of certain criteria that opened up many possibilities. It is certainly erroneous to assume we must, by necessity, have a homogeneous cosmology.

eximius a day ago

Is anyone familiar with the (ln B > x) notation being used? What is this value being referenced?

  • the8472 a day ago

    See section 2 of the paper.
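
    (If it helps: B there is presumably the Bayes factor, the ratio of the two models' evidences, so ln B = ln Z_timescape - ln Z_LCDM. On the usual Kass-Raftery reading, ln B above roughly 1, 3, and 5 counts as "positive", "strong", and "very strong" evidence. A minimal toy of how such a number gets computed, unrelated to the paper's actual pipeline:)

      import numpy as np
      from scipy.stats import norm

      # Toy model comparison: is the mean of some data fixed at 0 (M0) or free (M1)?
      rng = np.random.default_rng(1)
      data = rng.normal(0.5, 1.0, 50)

      def ln_Z_M0():
          # No free parameters: the evidence is just the likelihood of the data.
          return norm.logpdf(data, 0.0, 1.0).sum()

      def ln_Z_M1():
          # One free parameter mu with a flat prior on [-2, 2]:
          # Z = (1/4) * integral of the likelihood over mu.
          mu = np.linspace(-2.0, 2.0, 2001)
          ln_like = np.array([norm.logpdf(data, m, 1.0).sum() for m in mu])
          shift = ln_like.max()
          return shift + np.log(np.trapz(np.exp(ln_like - shift), mu) / 4.0)

      print("ln B (M1 vs M0) =", round(ln_Z_M1() - ln_Z_M0(), 2))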

scrubs a day ago

I'm surprised cosmology hasn't accounted for differences in clocks, given how central GR is to astronomy. Granted I am no expert, but apparently adding this dynamic was, until today, a bridge too far, or thought to average out somehow and not be pertinent.

  • codethief a day ago

    > cosmology hasn't accounted for differences in clocks given how central GR is to astronomy

    Of course it has. Yes, LCDM's FLRW metric, by its defining assumption of spatial homogeneity, doesn't allow the metric (let alone the speed of clocks) to vary spatially. However, it is very common to do perturbation theory on top of the FLRW metric to account for density fluctuations. Besides, there are also models like LTB (Lemaître-Tolman-Bondi) which give up on homogeneity at the non-perturbative level (while still preserving isotropy, though).

    All in all, the idea that local voids could explain away the Lambda in LCDM is anything but new. It's just that the OP's timescape approach is the first one that seems to produce promising results. (Disclaimer: I merely skimmed the paper.)
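
    (For reference, one common way of writing the LTB metric, with c = 1:

      ds^2 = -dt^2 + \frac{R'(r,t)^2}{1 + 2E(r)}\,dr^2 + R(r,t)^2\,d\Omega^2

    where R' = ∂R/∂r and E(r) is a free function playing the role of local curvature/energy. Homogeneity is dropped, but spherical symmetry about one point is kept, which is the "isotropy" caveat above.)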

    • scrubs a day ago

      Point taken. Thanks.

jandrewrogers a day ago

An implication is that you would expect ancient advanced civilizations to form in the voids.

  • largbae a day ago

    Wouldn't such a civilization slow down as it gathers?

throwaway290 a day ago

Why is it a "change"? We already have two cosmology models. This just gives one of them more support, right?