"After the detection of the gravitational wave GW170817, Jason T. Wright (Physics Today, 72, 5, 12, 2019) reminded
the community that many of its features had been predicted by Dyson more than half a century earlier. Dyson’s article was
published only once, in Cameron’s long out of print collection, though a scan may be found at the web site of the Gravity
Research Foundation (https://www.gravityresearchfoundation.org). Dyson thought it had been reprinted (in his Selected
Papers, AMS Press, 1996, forward by Elliot H. Lieb) but it was not. Hoping to make the article easier to find, I wrote Dyson
for his permission to post it at the arXiv"
It's about using two big bodies, A and B, to accelerate objects: "The energy source of the machine is the gravitational potential between the stars A and B. As the machine continues to operate, the stars A and B will gradually be drawn closer together, their negative potential energy will increase and their orbital velocity V will also increase."
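For a rough sense of the machine's energy budget, the Newtonian bookkeeping for a circular binary is simple: the orbital energy is E = -G M_A M_B / (2d), so as the separation d shrinks, the system gives up energy while the orbital velocity rises. A minimal Python sketch; the solar masses and separations are illustrative, not taken from the paper:

    # Newtonian energetics of a circular binary: as the separation d
    # shrinks, the (negative) orbital energy deepens and the orbital
    # speed rises. Illustrative numbers, not from Dyson's paper.
    G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    M_sun = 1.989e30     # kg
    M_A = M_B = M_sun    # two solar-mass stars

    for d in (1e9, 1e8, 1e7):                 # separation in metres
        E = -G * M_A * M_B / (2 * d)          # total orbital energy, J
        v_rel = (G * (M_A + M_B) / d) ** 0.5  # relative orbital speed, m/s
        print(f"d = {d:.0e} m: E = {E:.2e} J, V = {v_rel/1e3:.0f} km/s")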
No, distributed swarm logic: he apparently later said that what he had in mind was what we now call a Dyson swarm, and it's just the public that took the "sphere" bit literally and ran with it.
Georgiy Antonovich Gamov (Ukrainian: Георгій Антонович Гамов, Russian: Георгий Антонович Гамов) was born on March 4, 1904 in Odessa, Russian Empire (now Ukraine).
His father taught Russian language and literature in high school, and his mother taught geography and history at a school for girls. In addition to Russian, Gamow learned to speak some French from his mother and German from a tutor. Gamow learned English in his college years and became fluent. Most of his early publications were in German or Russian, but he later used English for both technical papers and for the lay audience.
He was educated at the Institute of Physics and Mathematics in Odessa[2] (1922–23) and at the University of Leningrad (1923–1929). Gamow studied under Alexander Friedmann in Leningrad, until Friedmann's early death in 1925, which required him to change dissertation advisors. At the university, Gamow made friends with three other students of theoretical physics, Lev Landau, Dmitri Ivanenko, and Matvey Bronshtein. The four formed a group they called the Three Musketeers, which met to discuss and analyze the ground-breaking papers on quantum mechanics published during those years. He later used the same phrase to describe the Alpher, Herman, and Gamow group.
Upon graduation, he worked on quantum theory in Göttingen, where his research into the atomic nucleus provided the basis for his doctorate. He then worked at the Theoretical Physics Institute of the University of Copenhagen from 1928 to 1931, with a break to work with Ernest Rutherford at the Cavendish Laboratory in Cambridge. He continued to study the atomic nucleus (proposing the "liquid drop" model), but also worked on stellar physics with Robert Atkinson and Fritz Houtermans.
In 1931, Gamow was elected a corresponding member of the Academy of Sciences of the USSR at age 28 – one of the youngest in its history. During the period 1931–1933, Gamow worked in the Physical Department of the Radium Institute (Leningrad) headed by Vitaly Khlopin [ru]. Europe's first cyclotron was designed under the guidance and direct participation of Igor Kurchatov, Lev Mysovskii and Gamow. In 1932, Gamow and Mysovskii submitted a draft design for consideration by the Academic Council of the Radium Institute, which approved it. The cyclotron was not completed until 1937.
Defection
Gamow worked at a number of Soviet establishments before deciding to flee the Soviet Union because of increased oppression. In 1931, he was officially denied permission to attend a scientific conference in Italy. Also in 1931, he married Lyubov Vokhmintseva (Russian: Любовь Вохминцева), another physicist in the Soviet Union, whom he nicknamed "Rho" after the Greek letter. Gamow and his new wife spent much of the next two years trying to leave the Soviet Union, with or without official permission. Niels Bohr and other friends invited Gamow to visit during this period, but Gamow could not get permission to leave.
Gamow later said that his first two attempts to defect with his wife were in 1932 and involved trying to kayak: first a planned 250-kilometer paddle over the Black Sea to Turkey, and another attempt from Murmansk to Norway. Poor weather foiled both attempts, but they had not been noticed by the authorities.
In 1933, Gamow was suddenly granted permission to attend the 7th Solvay Conference on physics, in Brussels. He insisted on having his wife accompany him, even saying that he would not go alone. Eventually the Soviet authorities relented and issued passports for the couple. The two attended and arranged to extend their stay, with the help of Marie Curie and other physicists. Over the next year, Gamow obtained temporary work at the Curie Institute, University of London, and the University of Michigan.
I wonder if one could calculate an upper bound of the available potential gravitational energy available in the entire universe by estimating how far every massive point (baryon) is from all the others.
It's not. Moreover, the total energy is actually being lost, as particles "lose" kinetic energy due to expansion (and the light is red-shifted).
If this seems to violate the law of energy conservation, you're spot on. It is indeed being violated.
This is not fundamentally problematic by itself, because the law of conservation of energy follows (via Noether's theorem) from time-translation invariance, which doesn't hold in an expanding universe. But it is an unsatisfying cop-out, and we hope that it can be resolved by quantum gravity.
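To make the photon part concrete: a photon's wavelength is stretched in proportion to the scale factor, so its energy falls as 1/(1+z), and there is no global ledger the lost energy gets charged to. A quick Python sketch (the Lyman-alpha line and the redshift are illustrative choices):

    # A photon's energy falls as E_obs = E_emit / (1 + z) as the
    # universe expands; the "lost" energy has no conserved account.
    h = 6.626e-34        # Planck constant, J s
    c = 2.998e8          # speed of light, m/s

    lam_emit = 121.6e-9  # Lyman-alpha wavelength at emission, m
    z = 2.0              # illustrative redshift
    E_emit = h * c / lam_emit
    E_obs = E_emit / (1 + z)
    print(f"emitted: {E_emit:.3e} J, observed: {E_obs:.3e} J")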
No, it doesn't, because the concept of "gravitational potential energy" is not meaningful for an expanding universe considered as a whole. It's only meaningful for isolated systems within the universe.
Wouldn't there be meaning in saying it would take this much energy to push all the matter in the universe to one place?
And that amount should increase as space-time expands.
Thought experiment: if you could place a mass of an arbitrary amount at any one point in space, how much mass would you need such that all the mass of the universe is now falling towards it?
Or could you bend space-time to a point where all mass falls into it?
> Wouldn't there be meaning in saying it would take this much energy to push all the matter in the universe to one place?
No. The universe is not an isolated system that we can operate on from the outside. You can't treat it as though it is. So your thought experiments aren't meaningful.
Obviously the thought experiment requires energy that doesn't 'exist', in the sense that it couldn't happen literally. But it's a what-if, and that what-if does have a number, and that number does have meaning.
So there is meaning to the previous person's question, which is what the thought experiments were meant to show, but obviously that's something you can't imagine.
> Obviously the thought experiment requires energy that doesn't 'exist'
No, it requires energy to be added to the system from outside the system. Which is precisely what you cannot do with the universe as a whole. That's what makes such thought experiments meaningless for the universe as a whole.
But you can't do that for the universe as a whole. Asking what would be the case if you could is meaningless; it's like asking what would be the case if 2 + 2 were 5. No consistent model exists of such a situation, so the question is meaningless.
> if the initial thought experiment was posed as 'observable universe' would it make a difference to you?
It would address my "can't operate on it from the outside" objection, yes. But it still wouldn't make the thought experiment meaningful, for the reasons I gave in response to wyager elsewhere in the thread.
The original thought experiment said "matter"; you're talking about total energy in your response. We ought to take matter to mean ordinary everyday matter, mass being the property that all regular ordinary matter possesses, since that's normal everyday language.
Yes, in a closed universe the net energy is 0: nothing in, nothing out. So you wouldn't be able to magically 'pop' mass into one place without taking it from somewhere else.
Now, if mass A is 100 meters away from mass B, it would take X amount of energy to push/pull them together. If they are instead 150 meters apart, the energy required to bring them together is higher.
So if galaxies are pulling farther and farther away from each other, then over time the energy you would need to bring them back together increases.
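For what it's worth, here is the Newtonian bookkeeping this intuition appeals to; as the replies below note, it only carries over to GR in special cases. A minimal sketch with illustrative masses:

    # Newtonian potential energy U(r) = -G m1 m2 / r at two separations.
    # (The replies note this bookkeeping is only valid in GR for special
    # cases such as asymptotically flat spacetimes.)
    G = 6.674e-11   # m^3 kg^-1 s^-2
    m1 = m2 = 1.0   # kg, illustrative

    U_100 = -G * m1 * m2 / 100.0   # J at 100 m separation
    U_150 = -G * m1 * m2 / 150.0   # J at 150 m separation
    # Work needed to pull the pair from 100 m apart out to 150 m apart:
    print(f"work to separate 100 m -> 150 m: {U_150 - U_100:.3e} J")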
Your original thought experiment asked about the energy required to push all the matter to one place. But to even try to formulate such a question in GR, "matter" has to mean "whatever stress-energy is present in the universe". And "energy" has to mean the same thing, because stress-energy is what "does stuff" in GR, not "mass".
> that's normal everyday language
You can't do physics in normal everyday language.
> in a closed universe the net energy is 0, nothing in; nothing out, so you wouldn't be able to magically 'pop' mass into one place without taking from somewhere else.
The fact that you can't "magically pop mass into one place" is true in any spacetime in GR, not just a closed universe; it's a consequence of the Einstein Field Equation and the Bianchi identities. You also can't magically "take" mass from some place, for the same reason.
> if mass A is 100 meters away from mass B it would take X amount of energy to push/pull them together. If they are now 150 meters apart now the energy required to bring them together is now higher.
These statements are only valid in the particular cases I have already listed: an asymptotically flat spacetime, or a stationary spacetime. An expanding universe is neither.
I understand that you don't get this; that's because you are using ordinary language, but, as I said above, you can't do physics in ordinary language. Your reasoning looks OK to you because you don't understand that, except in the special cases I described, the ordinary language you are using does not correspond to any valid physics. (I have explained why in other subthreads in this discussion.) I know it looks to you like it ought to; but it doesn't.
The focusing theorem basically says that an initially converging congruence converges more quickly in the future, and an initially diverging congruence diverges less quickly in the future.
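For reference, the statement is about the expansion scalar θ of a timelike geodesic congruence, which obeys the Raychaudhuri equation (standard form; shear σ and the Ricci term drive focusing, vorticity ω opposes it):

    $$ \frac{d\theta}{d\tau} = -\frac{1}{3}\theta^2 - \sigma_{\mu\nu}\sigma^{\mu\nu} + \omega_{\mu\nu}\omega^{\mu\nu} - R_{\mu\nu}u^\mu u^\nu $$

For a vorticity-free congruence in matter satisfying the strong energy condition, every term on the right is non-positive, which is what forces the focusing behaviour described above.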
The Raychaudhuri picture for your pair of masses is fairly straightforward given that the small distances can effectively wash out any coupling to the metric expansion of space. If the masses are small (i.e., we're not talking about two black holes) the background gravitational metric is effectively flat, perturbed only by mass A and B. This lets us determine exactly the factors opposing recollapse of your initially-diverging mass A and mass B. We can also be confident that an initial impulse could drive the separation, with the focusing theorem behind the eventual collision between A and B.
At larger length scales the metric expansion of space becomes important, so the background gravitational metric is instead Friedmann-Lemaître-Robertson-Walker (FLRW) or similar. We can still talk about Raychaudhuri focusing in that context, but we take "... will diverge less quickly in future" slightly differently depending on whether the galaxies are flying apart because of an initial impulse (i.e., the expansion is inertial), or whether there is also a cosmological constant (i.e., the expansion is accelerating).
In the inertial case, the ultimate recollapse depends on a critical value in the (average) energy-density of the whole universe. Returning to your mass A and mass B picture, in the previous case with a flat background spacetime where A and B colliding is inevitable, in the inertially-expanding spacetime the initial impulse on A and B can lead them to never meet again, even given infinite time.
In the case of a small positive cosmological constant (which best explains what we see in redshift surveys), heavy nearby galaxies will take their time to separate compared to lower-mass galaxies with similar separation, or the same heavy galaxies which formed with a larger separation. The Raychaudhuri focusing theorem adapted to this setting tells us whether the galaxies will merge or not, and this is useful in understanding the spatial extents and masses of galaxy clusters. Returning to mass A and mass B, given a positive cosmological constant, no initial impulse is necessary; the CC alone can determine whether they will be separate forever, or whether they will eventually collide.
Returning to your final paragraph, the (average) energy-density must be over some critical value for distant galaxies to come into contact with each other, and that critical value for all practical purposes only depends on the value of the cosmological constant (if any).
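For a sense of scale, the textbook critical density is ρ_c = 3H²/(8πG). A quick Python sketch, assuming H0 = 70 km/s/Mpc for illustration:

    # Critical density of the universe: rho_c = 3 H^2 / (8 pi G),
    # with H0 = 70 km/s/Mpc assumed for illustration.
    import math

    G = 6.674e-11    # m^3 kg^-1 s^-2
    Mpc = 3.0857e22  # metres per megaparsec
    H0 = 70e3 / Mpc  # Hubble constant in s^-1

    rho_c = 3 * H0**2 / (8 * math.pi * G)
    m_H = 1.67e-27   # hydrogen atom mass, kg
    print(f"rho_c ~ {rho_c:.2e} kg/m^3 ~ {rho_c/m_H:.1f} H atoms per m^3")

This comes out to roughly 9e-27 kg/m^3, i.e. a few hydrogen atoms per cubic metre.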
So far, with decades of trying to measure the average energy-density and critical value, it's safer to say that distant galaxies will lose contact with each other. There is no known mechanism to change that for arbitrary sets of galaxies.
Note that I have not discussed energy above except as energy-density (which is energy per unit volume, particularly where the volume is very very small) averaged across the entire cosmos, and the invariant masses of your A and B and of galaxy clusters.
Finally, for completeness, clusters of galaxies are known to move strikingly against the cosmological coordinates, leading to some famous cluster-cluster collisions (e.g. the Bullet Cluster). Why they don't just float at the same cosmological coordinate (remember the coordinates expand) like the overwhelming majority of galaxy clusters is a topic of active research. However, perhaps some late-time impulse could drive initially diverging galaxies (carried for all practical purposes only by the cosmological constant) into a collision in a way that matches your final paragraph. (Alternatively these collided galaxies might never have been diverging in the first place).
You can't ignore it because it's not a "detail"--it's a crucial feature of the thought experiment that doesn't work for the universe as a whole. What you're suggesting is like saying, in my thought experiment I assume that 2 + 2 = 5, just ignore the fact that 2 + 2 is actually 4.
No. A thought experiment cannot be based on a contradiction.
> A cat can't be alive and dead at the same time, either.
And the Schrodinger's Cat thought experiment does not claim that it is, even though many pop science discussions try to claim otherwise. The Schrodinger's Cat thought experiment is based on the math of QM, i.e., on a consistent underlying model. It simply points out consequences of that model that might not be obvious to many people.
The "thought experiment" I have been objecting to in this thread, by contrast, is not based on any consistent mathematical model. The operation it is proposing to do on the universe as a whole is inconsistent with GR, which is the only consistent model we have of the universe as a whole. That means it's not a valid thought experiment. "Thought experiment" does not mean you can make up whatever you want.
"Thought experiments" that allow you to make up whatever you want are pointless, because you can also make up whatever answer you want. So the "thought experiment" tells you nothing.
As actual physicists actually use thought experiments, they cannot make up whatever they want. Thought experiments involve taking a known consistent model and working out consequences of it that were not previously obvious or well known, and seeing where that leads. You can only do that usefully if you constrain the thought experiment by the model, i.e., if you do not allow yourself to make up whatever you want.
Since the discussion here is about an article on physics, it seems appropriate to treat proposed thought experiments the way actual physicists would treat them. That is what I have been doing.
Um, what? I can operate on an ordinary volume (say a beaker in my lab or a planet that I am in a distant orbit around) from the outside. I can't operate on the universe as a whole from the outside. How is this not an obvious difference?
If you fix a sub-volume of the universe where the boundaries of the volume are subject to the expansion of space, you can calculate the energy in the volume. The question upthread is clearly "does the energy in this volume increase due to expansion?". I'm not sure why you're so focused on integrating over the entire universe; that wasn't an important part of the question upthread. You are being very vague. If you have a coherent mathematical objection that you are trying to explain indirectly, please just say the mathematical objection.
Then you are not talking about the thought experiment that I was responding to, but about a different one. I have no objection to talking about the different thought experiment that you propose (and I'll do that below), but nothing in any such discussion is relevant to the objection I made to the original thought experiment, which was about the entire universe, not just some portion of it.
> you can calculate the energy in the volume
Actually, no, you can't. There is no known invariant in GR that corresponds to "the total energy inside this volume" for an expanding universe. There are only two cases in GR where we have known invariants that correspond to "the total energy inside this volume": (1) an asymptotically flat spacetime, where we can define the ADM energy and the Bondi energy; and (2) a stationary spacetime, where we can define the Komar energy. An expanding universe does not fall into either of these categories.
You will find claims in the literature that a "total energy" for cases like an expanding universe can be calculated using so-called "pseudo tensors". However, such claims are not accepted by many physicists, and even physicists who do accept that "pseudo-tensors" are physically meaningful don't all agree on which pseudo-tensors those are.
You can, of course, choose some set of coordinates (such as the standard FRW coordinates used in cosmology), and integrate energy density over some spatial volume in a 3-surface of constant coordinate time. (It is not clear that this is a correct way to get "total energy", because in GR the source of gravity is the total stress-energy tensor, which includes momentum, pressure, and stresses as well as energy density, but we'll leave that aside for now.) But the result of any such computation is not an invariant; it depends on your choice of coordinates. The energies I referred to above (ADM, Bondi, Komar) do not. That is why they are accepted as physically meaningful by all physicists.
> The question upthread is clearly "does the energy in this volume increase due to expansion?"
It's not at all clear to me that that is the question being asked upthread (for one thing, that poster, in another subthread, has explicitly said the "energy" they are thinking of adding comes from outside the universe). But even if we assume it is, the question is still meaningless because it assumes there is such a thing as "the energy in this volume", which, as above, there isn't.
Excellent, thank you so much for the detailed response!
A couple questions come to mind:
1. In the latter case, where we use e.g. FRW coordinates to define our volume, can we use the usual hack for defining an invariant energy, namely defining the center of our coordinate system to be the center of mass of the volume? I'm willing to believe the answer is "no"; I'm just not sure where it would fall apart.
2. If we leave aside the notion of defining volumes entirely, can we meaningfully ask questions like "you have a toy universe with two gravitationally bound masses; does expansion increase the energy of this system in the center of mass reference frame?" I guess this is probably just equivalent to ADM/Bondi, since the spacetime is asymptotically flat.
> can we use the usual hack for defining an invariant energy of defining the center of our coordinate system to be the center of mass of the volume?
A volume by itself doesn't have a center of mass. If you are talking about a standard FRW model where the energy density and pressure are constant in any given spacelike slice of constant FRW coordinate time, then you can pick a particular sub-volume of a spacelike slice and define the spatial center of FRW coordinates to be the geometric center of the sub-volume, and that point will also be the center of mass (or more properly the center of energy-momentum) of the stress-energy in the sub-volume.
Since all of the stress-energy is comoving in this model, you can pick out the set of comoving worldlines that are in the sub-volume at the instant of FRW coordinate time that you chose, and treat them as a "system", whose center of energy-momentum will be the comoving worldline at the spatial origin of FRW coordinates, and that will be true for all time. The issue comes with trying to define a "total energy" for this "system"; you still run up against the same issues I described.
> can we meaningfully ask questions like "you have a toy universe with two gravitationally bound masses
There is no known exact solution that describes this case, so the only way to treat it would be by numerical simulation. Astronomers do do this, for example to model binary pulsar systems (as in the Hulse-Taylor binary pulsar observations that won them the Nobel Prize). However--
> does expansion increase the energy of this system in the center of mass reference frame?"
Such a "universe", in the numerical simulations, will not be expanding. It will be asymptotically flat, and will slowly emit gravitational waves and become more tightly bound (this was the prediction that Hulse and Taylor's observations over many years verified). In short, this "toy universe" has nothing useful in common with our actual expanding universe.
In terms of energy, the ADM energy of such a system will be constant. The Bondi energy will slowly decrease with time as gravitational waves escape to infinity. But again, this system is not expanding, so these things tell you nothing useful about an expanding universe.
> I guess this is probably just equivalent to ADM/Bondi, since the spacetime is asymptotically flat.
> A volume by itself doesn't have a center of mass.
Why not? This seems like something we could calculate in an invariant way (I have not actually tried coming up with an expression; this is a solicitation for context, not a claim). Also, to be clear, I am talking about a volume with some non-homogeneous mass distribution. Maybe you draw a boundary around a solar system or something. Can we not come up with an invariant expression for the CoM of everything within that boundary?
> Such a "universe", in the numerical simulations, will not be expanding.
OK, this seems important. I never made it much past SR in undergrad, so this is a hole in my comprehension. Is the expansion of the universe directly deducible from GR? My understanding was that an expanding universe was one of the admissible solutions under GR, but is it the only admissible model for a universe that looks like ours? If so, what's the relevant difference between our universe and the toy model I mentioned, that causes GR to predict that our universe expands and the toy one doesn't?
Because "volume" is a geometric concept, not a physical thing. It has a geometric center, but not a center of mass. What you appear to be thinking of when you talk about calculating the center is the geometric center, not the center of mass. (And note that, unless an actual physical system has a high degree of symmetry, the geometric center defined by its spatial volume will not be the same as its physical center of mass.)
> Is the expansion of the universe directly deducible from GR?
The fact that the spacetime describing the universe cannot be stationary (i.e., that it must be either expanding or contracting) is deducible from the original 1915 Einstein Field Equation (i.e., without a cosmological constant) plus the assumptions of homogeneity and isotropy--roughly speaking, that the universe looks the same at all spatial locations and in all directions. We then pick out the "expanding" option as the one describing our actual universe based on observations.
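Concretely, homogeneity and isotropy reduce the field equations to the Friedmann equation for the scale factor a(t) (standard form, with the cosmological constant term shown for the discussion below):

    $$ \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3} $$

With Λ = 0 and ordinary matter (positive ρ), a solution with ȧ = 0 for all time is impossible, which is the non-stationarity referred to above.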
Einstein actually discovered this in 1917, and he was bothered by it, because he believed (as did most physicists and astronomers at that time) that the universe was static--that it did not change with time on large scales. So he added the cosmological constant to his field equation to allow it to have a static solution that could describe a homogeneous and isotropic universe. Then, about ten years later, when evidence began to mount for the expansion of the universe, Einstein called adding the cosmological constant "the biggest blunder of my life"--because if he had trusted his original field equation, he could have predicted the expansion of the universe a decade before it was discovered.
Today, we believe that there is in fact a nonzero cosmological constant (our best current value for it is small and positive), but we also understand (something Einstein did not explore very thoroughly) that the Einstein Static Universe is an unstable solution, like a pencil balanced on its point: any small perturbation will cause it to either expand forever or collapse to a Big Crunch. So this solution is not considered a viable candidate to describe our actual universe. And we also know that there are no other static solutions that describe a homogeneous and isotropic universe.
> What you appear to be thinking of when you talk about calculating the center is the geometric center, not the center of mass.
No, I mean something along the lines of $\int_V x \, m(x) \, dx \,\big/ \int_V m(x) \, dx$, where $m$ is the mass-energy density function. The usual way of finding the center-of-momentum frame of a system that people mean when they say "invariant mass".
In cases where the integral you describe is well-defined and physically meaningful, yes, you are correct, it is a center of mass (or center of momentum) integral not a geometric center. But it's not the center of mass of the "volume" over which the integral is done, it's the center of mass of the stress-energy over which the integral is done. In order to obtain the function m(x), you need to look at the stress-energy tensor.
Also, the integral you describe will not, in general, be invariant; it will depend on your choice of coordinates, because you are integrating over a spacelike surface of constant coordinate time, and which surfaces those are depends on your choice of coordinates.
Your intuition about the "usual" invariant mass is based on the special cases where the integral you describe can be equated to one of the known invariants, the ADM mass, the Bondi mass, or the Komar mass. (Strictly speaking, even the Komar case is problematic, because the integral in question in a general stationary spacetime does not necessarily converge. In cases where it does converge, AFAIK the spacetime must be asymptotically flat and the Komar mass is equal to the ADM mass.) But an expanding universe is not one of those cases.
Why is it not meaningful? "Isolated systems" seems meaningless - there is no objective cutoff where a gravitational system becomes "isolated", except perhaps in the sense of "non-intersecting light cones".
I have read all of your comments and not one of them actually says anything concrete. It's the exact same vague objection repeated over and over again.
Please explain exactly why you think calculating the total gravitational potential energy of the entire universe or a well-defined sub-volume of it is intractable. Feel free to use arbitrarily technical mathematical or physics language, just please stop being vague.
Yes, it should increase on paper, but no source of energy to power that expansion has been found yet. Big Shrink can power itself, so I'm voting in favor of https://en.wikipedia.org/wiki/Shapley_Attractor
> estimating how far every ... baryon ... is from all the others
The true metric of the universe, sourced by every massive object (and every massless one, and all self-energies and interactions and fluxes of momentum-energy), is fantastically complicated.
The standard cosmology is tractable only because it coarse-grains all this into a model where every point in space (having taken a particular slicing of the whole spacetime into spatial volumes ("space") indexed by the scale factor) has a total energy-density. The total is a sum that includes the energy-density from baryons, radiation, dark matter, and so forth. The energy-density in any given slice is modelled as the same at every point in the spatial slice, leading to the observed large-scale isotropy and homogeneity that is central to the model, and allowing a cosmological frame to be picked out.
In the cosmological frame, where coordinates expand with the metric expansion of space, we can talk about the energy-density at a given point in space. In principle we can make measurements at a large number of points and produce an average energy-density. Finally, we can talk about the consequences of the average value: https://www.astronomy.swin.edu.au/cosmos/C/Critical+Density
So, in a sense, cosmologists do talk about whether the entire universe will recollapse or expand forever (or fall into some steady state), and (average) baryon-density is an important factor. Thus there is some upper bound for a local quantity not a million miles from gravitational potential energy. (Below your question there are others pointing out that there is no such global quantity available. This mainly means that while one can imagine increasing the average baryon or DM density leading to a collapse of the whole universe, this is still firmly in the FLRW universe, and definitely not converting from FLRW to an asymptotically flat spacetime around a central collapsing mass. But see below.)
Going further, we can evolve the (average) baryon-density in a space along the scale factor. In the timelike (scale factor) direction away from baryogenesis, the energy-density of baryons decreases. At a finer-grained level that means clouds of neutral atoms and molecules tend to thin out. (So does radiation, so does dark matter, so do relic neutrinos, and so forth; pretty much everything but the cosmological constant (which is constant, after all) falls to zero on average far enough from the formation of the cosmic microwave background).
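The scalings behind that thinning-out are the standard ones: matter density falls as a^-3 (volume dilution), radiation as a^-4 (dilution plus redshift), while the cosmological constant stays fixed. A tiny Python sketch:

    # How (average) energy-densities dilute with the scale factor a,
    # normalised to 1 at a = 1: matter ~ a^-3, radiation ~ a^-4,
    # cosmological constant unchanged.
    for a in (1.0, 2.0, 10.0):
        print(f"a = {a:4.1f}: matter {a**-3:.4f}, "
              f"radiation {a**-4:.5f}, Lambda 1.0")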
This coarse-grained picture can be refined in several ways, by e.g. introducing inhomogeneities: overdensities or underdensities of baryons, for example, which evolve into large black holes and voids respectively. One can then ask questions in an inhomogeneous cosmology about the (average) density of black holes of various masses, and compare that to the energy-densities of baryons and the rest. The question is, for a given spatial slice, what fraction of baryons have fallen into black holes, and what fraction has not? Then evolve that question along the scale factor (e.g., in the future, are most baryons in black holes, or are they mostly spread out in wispy filaments around the edges of large voids?).
One can do some headstands and try to understand the fraction of all baryons not yet fallen into black holes in that sort of cosmology as relating to gravitational potential energy. However, I think it's bound to be more useful to think, as above, of the (averaged) energy-densities and how that averaged (and thus coarse-grained) picture generates a metric comparable to FLRW. Instead, interpreting your question somewhat, you might start with what ultimately must be nonuniformities in a contraction of the Riemann curvature tensor (see <https://en.wikipedia.org/wiki/Scalar_curvature>), then dust that geometry with objects whose trajectories you'd study, so that in some set of coordinates you could carve out for each of them, from its intrinsic mass, a kinetic energy and a potential energy.
I don't think that approach would be fundamentally wrong. It is essentially along the lines of Lagrangian mechanics (L = T - V, where V is a potential energy), although in a general-relativistic setting this gets hard, see <https://en.wikipedia.org/wiki/Relativistic_Lagrangian_mechan...>. Inevitably you would have to do some coarse-graining for tractability, and would want to make sure your coarse-graining procedure is not unphysical.
Finally, apologies for not expanding a number of acronyms, and for wandering off-track a bit. Yours was an interesting question especially in light of some of the more technical replies below, and I was torn about what audience-expertise to write for (and settled on probably satisfying nobody).
“[…] 5. Perceived Irrelevance or Lack of Relevance: Depending on the context of the Hacker News post, copy-pasting AI-generated content might be seen as irrelevant or not contributing to the topic at hand. This can be frustrating for users who are looking for meaningful and relevant insights from other community members.”
> One can't solve 3-body problems without Superfluid Quantum Gravity
The article (from 1962) does not predict 3-body accelerations of large masses or particles using current methods (probably because they had not yet been developed at the time).
(Ironically, this is in the context of the rejection of AI methods to summarize Superfluid Quantum Gravity for the thread's benefit, and not my own.)
N-body gravity problems are probably best solved by AI methods; it is well understood that there are no known closed-form solutions for n-body gravity problems.
Given such an anti-AI-with-citation policy, researchers preparing comment content with citations for the platform have a counterproductive incentive to parallel-construct after using e.g. search engines that use AI (Google, Bing), and also an incentive not to cite their sources?
"They're already banned—HN has never allowed bots or generated responses. If we have to, we'll add that explicitly to https://news.ycombinator.com/newsguidelines.html, but I'd say it already follows from the rules that are in there."
I think this idea is essentially an example of a "gravitational slingshot".
I found it interesting that such a system could be used to "accelerate delicate and fragile objects to a velocity of 2000 km/sec at an acceleration of 10,000 g, without doing any damage to the objects. ... So a large space ship with human passengers and normal mechanical construction could easily survive the 10,000 g acceleration." This seems counterintuitive, but since the object is in freefall the entire time, I guess it makes sense.
The 10,000g would be approximately uniform through the entire volume of the ship, so no damage. In a regular rocket, the acceleration would have to be transmitted to passengers and cargo through the normal force, and that would crush you.
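Dyson's numbers are easy to check with constant-acceleration kinematics; a quick Python sketch:

    # Kinematics of the example: reaching 2000 km/s at 10,000 g.
    g = 9.81            # m/s^2
    a = 1e4 * g         # 10,000 g in m/s^2
    v = 2e6             # 2000 km/s in m/s

    t = v / a           # time under acceleration
    d = v**2 / (2 * a)  # distance covered from rest
    print(f"t = {t:.1f} s, d = {d/1e3:.0f} km")

The whole encounter lasts only about 20 seconds and spans about 20,000 km.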
Makes sense, though if you'd asked me before your explanation, I'd have thought accelerating that hard without leaving free fall was impossible.
A gravitational assist/slingshot is just a transfer; think of cogs in a machine - by using axles and cogs you can change the speed and direction of the forces.
source: KSP player
Dyson was quite the visionary. LIGO / Virgo gravitational wave detectors have confirmed all this (with much more development from people like Caltech's Kip Thorne and many others):
> "The energy source of the machine is the gravitational potential between the stars A and B. As the machine continues to operate, the stars A and B will gradually be drawn closer together, their negative potential energy will increase and their orbital velocity V will also increase... the loss of energy by gravitational radiation will bring the two stars closer with ever-increasing speed, until in the last second of their lives they plunge together and release a gravitational flash at a frequency of about 200 cycles and of unimaginable intensity."
The LIGO Lab's YouTube channel has lots of great videos on the topic, from the sounds made by a pair of colliding black holes to long talks about how certain elements are mostly made by colliding neutron stars:
In a similar vein, John Kraus (of Antennas... textbook fame) described a gravitational transmitting and receiving system as a fun (?) diversion near the end of the book.
It has been a few years, but I seem to recall that the transmitter was a 500-tonne steel bar spun at very close to the maximum RPM the tensile strength of steel allowed; the radiated energy was something like a fraction of an attowatt. (An attowatt = 1 x 10^-18 W.)
There are more efficient transmitting schemes out there.
I became curious about this and went to the book to learn more. The system proposed in the book is capable of radiating around 2.2 x 10^-29 W by rotating a bar weighing 500 tonnes about 270 times per minute.
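That figure is consistent with the standard quadrupole formula for a rigid bar spinning about its centre, P = 32 G I^2 w^6 / (5 c^5). A Python sketch; note the 20 m bar length is my assumption, not a figure from the book:

    # Gravitational-wave power from a rigid bar spinning about its
    # centre (quadrupole formula): P = 32 G I^2 w^6 / (5 c^5).
    # The 20 m bar length is an assumption; the book may differ.
    import math

    G = 6.674e-11   # m^3 kg^-1 s^-2
    c = 2.998e8     # m/s
    M = 5e5         # 500 tonnes in kg
    L = 20.0        # bar length in metres (assumed)
    rpm = 270.0

    I = M * L**2 / 12.0           # moment of inertia about the centre
    w = 2 * math.pi * rpm / 60.0  # angular frequency, rad/s
    P = 32 * G * I**2 * w**6 / (5 * c**5)
    print(f"P ~ {P:.1e} W")       # ~2.5e-29 W, the same order as above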
It doesn't seem surprising to me. People were in the lab playing with DC and AC and clearly heard "clicks" from remote instruments that correlated with them turning switches on and off.
I always wondered how you might be able to extract energy from the expansion of space. It's particularly interesting because conservation of energy does not hold on such large scales.
Think about harpooning a galaxy at, say, 100 megaparsecs, with a long rope attached to the harpoon. In the Milky Way, loop the rope around the rotor of an electric generator. In the distant galaxy, have the harpoon-end of the rope fall into its central supermassive black hole. Ignoring proper motions (the black hole and the electric generator are likely to move within their host galaxies, and their host galaxies within their galaxy cluster), this gives one about 72 kilometres per second per megaparsec of linear speed on the rope as the space between us and the distant galaxy increases.
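At 100 megaparsecs the rope pay-out speed is already a few percent of c; a quick Python sketch using the thread's figure:

    # Rope pay-out speed from the Hubble flow: v = H0 * d,
    # using the 72 km/s/Mpc figure above.
    H0 = 72.0    # km/s per Mpc
    d = 100.0    # Mpc
    c = 2.998e5  # km/s

    v = H0 * d
    print(f"v = {v:.0f} km/s = {v/c:.1%} of c")

That's 7200 km/s, about 2.4% of c.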
Of course, you need a lot of rope, for the rope to be indestructible (and ideally of low mass), for lucky aim when harpooning, and for the harpoon to be able to carry rope all the way to the target, and for the target and far end of the rope to be impossible to separate.
The more local model for this is to erect a scaffolding well above an object in hydrostatic equilibrium (so anything from a round planet to a supermassive black hole) and fix electric generators to the scaffolding, driven by ropes dropping onto the scaffold-surrounded object. There are a lot of physics questions that can be explored using that model; it's a good exercise in all of them. (Some coursework uses this setting to explore the dominant energy condition of general relativity, since that imposes a maximum tensile strength on non-exotic matter rope or wire or filament: there is a speed limit on the operation of intermolecular/interatomic binding forces; c.f. Bell's rope-spaceship "paradox" in special relativity.)
> energy is not conserved
Carroll's point is that there is a generalization of conservation of energy in curved Lorentzian spacetimes, where changes in the motion of matter and changes in the spacetime geometry are exactly related. That applies in the harpoon-a-distant-galaxy model as well. The rope (and stresses within it) and power produced by the electric generator are all forms of moving matter, creating a geometrical change which (depending on the properties of the rope) may become non-negligible. A rope that is strong enough (and implicitly having much more mass per cm^3 than empty space) to connect two megaparsec+-separated galaxies (driving a generator at one end for appreciable time and feeding a black hole at the other for appreciable time) forces one into some calculating to answer the question: does the rope slow the metric expansion along its length?
Next, how do you get the generator to turn rather than be carried out of our galaxy? (We can sharpen this somewhat by dispensing with a generator and throwing each end of our megaparsecs-long rope into a megaparsecs-separated galactic-centre black hole.) What happens if there is a large mass ratio (heavy:light) between the black holes, or their surrounding galaxies? Does the lighter black hole get pulled out of its galaxy by the heavier? What happens as the mass ratio goes to 1?
Carroll's link above, showing $\nabla_\mu T^{\mu\nu} = 0$, says that as long as we don't introduce further degrees of freedom we can calculate the equations of motion in the systems above. That is, it's fine for an expanding space with nonzero vacuum energy, and for that plus noninteracting (except by gravity) dusts. However, our very long rope cannot be non-interacting (it must be at least self-interacting) and its extra degrees of freedom are liable to become important under extreme tension (e.g., it might get hot and radiate a ~blackbody spectrum), so a somewhat different covariant equation would apply.
The metric expansion is not affecting the shape or H II gas cloud orbits of galaxies at increasing redshift, so newer galaxies (less-redshifted) and their host galaxy clusters have not themselves been pulled apart over the course of billions of years even as the clusters expand away from one another. Additionally, stars aren't disintegrating, various lunar ranging experiments don't show a cosmic component of the evolution of the Earth-Moon orbit, and "Brooklyn is not expanding"[1] (nor are optical fibre cables buried within it). Cosmic expansion, if considered as a sort of (frame-dependent) force, is very weak compared to real forces.
Why would the rope, if it's not ripped apart by tension and shear, or exposure to high energy ions and other radiation in the interstellar and intergalactic media, behave differently from Manhattan or the Milky Way? It might, but you'd have to write down a hypothesis in order to have a decent starting point for what "[the rope] expands just the right amount" means.
My counter-hypothesis, loosely, is that the intergalactic part of the rope (assuming it's taut) induces a perturbation on the FLRW metric that in cylindrical coordinates (where the rope forms the axis) quickly asymptotes to flat space; we can then apply junction conditions with FLRW there. The much shorter rope segments in the two galaxy clusters and host galaxies can be treated similarly, substituting a suitable metric in the Lemaître-Tolman-Bondi (LTB) family. (We already know how to do a "swiss cheese" cosmology where we embed LTB vacuoles in the expanding background of FLRW, using junction conditions). The ends are the tricky part: are they really able to keep the rope taut over cosmological times, or instead do the anchors end up colliding with each other eventually?[2] That last problem we sidestep a bit by not anchoring one end: it just winds around the electric generator while there's still rope left on the generator end. When there isn't any more rope, the unanchored end will tend to fall into the host galaxy of the anchor.
The more physical answer is that the rope, anchored or not, breaks into many pieces from a variety of causes. Uncoupled by whatever non-gravitational forces hold the rope together, the fragments of the shredded rope couple to the local metric around them. The intergalactic segments expand away from everything just like galaxy clusters do, the in-galaxy-cluster segments at either end fall inwards, perhaps ultimately landing on the anchor points.
Even more physical answers cast doubt on whether such a rope can even be built and deployed. It's a lot of material, and fragile to non-gravitational hazards. And you have to play it out towards its far end.
Human technology sure can't do this today. Maybe the fast track is to create the legendary paperclip-maximizing nanobots[2] and have them build and maintain a cosmic-length cable out of linked paperclips.
It's easy enough to imagine a hard sci-fi novella about doing this experiment, and also easy to imagine a range of actual student theses setting bounds on different aspects of the idea (even better to consider a rope in a hard binary, vs one in softer and softer binaries, with the BBH ultimately reaching cosmological radial separations).
[2] I'm drawn to a pattern of describing a gravitational-wave-shedding binary black hole as "barbell shaped", with a (notional) thin, zero-mass hand grip connecting the weights at the end. My intuition is that if we strengthen that connecting hand grip, we will generate a lot of gravitational radiation at the ends, justifying the idea that we rip one or both massive black holes out of position within their host galaxies, with the black holes ultimately colliding rather than separating with cosmic expansion. I see a host of problems with this intuition, though, that approximations up to numerical relativity could explore.
Does anyone have an idea what sort of design could achieve the proposal near the end:
> Clearly the immense loss of energy by gravitational radiation is an obstacle to the efficient use of neutron stars as gravitational machines. It may be that this sets a natural limit of about 10^8 cm/sec to the velocities that can be handled conveniently in a gravitational technology. However, it would be surprising if a technologically advanced species could not find a way to design a nonradiating gravitational machine, and so to exploit the much higher velocities which neutron stars in principle make possible.
From my dim memory of Kip Thorne's popular book, a spinning black hole could be used in this way, which would mean there's at least one solution.
This talks about two bodies, but would the Halo Drive (https://arxiv.org/abs/1903.03423) also be one such thing? I mean, I guess the photons are the other bodies?
Of course there are also more mundane ways of utilizing gravity, such as tidal and hydroelectric power, or just walking (controlled falling) for that matter.
When a (perfect) ball is thrown at a (perfect) car, it will bounce off with the same velocity, v, but with its direction reversed. If the car is moving with some velocity -V, when the ball bounces off it will have a velocity -v - 2V, gaining an extra -2V. (This is easier to understand from the car's passenger's point of view, who will see the ball arriving with a relative velocity v + V and bouncing with -v - V, or -v - 2V relative to the ground).
In the ball-car collision the electromagnetic forces are the ones responsible for changing the direction of the ball. But in the binary system, it is gravity. In particular, as the shuttle enters orbit around the "incoming" star, the star's gravity pulls it forward, mostly once it has completed half an orbit.
I hope I make sense.
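A quick numeric check of the bounce arithmetic above, done the same way (transform to the car's frame, reflect, transform back):

    # Elastic bounce off a moving "wall": ball at +v, car at -V
    # (moving toward the ball).
    def bounce(v_ball, v_car):
        u = v_ball - v_car   # ball velocity in the car's frame
        return -u + v_car    # reflect, then back to the ground frame

    v, V = 10.0, 3.0         # illustrative speeds
    print(bounce(v, -V))     # -> -(v + 2*V) = -16.0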
Overall, such a joy to read this paper. With basic physics it makes you dream of sci-fi...
Figure 1 shows one possible mechanism. It's basically a gravitational slingshot using a binary star system. A test mass comes out with more kinetic energy than before, and the binary star system's radius decreases, releasing gravitational waves at the same time.
A large mass is dropped on the right trajectory between the two; the gravitational forces slingshot it back at a higher velocity, which can be captured with some sort of electromagnetic regenerative braking mechanism. Just a toy idea for how energy can be extracted.
The opposite is also true: it's possible to add energy into a three-body system and increase the distance between the bodies (antigravitation). In principle, the spherical gradient of gravitation makes it possible to raise an orbit using only energy and the interaction between masses, even in a two-body system (if the smaller body can change its shape and mass distribution to simulate a three-body system). I had an idea for such an apparatus when I was a student 30 years ago, but then I forgot the details.
"After the detection of the gravitational wave GW170817, Jason T. Wright (Physics Today, 72, 5, 12, 2019) reminded the community that many of its features had been predicted by Dyson more than half a century earlier. Dyson’s article was published only once, in Cameron’s long out of print collection, though a scan may be found at the web site of the Gravity Research Foundation (https://www.gravityresearchfoundation.org). Dyson thought it had been reprinted (in his Selected Papers, AMS Press, 1996, forward by Elliot H. Lieb) but it was not. Hoping to make the article easier to find, I wrote Dyson for his permission to post it at the arXiv"
It's about using two big bodies, A and B, to accelerate objects: "The energy source of the machine is the gravitational potential between the stars A and B. As the machine continues to operate, the stars A and B will gradually be drawn closer together, their negative potential energy will increase and their orbital velocity V will also increase."