If you know the diameter of the observable Universe and you want to calculate its circumference with the accuracy of the diameter of a proton, the number of digits of pi that you need is 43.
The smallest possible distance is the Planck length, about 1.6×10^-35 m (1×10^-15 m is the diameter of a proton). And for that you only need around 60 digits of pi to calculate the circumference of the universe.
Of course, that is just for the simple operation of calculating the circumference from the diameter; more complex operations with pi may require more precision.
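These digit counts are easy to sanity-check. Below is a sketch using Python's stdlib decimal module; the universe and proton sizes are rough round figures, and the 100-digit pi literal serves as the "exact" reference:

```python
from decimal import Decimal, getcontext

getcontext().prec = 110
# 100 digits of pi, used as the "exact" reference value.
PI = Decimal('3.'
             '14159265358979323846264338327950288419716939937510'
             '58209749445923078164062862089986280348253421170679')

diameter = Decimal('8.8e26')   # observable universe diameter, metres (rough)
proton   = Decimal('1.7e-15')  # proton diameter, metres (rough)
planck   = Decimal('1.6e-35')  # Planck length, metres

results = {}
for digits in (43, 63):
    approx = Decimal(str(PI)[:digits + 1])  # pi truncated to `digits` significant digits
    err = abs(PI - approx) * diameter       # resulting error in the circumference
    results[digits] = (err < proton, err < planck)

print(results)  # 43 digits suffice for proton accuracy, 63 for Planck accuracy
```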
> According to the generalized uncertainty principle (a concept from speculative models of quantum gravity), the Planck length is, in principle, within a factor of 10, the shortest measurable length – and no theoretically known improvement in measurement instruments could change that.
At least according to Wikipedia, it does seem to be the smallest observable distance, although this has never been proven and follows from the theoretical generalized uncertainty principle.
https://en.m.wikipedia.org/wiki/Planck_length#Theoretical_si...
From what I understand, the particles you'd need to probe distances on the order of the Planck length are so energetic that their own gravity would start interfering: their Schwarzschild radius would become bigger than the distance you're trying to measure.
Does that mean the rest of the digits of pi are not "real," at least according to a realist rather than a Platonic philosophical position on the meaning and nature of mathematics? Seems like you could argue that digits beyond what are needed to render measurement to within one Planck length are meaningless and therefore a kind of fiction... at least if you take that philosophical position.
It's been estimated that if the universe were a computer, it could have performed no more than 10^120 operations on 10^90 bits of data so far (based on the size, age, and total energy of the known universe). http://arxiv.org/abs/quant-ph/0110141 I think the number of physically relevant bits of pi would be represented in there somewhere. But there's a long road ahead. If the universe keeps "computing" forever, the precision of numbers involved could also keep growing.
If the universe were a computer, it would not have to compute π to "simulate" physical processes. Mechanical processes don't depend on the value of π directly.
Pi appears in many more places than in ratio of circumferences to their diameter. For example, if you flip a coin n times, the probability of getting exactly n heads and n tails asymptotically tends towards 1 / sqrt(π*n). The probability that two randomly chosen integers are coprime is 6/π^2. Etc.
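The coprime fact even makes a cute Monte Carlo estimator for pi. A sketch (sample size and seed are arbitrary):

```python
import random
from math import gcd, sqrt

random.seed(42)
n = 200_000
# The probability that two random integers are coprime tends to 6/pi^2.
coprime = sum(
    gcd(random.randint(1, 10**9), random.randint(1, 10**9)) == 1
    for _ in range(n)
)
pi_est = sqrt(6 * n / coprime)
print(pi_est)  # close to 3.14159
```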
I think this assumes the order of magnitude of the size of the observable universe.
Clearly, if the diameter of the universe were smaller than a proton, then you wouldn't need 43 digits of pi to calculate its circumference to within a proton diameter. So if the diameter of the universe were 10^1000000^100000000 protons wide, the precision you'd need for pi would be way higher than 43 digits?
The post I replied to started off with "If you know the diameter of the observable Universe", but if the result depends on the size of the observable universe then you ought to say "given the size of the observable universe".
Starting with 'if' implies that the result only requires you to 'know' the size of the observable universe, but does not depend on its actual size.
There's a difference between "the size of the universe" and "the size of the observable universe", see also "cosmic microwave background", "dark energy", and "inflation."
This overlooks the issue that for repeated calculations, such as numerical integration, the trouble comes from accumulated roundoff errors. Even 16 digits of precision can become 0 digits pretty quickly if you're not very careful.
I agree, that's a poor answer by NASA director and chief engineer. Here is a better answer:
The precision used for calculations depends on the number of "steps" required to get to the final result. Roughly, for N repeated calculations you lose somewhere between sqrt(N) × eps and N × eps of precision (eps ≈ 2×10^-16 for IEEE64).
Here are some actual examples:
IEEE64 (~16 decimal digits) is OK for interplanetary navigation over a few months, where relatively low accuracy is required.
With the same precision, you start to lose phase accuracy above 24 hours if you're simulating GPS constellations. You need quad precision or above for simulations > 24 hours.
For simulating planet trajectories and solar system stability (Lyapunov time of planets), IEEE64 is good for ~10 mya in the future (Neptune-Pluto Lyapunov time), IEEE128 for ~200-1000mya, above that it is recommended to use 256bit floats and above. This is assuming typically ~1000 steps per simulated orbit.
Fun fact: we know from simulations that Pluto trajectory is stable for >10G years, but unpredictable above >10M years because of chaotic (but stable) interaction with Neptune.
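The flavor of accumulated roundoff is easy to reproduce even in a toy setting. A sketch comparing naive accumulation against compensated summation (math.fsum):

```python
import math

n = 10_000_000
naive = 0.0
for _ in range(n):
    naive += 0.1              # every += rounds, and the errors accumulate
accurate = math.fsum([0.1] * n)  # compensated summation, correctly rounded

print(abs(naive - 1e6), abs(accurate - 1e6))
```

The naive loop drifts noticeably away from one million, while the compensated sum stays within an ulp of the exact result.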
Something to add to your list of examples: During the first Gulf war, 28 US soldiers died due to accumulated rounding errors in the Patriot Missile battery computers: https://www.ima.umn.edu/~arnold/disasters/patriot.html
(This was in fact a known issue, and operators had been instructed to reboot the computers every 8 hours. Unfortunately this instruction ignored the fact that, in the field, nobody wanted to be responsible for turning off their defensive systems for a minute.)
Yeah, that doesn't surprise me in the least, many high-tech military systems have MTBF/MTTF of a few hours at best. Also, that's what you get when you try to do radar time computations using 24-bit fixed point in Ada.
Back to astronomy: in many astronomy libraries (such as astropy), time computations are done using 2 doubles (about 106 bits of precision). One double is not enough.
_brandmeyer_ also mentioned something important that I totally forgot - any trigonometric computation requires computing modulo-pi to an accuracy of 1 ulp, which requires storing PI to ~1144 bits for double precision (for numbers near pi) (see Kahan argument reduction paper).
Since Intel processors don't reach the precision the IEEE standard requires above pi/2, this modulo reduction is done in software to this day. gcc maintains a 1144-bit PI constant and does a 1144-bit modulo every time you compute a sine/cosine above pi.
TLDR - 344 decimal digits of PI are used. High-precision PI computation is surprisingly more common than we expect...
any trigonometric computation requires computing modulo-pi to an accuracy of 1 ulp
For the trigonometric function itself, sure. For any reasonable algorithm which uses the trigonometric function, no. If you find yourself computing sin(10^6), you're not really trying to compute sin(10^6); you're trying to compute sin(x) for some value of x which you know lies between 10^6(1 - epsilon) and 10^6(1 + epsilon). So to the extent that trigonometric calculations can lose precision by not doing extra-precision argument reduction, that precision was already lost in computing the unreduced argument.
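A sketch of that argument: near x = 10^6, adjacent doubles are about 1.2×10^-10 apart, so the sine of "the" argument is already undetermined at that level before any reduction happens:

```python
import math

x = 1.0e6
ulp = math.ulp(x)                     # spacing of doubles near 1e6, about 1.16e-10
x_next = math.nextafter(x, math.inf)  # the very next representable double

# The two arguments are indistinguishable inputs, yet their sines can differ
# around the 10th decimal: that accuracy was lost when the unreduced
# argument was computed, not inside sin().
d = abs(math.sin(x_next) - math.sin(x))
print(ulp, d)
```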
Disagreed, the answer isn't aimed at engineers or actually at any science-oriented people, it's directed at the general public who don't even care what IEEE is.
As a software engineer, though, I find this answer way more interesting. That one part in 10^15 ends up being ~an inch on the scale of the solar system is thoroughly unsurprising.
Arbitrary precision is not a valid answer: every multiplication doubles the number of mantissa bits. Starting from 64-bit precision, after just 40 multiplications you will have consumed about 8 TB of RAM just to store a single number, at which point you'll ask yourself: how many decimal digits do I really need? Which was the initial question...
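The blow-up is easy to watch with exact rational arithmetic. A sketch, where the 21-digit starting value is an arbitrary stand-in for a 64-bit-ish mantissa:

```python
from fractions import Fraction

x = Fraction(314159265358979323846, 10**20)  # ~21 significant digits
for i in range(5):
    x = x * x  # each squaring roughly doubles the digits that must be stored
    print(i, len(str(x.numerator)))
```

After only five squarings the numerator already needs several hundred digits; forty doublings of the mantissa is hopeless.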
A branch of physics used to be taught a long time ago called "numerical analysis" to deal with this issue.
We even used to be careful about the difference between 'precise' and 'exact'.
Pi = acos(-1) is absolutely exact. But computers don't know about symbolic calculus, so to put the value in a register we use tricks.
Pi as the limit of a converging Taylor series is awesome. But computers don't know about infinity.
3.1415926535897932384626433832795028841971 is precise... it has a lot of digits, and people love that.
In numerical analysis, 3.14159 ± 0.00001 is exact: it bounds your result, hence you can estimate your error and its propagation.
Because we thought humans were smart, we assumed 3.14159 would be meaningful enough: people would understand that such a constant should be treated as exact, with the implicit meaning that the 9 is the last significant digit, and would be wise to use upper and lower bounds to estimate their results.
Then Computer Science was taught in university.
People didn't understand why they had to study math and physics simply to program 2 + 2, and thought: stop bothering us, we just compute VAT, we don't send rockets to Mars. Why learn boring math (integration, differentiation, Newton's method for approximation, Taylor expansions, Cauchy sequences, convergence criteria for sequences, contour integration in the complex plane to evaluate generalized integrals, the simplex method, LU/QR decompositions...)?
Yes, people love recursion, yet they cannot apply the same reasoning to simple mathematical series.
And that's how we end up with funny things like a lot of coders not understanding why:
1.198 * 10.10
12.099799999999998
Yes... why is computer math so odd, and what can we do about it?
Having a look at the HP Saturn opcodes makes you wonder whether the lack of a solution is because one does not exist, or because people forgot.
http://www.hpcalc.org/details.php?id=1693
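The surprise above (and the kind of fix the Saturn did in hardware with BCD arithmetic) can be reproduced with Python's stdlib decimal module:

```python
from decimal import Decimal

binary = 1.198 * 10.10                       # binary doubles can't hold 1.198 or 10.10 exactly
exact = Decimal('1.198') * Decimal('10.10')  # radix-10 arithmetic keeps the typed digits

print(binary)  # 12.099799999999998
print(exact)   # 12.09980
```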
I can't speak for other nations, but they still teach numeric analysis in Chinese universities as an undergraduate course. In my university it is a required subject. Many of us have countless dreadful memories of Runge-Kutta method, Euler's method, Newton's method, rate of convergence, numerical stability and error margins, just to name a few of the dreads...
Yes, perturbation methods are still taught and are still recognized as important. The first math course I took during my graduate degree (Intro analytic methods) covered it, for example.
Mathematicians? The people trying to find solutions of equations by solving them?
Mathematics is a strange education that I hardly understand. The last PhD I met from McGill University was unaware of the existence of non-Euclidean geometry. His excuse? He worked on formal proofs.
I am sorry, I have a hard time with people who were never challenged to make equations spit out their solutions in order to make something actually work in the real world with limited time and money.
It is the same difference I see between athletes and ergotherapists, or Einstein and Poincaré.
>A branch of physics used to be taught a long time ago called "numerical analysis" to deal with this issue.
Freshmen engineers had to take a year-long Numerical Analysis and FORTRAN programming class at my undergrad university. We would learn various iteration methods for solving equations and the homeworks would be more problems to solve with a program or two to write too.
This stuff is still taught, it just might be out of a different department than physics.
Numerical analysis is the cornerstone of the applied mathematics curriculum, and still very much taught. Any applied mathematician, physicist, or engineer will have at least some background in the subject, and anyone with a good graduate degree will usually have taken two or three courses (source: I TA'd one of the graduate numerical analysis courses at Berkeley for a couple years, it was a requirement for many engineering grad students).
This story is frustrating to me because it makes it sound like 15 digits of precision isn't a lot. Fifteen isn't a big number, but fifteen digits of precision is almost incomprehensible.
If you measured your height with fifteen digits of precision, you would have a measurement in femtometres. A femtometre is roughly the diameter of a proton.
> Pi = 3, coincidentally, is the Hebrew Bible's approximation too.
Certainly it's not explicitly spelled out. The example I've heard was the outer diameter and inner circumference of a vessel's circular rim were given. Pi comes out to 3 only if the thickness of the rim of the vessel is zero.
It is a large cast bowl in 1 Kings 7:23ff. It's beloved of a certain kind of 'gotcha' internet skeptic "Proof that the bible thinks Pi is 3 !!1! How dumb are teh Christians!".
But the passage itself even mentions the thickness of the bowl, and there's no reason to assume the numbers are anything more than a description of a particular bowl (which inevitably wouldn't have been perfectly circular).
even if the measurements were both outer measurements, the actual value of pi is within the typically assumed error bounds (30/10 < pi but 30.5/9.5 > pi.)
When I took astronomy, anything within an order of magnitude (10) was considered to be the same number. Calculations are very easy when you're only worrying about the number in the exponent.
Feynman was talking to some students, and he used some historical event as an illustration, but he got the date wrong by a few years and they called him on it. He laughed and said "Hey, three decimal places is pretty good for a theoretical physicist!"
I remember back in high school physics when we were calculating the volumes of a few stars and my teacher said "Just round 4π/3 off to 4". I completely understand why we'd do that -- the error terms in the radius of the star completely drown out that approximation -- but goddammit it still feels wrong. I guess I'm a mathematician and not a physicist for a reason.
15 digits is about the precision hand-held calculators provide, right? Many early NASA missions took HP calculators along, loaded with trajectory routines, in case the computer failed.
Hand-held calculators might have been carried on many NASA missions, but not the early ones. The first missions started in 1961 and hand-held HP calculators weren't invented until more than ten years later. By that time we had already been to the moon 6 or 7 times.
The first actual hand-held calculator I ever saw was a Bowmar Brain (a simple four-function calculator), for sale in 1971 at the MIT COOP (the bookstore). I only knew one person who bought one; the rest of us continued carrying around our slide rules (they came in handy leather holsters with belt loops). The HP that came out about a year later was a real scientific calculator.
Years before that, sometime between 1965 and 1968, on an episode of Lost in Space (a TV program with a family of early space explorers lost in outer space) the son, Will Robinson, was carrying around a large device about 3 inches thick and a foot tall that looked like a calculator. I thought the idea quite marvelous and went to bed thinking about it and how much better it would be than my slide rule for playing around with calculations. (I was a weird kid.)
We must be about the same age. I went through college with a leather holster for my slide rule. I saw the first TI calculator after graduating from college in '71. I lusted after it and later I went to work at HP and got an HP. I was a super-fan of reverse-polish at that time. HP stayed with RPN for some time.
I spoke at some length with Fred Haise Jr, Lunar Module pilot of Apollo 13, and when I showed him my slide rule, he enthusiastically took it and started to play with it. Yes, astronauts used to use slide rules.
He asked me what I used to lubricate it, and showed some simple calculations to our host. He said he'd mostly forgotten how to use it, but it was clear that it was very, very familiar to him.
He signed it. Later I got Jim Lovell to sign it as well. One of my most treasured possessions. I only wish I'd had the presence of mind to get TK Mattingly to have signed it when I met him a year earlier. There is yet time.
The other side from the vector adder ("front" side at https://upload.wikimedia.org/wikipedia/commons/c/c4/StudentE... ) includes a circular slide rule with perfectly normal log scales for fuel, time, distance calculations, an extra scale to help with hours/minutes conversions, and some marks for various conversion factors, including lb/gal fuel and lb/gal oil for use in weight/balance.
The main difference between a straight and circular rule is that it has only one appearance of the index, so you don't have to move the slide around as much, and it's round so the equivalent of a 10" rule has around 3" diameter.
It also has other scales for converting altimeter/airspeed (really pressure gauge) readings into other numbers more useful for certain purposes like true altitude (good for missing obstructions) and "density altitude" (for estimating takeoff performance, also helpful for missing obstructions).
This is amazing. The idea of converting units of multiple types in a hurry when it matters with a slide or circular rule in imperial units is terrifying. Somehow doing that in metric seems less so, but the fact that the system got people to the moon relatively recently is still amazing.
BTW, the computers in the ship were also made by HP.
There is a great story about an incident where a waste recycling problem caused a mission to be aborted. There was urine all over the inside of the capsule. NASA publicly reported that it was a computer failure.
Unfortunately HP had just done an ad campaign about their computers in space. HP sued and NASA settled for an unknown amount.
The flight computers on any manned NASA spacecraft have not been made by HP. The Gemini capsules and the Space Shuttle used IBM computers [1,2] and Apollo used computers made by MIT (primary computers) and TRW (Abort Guidance System) [3]. I could also not find any information on a mission that failed in such a way.
Handheld calculators typically get between 8 and 12 digits of precision. A four-function calculator will top out at 8; a scientific calculator will offer more (plus scientific notation support).
15 digits is about what's offered in double precision floating point calculations.
In this particular case, they’re just using a standard double precision IEEE 754 floating point number. So I assume they do all of their arithmetic (“for JPL's highest accuracy calculations”) using double precision floats.
I think this is a bit of an oversimplification. You must consider compounding when talking about rounding errors. A single matrix operation with hundreds of rows and columns can easily have millions of multiplications. At every multiplication the previous error gets multiplied. That's why I don't feel the answer was exhaustive.
> At every multiplication the previous error gets multiplied
This is a bit of an oversimplification as well. It's not like you keep multiplying pi by itself over and over again, and it's not as if the error you introduce is random: if you've rounded pi once, you're going to keep making a slight error in the same direction.
If you were right there'd be no hope of ever getting sane results when multiplying largish matrices of doubles regardless of the presence of pi.
I'm not saying that accumulation of error doesn't exist, I'm just saying that it's not to the extremes you're describing.
Pi is kind of a worst case: because you round it only once, every operation adds error in the same direction. Because of that you either use far too many decimal places, or make sure you don't keep multiplying pi by itself, as you said. But both are optional, and must be designed in.
A matrix of measurements, in turn, normally has unbiased errors, which makes the resulting error grow much more slowly.
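The biased-vs-unbiased distinction is easy to demonstrate. A sketch where the same once-rounded pi error enters every operation, versus noise that flips sign at random:

```python
import random
from math import pi

random.seed(0)
n = 10_000
err = 3.14159 - pi  # pi rounded once: about -2.65e-6, the same sign every time

# Biased: the identical error enters all n operations, so it grows like n.
biased = sum(err for _ in range(n))
# Unbiased: equal-magnitude errors with random signs grow only like sqrt(n).
unbiased = sum(random.choice((-1, 1)) * abs(err) for _ in range(n))

print(abs(biased), abs(unbiased))
```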
>The primary purpose of the DATA statement is to give names to constants; instead of referring to pi as 3.141592653589793 at every appearance, the variable PI can be given that value with a DATA statement and used instead of the longer form of the constant. This also simplifies modifying the program, should the value of pi change.
Xerox Basic FORTRAN and Basic FORTRAN IV Manual[0], attributed to David H. Owens.
Not quite Pi, but something very closely related to Pi is retained to extremely high precision in computers.
libm frequently contains 2/pi to very high precision. For example, Newlib's math library contains 476 decimal digits of 2/pi as part of its routines for calculating sine and cosine of numbers outside the range [-pi/4..pi/4].
See e_rem_pio2.c for more. Many of the open source math libraries are ultimately descended from the same root: the Sunpro fdlibm, archived at netlib: http://www.netlib.org/fdlibm/
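You can check this machinery against a home-made reduction. A sketch that reduces 10^22 mod 2π with ~60-digit decimal arithmetic, assuming the libm behind math.sin does full argument reduction (as fdlibm descendants do):

```python
import math
from decimal import Decimal, getcontext

getcontext().prec = 60
PI = Decimal('3.'
             '14159265358979323846264338327950288419716939937510'
             '58209749445923078164062862089986280348253421170679')

x = Decimal(10) ** 22   # 1e22 is exactly representable as a double
r = x % (2 * PI)        # reduce into [0, 2*pi) with high-precision arithmetic

a = math.sin(float(x))  # libm's own extra-precision reduction
b = math.sin(float(r))  # our reduction, then an easy small-argument sine
print(a, b)             # the two paths agree closely
```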
The best way of looking at problems like this is that it's an exponential process. The number of values you can represent with n digits increases exponentially: each additional digit increases your precision by a factor of 10. If you have 15 digits, imagine multiplying by 10 over and over again 15 times; it's pretty big.
The word "quadrillion" is rarely used in the English language, because it's very rare to need numbers that large. And when you do, being off by a few digits doesn't matter. Calculators commonly only display 8-10 digits, for example.
This applies to programming, since computers often only have a limited number of bits. Programmers often complain about floating point. One of the things about neural networks is that they don't actually need that many bits of precision, since they are by nature very "fuzzy". We can build computers that are bigger/cheaper by sacrificing a lot of bits.
But one of the problems is that when adding a bunch of small numbers together, such hardware rounds to the nearest representable value every time, and the inaccuracy builds up. So to really take advantage of lower precision, we need to somehow build computers that can do stochastic rounding, where they sometimes round up and sometimes round down, so that the expected output is the same.
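A minimal sketch of the idea (a hypothetical helper, not any particular hardware's scheme): round to a whole number with probability proportional to the fractional part, so the result is unbiased in expectation:

```python
import math
import random

def stochastic_round(x: float) -> int:
    """Round to floor(x) or floor(x) + 1 so that E[result] == x."""
    lo = math.floor(x)
    return lo + (random.random() < x - lo)

random.seed(1)
# Round-to-nearest would turn every 0.1 into 0, so the sum would stay 0.
# Stochastic rounding keeps the running total unbiased on average.
total = sum(stochastic_round(0.1) for _ in range(10_000))
print(total)  # expectation is 1000
```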
I have heard, but never done the math to verify, that with 50-ish digits of pi, one's error on a circle the size of the Universe would be smaller than a Planck length.
Although this really should have been posted 4 days ago,
2π × (4.4 × 10^26) / (1.6 × 10^-35)
(the radius of the observable universe in metres over the Planck length) is about 1.7 × 10^62, so roughly 62 digits is all you need to memorize. Unless you're looking for a really strong password, reciting 100,000 digits is probably more than necessary:
http://blogs.scientificamerican.com/observations/how-much-pi...
The ratio of the observable universe's circumference to a proton diameter may be on the order of 10^42, but that doesn't really say anything about the precision of Pi you'd need in practice for any calculation involving these scales.
Because for everything involving real-world data, you have to measure quantities, and this is hardly ever done to more than a few significant digits. To state the circumference of anything to within single proton diameters, I would first have to measure its diameter to a precision of 1/3 of a proton diameter. Only at such an absurdly nonsensical precision would I start introducing errors by using an inadequately rounded value for Pi.
More practically: I might know that I could line up 2.611×10^25 protons (disregarding the fact that, due to their charge, they would repel each other) around the earth, but to calculate that I only need 5 significant digits of the earth's diameter, and only 5 digits of Pi.
Universal constants [1] have about 6-9 significant digits today. I wouldn't use more than 10 digits of pi, if I am working on some physical calculations.
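That point is easy to sanity-check. A sketch with an illustrative 5-digit Earth diameter (the exact figure doesn't matter):

```python
from math import pi

d = 1.2756e7  # Earth's mean diameter in metres, to ~5 significant digits

rough = 3.14159 * d  # 6 digits of pi
full = pi * d        # full double-precision pi
print(rough, full)

# The disagreement is far below the measurement uncertainty in d itself.
rel = abs(rough - full) / full
print(rel)
```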
I really thought the reason would be technical, like a compromise between precision and how fast they can actually calculate with pi.
The answer is simply awesome.
He's an American, writing for an American audience, and thus he's using the units Americans use. There's absolutely nothing more scientific about one set of units or another (although different sets of units may be more convenient in different situations).
It prevents mistakes when we all use the same units and the SI are agreed by an international committee of scientists and engineers. It's one less thing to go wrong.
It would be interesting to see the analytics - I'm not disagreeing and the site is presumably funded by American taxes, but people like the site aren't all American.
Inch / foot / yard / mile is a really small range compared to what metric can handle (anything), so in practice most science is done with metric units.
Wide ranges of values are handled exactly the same way in imperial units as in metric: multipliers. You write "0.00023 inches" or "5.48e4 miles", etc. It's just not as pretty.