Are there historical examples of conjectures widely believed to be true and then proven to be false?
I suppose Gödel's incompleteness theorems and Hilbert's second problem are along those lines, but I'm not sure Hilbert's problem was regarded as a conjecture.
The most famous example is probably Euclid's Fifth Postulate, aka the Parallel Postulate, which states that given a line and a point not on it, there's exactly one line through the point parallel to the first line. http://en.wikipedia.org/wiki/Parallel_postulate
For 2000 years, it was conjectured that it followed from Euclid's other postulates. Lobachevsky, however, disproved this by developing a geometry (hyperbolic geometry) in which the postulate fails. (As with many 19th Century results, there are questions of precedence -- in part because Gauss claimed to have thought of everything first himself -- but Lobachevsky's name is most commonly attached to the result.) Riemannian Geometry depends on the Parallel Postulate being false. General Relativity depends upon Riemannian Geometry.
I think there are some classic examples people pull out, but I'd like to know whether the relevant community actually believed them to be true but unproven.
I find this an interesting topic: the idea that we 'know' something is true but can't prove it yet.
I wonder if the idea of 'absolutes' sets back Mathematics sometimes. But maybe there are whole fields that rely on yet unproven but assumed true ideas?
Complex engineering, for instance, relies on 'this is probably true'. Engineers make catastrophic mistakes, but they just pick themselves up and move on. Unless they're NASA, I guess.
One field that relies on unproven but assumed true mathematical conjectures is cryptography. Provably secure crypto usually assumes that P != NP (it also incorporates further assumptions such as the intractability of integer factorization).
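To make that dependence concrete, here's a toy sketch of textbook RSA in Python (tiny numbers, purely for illustration; real parameters and real security arguments involve much more than this): recovering the private exponent requires knowing phi(n), which in practice means factoring n.

    # Toy RSA: security rests on the assumed hardness of factoring n = p * q.
    # These numbers are tiny, so factoring is trivial; real keys use primes
    # with hundreds of digits.
    p, q = 61, 53                      # secret primes
    n = p * q                          # public modulus (3233)
    phi = (p - 1) * (q - 1)            # Euler's totient; needs the factorization
    e = 17                             # public exponent, coprime to phi
    d = pow(e, -1, phi)                # private exponent (modular inverse, Python 3.8+)

    message = 65
    ciphertext = pow(message, e, n)    # anyone can encrypt with (n, e)
    recovered = pow(ciphertext, d, n)  # only the holder of d can decrypt
    assert recovered == message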
The best example after the Parallel Postulate may be a conjecture by, of all people, Fermat. He thought all numbers of the form 2^(2^n)+1 were prime. http://en.wikipedia.org/wiki/Fermat_number
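A quick Python check makes the story concrete; plain trial division is enough here, because the first counterexample F_5 has the small factor 641 that Euler found in 1732:

    def is_prime(m):
        # Simple trial division; fine for the small Fermat numbers checked here.
        if m < 2:
            return False
        i = 2
        while i * i <= m:
            if m % i == 0:
                return False
            i += 1
        return True

    for n in range(6):
        f = 2 ** (2 ** n) + 1
        print(n, f, "prime" if is_prime(f) else "composite")

    # F_0 through F_4 are prime, but F_5 = 4294967297 = 641 * 6700417 is composite.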
However, I'm not aware that he ever claimed to have proved the result, "wonderfully" or otherwise. :)
Not the same. Physical models don't really live in the same space as mathematical conjectures. They are expected to be approximations, valid within some domain. Newtonian physics is still 'correct', in that it is a valid approximation, because that's all a model ever is (except for the imagined 'theory of everything').
While it's true that this is the modern interpretation of what physics is, I'm not sure if that always was the case. It might be precisely these repeated breakthroughs (and I'd guess relativity and quantum mechanics especially) that are the cause of this attitude.
I recall being taught by my physics professor that Galileo never actually dropped two lead balls to test his theory - he reasoned that "heavier objects fall faster" made no sense because it implies that connecting two objects would make both fall faster.
Pretty cool. I was actually wondering the other day if, for a given manifold, there is a general algorithm to randomly choose a point on it such that each point has an equal probability of being chosen.
This isn't possible for some manifolds. In particular, you can't choose a point uniformly from an infinite plane. To see why, split the plane into an infinite number of equally-sized (say, one by one) squares. Each square should have an equal probability that one of its points is chosen. Call this probability s.
Now let's look at the entire plane. If you pick a point, it will surely be on the plane. That means the probability that you choose a point on the plane is equal to one.
Putting the squares together gives you the whole plane, so you'd expect that summing their individual probabilities s would give you the probability of the whole plane, which is one. But there are an infinite number of squares, and there's no real number s for which the sequence s, s + s, s + s + s, ... converges to one.
> In particular, you can't choose a point uniformly from an infinite plane.
While this is true, your reasoning is incomplete: it only shows that s can't be positive. Why couldn't s be equal to 0? Note that a probability-zero event is not an impossible event and can still be in the event space. The problem is that there are countably infinitely many squares covering the infinite plane, so by countable additivity over this countable union of measurable sets, the measure of the plane would add up to 0 instead of 1.
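To spell that out (a sketch, taking the squares half-open so they're disjoint, and writing mu for the hypothetical uniform probability measure with mu(S_i) = s for every square S_i), countable additivity would force

    1 = \mu(\mathbb{R}^2)
      = \mu\Big(\bigcup_{i=1}^{\infty} S_i\Big)
      = \sum_{i=1}^{\infty} \mu(S_i)
      = \sum_{i=1}^{\infty} s

and the last sum is 0 if s = 0 and infinite if s > 0, so neither option gives 1.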
What if we restrict our attention to closed manifolds?
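(For a specific closed manifold like the sphere there's at least a standard trick: normalize an isotropic Gaussian vector, whose rotational symmetry makes the result uniform on the sphere. A minimal Python sketch:)

    import numpy as np

    def sample_on_sphere(dim=3):
        # Isotropic Gaussian vector: its direction is uniformly distributed,
        # so normalizing it gives a uniform point on the unit sphere S^(dim-1).
        v = np.random.normal(size=dim)
        return v / np.linalg.norm(v)

    print(sample_on_sphere())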
I guess the difficulty with framing this as an algorithmic problem is that it presupposes some way of specifying the input, and I don't know exactly how that would be done. If your input is a polygonal mesh (or higher-dimensional equivalent) then the problem is easy, but what about things like parametric surfaces?
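For the mesh case, here's a rough Python sketch of the usual recipe (the helper names are my own, just for illustration): pick a triangle with probability proportional to its area, then sample uniformly inside it with the square-root barycentric trick.

    import random
    import numpy as np

    def triangle_area(a, b, c):
        # Half the norm of the cross product of two edge vectors.
        return 0.5 * np.linalg.norm(np.cross(b - a, c - a))

    def sample_uniform_on_mesh(vertices, triangles):
        # vertices: (V, 3) array of coordinates; triangles: list of index triples.
        tris = [tuple(vertices[i] for i in t) for t in triangles]
        areas = [triangle_area(*t) for t in tris]
        a, b, c = random.choices(tris, weights=areas)[0]  # area-weighted triangle pick
        # Uniform point in the chosen triangle; the sqrt avoids clustering near a vertex.
        r1, r2 = random.random(), random.random()
        s = np.sqrt(r1)
        return (1 - s) * a + s * (1 - r2) * b + s * r2 * c

    # Example: a unit square in the plane z = 0, split into two triangles.
    verts = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)
    faces = [(0, 1, 2), (0, 2, 3)]
    print(sample_uniform_on_mesh(verts, faces))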
You probably mean bounded manifolds; the plane is closed (it contains all its limit points). The most general condition is that the manifold M itself has a positive, finite measure m(M) (where m is the measure function). Then you can take the probability of a set S to be m(S)/m(M). From this perspective, the problem with the plane is that it has infinite surface area, so you can't divide by it to get a well-defined probability.
Note that the OP desired that "each point has an equal probability of being chosen". The infinite plane being infinite, this requirement does not formally make sense, but a natural way to interpret it is to ask for invariance under isometries (or translation-invariance). As the post that you replied to indicates, this is not possible for the Euclidean plane. The measure constructed by the lemma that you cite does not have this property.
Yes, but the probability measure they construct doesn't give "equal probability" in the sense that Xcelerate probably meant. Going back to the plane example, if we use the unit squares as our sequence A1, A2, ..., we'll get P(A1) = 1/2, P(A2) = 1/4, P(A3) = 1/8, and so on. This sums to 1 as you'd expect, but it's not uniform!
Well, obviously. Even a slightly generous reading of the GP's question would indicate that pretending he asked about an algorithm "such that, for subregions a and b of equal volume, P[a]/P[b] approaches 1 as the subregions' volume approaches 0" would give him the answer he was intuitively searching for.
Thank you for clarifying that haha. I forgot that on HN one has to articulate all ideas as formally and rigorously as possible in order to prevent the pathological cases from becoming the centerpiece of a conversation.
Technically true, but not useful. The same is true for, say, choosing a real number uniformly at random on the interval [0,1), but nobody thinks twice about treating that as a well-defined sampling problem.
What the sum of an uncountably infinite collection of zeros amounts to is the subject of calculus (and, more carefully, measure theory). There are systems, such as nonstandard analysis, that treat dx as strictly different from zero yet smaller in magnitude than any positive real number, though.
You know that's trivially true, right? When you define "equal probability" for each point, you imply a measure, for example the Minkowski content for manifolds in R^n. Then by simply assigning probability measure(Region)/measure(Manifold), you have your algorithm (note there are problems if the number of points is not finite or your distribution is not dyadic, so if you're given random bits to draw from you can only approximate the random choice in finite time).
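To illustrate the dyadic/random-bits caveat in the simplest possible setting, here's a toy Python sketch that approximates a uniform draw from [0, 1) one random bit at a time; the same idea works for any partition into pieces of equal measure:

    import random

    def sample_unit_interval(num_bits=32):
        # Repeatedly halve the remaining interval, using one random bit per step;
        # after num_bits steps the result is uniform up to resolution 2**-num_bits.
        lo, hi = 0.0, 1.0
        for _ in range(num_bits):
            mid = (lo + hi) / 2
            if random.getrandbits(1):
                lo = mid
            else:
                hi = mid
        return lo

    print(sample_unit_interval())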
It's not at all obvious to me how "assigning a probability measure" translates into an actual algorithm. For one thing, if you have an n-D manifold embedded in an (n+1)-D space, any set of points on the manifold will have measure zero in the enclosing region.
To give an intuitive definition, consider the volume near the n-D manifold, up to a distance d. As d shrinks, that volume goes down and starts looking like ~A*2d; in fact we define A as the limit, as d goes to 0, of the volume divided by 2d. You can extend this to k-D manifolds in n-D spaces by normalizing the volume by that of an (n-k)-dimensional ball of radius d.
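In symbols (roughly, with M a k-dimensional manifold in R^n, M_d the set of points within distance d of M, and omega_{n-k} the volume of the unit (n-k)-ball):

    m(M) = \lim_{d \to 0} \frac{\mathrm{vol}_n(M_d)}{\omega_{n-k}\, d^{\,n-k}}

For a hypersurface (k = n - 1) the denominator is just 2d, which is the case described above.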
Now you can partition the manifold in any way you want and apply the rule I mentioned to each part (p = measure(Partition)/measure(Manifold)). Because that is precisely how one might define "equal probabilities", there exists an algorithm that partitions the manifold into simply connected compact sets of decreasing measure and converges to giving "equal probabilities": the definition implies an algorithm.
Edit: panic above clarified the requirement. I missed that measure(Manifold) must of course exist, but I'm confident all else is self-evident.
Seems odd that they did not provide a counterexample. My suspicion is that there's an incorrect assumption for non-fractal structures.
PS: This is really one of those cases where a paper does not mean much. You really need to dig through http://arxiv.org/abs/1303.2354 and find the flaw, as there is no peer review. Though it is supposedly going to show up in the Journal of the AMS, so it's not junk and is probably worth digging into.
A specific non-orientable five-dimensional manifold M^5 with Sq^1 Δ(M) ≠ 0 is constructed in [25]. Hence M^5 is non-triangulable. To get non-orientable examples of non-triangulable manifolds in dimensions n > 5 we can take the product of M^5 with the torus T^(n-5). To get an orientable example in dimension 6 we can consider the non-orientable S^1-bundle over M given by M̃ ×_{Z/2} S^1, where M̃ → M is the oriented double cover; the total space of this bundle is orientable. To get orientable examples in dimensions n > 6, we can then take products with T^(n-6).