> Feynman concluded: “for my money Fermat’s theorem is true”.
> "the main job of theoretical physics is to prove yourself wrong as soon as possible."

Great example of the main difference between mathematicians and theoretical physicists.

This reminds me of another magician, Enrico Fermi, who was also an extremely good mathematician but didn't pursue rigor or precision for the sake of it: 20% was good enough precision for him for most cases.

I feel like it is a very "physics" motivated approach to at least "investigating" this theorem. Calculating probabilities is a line of thinking Feynman would be familiar with (quantum mechanics). Physics is often responsible for mathematical development; while they are different, they complement each other. It's nice to see different perspectives, and how ideas are connected.

> > Feynman concluded: “for my money Fermat’s theorem is true”.

> > "the main job of theoretical physics is to prove yourself wrong as soon as possible."

> Great example of the main difference between mathematicians and theoretical physicists.

Actually, I'm not sure I agree: even before Wiles's proof, almost every mathematician would have been willing to wager, at least conversationally, on the truth of FLT; and mathematicians also are in the business of proving themselves wrong as soon as possible. The only catch is that we don't count an inability to prove yourself wrong as a proof that you're right ….

The difference lies in the fact that absolute rigor to assess truths is not as fundamental in theoretical physics as it is in mathematics. Uncertainty is accepted. Physics puts a premium on empirical results and intuition over the more formal treatments common in mathematics (many important results/tools are not mathematically well-defined e.g. Feynman path-integral in d > 1).

Agreed! I didn't mean to claim that there isn't a difference, for there is a wide one; only that the two particular quotes chosen seemed (unlike most other things Feynman said!) not to illustrate it.

If you integrate the characteristic function of the rational numbers over any interval, you get zero because rational numbers are very rare.

So they don't exist either?

To be less glib, I don't see Feynman's argument as bringing anything new. We already knew that counterexamples, if they existed, would be very rare, because we tried looking for them with computers and we couldn't find them. But stuff being rare still doesn't prove anything.

Your rational numbers argument does indeed fail, but for a different reason. It's not valid to go from a sum over the rationals to an integral in the reals in that way; however, it is perfectly valid to go from a sum over the integers to an integral over the reals (in certain situations), if you only want an estimate (see e.g. the Euler-Maclaurin formula https://en.wikipedia.org/wiki/Euler–Maclaurin_formula ). This was fine in Feynman's argument.
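The sum-vs-integral estimate being invoked here is easy to sanity-check numerically. A minimal sketch (the choice of f(n) = 1/sqrt(n) and the bound N are mine, purely for illustration):

```python
import math

# Estimate a sum over integers by the corresponding integral — the kind of
# step the Euler-Maclaurin formula makes rigorous up to correction terms.
N = 1_000_000
exact_sum = sum(1 / math.sqrt(n) for n in range(1, N + 1))
integral = 2 * math.sqrt(N) - 2  # ∫_1^N x^(-1/2) dx = 2*sqrt(N) - 2
print(exact_sum, integral)  # they differ by a bounded constant, not a growing error
```

The discrepancy stays below 1 no matter how large N gets, which is exactly why the integral is a trustworthy estimate of the sum.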

The integral of the characteristic function is zero, but the probability of finding a rational in any interval is one. In Feynman’s argument, it seems he was relying on this latter probability.

So I don’t think you have produced a compelling counterexample yet (though I expect you are right that one exists).

No one is claiming that this kind of argument constitutes a proof, but this kind of back-of-the-envelope calculation is very powerful. For example, in the theory of prime numbers there is a heuristic that says that primes roughly behave like randomly selected numbers where Prob(N is prime) = 1 / log(N) [this is a simplification, but that's the crux of it]. With this heuristic you can accurately predict whether a large class of statements about primes are true or false, and can get extremely precise estimates about things like "how many twin primes are there less than N", or "how many solutions in primes are there to p1 + p2 = 2*N, for some huge N", which is one way of phrasing two famous prime number conjectures.

It doesn't lead you directly to a proof - but often just knowing what the answer 'should be' can be a real guiding light.

This argument doesn't calculate the probability of an individual number, it sums the probability over every number.

This kind of thing is used a lot in number theory to figure out the plausibility of some theorem. A lot of open number theory problems are of the form "Prove [unlikely event] never happens."

I know, but it's being exhibited as an example of how back-of-the-envelope type of approximations by physicists can be just as good as rigid mathematical thinking. And I don't find this to be a convincing example of how loose physicist arguments can work.

Schwartz distributions, infinitesimals; okay, fine, those turned out to be a weird trick that can be formalised. But sometimes their tricks are just plain wrong, and this is one example of a trick that just is wrong and can't be formalised.

Consider: many useful primality tests are statistical in nature. It's pure math, and an exact answer is possible, but it's still useful to get a quick check to see if something is a waste of time.
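For instance, the Miller-Rabin test is one common statistical primality test (this sketch is mine, not something from the thread): it answers "probably prime" with an error probability at most 4^(-rounds), which is the same flavor of reasoning as Feynman's heuristic, except here the probability bound is rigorous.

```python
import random

def probably_prime(n, rounds=20):
    """Miller-Rabin: returns False only for numbers proven composite;
    True means prime with error probability at most 4**(-rounds)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 = 2^r * d with d odd
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)  # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # witness found: definitely composite
    return True

print(probably_prime(2**89 - 1))  # True: 2^89 - 1 is a Mersenne prime
```
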

Really, if a full solution takes 20 years you don’t want to actually spend 20 years without having a very good idea it’s going to work.

I think jordigh is saying that the method is not statistically sound, i.e. that it will not (necessarily) give accurate probability estimates. They're not criticizing the method simply for being statistical.

Using zero probability on infinite sets to ascertain the nonexistence of an object doesn't work. Lots of things have measure zero in an infinite set. For example, out of all the integers, the probability of picking 7 is zero. Any finite set will have density zero in the integers too.

So I find his small probability, however tiny, that out of all possible integer tuples none of them are a counterexample to be utterly unconvincing and ultimately misguided.

To be clear: my complaint is that there is no way to turn this kind of argument into an actual proof. We could salvage other physicist arguments, but not this one. Probability zero on an infinite set cannot mean nonexistence. And the rest of what he's doing, trying to determine that counterexamples must be rare, is "well duh, we knew that, because we've been looking for them."

I recall reading that there is an (incorrect) proof that would match the kind of proof we expect from Fermat and thus is believed to be the one he had in mind. However, I was unable to find it. Instead I found a discussion of how likely it is that he had a proof: https://hsm.stackexchange.com/questions/3/what-evidence-is-t...

I had a number theory professor who said there are a number of deceptively promising avenues of attack using simpler methods than Wiles, but that end up being dead ends. He said mathematicians are now mostly of the opinion that Fermat was probably onto one of these.

Some even speculate Fermat didn't write this note. Most of his works were collected by his son who is suspected of adding this commentary to a conjecture that was considered many times by Fermat.

Well, I'm not 'math folk' (grey beard programmer), but I love the Horizon documentary on Andrew Wiles and his solution, and I'd love to hear from people who know better than I do why my intuitive understanding is inapplicable. (Note that this will not in any way be a proof, but just the train of thought I believe to be the line of thinking Fermat may have used to construct his proper mathematical proof.)

My idea here is based upon physical/visual intuition, starting with why it works for n=2 (squares) and then why it cannot work for n=3 (cubes) and then that n>3 is necessarily more complex than n=3 thus cannot work either.

[Note that I will use lower case letters for the sides/roots and their uppercase letters to denote the areas or volumes.
Thus, the full equation is Z=Y+X, with X = x^n, resulting in z^n = y^n + x^n.
I also use (for n=2), dy = z - y, and Dy = 2(y(dy)) + dy^2,
and dx = z - x, and Dx = 2(x(dx)) + dx^2.
I'm sorry my dx and dy conflict with calculus notation but my notation means dx is "the difference between z and x" which is the same as "the length that must be added to x to equal z" and Dx is "total amount that must be added to X to get Z".
Therefore (for n=2), Dx = Y = 2(x(dx)) + dx^2, and Dy = X = 2(y(dy)) + dy^2.
]

For n=2, Z=Y+X works because (what can be visualized as a square) X can be "smushed" over two sides and their joining corner of (the other square) Y evenly, such that Z = Y + Dy = Y + 2(y(dy)) + dy^2. The term "2(y(dy))" is the amount that must be added along each of the two sides, and the term "dy^2" is the amount that must be added at the corner to complete the perfect square Z.

So, for example, 5^2 = 4^2 + 3^2 because both
3^2 = 9 = 2(4(1)) + 1^2 = 2(4) + 1 = 8 + 1, and
4^2 = 16 = 2(3(2)) + 2^2 = 2(6) + 4 = 12 + 4.
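The decomposition above is just (y + dy)^2 expanded, and it can be checked mechanically for a few Pythagorean triples (the triples below are my own illustrative choices):

```python
# With dy = z - y, the claim is z^2 = y^2 + 2*y*dy + dy^2,
# so x^2 must exactly cover the "smushed" part 2*y*dy + dy^2.
for (x, y, z) in [(3, 4, 5), (5, 12, 13), (8, 15, 17)]:
    dy = z - y
    assert z**2 == y**2 + 2*y*dy + dy**2  # always true: binomial expansion
    assert x**2 == 2*y*dy + dy**2         # true exactly because (x, y, z) is a triple
print("n=2 decomposition checks out")
```
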

Now, for n=3, we must visualize the situation where the cube X is smushed over the cube Y's three faces and its joining corner. (Now X=x^3 and Y=y^3.)

The equations for dx and dy are the same, but Dx and Dy have expanded by a dimension:
Dx = Y = 3(x^2)(dx) + 3(x)(dx^2) + dx^3, and likewise
Dy = X = 3(y^2)(dy) + 3(y)(dy^2) + dy^3.

The term "3(x^2)(dx)" is the amount that must be added to three faces of the cube X, the term "3(x)(dx^2)" is the amount that must be added along the three edges joining those three faces, and the term "dx^3" is the amount that must be added at the corner.
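Those three terms are just the binomial expansion of (x + dx)^3 minus x^3, which can be verified mechanically (the ranges below are arbitrary choices of mine):

```python
# With dx = z - x, check z^3 - x^3 == 3*x^2*dx + 3*x*dx^2 + dx^3
# over a grid of small values — it's an identity, not special to any triple.
for x in range(1, 40):
    for z in range(x + 1, 50):
        dx = z - x
        assert z**3 - x**3 == 3*x**2*dx + 3*x*dx**2 + dx**3
print("n=3 decomposition is an identity")
```
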

Now, I haven't the maths to prove why Dx and Dy for n=3 won't have integer solutions but my intuition says it has something to do with the fact that it's three dimensions and, therefore, a couple of odd numbers multiplying around in there (the first two terms) and the fact that there are only two cubes being smushed together to try and reconstitute another perfect cube. I also imagine Fermat could actually mathematically prove why it's impossible. Perhaps it can be shown that Dx and Dy cannot both have diophantine solutions. These are just guesses.

As for n>3, the terms (and physical/visual representations) will only become more complex and there will be still only two terms with which to reconstitute the hypercube.

Anyway, that's my intuition about the entire problem and I have to imagine that a proof that Fermat could easily intuit, yet that is (a bit?) too large to fit in the margin, must surely tread down a simple path, perhaps even one that relies on a physical/visual interpretation of what the equations can be likened to.

I look forward to this being eviscerated or flatly rejected, if appropriate, or at least corrected for inconsistencies. If it serves anyone in their exploration of this insidiously complex yet apparently simple-seeming problem, my joy would only grow. If my name would someday appear in a mathematical paper that a real mathematician produces as a result of this, well, that would be out of this world for this poverty-stricken math wannabe.

Your intuition is similar in spirit to Feynman's. For n>2, the number of plausible candidates is sparse, whereas for n<=2, there are many plausible candidates.

The proof Fermat hinted at was about the difference between squares. All whole numbers taken to a power greater than two (n^3) can be represented as the difference between two whole squares (x^2 - y^2). These differences can then be shown as the sum of consecutive odd numbers:

When you examine the odd number series that results from each base, you'll discover that there will always be a gap if you try and combine two odd number series together, which explains Fermat's little joke about margins. The same trick works for higher powers.

It's not that hard people. Stop believing everything you're told about how "hard" something is.

HINT:
The number of odd numbers in the series exactly matches the starting square base number

I'm working to understand this, but I can't seem to fit it together. Following Feynman's lead in this sort of thing, can you give me an explicit example of why the equation x^13+y^13=z^13 has no solutions? Or even just use your technique to explain why x^5+y^5=z^5 has no solutions?

It looks like you edited this comment, but I'm serious. I'm trying to understand your proof, but I'm having trouble seeing what the steps are for higher powers than 3 or 4. I already know the proofs for the cases n=3 and n=4, but I can't see how what you say works in the case, say, n=5, or n=13.

Seriously, can you walk us through the steps of why x^5+y^5=z^5 has no (non-trivial) solutions?

And to be fair, there are cases where, say, undergrads have proven significant results that had been outstanding for a long time. Proving that prime recognition is in P is one such case, but in that case they published a complete, clear paper. In this case I can't really see what you're saying you've done, or why it's true, which is why a walk-through of the case n=5 would be so helpful.

Thanks.

========

For anyone interested, this is what the comment used to say ...

The series of odd numbers must be consecutive and they simply are not when you add two different series together for all powers greater than two. It's okay. You'll get it.

1) a whole number, n, taken
to a power greater than two
2) can be represented as a
consecutive series of odd
numbers
3) where there will always be
a gap in the series between
consecutive base numbers,
for all p and p+1
4) therefore there will be
a gap for all p and p+n,
n>=1 combinations

I see it more like walking away knowing that I have checkmate whether the other person can see it or not. If you pay attention to the conditions that must necessarily exist, like that the difference between two whole squares can be represented as EITHER an integer base taken to a power higher than two, or the sum of a series of consecutive odd numbers, the number of terms equaling the original base number of the higher power equation.

Consecutive base numbers will necessarily alternate between even and odd. So even the closest base numbers still have a gap between their resulting odd number series, which only increases as the distance between base numbers increases.

'ox_n appears to be trying to prove (by a pigeonhole argument) that the equation x^n + y^n = z^n is unsolvable for _some z_. That's much weaker than proving that it is unsatisfiable at _every_ z.

> It's not that hard people. Stop believing everything you're told about how "hard" something is.

There are still many problems in physics and mathematics which are considered "hard" (e.g., dark energy, Riemann hypothesis, etc). Can we crack them by simply adopting your positive mindset?

I don't think the "you can do anything" mindset works in real life. It helps self-help book authors sell their stuff, but it's not a good strategy to live by. (Incidentally, this reminds me of Key & Peele's "You can fly" sketch).

What does work though is this: advanced formal education in a topic. Once you have that you can start thinking about how to solve some simple open problems. And if you are lucky and turn out to be extremely smart, you may be able to tackle more challenging problems. Some amount of self-confidence may also help you keep going, but it doesn't make you a genius overnight.

Simply adopting a mindset where things are 'not hard' is closer to delusion than it is to anything else.

In academia we often get emails from people who solved quantum gravity (e.g. using fire), show us how Einstein is wrong (e.g. using a pendulum), etc. I'm pretty sure they also convinced themselves to "Stop believing everything they're told about how "hard" something is".

Oh man, that reminds me of an experience I had in college. I was working with the aerospace department on their fusion reactor (I was just writing software to help them process data from it, not involved in the science itself). My boss kept getting calls from crackpots who'd go on and on and on about their bogus theories, and how they were being shut out of the mainstream by small minded fools, etc etc.

It was pretty frustrating. He was too nice a guy to tell them off or even cut them off quickly.

My advice to any crackpots who are really sure they're actually geniuses: Get into the stock market (with a SMALL investment). If you're as smart as you think you are, you can find an angle and turn $100 into $1,000,000 or more, and then if anything it'll be GOOD that nobody ever believed in you. I've run across arbitrage opportunities that would have made me fiendishly rich if I'd noticed them sooner myself, believe it or not. Just be careful and don't mess with box spreads.

No, but even those from "left field", if genuine, tend to take the time and care to write things up properly, to use the nomenclature of the field, to address obvious potential concerns up front.

If you're asserting something that's likely to encounter resistance, it's worth being clear and careful.

Previously discussed:

https://news.ycombinator.com/item?id=16041560

https://news.ycombinator.com/item?id=14940636

https://news.ycombinator.com/item?id=14355834

https://news.ycombinator.com/item?id=12018221

... and previously submitted without discussion:

https://news.ycombinator.com/item?id=17581023

https://news.ycombinator.com/item?id=15904199

Is the "past" link being deprecated or something?

Fair enough. Agree that the second quote doesn't illustrate my point, unlike the first one. Cheers!

This proof (or "plausibility argument") bugs me so much. Just because something thins out and becomes rare doesn't mean it doesn't exist.

As n gets bigger, the probability of n being a perfect square gets smaller and smaller. In the limit, the probability is zero.

Does this mean square numbers don't exist?

By Feynman's argument, you can prove that square numbers almost certainly keep on existing.

Roughly, it goes as such:

1) the probability of N being a perfect square is proportional to 1/sqrt(N).

2) For any N_0 arbitrarily high, if you integrate from N_0 to infinity the expression (1/sqrt(N) dN), you get infinity.

3) The expression in 2) is the "Feynman equivalent" of the expected number of square numbers above N_0.

So Feynman's non-proof actually turns out to give the right answer here too, despite not being a proof in this case either.
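Pinning down the constant (the density of squares near n is about 1/(2*sqrt(n))), the integral predicts sqrt(N) - sqrt(N0) squares in [N0, N], and that matches the exact count well; the bounds below are my own illustrative choices:

```python
import math

N0, N = 10_000, 1_000_000
exact = math.isqrt(N) - math.isqrt(N0 - 1)  # number of perfect squares in [N0, N]
predicted = math.sqrt(N) - math.sqrt(N0)    # ∫ 1/(2*sqrt(n)) dn from N0 to N
print(exact, predicted)  # off by at most a couple, even though each square is "rare"
```
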

Okay, let's pick something rarer. Rational numbers.

Many of us were fooled by Skewes's number:

https://en.wikipedia.org/wiki/Skewes%27s_number

There's no way to conclude that this exists via brute calculation. It's just inconceivably large and would have eluded any of Feynman's methods.

Edit: Removed braindead argument.

This isn't a proof.

That’s not what’s happening.

jordigh hasn't given support for that claim.

Is anyone still trying to come up with Fermat's original "truly marvelous proof"? Or have math folk talked themselves out of its possible existence?

_DON'T DOWN VOTE JUST BECAUSE YOU CAN'T DO MATH_

Can you please read the site guidelines and follow them?

https://news.ycombinator.com/newsguidelines.html

Fuck you ya sjw cunt. Take your pussy ass guidelines and shove them up your ass.

Do you really believe that:

(a) This constitutes a proof;

(b) This is the "proof" that Fermat had;

(c) Mathematicians missed this for over 350 years?

I'm not quite sure exactly what you are claiming.

It's completely arrogant to assume that, because it hasn't been solved by "better" people, I couldn't solve it.

========

For anyone interested, this is what the comment used to say: ... The series of odd numbers must be consecutive, and they simply are not when you add two different series together, for all powers greater than two. It's okay. You'll get it.

Copy this into wolfram alpha:

p=2, n=5, x=1/2(p+1)p^((n-1)/2), y=1/2(p-1)p^((n-1)/2)

Try changing the values and playing with it until you understand that there will always be x^2 - y^2 for all p^n, n>2.
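If it helps anyone playing along, the identity behind those formulas does check out: with x = (1/2)(p+1)p^((n-1)/2) and y = (1/2)(p-1)p^((n-1)/2), you get x^2 - y^2 = p^n whenever n is odd (so the exponent (n-1)/2 is an integer). A quick Python sketch (mine, not the parent's) you can use instead of Wolfram Alpha:

```python
# Check that x^2 - y^2 = p^n for x = (p+1)*p^((n-1)/2)/2, y = (p-1)*p^((n-1)/2)/2.
# Exact integer arithmetic; restricted to odd n so the exponent is whole.
def xy(p, n):
    assert n % 2 == 1, "formula needs odd n"
    h = p ** ((n - 1) // 2)
    x = (p + 1) * h // 2
    y = (p - 1) * h // 2
    return x, y

for p in range(2, 20):
    for n in (3, 5, 7, 9):
        x, y = xy(p, n)
        assert x * x - y * y == p ** n
print("x^2 - y^2 = p^n holds for all tested p, n")
```

Algebraically this is just x^2 - y^2 = p^(n-1) * ((p+1)^2 - (p-1)^2)/4 = p^(n-1) * p = p^n, so every odd power of p is a difference of two squares. Of course, that on its own says nothing about x^n + y^n = z^n.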

OK, so I'm taking it from your comment that you really do believe this constitutes a proof. Thanks for the reply.

I see it more like walking away knowing that I have checkmate, whether the other person can see it or not. You just have to pay attention to the conditions that must necessarily exist: for instance, that the difference between two whole squares can be represented as either an integer base taken to a power higher than two, or the sum of a series of consecutive odd numbers, with the number of terms equaling the original base number of the higher-power equation.

>about: Fuck you, hater.

Oh, you're that guy.

I got to call them like I see them. Thanks for announcing that you stepped on that land mine.

This argument puzzled me, but unfortunately this is as far as I got:

~~Lemma 1~~:

Proof: z^n = x^2 - y^2 = (x+y)(x-y), leading to the system of equations

x + y = z^(n-1)

x - y = z

which may be solved for x and y:

x = (z^(n-1) + z)/2

y = x - z

QED.
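Lemma 1 is easy to sanity-check numerically: for odd z (so that (z^(n-1) + z)/2 is an integer), that choice of x and y really does give x^2 - y^2 = z^n. A quick sketch of my own:

```python
# Check Lemma 1: x = (z^(n-1) + z)/2, y = x - z  =>  x^2 - y^2 = z^n.
# Restricted to odd z so the division by 2 is exact.
def lemma1(z, n):
    x = (z ** (n - 1) + z) // 2
    y = x - z
    return x, y

for z in range(3, 30, 2):   # odd z
    for n in range(2, 8):
        x, y = lemma1(z, n)
        assert x * x - y * y == z ** n
print("Lemma 1 verified on all tested cases")
```

The check is just restating the factorization: x + y = z^(n-1) and x - y = z, so (x+y)(x-y) = z^n.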

~~Lemma 2~~:

By induction: n^2 = (n-1)^2 + (2(n-1)+1) = \sum_{k=0}^{n-1}(2k + 1)

QED.
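Lemma 2 (every square is the sum of the first n odd numbers) can likewise be checked in one line, a sketch of my own:

```python
# Check Lemma 2: n^2 = 1 + 3 + ... + (2n-1) = sum_{k=0}^{n-1} (2k+1).
for n in range(1, 100):
    assert n * n == sum(2 * k + 1 for k in range(n))
print("Lemma 2 verified for n < 100")
```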

~~~Fermat's Last Theorem~~~:

Proof: Without loss of generality, assume a > b, then rewrite z^n as a sum of sequential odd numbers indexed from b:

z^n = a^2 - b^2 = \sum_{k=b}^{a-1}(2k+1)

x^n and y^n can similarly be written as sums of sequential odd numbers:

x^n = c^2 - d^2 = \sum_{k=d}^{c-1}(2k+1)

y^n = e^2 - f^2 = \sum_{k=f}^{e-1}(2k+1)

By substitution to the Theorem's equation:

\sum_{k=b}^{a-1}(2k+1) = \sum_{k=d}^{c-1}(2k+1) + \sum_{k=f}^{e-1}(2k+1)

This holds if and only if there are no gaps in the bounds of summation on the right side, so d = b, c = f, and e = a. But then

a^2 - b^2 = (f^2 + (-a^2))^n + (b^2 + (-f^2))^n

and this is certainly not true by the binomial theorem. We've reached a contradiction so QED.

^^ This is where I'm stuck. I don't actually know why that would not be true by the binomial theorem? It seems like simple expansion should do it.

Off topic: why doesn't HN support LaTeX?

Likely because MathML isn't widely supported, and has even been removed from Chrome.

MathJax works an absolute treat.

Except on Android where the math takes up more horizontal space than the layout engine thinks it should, causing overlap with text after the math.

I'll pass that report on to them - have you reported it already? In which browser is that happening?

Sounds interesting, but not sure what you mean by:

> you'll discover that there will always be a gap if you try and combine two odd number series together

Can you elaborate?

ox_n's last theorem

Consecutive base numbers will necessarily alternate between even and odd. So even the closest base numbers still have a gap between their resulting odd number series, which only increases as the distance between base numbers increases.

Still don't get it. What do you mean by "base numbers" and what do you mean by "alternate between even and odd"?

'ox_n appears to be trying to prove (by a pigeonhole argument) that the equation x^n + y^n = z^n is unsolvable for _some z_. That's much weaker than proving that it is unsatisfiable for _every z_.

> It's not that hard people. Stop believing everything you're told about how "hard" something is.

There are still many problems in physics and mathematics which are considered "hard" (e.g., dark energy, Riemann hypothesis, etc). Can we crack them by simply adopting your positive mindset?

What other mindset do you see working better?

I don't think the "you can do anything" mindset works in real life. It helps self-help book authors sell their stuff, but it's not a good strategy to live by. (Incidentally, this reminds me of Key & Peele's "You can fly" sketch).

What does work though is this: advanced formal education in a topic. Once you have that you can start thinking about how to solve some simple open problems. And if you are lucky and turn out to be extremely smart, you may be able to tackle more challenging problems. Some amount of self-confidence may also help you keep going, but it doesn't make you a genius overnight.

Simply going to a mindset where things are 'not hard' is closer to delusion than it is to anything else.

In academia we often get emails from people who have solved quantum gravity (e.g. using fire), who show us how Einstein is wrong (e.g. using a pendulum), etc. I'm pretty sure they also convinced themselves to "stop believing everything they're told about how 'hard' something is".

Oh man, that reminds me of an experience I had in college. I was working with the aerospace department on their fusion reactor (I was just writing software to help them process data from it, not involved in the science itself). My boss kept getting calls from crackpots who'd go on and on and on about their bogus theories, and how they were being shut out of the mainstream by small minded fools, etc etc.

It was pretty frustrating. He was too nice a guy to tell them off or even cut them off quickly.

My advice to any crackpots who are really sure they're actually geniuses: Get into the stock market (with a SMALL investment). If you're as smart as you think you are, you can find an angle and turn $100 into $1,000,000 or more, and then if anything it'll be GOOD that nobody ever believed in you. I've run across arbitrage opportunities that would have made me fiendishly rich if I'd noticed them sooner myself, believe it or not. Just be careful and don't mess with box spreads.

So what you're saying is... we should ignore letters from patent clerks?

No, but even those from "left field", if genuine, tend to take the time and care to write things up properly, to use the nomenclature of the field, to address obvious potential concerns up front.

If you're asserting something that's likely to encounter resistance, it's worth being clear and careful.

> The same trick works for higher powers.

can you demonstrate?

Yes.

x and y will be a multiple of the base number.

I don't get it. How do you go from the difference of squares turning into a sequence of odd numbers, to the absence of a power of some integer?