I was confused by this article. It says that for every k, all sufficiently large sets contain an arithmetic progression of length k.

But suppose we set k = 3, and take the set {1, 2, 4, 8, ..., 2^n}. Then no matter how big n is, we have no arithmetic progression of length 3.

Looking into this, it appears that the article misstated the result, called Szemerédi's theorem: “if the positive integers are partitioned into finitely many classes, then at least one of these classes contains arbitrarily long arithmetic progressions”. https://www.semanticscholar.org/paper/A-new-proof-of-Szemeré...

The theorem you quoted is Van der Waerden's theorem [0], not Szemerédi's. It is significantly easier to prove than Szemerédi's result.

The article correctly states Szemerédi's theorem, if you interpret "sufficiently large" to mean "positive density": For every density delta > 0 and length k, there exists an n0 such that every subset of {1, ..., n} (n > n0) of density at least delta (that is, at least delta * n elements) contains an arithmetic progression of length k.

For n large, your set has density roughly (log N)/N (where N = 2^n), which is not bounded from below by any constant delta > 0; this is why Szemerédi's theorem does not apply. Speaking of your example, you might be interested in Behrend's construction [1], a way of constructing "large" sets (for another definition of large) with no 3-term arithmetic progression.
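The powers-of-two example is easy to check by brute force. Here is a minimal Python sketch (the function name has_ap and the naive O(n^2 * k) approach are mine, purely for illustration):

```python
# Naive brute-force check for a k-term arithmetic progression in a finite set.
def has_ap(s, k):
    """Return True if the set s contains an arithmetic progression of length k."""
    elems = sorted(s)
    values = set(s)
    for i, a in enumerate(elems):
        for b in elems[i + 1:]:
            d = b - a  # candidate common difference
            # Does a, a + d, a + 2d, ..., a + (k-1)d lie entirely in s?
            if all(a + j * d in values for j in range(k)):
                return True
    return False

powers_of_two = {2**i for i in range(20)}   # {1, 2, 4, ..., 2^19}
print(has_ap(powers_of_two, 3))             # False: no 3-term AP, as noted above
print(has_ap(set(range(1, 100)), 3))        # True: dense intervals are full of them
```

(Brute force only illustrates particular finite sets, of course; Szemerédi-type results are statements about all sufficiently large sets, which no finite check can establish.)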

Szemerédi's theorem easily implies Van der Waerden's theorem because if you colour the integers with finitely many colours, you can apply Szemerédi's theorem to the largest colour class.

[0] https://en.wikipedia.org/wiki/Van_der_Waerden%27s_theorem [1] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1078964/pdf/pna...

There's a lot of interesting trivia related to this. For instance, the sequence of perfect squares {1,4,9,...} contains 3-term subsequences that are in arithmetic progression (e.g., {1,25,49}). However, it does not contain any 4-term such subsequences. And if you consider the sequence of nth powers, for n > 2, it does not contain any arithmetic progressions at all. This claim turns out to be very similar to Fermat's Last Theorem and can be established with similar techniques. (See https://math.berkeley.edu/~ribet/Articles/acta.pdf, with alpha = 1. Note that this is the same Ribet of Ribet's theorem, which established the link between modularity of elliptic curves and FLT, setting the stage for Wiles.)

The example given in the article stipulates that the sets have bounded range: for a given size N, the set’s values must lie between 0 and mN for some m > 1.

The results therefore depend on both m and N.

Equivalently, you can pick the length of your range first and then pick a ‘density’ d, where d * length is the size of the set (this matches the formulation given in the article).

> catch a pattern by figuring how to avoid it

In my own research, the main focus is to take a (usually known) property and to rephrase it so that the new statement is self-dual. The analogy would be that you "understand a pattern only if you know how to phrase it in a self-dual context".

None of these approaches necessarily presupposes or subsumes the others, but each hints at your outlook and interests. That is why independent discoveries are usually "clearly" independent, in the sense that the surrounding frameworks are usually different.

This happened for example with Lawvere and the eventual standard definition of a topos; so too with Grothendieck toposes and the eventual standard definition of a topos. Newton and Leibniz would be the more accessible example.

Bounds for sets with no polynomial progressions

Abstract:

Let P_1, ..., P_m ∈ Z[y] be polynomials with distinct degrees, each having zero constant term. We show that any subset A of {1, ..., N} with no nontrivial progressions of the form x, x + P_1(y), ..., x + P_m(y) has size |A| ≪ N/(log log N)^(c_{P_1,...,P_m}). Along the way, we prove a general result controlling weighted counts of polynomial progressions by Gowers norms.

[-1] https://arxiv.org/pdf/1909.00309.pdf

These observations have a bit of the flavor of the birthday paradox and pigeonhole principle: if the size of a certain object is at least so and so, then a certain property is guaranteed. If we have more than 365 people, then a birthday has to repeat.
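The pigeonhole analogy can be made concrete in a few lines; a tiny Python sketch (the function names are mine):

```python
from collections import Counter

def must_repeat(num_people, num_days=365):
    """Pigeonhole guarantee: with more people than days, some birthday must repeat."""
    return num_people > num_days

def has_repeat(birthdays):
    """Check an actual assignment of birthdays for a repeat."""
    return any(count > 1 for count in Counter(birthdays).values())

# Any assignment of 366 people to 365 days produces a repeat:
print(must_repeat(366))                           # True
print(has_repeat([i % 365 for i in range(366)]))  # True for this particular assignment
```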

Also, the Mean Value Theorem comes to mind for some reason.

The mean value theorem also guarantees existence (of a tangent line parallel to the secant).

I did not read the paper, but the definition seems a bit weird to me: that adding a constant to a geometric progression makes it a polynomial progression.

The abstract of the paper (https://arxiv.org/abs/1909.00309) says

> Let P1, … , Pm ∈ Z[y] be polynomials with distinct degrees ... progressions of the form x, x+P1(y), … , x+Pm(y)

Such progressions are a generalization of the "shifted geometric" progressions described in the article.

In particular, let P1 to Pm be y^i for i=1,...,m. Consider, say, x=0 and y = 3. Then the above gives us:

0, 3, 9, 27, ... , 3^m

Which is (almost) a geometric progression.

We can "shift" it by setting e.g. x = 1.

Though of course the polynomials can be much more general than just y^i.
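The construction above can be written out directly; a small Python sketch (the names polynomial_progression and monomials are mine):

```python
# Sketch of the progression x, x + P_1(y), ..., x + P_m(y) from the abstract,
# specialized to P_i(y) = y^i as in the comment above.
def polynomial_progression(x, y, polys):
    """Return [x, x + P_1(y), ..., x + P_m(y)] for a list of callables polys."""
    return [x] + [x + p(y) for p in polys]

m = 4
monomials = [lambda y, i=i: y**i for i in range(1, m + 1)]  # P_i(y) = y^i

print(polynomial_progression(0, 3, monomials))  # [0, 3, 9, 27, 81]: almost geometric
print(polynomial_progression(1, 3, monomials))  # [1, 4, 10, 28, 82]: the "shifted" version
```

Swapping in other polynomials for `monomials` gives the more general progressions the paper actually treats.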

The article says:

> In the type of polynomial progressions studied by Peluse, you still pick a starting value, but now you add powers of another number. For example: 2, 2 + 3^1, 2 + 3^2, 2 + 3^3, 2 + 3^4. Or 2, 5, 11, 29, 83. (Her progressions also had only one term for each power, a requirement that makes them more manageable.)

I'm not entirely sure how to interpret that, since I don't see any reference to "one term for each power" in the abstract of the paper, nor in a quick glance through it.

When you say "weird definition", do you mean it's a strange concept, or a strange name for that concept?

Edit: Or do you mean that "polynomial progression" has an existing definition, and this definition is equivalent, but is a strange way of stating it? Or do you mean the definition as stated in the article is wrong / differs from the standard definition?

I'd expect a polynomial progression to follow the definition of a polynomial:

x^n + c, where n and c are fixed and x goes 0, 1, 2, ... along the progression,

instead of

n^x + c

...the latter I'd call an exponential progression.

Possibly that it doesn't seem to warrant an entirely new name? "Shifted geometric progression" or something might do just as well.

See my other response to rini17 for more info.

But yeah, seems to me these "shifted geometric progressions" are actually just a special case of a more general concept of a "polynomial progression".

I am a Math student and I am curious about the topics in this article.

I am curious which branches of Mathematics the paper is related to. It's a result about polynomials, so Algebra and Analysis?

This area is usually known as "additive combinatorics".

https://en.wikipedia.org/wiki/Arithmetic_combinatorics

Thanks! The Math in that paper is mind-boggling! I hope to get to a point where I can understand it someday.

Very naive question: the paper uses integrals (perhaps Lebesgue --- can you tell how naive I am yet?) for proving some lemmas etc. As a beginner it's hard for me to understand how integrals come into the picture in number theory.

Not that I plan to do so, but can you tell me which fields of Math (and specifically which textbooks) and techniques I would need to be familiar with to understand the paper?

E.g., Abstract Algebra (Dummit & Foote), graduate-level Functional Analysis, Combinatorics, etc.?

This will give me a much clearer picture of how the various fields of Mathematics come into play.

Thanks again!

If you want to understand Peluse's paper, I think the most directly relevant background reading would be Tao and Vu's Additive Combinatorics.

Dummit and Foote's Abstract Algebra is an excellent book -- even if not very directly relevant to additive combinatorics. Analysis background would be useful, such as to be found in Stein and Shakarchi's series. Some introductory combinatorics would also be good (e.g. Stanley's Enumerative Combinatorics, or Van Lint and Wilson).

Finally, analytic number theory would also be good to learn. For example, you might read Davenport's Multiplicative Number Theory. (Especially if you want to see integrals come into the picture.)

Thank you! This is exactly what I was looking for. This gives me a sense as to where I stand relative to understanding this paper.

> Peluse answered that question in a counterintuitive way — by thinking about exactly what it would take for a set of numbers not to contain the pattern you’re looking for.

Proof by contradiction isn't that counterintuitive.

And it's utterly bizarre that this article doesn't explain that this technique is proof by contradiction, or give an example of it. The proof that the square root of two can't be expressed as a ratio of integers (hence is not-ratio-able, or irrational) is approachable by high-school freshmen, and would add so much to the article.
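For reference, the classic argument alluded to above fits in a few lines:

```latex
% Suppose toward contradiction that \sqrt{2} = p/q with p/q in lowest terms.
\sqrt{2} = \tfrac{p}{q} \;\Rightarrow\; p^2 = 2q^2
% Then p^2 is even, hence p is even: write p = 2r.
p = 2r \;\Rightarrow\; 4r^2 = 2q^2 \;\Rightarrow\; q^2 = 2r^2
% So q is even as well, contradicting that p/q was in lowest terms.
```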

When I was coming up with problems and solving them with a math friend, I learned that proof by contradiction was perhaps the most common way to prove things. It’s strange to people who don’t work with math every day; to everyone else it’s second nature.

It's not just math; it's a pretty basic concept in philosophy, or any area where critical thinking is required.

To me it does not seem like proof by contradiction, or at least not the straightforward way contradictions are usually used in proofs. It just thinks about the problem in a complementary (as in complement sets...) way.

> hence is not-ratio-able, or irrational

TIL !

Interestingly, the etymology seems to go:

1. ἄλογος (alogos, Greek for unsayable or unreasonable)

2. irrationalis (Latin translation of ἄλογος)

3. rationalis (Latin backformation)

4. irrational/rational (English translation)

5. ratio (English backformation)

Steps 3 and 5, the backformations, are perhaps in the opposite direction one would expect.

https://english.stackexchange.com/a/218079