## How to start a paper?

Starting a paper is easy. That is, if you don’t care for the marketing, don’t want to be memorable, and just want to get on with the story and quickly communicate what you have proved. Fair enough.

But that only works when your story is very simple, as in “here is a famous conjecture which we solve in this paper”. You are implicitly assuming that the story of the conjecture has been told elsewhere, perhaps many times, so that the reader is ready to see it finally resolved. But if your story is more complicated, this “get to the point” approach doesn’t really work (and yes, I argue in this blog post and this article that there is always a story). Essentially, you need to prepare the reader for what’s to come.

In my “*How to write a clear math paper*” (see also my blog post) I recommend writing the *Foreword* — a paragraph or two devoted to the philosophy underlying your work, or a high-level explanation of the key idea in your paper, before you proceed to state the main result:

Consider putting in the Foreword some highly literary description of what you are doing. If it’s beautiful or sufficiently memorable, it might be quoted in other papers, sometimes on a barely related subject, and bring some extra clicks to your work. Feel free to discuss the big picture, NSF project outline style, mention some motivational examples in other fields of study, general physical or philosophical principles underlying your work, etc. There is no other place in the paper to do this, and I doubt referees would object if you keep your Foreword under one page. For now such discussions are relegated to surveys and monographs, which is a shame since as a result some interesting perspectives of many people are missing.

Martin Krieger has a similar idea, which he discusses at length in his 2018 *AMS Notices* article *Don’t Just Begin with “Let A be an algebra…”*. This convinced me that I really should follow his (and my own) advice.

So recently I took stock of my opening lines (usually joint with coauthors), and found a mixed bag. I decided to list some of them below for your amusement. I included only those which are less closely tied to the subject matter of the article, so they might appeal to a broader audience. I am grateful to all my collaborators who supported, or at least tolerated, this practice.

### Combinatorics matters

Combinatorics has always been a battleground of tools and ideas. That’s why it’s so hard to do, or even define.

Combinatorial inequalities (2019)

The subject of enumerative combinatorics is both classical and modern. It is classical, as the basic counting questions go back millennia; yet it is modern in the use of a large variety of the latest ideas and technical tools from across many areas of mathematics. The remarkable successes from the last few decades have been widely publicized; yet they come at a price, as one wonders if there is anything left to explore. In fact, are there enumerative problems that cannot be resolved with existing technology?

Complexity problems in enumerative combinatorics (2018), see also this blog post.

Combinatorial sequences have been studied for centuries, with results ranging from minute properties of individual sequences to broad results on large classes of sequences. Even just listing the tools and ideas, which range from algebraic to bijective to probabilistic and number theoretic, can be exhausting. The existing technology is so strong, it is rare for an open problem to remain unresolved for more than a few years, which makes the surviving conjectures all the more interesting and exciting.

Pattern avoidance is not P-recursive (2015), see also this blog post.

In Enumerative Combinatorics, the results are usually easy to state. Essentially, you are counting the number of certain combinatorial objects: exactly, asymptotically, bijectively or otherwise. Judging the importance of the results is also relatively easy: the more natural or interesting the objects are, and the stronger or more elegant is the final formula, the better. In fact, the story or the context behind the results is usually superfluous since they speak for themselves.

Hook inequalities (2020)

### Proof deconstruction

There are two schools of thought on what to do when an interesting combinatorial inequality is established. The first approach would be to treat it as a tool to prove a desired result. The inequality can still be sharpened or generalized as needed, but this effort is aimed at the applications, not at the inequality per se.

The second approach is to treat the inequality as a result of importance in its own right. The emphasis then shifts to finding the “right proof” in an attempt to understand, refine or generalize it. This is where the nature of the inequality intervenes — when both sides count combinatorial objects, the desire to relate these objects is overpowering.

Effective poset inequalities (2022)

There is more than one way to explain a miracle. First, one can show how it is made, a step-by-step guide to perform it. This is the most common yet the least satisfactory approach as it takes away the joy and gives you nothing in return. Second, one can investigate away every consequence and implication, showing that what appears to be miraculous is actually both reasonable and expected. This takes nothing away from the miracle except for its shining power, and puts it in the natural order of things. Finally, there is a way to place the apparent miracle as a part of the general scheme. Even, or especially, if this scheme is technical and unglamorous, the underlying pattern emerges with the utmost clarity.

Hook formulas for skew shapes IV (2021)

In Enumerative Combinatorics, when it comes to fundamental results, one proof is rarely enough, and one is often on the prowl for a better, more elegant or more direct proof. In fact, there is a widespread belief in a multitude of “proofs from the Book”, rather than a singular best approach. The reasons are both cultural and mathematical: different proofs elucidate different aspects of the underlying combinatorial objects and lead to different extensions and generalizations.

Hook formulas for skew shapes II (2017)

### Hidden symmetries

The phrase “hidden symmetries” in the title refers to coincidences between the numbers of seemingly different (yet similar) sets of combinatorial objects. When such coincidences are discovered, they tend to be fascinating because they reflect underlying algebraic symmetries — even when the combinatorial objects themselves appear to possess no such symmetries.

It is always a relief to find a simple combinatorial explanation of hidden symmetries. A direct bijection is the most natural approach, even if sometimes such a bijection is both hard to find and to prove. Such a bijection restores order to a small corner of an otherwise disordered universe, suggesting we are on the right path in our understanding. It is also an opportunity to learn more about our combinatorial objects.

Bijecting hidden symmetries for skew staircase shapes (2021)

Hidden symmetries are pervasive across the natural sciences, but are always a delight whenever discovered. In Combinatorics, they are especially fascinating, as they point towards both advantages and limitations of the tools. Roughly speaking, a combinatorial approach strips away much of the structure, be it algebraic, geometric, etc., while allowing a direct investigation often resulting in an explicit resolution of a problem. But this process comes at a cost — when the underlying structure is lost, some symmetries become invisible, or “hidden”.

Occasionally this process runs in reverse. When a hidden symmetry is discovered for a well-known combinatorial structure, it is as surprising as it is puzzling, since it points to a rich structure which is yet to be understood (sometimes uncovered many years later). This is the situation of this paper.

Hidden symmetries of weighted lozenge tilings (2020)

### Problems in Combinatorics

How do you approach a massive open problem with countless cases to consider? You start from the beginning, of course, trying to resolve either the most natural, the most interesting, or the simplest yet out of reach special cases. For example, when looking at the billions and billions of stars contemplating the immense challenge of celestial cartography, you start with the *closest* (Alpha Centauri and Barnard’s Star), the *brightest* (Sirius and Canopus), or the *most useful* (Polaris aka the North Star), but not with the galaxy far, far away.

Durfee squares, symmetric partitions and bounds on Kronecker coefficients (2022)

Different fields have different goals and different open problems. Most of the time, fields peacefully coexist enriching each other and the rest of mathematics. But occasionally, a conjecture from one field arises to present a difficult challenge in another, thus exposing its technical strengths and weaknesses. The story of this paper is our effort in the face of one such challenge.

Kronecker products, characters, partitions, and the tensor square conjectures (2016)

It is always remarkable, and even a little suspicious, when a nontrivial property can be proved for a large class of objects. Indeed, this says that the result is “global”, i.e. the property is a consequence of the underlying structure rather than of individual objects. Such results are even more remarkable in combinatorics, where the structures are weak and the objects are plentiful. In fact, many reasonable conjectures in the area fail under experiments, while some are ruled out by theoretical considerations.

Log-concave poset inequalities (2021)

Sometimes a conjecture is more than a straightforward claim to be proved or disproved. A conjecture can also represent an invitation to understand a certain phenomenon, a challenge to be confirmed or refuted in every particular instance. Regardless of whether such a conjecture is true or false, the advances toward resolution can often reveal the underlying nature of the objects.

On the number of contingency tables and the independence heuristic (2022)

### Combinatorial Interpretations

Finding a combinatorial interpretation is an everlasting problem in Combinatorics. Having combinatorial objects assigned to numbers brings them depth and structure, makes them alive, sheds light on them, and allows them to be studied in a way that would not be possible otherwise. Once combinatorial objects are found, they can be related to other objects via bijections, while the numbers’ positivity and asymptotics can then be analyzed.

What is in #P and what is not? (2022)

Traditionally, Combinatorics works with numbers. Not with structures, relations between the structures, or connections between the relations — just numbers. These numbers tend to be nonnegative integers, presented in the form of some exact formula or disguised as probability. More importantly, they always count the number of some combinatorial objects.

This approach, with its misleading simplicity, led to a long series of amazing discoveries, too long to be recounted here. It turns out that many interesting combinatorial objects satisfy some formal relationships allowing for their numbers to be analyzed. More impressively, the very same combinatorial objects appear in a number of applications across the sciences.

Now, as structures are added to Combinatorics, the nature of the numbers and our relationship to them changes. They no longer count something explicit or tangible, but rather something ephemeral or esoteric, which can only be understood by invoking further results in the area. Even when you think you are counting something combinatorial, it might take a theorem or even a whole theory to realize that what you are counting is well defined.

This is especially true in Algebraic Combinatorics where the numbers can be, for example, dimensions of invariant spaces, weight multiplicities or Betti numbers. Clearly, all these numbers are nonnegative integers, but as defined they do not count anything per se, at least in the most obvious or natural way.

What is a combinatorial interpretation? (2022)

### Covering all bases

It is a truth universally acknowledged, that a combinatorial theory is often judged not by its intrinsic beauty but by the examples and applications. Fair or not, this attitude is historically grounded and generally accepted. While eternally challenging, this helps to keep the area lively, widely accessible, and growing in unexpected directions.

Hook formulas for skew shapes III (2019)

In the past several decades, there has been an explosion in the number of connections and applications between Geometric and Enumerative Combinatorics. Among those, a number of new families of “combinatorial polytopes” were discovered, whose volume has a combinatorial significance. Still, whenever a new family of *n*-dimensional polytopes is discovered whose volume is a familiar integer sequence (up to scaling), it feels like a “minor miracle”, a familiar face in a crowd in a foreign country, a natural phenomenon in need of an explanation.

Triangulations of Cayley and Tutte polytopes (2013)

The problem of choosing one or a few objects among the many has a long history and has probably existed since the beginning of the human era (e.g. “Choose twelve men from among the people”, Joshua 4:2). Historically this choice was mostly rational, and random choice was considered to be a bad solution. Times have changed, however. [..] In many cases a random solution has become desirable, if not the only possibility. Which means that it’s about time we understand the nature of a random choice.

When and how n choose k (1996)

### Books are ideas

In his famous 1906 “white suit” speech, Mark Twain recalled a meeting before the House of Lords committee, where he argued in favor of perpetual copyright. According to Twain, the chairman of the committee, with “some resentment in his manner,” countered: “What is a book? A book is just built from base to roof on ideas, and there can be no property in it.”

Sidestepping the copyright issue, the unnamed chairman had a point. In the year 2021, in the middle of the pandemic, books are ideas. They come in a variety of electronic formats and sizes, they can be “borrowed” from the “cloud” for a limited time, and are more ephemeral than long lasting. Clinging to the bygone era of safety and stability, we just keep thinking of them as sturdy paper volumes.

When it comes to math books, the ideas are fundamental. Really, we judge them largely based on the ideas they present, and we are willing to sacrifice both time and effort to acquire these ideas. In fact, as a literary genre, math books get away with a slow uninventive style, dull technical presentation, anticlimactic ending, and no plot to speak of. The book under review is very different. [..]

See this book review and this blog post (2021).

**Warning**: This post is not meant to be writing advice. The examples I give are merely for amusement purposes and are definitely not to be emulated. I am happy with some of these quotes and a bit ashamed of others. Upon reflection, the style is overly dramatic, most likely because I am overcompensating for something. But hey — if you are still reading this, you probably enjoyed it…

## Why you shouldn’t be too pessimistic

In our math research we make countless choices. We choose a problem to work on, decide whether its claim is true or false, what tools to use, which earlier papers to study that might prove useful, whom to collaborate with, which computer experiments might be helpful, etc. Choices, choices, choices… Most of our choices are private. Others are public. This blog post is about wrong public choices that I made by misjudging some conjectures and being overly pessimistic.

#### The meaning of conjectures

As I have written before, conjectures are crucial to the development of mathematics and to my own work in particular. The concept itself is difficult, however. While traditionally conjectures are viewed as some sort of “*unproven laws of nature*”, that comparison is wildly misleading, as many conjectures are descriptive rather than quantitative. To see this, note the stark contrast with experimental physics: many mathematical conjectures are not particularly testable, yet remain quite interesting. For example, if someone conjectures that there are infinitely many *Fermat primes*, the only way to dissuade such a person is to actually disprove the claim.

There is also an important social aspect of conjecture making. A person who poses a conjecture earns a certain clairvoyance respected by other people in the area. Predictions are never easy, especially ones of a precise technical nature, so some bravery or self-assuredness is required. Note that social capital is spent every time a conjecture is posed. In fact, a lot of it is lost when the conjecture is refuted, you come out even if it’s proved relatively quickly, and you gain only if the conjecture becomes popular or is proved, possibly many years later. There is also a “*boy who cried wolf*” aspect for people who make too many conjectures of dubious quality — people will just tune out.

Now, for the person working on a conjecture, there is also a *betting aspect* one cannot ignore. As in: are you sure you are working in the right direction? Perhaps the conjecture is simply *false* and you are wasting your time… I wrote about this before in the post linked above, and the life/career implications for the solver are obvious. Success in solving a well-known conjecture is often regarded much more highly than a comparable result nobody asked about. This may seem unfair, and there is a bit of celebrity culture here. Think about it this way — two lead actors can have similar acting skills, but the one who is a star will usually attract a much larger audience…

#### Stories of conjectures

Not unlike what happens to papers and mathematical results, conjectures also have stories worth telling, even if these stories are rarely discussed at length. In fact, these “**conjecture stories**” fall into a few types. This is a little bit similar to the “*types of scientific papers*” meme, but more detailed. Let me list a few scenarios, from the least to the most mathematically helpful:

**(1)** *Wishful thinking*. Say, you are working on a major open problem. You realize that a famous conjecture **A** follows from a combination of three conjectures **B**, **C** and **D**, whose sole motivation is their applications to **A**. Some of these smaller conjectures are beyond the existing technology in the area and cannot be checked computationally beyond a few special cases. You then declare this to be your “*program*” and prove a small special case of **C**. Somebody points out that **D** is trivially false. You shrug and replace it with a weaker **D’** which suffices for your program but is harder to disprove. Somebody writes a long state-of-the-art paper disproving **D’**. You shrug again and suggest an even weaker conjecture **D”**. Everyone else shrugs and moves on.

**(2)** **Reconfirming long held beliefs**. You are working in a major field of study, aiming to prove a famous open problem **A**. Over the years you proved a number of special cases of **A** and became one of the leaders of the area. You are very optimistic about **A**, discussing it in numerous talks and papers. Suddenly **A** is disproved in some esoteric situations, undermining the motivation of much of your older and ongoing work. So you propose a weaker conjecture **A’** as a replacement for **A**, in an effort to salvage both the field and your reputation. This makes everyone in the area happy, and they completely ignore the disproof of **A** from this point on, pretending it’s completely irrelevant. Meanwhile, they replace **A** with **A’** in all subsequent papers and beamer talk slides.

**(3)** **Accidental discovery**. In your ongoing work you stumble upon a coincidence. It seems all objects of a certain kind have some additional property making them “*nice*”. You are clueless why that would be true, since being *nice* belongs to another area **X**. Being *nice* is also too abstract to be checked easily on a computer. You consult a colleague working in **X** on whether this is obvious/plausible/can be proved, and receive No/Yes/Maybe answers to these three questions. You are either unable to prove the property, or uninterested in the problem, or don’t know much about **X**. So you mention it in the *Final Remarks* section of your latest paper, in the vain hope that somebody reads it. For a few years, every time you meet somebody working in **X** you mention your “nice conjecture” to them, so much that people laugh at you behind your back.

**(4)** **Strong computational evidence**. You are doing computer experiments related to your work. Suddenly certain numbers appear to have an unexpectedly nice formula or generating function. You check with the OEIS, and the sequence is indeed there, but not with the meaning you wanted. You use the “*scientific method*” to compute a few more terms, and they indeed support your conjectural formula. Convinced this is not an instance of the “*strong law of small numbers*”, you state the formula as a conjecture.
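A classic cautionary example behind the “strong law of small numbers” (my illustration, not from any paper quoted here) is easy to reproduce: the number of regions into which all chords between *n* points in general position on a circle divide the disk matches 2^{n−1} for *n* ≤ 5, and then the pattern breaks. A minimal Python sketch:

```python
from math import comb

def circle_regions(n: int) -> int:
    # Regions of a disk cut by all chords between n points in general
    # position on the circle: C(n,4) + C(n,2) + 1 (a known exact formula).
    return comb(n, 4) + comb(n, 2) + 1

for n in range(1, 8):
    guess = 2 ** (n - 1)  # the "obvious" conjecture from the first five terms
    print(n, circle_regions(n), circle_regions(n) == guess)
# the pattern 1, 2, 4, 8, 16 breaks at n = 6, where the count is 31, not 32
```

Five matching terms look like overwhelming evidence; the sixth term refutes the guess, which is exactly the trap the scenario above warns about.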

**(5)** **Being contrarian**. You think deeply about a famous conjecture **A**. Not only do you realize that there is no way one can approach **A** in full generality, but also that it contradicts some intuition you have about the area. However, **A** was stated by a very influential person *N*, and many people believe in **A**, proving it in a number of small special cases. You want to state a **non-A** conjecture, but realize the inevitable PR disaster of people directly comparing you to *N*. So you either state that you don’t believe in **A**, or that you believe in a conjecture **B** which is either slightly stronger or slightly weaker than **non-A**, hoping that history will prove you right.

**(6)** **Being inspirational**. You think deeply about the area and realize that there is a fundamental principle underlying certain structures in your work. Formalizing this principle requires a great deal of effort and results in a conjecture **A**. The conjecture leads to a large body of work by many people, and even some counterexamples in esoteric situations, leading to various fixes such as **A’**. But at that point **A’** is no longer the goal but more of a direction in which people work, proving a number of **A**-related results.

Obviously, there are many other possible stories, while some stories are a mixture of several of these.

#### Why do I care? Why now?

In the past few years I’ve been collecting references to my papers which solve or make some progress towards my conjectures and open problems, putting links to them on my research page. Turns out, over the years I made a lot of those. Even more surprisingly, there are quite a few papers which address them. Here is a small sampler, in random order:

**(1)** Scott Sheffield proved my *ribbon tilings *conjecture.

**(2)** Alex Lubotzky proved my conjecture on *random generation* of a finite group.

**(3)** Our generalized *loop-erased random walk* conjecture (joint with Igor Gorodezky) was recently proved by Heng Guo and Mark Jerrum.

**(4)** Our *Young tableau bijections* conjecture (joint with Ernesto Vallejo) was resolved by André Henriques and Joel Kamnitzer.

**(5)** My *size Ramsey numbers* conjecture led to a series of papers, and was completely resolved only recently by Nemanja Draganić, Michael Krivelevich and Rajko Nenadov.

**(6)** One of my *partition bijection* problems was resolved by Byungchan Kim.

The reason I started collecting these links is kind of interesting. I was very impressed with George Lusztig and Richard Stanley‘s lengthy writeups about their collected papers that I mentioned in this blog post. While I don’t mean to compare myself to these giants, I figured the casual reader might want to know if a conjecture in some paper had been resolved. Thus the links on my website. I recommend others also do this, as a navigational tool.

#### What gives?

Well, it looks like none of my conjectures have been disproved yet. That’s good news, I suppose. However, in going over my past research work I did discover that on three occasions, when I was thinking about other people’s conjectures, I was much too negative. This is probably the result of my general inclination towards “*negative thinking*”, but each story is worth telling.

**(i)** Many years ago, I spent some time thinking about *Babai’s conjecture*, which states that there are universal constants *C*, *c* > 0 such that for every simple group *G* and a generating set *S*, the diameter of the *Cayley graph* Cay(*G*, *S*) is at most *C* (log |*G*|)^{c}. There has been a great deal of work on this problem; see e.g. this paper by Sean Eberhard and Urban Jezernik, which has an overview and references.

Now, I was thinking about the case of the symmetric group, trying to apply *arithmetic combinatorics* ideas and going nowhere. In my frustration, in a talk I gave (Galway, 2009), I wrote on the slides that “there is much less hope” to resolve Babai’s conjecture for *A_{n}* than for simple groups of Lie type of bounded rank. Now, strictly speaking that judgement was correct, but much too gloomy. Soon after, Ákos Seress and Harald Helfgott *proved* a remarkable quasi-polynomial upper bound in this case. To my embarrassment, they referenced my slides as a validation of the importance of their work.

Of course, Babai’s conjecture is very far from being resolved for *A_{n}*. In fact, it is possible that the diameter is always *O*(*n*^{2}). We just have no idea. For simple groups of Lie type of large rank the existing worst case diameter bounds are exponential, much too weak compared to the desired bound. As Eberhard and Jezernik amusingly wrote in the paper linked above, “*we are still exponentially stupid*”…
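For tiny cases the quantity in the conjecture is easy to compute directly. Here is a minimal sketch (plain Python; illustrative code of mine, not from any paper mentioned above) that finds the diameter of Cay(S_n, S) by breadth-first search from the identity, with S consisting of a transposition and an n-cycle (with its inverse):

```python
from collections import deque
from math import factorial

def cayley_diameter(n, gens):
    # BFS over S_n from the identity permutation. Since the generating set
    # is closed under inverses (tau is an involution), the distances are
    # graph distances in the undirected Cayley graph; the max is the diameter.
    start = tuple(range(n))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        g = queue.popleft()
        for s in gens:
            h = tuple(g[s[i]] for i in range(n))  # multiply g by generator s
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    assert len(dist) == factorial(n)  # sanity check: S really generates S_n
    return max(dist.values())

n = 5
tau = (1, 0) + tuple(range(2, n))                 # transposition (0 1)
sigma = tuple(range(1, n)) + (0,)                 # n-cycle (0 1 ... n-1)
sigma_inv = tuple((i - 1) % n for i in range(n))  # inverse n-cycle
print(cayley_diameter(n, [tau, sigma, sigma_inv]))
```

This is only feasible for very small *n*, of course; the point of the conjecture is precisely that nothing like exhaustive search can reach the general bound.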

**(ii)** When he was my postdoc at UCLA, Alejandro Morales told me about a curious conjecture in this paper (Conjecture 5.1), which claimed that the number of certain nonsingular matrices over the finite field **F**_{q} is polynomial in *q* with positive coefficients. He and his coauthors proved the conjecture in some special cases, but it was wide open in full generality.

Now, I had thought about this type of problem before and was very skeptical. I spent a few days working on the problem to see if any of my tools could disprove it, and failed miserably. But in my stubbornness I remained negative and suggested to Alejandro that he should drop the problem, or at least try to disprove rather than prove the conjecture. I was wrong to do that.

Luckily, Alejandro ignored my suggestion and soon after *proved* the polynomial part of the conjecture together with Joel Lewis. Their proof is quite elegant and uses certain recurrences coming from *rook theory*. These recurrences also allow a fast computation of these polynomials. Consequently, the authors made a number of computer experiments and *disproved* the positivity of coefficients part of the conjecture. So the moral is not to be so negative. Sometimes you need to prove a positive result first before moving to the dark side.

**(iii)** The final story is about the beautiful *Benjamini conjecture* in probabilistic combinatorics. Roughly speaking, it says that for every finite vertex transitive graph *G* on *n* vertices with diameter *O*(*n*/log *n*), the critical percolation constant satisfies *p*_{c} < 1. More precisely, the conjecture claims that there is *p* < 1 − ε such that a *p*-percolation on *G* has a connected component of size > *n*/2 with probability at least δ, where the constants ε, δ > 0 depend on the constant implied by the *O*(·) notation, but not on *n*. Here by “*p*-percolation” we mean a random subgraph of *G* with probability *p* of keeping and 1 − *p* of deleting an edge, independently for all edges of *G*.
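The conjecture itself is far beyond simulation, but the definition of *p*-percolation is easy to experiment with. Below is a minimal Monte Carlo sketch (plain Python; the k × k discrete torus stands in as a convenient vertex transitive example, and all function names are mine):

```python
import random
from collections import Counter

def torus_edges(k):
    # Edge list of the k x k discrete torus, a vertex transitive graph.
    idx = lambda i, j: (i % k) * k + (j % k)
    return [(idx(i, j), idx(i + di, j + dj))
            for i in range(k) for j in range(k)
            for di, dj in ((1, 0), (0, 1))]

def largest_cluster(n, edges, p, rng):
    # p-percolation: keep each edge independently with probability p,
    # then return the size of the largest connected component (union-find).
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        if rng.random() < p:
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
    return max(Counter(find(v) for v in range(n)).values())

def giant_probability(k, p, trials=100, seed=0):
    # Estimate Pr[largest cluster > n/2], the event in the conjecture.
    rng = random.Random(seed)
    n, edges = k * k, torus_edges(k)
    return sum(largest_cluster(n, edges, p, rng) > n / 2
               for _ in range(trials)) / trials

print(giant_probability(20, 0.9))   # well above the torus threshold: near 1
print(giant_probability(20, 0.1))   # well below: near 0
```

The sharp transition visible in such experiments is exactly the “*p*_{c} < 1” phenomenon the conjecture asks to establish uniformly over all vertex transitive graphs of small diameter.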

Now, Itai Benjamini is a fantastic conjecture maker of the best kind, whose conjectures are both insightful and well motivated. Despite the somewhat technical claim, this conjecture is quite remarkable, as it suggested a finite version of the “*p*_{c} < 1” phenomenon for infinite groups of superlinear growth. The latter is the famous *Benjamini–Schramm conjecture* (1996), which was recently **proved** in a remarkable breakthrough by Hugo Duminil-Copin, Subhajit Goswami, Aran Raoufi, Franco Severo and Ariel Yadin. While I always believed in that conjecture and even proved a tiny special case of it, finite versions tend to be much harder in my experience.

In any event, I thought a bit about the Benjamini conjecture and talked to Itai about it. He convinced me to work on it. Together with Chris Malon, we wrote a paper proving the claim for some Cayley graphs of abelian groups and of some more general classes of groups. Despite our best efforts, we could not prove the conjecture even for Cayley graphs of abelian groups in full generality. Benjamini noted that the conjecture is tight for products of two cyclic groups, but that justification did not sit well with me. There seemed to be no obvious way to prove the conjecture even for the Cayley graph of *S_{n}* generated by a transposition and a long cycle, despite the very small *O*(*n*^{2}) diameter. So we wrote in the introduction: “In this paper we present a number of positive results toward this unexpected, and, perhaps, overly optimistic conjecture.”

As it turns out, it was us who were being overly pessimistic, even if we never actually stated that we believe the conjecture to be false. Most recently, in an amazing development, Tom Hutchcroft and Matthew Tointon **proved** a slightly weaker version of the conjecture by adapting the methods of Duminil-Copin et al. They assume an *O*(*n*/(log *n*)^{c}) upper bound on the diameter, which they prove is sufficient, for some universal constant *c* > 1. They also extend our approach with Malon to prove the conjecture for all Cayley graphs of abelian groups. So while the Benjamini conjecture is not completely resolved, my objections to it are no longer valid.

#### Final words on this

All in all, it looks like I was never formally wrong even if I was a little dour occasionally (*Yay*!?). Turns out, some conjectures are actually true or at least likely to hold. While I continue to maintain that not enough effort is spent on trying to disprove the conjectures, it is very exciting when they are proved. * Congratulations* to Harald, Alejandro, Joel, Tom and Matthew, and posthumous congratulations to Ákos for their terrific achievements!

## The Unity of Combinatorics

I just finished my very first *book review* for the *Notices of the AMS*. The authors are Ezra Brown and Richard Guy, and the book title is the same as this blog post. I had mixed feelings when I accepted the assignment to write this. I knew it would take a lot of work (I was wrong — it took a *huge* amount of work). But the reason I accepted is that I strongly suspected there is **no** “unity of combinatorics”, so I wanted to be proved wrong. Here is how the book begins:

One reason why Combinatorics has been slow to become accepted as part of mainstream Mathematics is the common belief that it consists of a bag of isolated tricks, a number of areas: [very long list – IP] with little or no connection between them. We shall see that they have numerous threads weaving them together into a beautifully patterned tapestry.

Having read the book, I continue to maintain that there is no unity. The book review became a balancing act — how do you write a somewhat positive review if you don’t believe in the mission of the book? Here is the first paragraph of the portion of the review where I touch upon themes very familiar to readers of this blog:

As I see it, the whole idea of combinatorics as a “slow to become accepted” field feels like a throwback to a long forgotten era. This attitude was unfair but reasonably common back in 1970, outright insulting and relatively uncommon in 1995, and utterly preposterous in 2020.

After a lengthy explanation I conclude:

> To finish this line of thought, it gives me no pleasure to conclude that the case for the unity of combinatorics is too weak to be taken seriously. Perhaps, the unity of mathematics as a whole is an easier claim to establish, as evident from [Stanley’s] quotes. On the other hand, this lack of unity is not necessarily a bad thing, as we would be amiss without the rich diversity of cultures, languages, open problems, tools and applications of different areas.

Enjoy the full review! And please comment on the post with your own views on this alleged “unity”.

P.S. A large part of the book is freely downloadable. I made this website for the curious reader.

**Remark** (ADDED April 17, 2021)

Ezra “Bud” Brown gave a talk on the book illustrating many of the connections I discuss in the review. This was at a memorial conference celebrating Richard Guy’s legacy. I was not aware of the video until now. Watch the whole talk.

## What if they are all wrong?

*Conjectures* are a staple of mathematics. They are everywhere, permeating every area, subarea and subsubarea. They are diverse enough to avoid a single general adjective. They come in all shapes and sizes. Some of them are famous, classical, general, important, inspirational, far-reaching, audacious, exciting or popular, while others are speculative, narrow, technical, imprecise, far-fetched, misleading or recreational. That’s a lot of beliefs about unproven claims, yet we persist in dispensing them, inadvertently revealing our experience, intuition and biases.

The conjectures also vary in attitude. Like a finish line ribbon they all appear equally vulnerable to an outsider, but in fact differ widely from race to race. *Some* are eminently reachable, the only question being who will get there first (think 100 meter dash). *Others* are barely on the horizon, requiring great effort, a variety of tools, and an extended time commitment (think Ironman triathlon). The most celebrated, the *third type*, are like those Sci-Fi space expeditions: they require multigenerational commitments spanning hundreds of years, often losing contact with the civilization they left behind. And we can’t forget the romantic *fourth type* — like the North Star, no one actually wants to reach them; they are largely used for navigation, to find a direction in uncharted waters.

Now, conjectures famously provide a foundation of the *scientific method*, but that’s not at all how we actually think of them in mathematics. I argued back in this pointed blog post that *citations* are the most crucial for day to day math development, so one should take utmost care in making references. While this claim is largely uncontroversial and serves as a raison d’être for most *GoogleScholar* profiles, conjectures provide a convenient idealistic way out. Thus, it’s much more noble and virtuous to say “*I dedicated my life to the study of the XYZ Conjecture*” (even if one never publishes anything) than “*I am working hard writing so many papers to gain the respect of my peers, get a promotion, and provide for my family*“. Right. Obviously…

But given this apparent (true or perceived) importance of conjectures, are you sure you are using them right? *What if some or many of these conjectures are actually wrong, what then?* Should you be flying that starship if *there is no there there*? An idealist would argue something like “*it’s a journey, not a destination*“, but I strongly disagree. Getting closer to the truth is actually kind of important, both as public policy and on an individual level. It is thus pretty important to get right where we are going.

#### What *are *conjectures in mathematics?

That’s a stupid question, right? Conjectures are mathematical claims whose validity we are trying to ascertain. Is that all? Well, yes, if you don’t care whether anyone will actually work on the conjecture. In other words, *something* about the conjecture needs to be *interesting* and *inspiring*.

#### What makes a conjecture interesting?

This is a hard question to answer because it is as much psychological as it is mathematical. A typical answer would be “oh, because it’s old/famous/beautiful/etc.” Uhm, ok, but let’s try to be a little more formal.

One typically argues “oh, that’s because this conjecture would imply [a list of interesting claims and known results]”. Well, ok, but this is *self-referential*. We already know all those “known results”, so no need to prove them again. And these “claims” are simply other conjectures, so this is really an argument of the type “this conjecture would imply that conjecture”, so not universally convincing. One can argue: “look, this conjecture has so many interesting consequences”. But this is both subjective and unintuitive. Shouldn’t having so many interesting conjectural consequences suggest that perhaps the conjecture is too strong and likely false? And if the conjecture is likely to be false, shouldn’t this make it *uninteresting*?

Also, wouldn’t it be *interesting* if you disprove a conjecture everyone believes to be true? In some sense, wouldn’t it be even more interesting if until now everyone was simply wrong?

None of these are new ideas, of course. For example, faced with the need to justify the “great” *BC conjecture*, or rather a 123-page survey on the subject (which is quite interesting and doesn’t really need to be justified), the authors suddenly turned reflective. Mindful of the self-referential approach, which they quickly discard, they chose a different tactic:

> We believe that the interest of a conjecture lies in the feeling of unity of mathematics that it entails. [M.P. Gomez Aparicio, P. Julg and A. Valette, “The Baum–Connes conjecture“, 2019]

Huh? Shouldn’t math be about absolute truths, not feelings? Also, in my previous blog post, I mentioned Noga Alon‘s quote that Mathematics is already “*one unit*“. If it is, why does it need a new “*feeling of unity*“? Or is that like one of those new age ideas which stop being true if you don’t reinforce them at every occasion?

If you are confused at this point, welcome to the club! There is no objective way to argue what makes certain conjectures interesting. It’s all in our imagination. Nikolay Konstantinov once told me that “*mathematics is a boring subject because every statement is equivalent to saying that some set is empty.*” He meant to be provocative rather than uninspiring. But the problem he is highlighting is quite serious.

#### What makes us believe a conjecture is true?

We already established that in order to argue that a conjecture is interesting we need to argue that it’s also true, or at least that we want to believe it to be true so as to have all those consequences. Note, however, that we argue that a conjecture is *true* in exactly the same way we argue it’s *interesting*: by showing that it holds in some special cases, and that it would imply other conjectures which are believed to be true because they are also checked in various special cases. So in essence, this gives “true = interesting” in most cases. Right?

This is where it gets complicated. Say, you are working on the “*abc conjecture*“, which may or may not be open. You claim that it has many consequences, which makes it both likely true and interesting. One of them is a negative solution to the *Erdős–Ulam problem* about the existence of a dense set in the plane with rational pairwise distances. But a positive solution to the Erdős–Ulam problem implies *Harborth’s conjecture* (aka the “*integral Fáry problem*“) that every planar graph can be drawn in the plane with rational edge lengths. So, counterintuitively, if you follow the logic above, shouldn’t you be working on a *positive solution* to Erdős–Ulam, since it would both imply one conjecture and give a counterexample to another? For the record, I wouldn’t do that; I am just making a polemical point.

I am really hoping you see where I am going. Since there is no objective way to tell if a conjecture is true or not, and what exactly is so interesting about it, shouldn’t we discard our biases and also work towards disproving the conjecture just as hard as trying to prove it?

#### What do people say?

It’s worth starting with a general (if slightly poetic) modern description:

> In mathematics, [..] great conjectures [are] sharply formulated statements that are most likely true but for which no conclusive proof has yet been found. These conjectures have deep roots and wide ramifications. The search for their solution guides a large part of mathematics. Eternal fame awaits those who conquer them first. Remarkably, mathematics has elevated the formulation of a conjecture into high art. [..] A well-chosen but unproven statement can make its author world-famous, sometimes even more so than the person providing the ultimate proof. [Robbert Dijkgraaf, The Subtle Art of the Mathematical Conjecture, 2019]

Karl Popper thought that conjectures are foundational to science, even if he somewhat idealized the efforts to disprove them:

> [Great scientists] are men of bold ideas, but highly critical of their own ideas: they try to find whether their ideas are right by trying first to find whether they are not perhaps wrong. They work with bold conjectures and severe attempts at refuting their own conjectures. [Karl Popper, Heroic Science, 1974]

Here is how he reconciled somewhat the apparent contradiction:

> On the pre-scientific level we hate the very idea that we may be mistaken. So we cling dogmatically to our conjectures, as long as possible. On the scientific level, we systematically search for our mistakes. [Karl Popper, quoted by Bryan Magee, 1971]

Paul Erdős was, of course, a champion of conjectures and open problems. He joked that the purpose of life is “*proof and conjecture*” and this theme is repeatedly echoed when people write about him. It is hard to overestimate his output, which included hundreds of talks titled “*My favorite problems*“. He wrote over 180 papers with collections of conjectures and open problems (nicely assembled by *Zbl. Math*.)

Peter Sarnak has a somewhat opposite point of view, as he believes one should be extremely cautious about stating a conjecture so people don’t waste time working on it. He said once, only half-jokingly:

> Since we reward people for making a right conjecture, maybe we should punish those who make a wrong conjecture. Say, cut off their fingers. [Peter Sarnak, UCLA, c. 2012]

This is not an exact quote — I am paraphrasing from memory. Needless to say, I disagree. I don’t know how many fingers he wished Erdős to lose, since some of Erdős’s conjectures were definitely disproved: one, two, three, four, five, and six. This is not me gloating; quite the opposite, in fact. When you are stating hundreds of conjectures over a span of almost 50 years, having only a handful of them disproved is an amazing batting average. It would, however, make me happy if *Sarnak’s conjecture* were disproved someday.

Finally, there is a bit of a controversy whether conjectures are worth as much as theorems. This is aptly summarized in this quote about yet another champion of conjectures:

> Louis J. Mordell [in his book review] questioned Hardy‘s assessment that Ramanujan was a man whose native talent was equal to that of Euler or Jacobi. Mordell [..] claims that one should judge a mathematician by what he has actually done, by which Mordell seems to mean the theorems he has proved. Mordell’s assessment seems quite wrong to me. I think that a felicitous but unproved conjecture may be of much more consequence for mathematics than the proof of many a respectable theorem. [Atle Selberg, “Reflections Around the Ramanujan Centenary“, 1988]

#### So, what’s the problem?

Well, the way I see it, the efforts made towards proving vs. disproving conjectures are greatly out of balance. Despite all of Popper’s high-minded claims about “*severe attempts at refuting their own conjectures*“, I don’t think there is much truth to that in the modern mathematical sciences. This does not mean that disproofs of famous conjectures aren’t celebrated. Sometimes they are, see below. But it’s clear to me that proofs are celebrated more frequently, and to a much greater degree. I have only anecdotal evidence to support my claim, but bear with me.

Take prizes. Famously, the Clay Math Institute offers **$1 million** for a solution of any of its major open problems. But look closely at the rules. According to item 5b, except for the *P vs. NP problem* and the *Navier–Stokes Equation problem*, it gives *nothing* (**$0**) for a disproof of these problems. Why, oh why?? Let’s look into CMI’s “*primary objectives and purposes*“:

> To recognize extraordinary achievements and advances in mathematical research.

So it sounds like CMI does not think that disproving the *Riemann Hypothesis* needs to be rewarded, because this wouldn’t “advance mathematical research”. Surely, you are joking? Whatever happened to “*the opposite of a profound truth may well be another profound truth*“? Why does the CMI want to put its thumb on the scale and support only one side? Do they not want to find out the solution, whatever it is? Shouldn’t they be eager to dispense with a “wrong conjecture” so as to save numerous researchers from “*advances to nowhere*“?

I am sure you can see that my blood is boiling, but let’s proceed to the *P vs. NP problem*. What if it’s *independent of ZFC*? Clearly, CMI wouldn’t pay for proving that. Why not? It’s not like this kind of thing never happened before (see the obligatory link to CH). Some people believe this is possible (or at least they did in 2012), and some people like Scott Aaronson take it seriously enough. Wouldn’t this be a great result, worthy of an award as much as a proof that **P=NP**, or at least a *nonconstructive proof* that **P=NP**?

If your head is not spinning hard enough, here is another amusing quote:

> Of course, it’s possible that P vs. NP is unprovable, but that that fact itself will forever elude proof: indeed, maybe the question of the independence of P vs. NP is itself independent of set theory, and so on ad infinitum! But one can at least say that, if P vs. NP (or for that matter, the Riemann hypothesis, Goldbach’s conjecture, etc.) were proven independent of ZF, it would be an unprecedented development. [Scott Aaronson, P vs. NP, 2016]

Speaking of *Goldbach’s Conjecture*, the most talked about and the most intuitively correct statement in Number Theory that I know. In a publicity stunt, for two years a publishing house offered a **$1 million** prize for a *proof of the conjecture*. Why just for the proof? I never heard of anyone not believing the conjecture. If I were the insurance underwriter for the prize (I bet they had one), I would allow them to use “for the proof or disproof” for a mere extra **$100** in premium. For another **$50** I would let them add “or independence from ZF” — it’s free money, so why not? It’s such a pernicious idea to reward only one kind of research outcome!
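The near-universal belief is easy to reproduce numerically. Here is a minimal sketch of such a check (the bound `LIMIT` and the function name are mine; published verifications go far beyond 4 × 10^{18}):

```python
# Sanity-check Goldbach's conjecture: every even n with 4 <= n <= LIMIT
# is a sum of two primes.  LIMIT is an arbitrary small bound.
LIMIT = 10_000

# Sieve of Eratosthenes up to LIMIT.
is_prime = [False, False] + [True] * (LIMIT - 1)
for p in range(2, int(LIMIT ** 0.5) + 1):
    if is_prime[p]:
        for q in range(p * p, LIMIT + 1, p):
            is_prime[q] = False

def goldbach_witness(n):
    """Return a prime p such that n - p is also prime, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime[p] and is_prime[n - p]:
            return p
    return None

# No even number in range lacks a witness.
assert all(goldbach_witness(n) is not None for n in range(4, LIMIT + 1, 2))
print(goldbach_witness(100))  # 3, since 3 + 97 = 100
```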

Curiously, even for *Goldbach’s Conjecture*, there is a mild divergence of POVs on what the future holds. For example, Popper writes (twice in the same book!) that:

> [On whether Goldbach’s Conjecture is ‘demonstrable’] We don’t know: perhaps we may never know, and perhaps we can never know. [Karl Popper, Conjectures and Refutations, 1963]

Ugh. Perhaps. I suppose *anything* can happen… For example, our civilization can “perhaps” die out in the next 200 years. But is that likely? Shouldn’t the gloomy past be a warning, not a prediction of the future? The only thing more outrageously pessimistic is this theological gem of a quote:

> Not even God knows the number of permutations of 1000 avoiding the 1324 pattern. [Doron Zeilberger, quoted here, 2005]

Thanks, Doron! What a way to encourage everyone! Since we know from numerical estimates that this number is ≈ 3.7 × 10^{1017} (see this paper and this follow-up), Zeilberger is suggesting that large pattern avoidance numbers are impossibly hard to compute *precisely*, already for numbers of only about 1018 digits. I really hope he is proved wrong in his lifetime.
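To make the difficulty concrete, here is a naive brute-force counter for 1324-avoiders (a sketch of my own; function names are mine). It reproduces the known initial values, but the search is factorial in *n*, and even the best known algorithms remain exponential, which is exactly the problem:

```python
# Brute-force count of permutations of [n] avoiding the pattern 1324.
from itertools import combinations, permutations

PATTERN = (1, 3, 2, 4)

def standardize(values):
    """Relative order of a list of distinct numbers, as 1-based ranks."""
    s = sorted(values)
    return tuple(s.index(v) + 1 for v in values)

def avoids(perm, pattern=PATTERN):
    """True if no subsequence of perm has the relative order of pattern."""
    k = len(pattern)
    return all(standardize([perm[i] for i in idx]) != pattern
               for idx in combinations(range(len(perm)), k))

def count_avoiders(n, pattern=PATTERN):
    return sum(avoids(p, pattern) for p in permutations(range(1, n + 1)))

print([count_avoiders(n) for n in range(1, 7)])
# [1, 2, 6, 23, 103, 513] -- the known initial values of C_n(1324)
```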

But I digress. What I mean to emphasize is that there are many ways a problem can be resolved, yet some outcomes are considered more valuable than others. Shouldn’t research achievements be rewarded, rather than one desired outcome? Here is yet another colorful opinion on this:

> Given a conjecture, the best thing is to prove it. The second best thing is to disprove it. The third best thing is to prove that it is not possible to disprove it, since it will tell you not to waste your time trying to disprove it. That’s what Gödel did for the Continuum Hypothesis. [Saharon Shelah, Rutgers Univ. Colloquium, 2001]

#### Why do I care?

For one thing, disproving conjectures is part of what I do. Sometimes people are a little shy to unambiguously state them as formal conjectures, so they phrase them as *questions *or *open problems*, but then clarify that they believe the answer is positive. This is a distinction without a difference, or at least I don’t see any (maybe they are afraid of Sarnak’s wrath?) Regardless, proving their beliefs wrong is still what I do.

For example, here is my old blog post on my disproof of the *Noonan–Zeilberger Conjecture* (joint with Scott Garrabrant). And in this recent paper (joint with Danny Nguyen), we disprove in one big swoosh *Barvinok’s Problem*, *Kannan’s Problem*, and *Woods’s Conjecture*. Just this year I disproved three conjectures:

- The *Kirillov–Klyachko Conjecture* (2004) that the *reduced Kronecker coefficients* satisfy the saturation property (this paper, joint with Greta Panova).
- The *Brandolini et al. Conjecture* (2019) that concrete lattice polytopes can multitile the space (this paper, joint with Alexey Garber).
- *Kenyon’s Problem* (c. 2005) that every integral curve in **R**^{3} is a boundary of a PL surface comprised of unit triangles (this paper, joint with Alexey Glazyrin).

On top of that, just two months ago in this paper (joint with Han Lyu), we showed that the remarkable *independence heuristic* by I. J. Good for the number of *contingency tables* fails badly even for nearly all uniform marginals. This is not exactly the disproof of a conjecture, but it’s close, since the heuristic was introduced back in 1950 and continues to work well in practice.

In addition, I am currently working on disproving two more old conjectures, which will remain unnamed until the time we actually resolve them (which might never happen, of course). In summary, I am deeply invested in disproving conjectures. The reasons why are somewhat complicated (see some of them below). But whatever my reasons, I demand and naively fully expect that my disproofs be treated on par with proofs, regardless of whether this expectation bears any relation to reality.

#### My favorite disproofs and counterexamples:

There are many. Here are just a few, some famous and some not-so-famous, in historical order:

- *Fermat‘s conjecture* (letter to Pascal, 1640) on primality of *Fermat numbers*, disproved by Euler (1747)
- *Tait’s conjecture* (1884) on Hamiltonicity of graphs of simple 3-polytopes, disproved by W.T. Tutte (1946)
- *General Burnside Problem* (1902) on finiteness of periodic groups, resolved negatively by E.S. Golod (1964)
- *Keller’s conjecture* (1930) on tilings with unit hypercubes, disproved by Jeff Lagarias and Peter Shor (1992)
- *Borsuk’s Conjecture* (1932) on partitions of convex sets into parts of smaller diameter, disproved by Jeff Kahn and Gil Kalai (1993)
- *Hirsch Conjecture* (1957) on the diameter of graphs of convex polytopes, disproved by Paco Santos (2010)
- *Woods’s conjecture* (1972) on the covering radius of certain lattices, disproved by Oded Regev, Uri Shapira and Barak Weiss (2017)
- *Connes embedding problem* (1976), resolved negatively by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen (2020)
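The first entry on the list is easy to replay today: Fermat claimed that all numbers of the form 2^{2^k} + 1 are prime, and Euler’s counterexample was the factor 641 of the fifth one. A one-line computation now:

```python
# Euler's disproof of Fermat's conjecture on Fermat numbers:
# F_5 = 2^(2^5) + 1 is composite, with the factor 641 found by Euler.
# (F_0 through F_4 are indeed prime, which is what misled Fermat.)
def fermat_number(k):
    return 2 ** (2 ** k) + 1

print(fermat_number(5))           # 4294967297
print(fermat_number(5) % 641)     # 0: 641 divides F_5
print(fermat_number(5) // 641)    # 6700417, the cofactor
```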

In all these cases, the disproofs and counterexamples didn’t stop the research. On the contrary, they gave a push to further (sometimes numerous) developments in the area.

#### Why should you disprove conjectures?

There are three reasons, of different nature and importance.

**First**, disproving conjectures is * opportunistic*. As mentioned above, people seem to try proving much harder than they try disproving. This creates niches of opportunity for an open-minded mathematician.

**Second**, disproving conjectures is *beautiful*. Let me explain. Conjectures tend to be *rigid*, as in “objects of the type *pqr* satisfy property *abc*.” People like me believe in the idea of “*universality*“. Some might call it “*completeness*” or even “*Murphy’s law*“, but the general principle is always the same. Namely: it is not sufficient that one **wishes** that all *pqr* satisfy *abc* to actually believe in the implication; rather, there has to be a **strong reason** why *abc* should hold. Barring that, *pqr* can possibly be almost anything, so in particular *non-abc*. While some would argue that *non-abc* objects are “ugly” or at least “not as nice” as *abc*, the idea of *universality* means that your objects can be of *every color of the rainbow* — nice color, ugly color, startling color, quiet color, etc. That kind of palette has its own *sense of beauty*, but it’s an acquired taste I suppose.

**Third**, disproving conjectures is *constructive*. It depends on the nature of the conjecture, of course, but one is often faced with the necessity to *construct* a counterexample. Think of this as an engineering problem of building some *pqr* which at the same time is not *abc*. Such a construction, if at all possible, might be difficult, time consuming and computer assisted. But so what? What would you rather do: build a mile-high skyscraper (none exist yet) or prove that this is impossible? Curiously, in CS Theory both algorithms and (many) complexity results are constructive (you need gadgets). Even the GCT is partially constructive, although explaining that would take us a while.

#### What should the institutions do?

If you are an *institution which awards prizes*, stop with the legal nonsense: “We award […] only for a publication of a proof in a top journal”. You need to set up a scientific committee anyway, since otherwise it’s hard to tell sometimes if someone deserves a prize. With mathematicians you can expect anything anyway. Some would post two arXiv preprints, give a few lectures and then stop answering emails. Others would publish only in a journal where they are Editor-in-Chief. It’s stranger than fiction, really.

What you should do is say in the official rules: “We have [**this much money**] and an independent scientific committee which will award any progress on [**this problem**], partially or in full, as they see fit.” Then a disproof or an independence result would receive just as much as a proof (what’s done is done; what else are you going to do with the money?). This would also allow some flexibility for partial solutions. Say, somebody proves *Goldbach’s Conjecture* for all integers > exp(exp(10^{100000})), far beyond any computational power to check the remaining integers. I would give this person at least 50% of the prize money, leaving the rest for future developments by possibly many people improving on the bound. However, under the old prize rules such a person gets bupkes for their breakthrough.

#### What should the journals do?

In short, become more open to results of a computational and experimental nature. If this sounds familiar, that’s because it’s a summary of *Zeilberger’s Opinions*, viewed charitably. He is correct on this. This includes publishing results of the type “Based on computational evidence we believe in the following *UVW* conjecture” or “We develop a new algorithm which confirms the *UVW* conjecture for *n* < 13”. These are still contributions to mathematics, and the journals should learn to recognize them as such.

To put this in the context of our theme, it is clear that a lot more effort has been placed on proofs than on finding counterexamples. However, in many areas of mathematics there are no *small* counterexamples, so a heavy computational effort is crucial for any hope of finding one. Such work may not be as glamorous as traditional papers. But really, when it comes to standards, if a journal is willing to publish the study of something like the “*null graphs*“, the ship has sailed for you…

Let me give you a concrete example where a computational effort is indispensable. The curious *Lovász conjecture* states that every finite connected vertex-transitive graph contains a Hamiltonian path. This conjecture has got to be false. It hits every red flag — there is really no reason why *pqr* = “vertex-transitive” should imply *abc* = “Hamiltonian”. The best lower bound for the length of the longest (self-avoiding) path is only about the square root of the number of vertices. In fact, even the original wording by Lovász shows he didn’t believe the conjecture is true (also, I asked him and he confirmed).

Unfortunately, proving that some potential counterexample is not Hamiltonian is computationally difficult. I once had an idea of one (a nice cubic Cayley graph on “only” 3600 vertices), but Bill Cook quickly found a Hamiltonian cycle dashing my hopes (it was kind of him to look into this problem). Maybe someday, when the TSP solvers are fast enough on much larger graphs, it will be time to return to this problem and thoroughly test it on large Cayley graphs. But say, despite long odds, I succeed and find a counterexample. Would a top journal publish such a paper?
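To give a flavor of what such computational testing involves, here is a minimal backtracking search for a Hamiltonian path (a sketch of my own, nothing like the TSP-grade machinery mentioned above), tried on the Petersen graph: vertex-transitive, famously lacking a Hamiltonian cycle, yet possessing a Hamiltonian path, consistent with the conjecture. Exhaustive search of this kind is exponential, which is exactly why large Cayley graphs need serious solvers.

```python
# Backtracking search for a Hamiltonian path in the Petersen graph
# (vertex-transitive, no Hamiltonian cycle, but traceable).
def petersen():
    adj = {v: set() for v in range(10)}
    def add(u, v):
        adj[u].add(v)
        adj[v].add(u)
    for i in range(5):
        add(i, (i + 1) % 5)             # outer 5-cycle
        add(i, i + 5)                   # spokes
        add(i + 5, (i + 2) % 5 + 5)     # inner pentagram
    return adj

def hamiltonian_path(adj):
    n = len(adj)
    def extend(path, used):
        if len(path) == n:
            return path
        for w in sorted(adj[path[-1]]):
            if w not in used:
                found = extend(path + [w], used | {w})
                if found:
                    return found
        return None
    # By vertex-transitivity one starting vertex would suffice;
    # we try them all for clarity.
    for start in adj:
        found = extend([start], {start})
        if found:
            return found
    return None

path = hamiltonian_path(petersen())
print(path)  # all 10 vertices, with consecutive ones adjacent
```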

#### Editor’s dilemma

There are three real criteria for the evaluation of a solution of an open problem by a journal:

- Is this an old, famous, or well-studied problem?
- Are the tools interesting or innovative enough to be helpful in future studies?
- Are the implications of the solution to other problems important enough?

Now let’s make a hypothetical experiment. Say a paper is submitted to a top math journal which solves a famous open problem in Combinatorics. Further, say somebody already proved that it is equivalent to a major problem in TCS. This checks criteria 1 and 3. Until not long ago it would have been rejected regardless, so let’s assume this is happening relatively recently.

Now imagine two parallel worlds: in the first world the conjecture is *proved* in 2 pages using beautiful but elementary linear algebra, and in the second world the conjecture is *disproved* in a 2-page summary of a detailed computational search. So in neither world do we have much to satisfy criterion 2. Now, a quiz: in which world will the paper be published?

You may have recognized the first world as the story of Hao Huang‘s elegant proof of the *induced subgraphs of hypercubes conjecture*, which implies the *sensitivity conjecture*. The *Annals* published it, I am happy to report, in a welcome break with the past. But unless we are talking about some 200 year old famous conjecture, I can’t imagine the *Annals* accepting a short computational paper in the second world. Indeed, it took a bit of a scandal to accept even the 400 year old *Kepler’s conjecture*, which was **proved** in a remarkable computational work.

Now think about this. Is any of that fair? Shouldn’t we do better as a community on this issue?

#### What do other people do?

Over the years I asked a number of people about the uncertainty created by conjectures and what they do about it. The answers surprised me. Here I am paraphrasing them:

**Some** were *dumbfounded*: “What do you mean this conjecture could be false? It has to be true, otherwise nothing I am doing makes much sense.”

**Others** were *simplistic*: “It’s an important conjecture. Famous people said it’s true. It’s my job to prove it.”

**Third** were *defensive*: “Do you really think this conjecture could be wrong? Why don’t you try to disprove it then? We’ll see who is right.”

**Fourth** were *biblical*: “I tend to work 6 days a week towards the proof and one day towards the disproof.”

**Fifth** were *practical*: “I work on the proof until I hit a wall. I use the idea of this obstacle to try constructing potential counterexamples. When I find an approach to discard such counterexamples, I try to generalize the approach to continue working on the proof. Continue until either side wins.”

If the last two seem sensible to you, that’s because they are. However, I bet the *fourth* are just grandstanding — no way they actually do that. The *fifth* sounds great when this is possible, but that’s exceedingly rare, in my opinion. We live in a technical age when proving new results often requires a great deal of effort and technology. You likely have the tools and intuition to work in only one direction. Why would you want to waste time working in the other?

#### What should you do?

**First**, remember to *make conjectures*. Every time you write a paper, tell a story of what you proved. Then tell a story of what you wanted to prove but couldn’t, and state it in the form of a conjecture. Don’t be afraid of being wrong, or of being right but oversharing your ideas. That’s the downside, sure. But the upside is that your conjecture might prove very useful to others, especially young researchers. It might advance the area, or help you find a collaborator to resolve it.

**Second**, learn to *check your conjectures* computationally in many small cases. It’s important to give supporting evidence so that others take your conjectures seriously.

**Third**, learn to *make experiments*, explore the area computationally. That’s how you make new conjectures.

**Fourth**, *understand yourself*. Your skills, your tools. Your abilities, like problem solving, absorbing information from the literature, or making bridges to other fields. Faced with a conjecture, use this knowledge to understand whether, at least in principle, you might be able to prove or disprove it.

**Fifth**, actively *look for collaborators*. Those who have skills, tools, or abilities you are missing. More importantly, they might have a different POV on the validity of the conjecture and how one might want to attack it. Argue with them and learn from them.

**Sixth**, *be brave* and *optimistic*! Whether you decide to prove, disprove a conjecture, or simply state a new conjecture, go for it! Ignore the judgements by the likes of Sarnak and Zeilberger. Trust me — they don’t really mean it.

## The power of negative thinking, part I. Pattern avoidance

In my latest paper with Scott Garrabrant we disprove the *Noonan-Zeilberger Conjecture*. Let me informally explain what we did and why people should try to disprove conjectures more often. This post is the first in a series. Part II will appear shortly.

#### What did we do?

Let *F* ⊂ *S*_{k} be a finite set of permutations and let *C*_{n}(*F*) denote the number of permutations *σ* ∈ *S*_{n} avoiding the set of patterns *F*. The *Noonan–Zeilberger conjecture* (1996) states that the sequence {*C*_{n}(*F*)} is always *P-recursive*. We disprove this conjecture. Roughly, we show that every Turing machine *T* can be simulated by a set of patterns *F*, so that the number *a*_{n} of paths of length *n* accepted by *T* is equal to *C*_{n}(*F*) mod 2. I am oversimplifying things quite a bit, but that’s the gist.

What is left is to show how to construct a machine *T* such that {*a*_{n}} is not equal (mod 2) to **any** P-recursive sequence. We did this in our previous paper, where we give a negative answer to a question by Kontsevich. There, we constructed a set of 19 generators of *GL*(4, **Z**) such that the probability of return sequence is not P-recursive.

When all things are put together, we obtain a set *F* of about 30,000 permutations in *S*_{80} for which {*C*_{n}(*F*)} is non-P-recursive. Yes, the construction is huge, but so what? What’s a few thousand permutations between friends? In fact, perhaps a single pattern (1324) is already non-P-recursive. Let me explain the reasoning behind what we did and why our result is much stronger than it might seem.
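For readers unfamiliar with the term: a sequence is *P-recursive* if it satisfies a linear recurrence with polynomial coefficients in *n*. The classical positive instance is any single pattern of length 3, where the counts are the Catalan numbers. A brute-force sketch (function names are mine) makes the P-recurrence visible in the computed data:

```python
# C_n(123), counted by brute force, is the Catalan number Cat(n),
# which is P-recursive: (n + 2) Cat(n+1) = (4n + 2) Cat(n).
from itertools import combinations, permutations

def count_123_avoiders(n):
    """Permutations of [n] with no increasing subsequence of length 3."""
    def avoids(p):
        return not any(p[i] < p[j] < p[k]
                       for i, j, k in combinations(range(n), 3))
    return sum(avoids(p) for p in permutations(range(1, n + 1)))

counts = [count_123_avoiders(n) for n in range(1, 8)]
print(counts)  # [1, 2, 5, 14, 42, 132, 429], the Catalan numbers

# Check the polynomial-coefficient recurrence on the computed terms:
# counts[n] = Cat(n+1), so (n + 2) Cat(n+1) == (4n + 2) Cat(n).
assert all((n + 2) * counts[n] == (4 * n + 2) * counts[n - 1]
           for n in range(1, 7))
```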

#### Why we did what we did

First, a very brief history of the NZ-conjecture (see Kitaev’s book for a comprehensive history of the subject and vast references). Traditionally, pattern avoidance dealt with exact and asymptotic counting of pattern-avoiding permutations for small sets of patterns. The subject was initiated by MacMahon (1915) and Knuth (1968), who showed that we get the Catalan numbers for patterns of length 3. The resulting combinatorics is often so beautiful, or at least so plentiful, that it is hard to imagine it could be otherwise; hence the NZ-conjecture. The conjecture was clearly very strong, but it resisted all challenges until now. Wilf reports that Richard Stanley disbelieved it (Richard confirmed this to me recently as well), but hundreds of papers seemed to confirm its validity in numerous special cases.
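To make the objects concrete, here is a brute-force sketch that counts pattern-avoiding permutations straight from the definition and recovers Knuth’s Catalan count for a length-3 pattern (the function names are mine, purely for illustration):

```python
from itertools import combinations, permutations

def contains_pattern(sigma, pi):
    """Check whether sigma contains pi as a classical pattern,
    i.e. some subsequence of sigma is order-isomorphic to pi."""
    k = len(pi)
    for positions in combinations(range(len(sigma)), k):
        sub = [sigma[i] for i in positions]
        # order-isomorphic: every pair compares the same way as in pi
        if all((sub[i] < sub[j]) == (pi[i] < pi[j])
               for i in range(k) for j in range(i + 1, k)):
            return True
    return False

def count_avoiders(n, patterns):
    """Brute-force C_n(F): permutations in S_n avoiding every pattern in F."""
    return sum(1 for sigma in permutations(range(1, n + 1))
               if not any(contains_pattern(sigma, pi) for pi in patterns))

# Permutations avoiding the single pattern 123 are counted by Catalan numbers.
catalan = [1, 1, 2, 5, 14, 42, 132]
counts = [count_avoiders(n, [(1, 2, 3)]) for n in range(7)]
print(counts)  # → [1, 1, 2, 5, 14, 42, 132]
```

Of course, this check is exponential in *n*, which is exactly the computational difficulty discussed below.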

Curiously, the case of the (1324) pattern proved difficult early on. It remains unresolved whether {*C _{n}*(1324)} is P-recursive or not. This pattern broke Doron Zeilberger’s belief in the conjecture, and he proclaimed that it is probably non-P-recursive, and thus the NZ-conjecture is probably false. When I visited Doron last September, he told me he no longer had a strong belief in either direction and encouraged me to work on the problem. I took a train back to Manhattan looking over New Jersey’s famously scenic Amtrak route. Somewhere near the Pulaski Skyway I called Scott and told him to drop everything: we should start working on this problem.

You see, when it comes to pattern avoidance, things move from best to good to bad to awful. When they are bad, they are so bad, it can be really hard to prove that they are bad. But why bother – we can try to figure out something awful. The set of patterns that we constructed in our paper is so awful, that proving it is awful ain’t so bad.

#### Why is our result much stronger than it seems?

That’s because the proof extends to other results. Essentially, we are saying that everything bad you can do with Turing machines, you can do with pattern avoidance (mod 2). For example, why is (1324) so hard to analyze? Because it is hard to compute, both theoretically and experimentally: the existing algorithms are recursive and exponential in *n*. Until our work, the main hope for disproving the NZ-conjecture hinged on finding an appropriately bad set of patterns *F* such that computing {*C _{n}*(*F*)} is easy. Something like this sequence, which has a nice recurrence but is provably non-P-recursive. Maybe. But in our paper, we can do worse, a lot worse…

We can make a finite set of patterns *F* such that computing {*C _{n}*(*F*) mod 2} is “provably” not polynomial-time (Theorem 1.4). We use quotes because of the complexity-theoretic assumptions we must make. The conclusion is much stronger than non-P-recursiveness, since every P-recursive sequence has a trivial polynomial-in-*n* algorithm computing it. But wait, it gets worse!
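To illustrate the last point: a P-recurrence immediately yields a fast algorithm, since each new term costs a constant number of arithmetic operations. A minimal sketch with the Catalan numbers, using the standard recurrence (*n*+2) *C*_{n+1} = (4*n*+2) *C _{n}* (my example, not from the paper):

```python
def catalan_via_recurrence(n_max):
    """Compute C_0, ..., C_{n_max} from the P-recurrence
    (n+2)*C_{n+1} = (4n+2)*C_n -- linearly many exact integer operations."""
    c = [1]  # C_0 = 1
    for n in range(n_max):
        c.append((4 * n + 2) * c[-1] // (n + 2))  # division is exact here
    return c

print(catalan_via_recurrence(6))  # → [1, 1, 2, 5, 14, 42, 132]
```

Contrast this with the brute-force count above: the recurrence gives the same numbers in polynomial time, which is exactly why a non-polynomial {*C _{n}*(*F*) mod 2} rules out P-recursiveness.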

We prove that for two sets of patterns *F* and *G*, the problem “*C _{n}*(*F*) = *C _{n}*(*G*) mod 2 for all *n*” is undecidable (Theorem 1.3). This is already a disaster, which takes time to sink in. But then it gets even worse! Take a look at our Corollary 8.1. It says that there are two sets of patterns *F* and *G* such that you can never prove nor disprove that *C _{n}*(*F*) = *C _{n}*(*G*) mod 2 for all *n*. Now that’s what I call truly awful.

#### What gives?

Well, the original intuition behind the NZ-conjecture was clearly wrong. Many nice examples are not good enough evidence. But the conjecture was so plausible! Where did the intuition fail? I went back to re-read Polya’s classic “*Mathematics and Plausible Reasoning*”, and it all seemed reasonable. That is, both Polya’s arguments and the NZ-conjecture seemed reasonable (if you don’t feel like reading the whole book, at least read Barry Mazur’s interesting and short followup).

Now think about Polya’s arguments from the point of view of complexity and computability theory. Again, it sounds very “plausible” that large enough sets of patterns behave badly. Why wouldn’t they? Well, it’s complicated. Consider this example. If someone asks you whether every 3-connected planar cubic graph has a Hamiltonian cycle, this sounds plausible (this is Tait’s conjecture). All small examples confirm it. Planar cubic graphs do have a very special structure. But once you recall that Hamiltonicity is NP-complete even for planar graphs, it doesn’t sound plausible anymore, and the fact that Tutte found a counterexample is no longer surprising. In fact, the decision problem was recently proved to be NP-complete in this case as well. But then again, if you require 4-connectivity, then *every* 4-connected planar graph has a Hamiltonian cycle. Confused enough?

Back to the patterns. Same story here. When you look at many small cases, everything is P-recursive (or yet to be determined). But compare this with Jacob Fox’s theorem that for a random single pattern *π*, the sequence {*C _{n}*(π)} grows much faster than originally expected (cf. Arratia’s conjecture). This suggests that small examples are not representative of the complexity of the problem. Time to think about disproving ALL conjectures based on that kind of evidence.

If there is a moral in this story, it’s that what’s “plausible” is really hard to judge. The more you know, the better you get. Pay attention to small crumbs of evidence. And think negative!

#### What’s wrong with being negative?

Well, conjectures tend to be optimistic; they are wishful thinking by definition. Who would want to conjecture that for some large enough *a*, *b*, *c* and *n*, there exists a solution of *a*^{n} + *b*^{n} = *c*^{n}? However, being so positive has a drawback: sometimes you get things badly wrong. In fact, even polynomial Diophantine equations can be as complicated as one wishes. Unfortunately, there is a strong bias in mathematics against counterexamples. For example, only two of the Clay Millennium Problems automatically pay $1 million for a counterexample. That’s a pity. I understand why they do this; I just disagree with the reasoning. If anything, we should encourage thinking in the directions where there is not enough research, not in the directions where people are already super motivated to resolve the problem.

In general, it is always a good idea to keep an open mind. Forget all this “power of positive thinking”; it’s not for math. If you think a conjecture might be false, ignore everybody and just go for the disproof. Even if it’s one of those famous unsolved conjectures in mathematics. If you don’t end up disproving the conjecture, you might have a bit of trouble publishing the computational evidence. There are some journals that publish such evidence, but not that many. Hopefully, this will change soon…

#### Happy ending

While we were working on our paper, I wrote to Doron Zeilberger asking whether he had ever offered a reward for the NZ-conjecture, and whether it was for the disproof only or for the proof as well. He replied with an unusual answer: the reward was for the proof and the disproof in equal measure. When we finished the paper I emailed Doron. And he paid. Nice… 🙂