### Archive

Archive for the ‘New papers’ Category

## Why you shouldn’t be too pessimistic

In our math research we make countless choices. We choose a problem to work on, decide whether its claim is true or false, what tools to use, which earlier papers to study that might prove useful, whom to collaborate with, which computer experiments might be helpful, etc. Choices, choices, choices… Most of our choices are private. Others are public. This blog post is about wrong public choices that I made, misjudging some conjectures by being overly pessimistic.

#### The meaning of conjectures

As I have written before, conjectures are crucial to the development of mathematics and to my own work in particular. The concept itself is difficult, however. While traditionally conjectures are viewed as some sort of “unproven laws of nature“, that comparison is wildly misleading, as many conjectures are descriptive rather than quantitative. To understand this, note the stark contrast with experimental physics: many mathematical conjectures are not particularly testable yet remain quite interesting. For example, if someone conjectures there are infinitely many Fermat primes, the only way to dissuade such a person is to actually disprove the claim.

There is also an important social aspect to conjecture making. A person who poses a conjecture displays a certain clairvoyance that is respected by other people in the area. Predictions are never easy, especially ones of a precise technical nature, so some bravery or self-assuredness is required. Note that social capital is spent every time a conjecture is posed: a lot of it is lost if the conjecture is refuted, you break even if it is proved relatively quickly, and you gain only if the conjecture becomes popular or is proved, possibly many years later. There is also a “boy who cried wolf” aspect for people who make too many conjectures of dubious quality — people will just tune out.

Now, for the person working on a conjecture, there is also a betting aspect one cannot ignore. As in: are you sure you are working in the right direction? Perhaps the conjecture is simply false and you are wasting your time… I wrote about all this in the post linked above, and the life/career implications for the solver are obvious. Success in solving a well-known conjecture is often regarded much more highly than a comparable result nobody asked about. This may seem unfair, and there is a bit of celebrity culture here. Think about it this way — two lead actors can have similar acting skills, but the one who is a star will usually attract a much larger audience…

#### Stories of conjectures

Not unlike what happens to papers and mathematical results, conjectures also have stories worth telling, even if these stories are rarely discussed at length. In fact, these “conjecture stories” fall into a few types. This is a little bit similar to the “types of scientific papers” meme, but more detailed. Let me list a few scenarios, from the least to the most mathematically helpful:

(1) Wishful thinking. Say you are working on a major open problem. You realize that a famous conjecture A follows from a combination of three conjectures B, C and D, whose sole motivation is their application to A. Some of these smaller conjectures are beyond the existing technology in the area and cannot be checked computationally beyond a few special cases. You then declare this to be your “program” and prove a small special case of C. Somebody points out that D is trivially false. You shrug and replace it with a weaker D’ which suffices for your program but is harder to disprove. Somebody writes a long state-of-the-art paper disproving D’. You shrug again and suggest an even weaker conjecture D”. Everyone else shrugs and moves on.

(2) Reconfirming long-held beliefs. You are working in a major field of study aiming to prove a famous open problem A. Over the years you proved a number of special cases of A and became one of the leaders of the area. You are very optimistic about A, discussing it in numerous talks and papers. Suddenly A is disproved in some esoteric situations, undermining the motivation of much of your older and ongoing work. So you propose a weaker conjecture A’ as a replacement for A, in an effort to salvage both the field and your reputation. This makes everyone in the area happy, and from this point on they completely ignore the disproof of A, pretending it’s completely irrelevant. Meanwhile, they replace A with A’ in all subsequent papers and beamer talk slides.

(3) Accidental discovery. In your ongoing work you stumble upon a coincidence. It seems all objects of a certain kind have some additional property making them “nice“. You are clueless why that would be true, since being nice belongs to another area X. Being nice is also too abstract to be checked easily on a computer. You ask a colleague working in X whether this is obvious/plausible/can be proved, and receive No/Yes/Maybe answers to these three questions. You are either unable to prove the property, or uninterested in the problem, or don’t know much about X. So you mention it in the Final Remarks section of your latest paper, in the vain hope that somebody reads it. For a few years, every time you meet somebody working in X you mention your “nice conjecture” to them, so much so that people laugh at you behind your back.

(4) Strong computational evidence. You are doing computer experiments related to your work. Suddenly certain numbers appear to have an unexpectedly nice formula or generating function. You check with the OEIS, and the sequence is indeed there, but not with the meaning you wanted. You use the “scientific method” to compute a few more terms, and they indeed support your conjectural formula. Convinced this is not an instance of the “strong law of small numbers“, you state the formula as a conjecture.

(5) Being contrarian. You think deeply about a famous conjecture A. Not only do you realize that there is no way to approach A in full generality, but also that it contradicts some intuition you have about the area. However, A was stated by a very influential person N, and many people believe in A, proving it in a number of small special cases. You want to state a non-A conjecture, but realize the inevitable PR disaster of people directly comparing you to N. So you either state that you don’t believe in A, or that you believe in a conjecture B which is either slightly stronger or slightly weaker than non-A, hoping history will prove you right.

(6) Being inspirational. You think deeply about the area and realize that there is a fundamental principle underlying certain structures in your work. Formalizing this principle requires a great deal of effort and results in a conjecture A. The conjecture leads to a large body of work by many people, and even to some counterexamples in esoteric situations, leading to various fixes such as A’. But at that point A’ is no longer the goal but more of a direction in which people work, proving a number of A-related results.

Obviously, there are many other possible stories, while some stories are a mixture of several of these.
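To make scenario (4) concrete, here is a toy sketch of that workflow (my own illustration, not from any paper discussed here): count some objects by brute force, guess a closed form, then test the guess on more terms than you started with. For the illustration the objects are Dyck paths and the “conjectured” formula is the Catalan number formula, so in this case the match is actually a theorem.

```python
from math import comb

def dyck_paths(n):
    """Count paths of 2n steps, each +1 or -1, that start and end at
    height 0 and never dip below 0, by direct dynamic programming."""
    heights = {0: 1}                 # height -> number of partial paths
    for _ in range(2 * n):
        nxt = {}
        for h, cnt in heights.items():
            for h2 in (h + 1, h - 1):
                if h2 >= 0:
                    nxt[h2] = nxt.get(h2, 0) + cnt
        heights = nxt
    return heights.get(0, 0)

def conjectured_formula(n):
    """The "conjectured" closed form -- here the Catalan numbers."""
    return comb(2 * n, n) // (n + 1)

# the "scientific method": check more terms than you originally had
for n in range(12):
    assert dyck_paths(n) == conjectured_formula(n)
```

Of course, agreement on twelve terms is evidence, not proof — which is exactly the epistemological point of scenario (4).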

#### Why do I care? Why now?

In the past few years I’ve been collecting references to my papers which solve or make some progress towards my conjectures and open problems, putting links to them on my research page. Turns out, over the years I made a lot of those. Even more surprisingly, there are quite a few papers which address them. Here is a small sampler, in random order:

(1) Scott Sheffield proved my ribbon tilings conjecture.

(2) Alex Lubotzky proved my conjecture on random generation of a finite group.

(3) Our generalized loop-erased random walk conjecture (joint with Igor Gorodezky) was recently proved by Heng Guo and Mark Jerrum.

(4) Our Young tableau bijections conjecture (joint with Ernesto Vallejo) was resolved by André Henriques and Joel Kamnitzer.

(5) My size Ramsey numbers conjecture led to a series of papers, and was completely resolved only recently by Nemanja Draganić, Michael Krivelevich and Rajko Nenadov.

(6) One of my partition bijection problems was resolved by Byungchan Kim.

The reason I started collecting these links is kind of interesting. I was very impressed with George Lusztig and Richard Stanley‘s lengthy writeups about their collected papers that I mentioned in this blog post. While I don’t mean to compare myself to these giants, I figured the casual reader might want to know if a conjecture in some paper had been resolved. Thus the links on my website. I recommend others also do this, as a navigational tool.

#### What gives?

Well, it looks like none of my conjectures have been disproved yet. That’s good news, I suppose. However, in going over my past research work I did discover that on three occasions, when thinking about other people’s conjectures, I was much too negative. This is probably the result of my general inclination towards “negative thinking“, but each story is worth telling.

(i) Many years ago, I spent some time thinking about Babai’s conjecture, which states that there are universal constants C, c > 0 such that for every simple group G and a generating set S, the diameter of the Cayley graph Cay(G,S) is at most C(log |G|)^c. There has been a great deal of work on this problem; see e.g. this paper by Sean Eberhard and Urban Jezernik, which has an overview and references.

Now, I was thinking about the case of the symmetric group, trying to apply arithmetic combinatorics ideas and going nowhere. In my frustration, in a talk I gave (Galway, 2009), I wrote on the slides that “there is much less hope” to resolve Babai’s conjecture for An than for simple groups of Lie type of bounded rank. Now, strictly speaking that judgement was correct, but much too gloomy. Soon after, Ákos Seress and Harald Helfgott proved a remarkable quasi-polynomial upper bound in this case. To my embarrassment, they referenced my slides as a validation of the importance of their work.

Of course, Babai’s conjecture is very far from being resolved for An. In fact, it is possible that the diameter is always O(n^2). We just have no idea. For simple groups of Lie type of large rank, the existing worst-case diameter bounds are exponential and much too weak compared to the desired bound. As Eberhard and Jezernik amusingly wrote in the paper linked above, “we are still exponentially stupid“…
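For what it’s worth, for tiny n one can compute such Cayley graph diameters exactly by breadth-first search. Below is a toy sketch of mine (the choice of generators — a transposition together with the long cycle and its inverse — is just one convenient example, not tied to any paper above):

```python
import math
from collections import deque

def cayley_diameter(n):
    """Diameter of the Cayley graph of S_n with generating set
    {(0 1), c, c^(-1)}, where c is the long cycle (0 1 ... n-1),
    computed by breadth-first search from the identity."""
    def transpose(p):
        q = list(p)
        q[0], q[1] = q[1], q[0]
        return tuple(q)
    def cycle(p):
        return p[1:] + p[:1]       # apply the long cycle
    def cycle_inv(p):
        return p[-1:] + p[:-1]     # and its inverse
    start = tuple(range(n))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        p = queue.popleft()
        for q in (transpose(p), cycle(p), cycle_inv(p)):
            if q not in dist:
                dist[q] = dist[p] + 1
                queue.append(q)
    assert len(dist) == math.factorial(n)   # the set generates all of S_n
    return max(dist.values())
```

Since the state space has size n!, this brute force is hopeless past n ≈ 10 or so, which is a small hint of why diameter bounds are genuinely hard.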

(ii) When he was my postdoc at UCLA, Alejandro Morales told me about a curious conjecture in this paper (Conjecture 5.1), which claimed that the number of certain nonsingular matrices over the finite field Fq is polynomial in q with positive coefficients. He and his coauthors proved the conjecture in some special cases, but it was wide open in full generality.

Now, I had thought about this type of problem before and was very skeptical. I spent a few days working on the problem to see if any of my tools could disprove it, and failed miserably. But in my stubbornness I remained negative and suggested to Alejandro that he should drop the problem, or at least try to disprove rather than prove the conjecture. I was wrong to do that.

Luckily, Alejandro ignored my suggestion and soon after proved the polynomiality part of the conjecture together with Joel Lewis. Their proof is quite elegant and uses certain recurrences coming from rook theory. These recurrences also allow fast computation of these polynomials. Consequently, the authors ran a number of computer experiments and disproved the positivity-of-coefficients part of the conjecture. So the moral is not to be so negative. Sometimes you need to prove a positive result first before moving to the dark side.
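The polynomiality phenomenon is easiest to see in the unrestricted case: the number of all invertible n × n matrices over F_q is given by a classical product formula, visibly a polynomial in q. The conjecture concerns far subtler counts of matrices with prescribed support, but a sanity check of the easy case might look like this (my own toy code):

```python
from itertools import product

def gl2_bruteforce(q):
    """Count invertible 2x2 matrices over F_q (q prime) by enumeration:
    a matrix is invertible iff its determinant is nonzero mod q."""
    return sum(1 for a, b, c, d in product(range(q), repeat=4)
               if (a * d - b * c) % q != 0)

def gl_formula(n, q):
    """|GL_n(F_q)| = (q^n - 1)(q^n - q) ... (q^n - q^(n-1)),
    a polynomial in q for each fixed n."""
    res = 1
    for i in range(n):
        res *= q ** n - q ** i
    return res

for q in (2, 3, 5, 7):        # primes, so that Z/qZ is a field
    assert gl2_bruteforce(q) == gl_formula(2, q)
```

Note, however, that the coefficients of this polynomial alternate in sign, so even the classical case hints that positivity claims should be treated with care.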

(iii) The final story is about the beautiful Benjamini conjecture in probabilistic combinatorics. Roughly speaking, it says that for every finite vertex-transitive graph G on n vertices with diameter O(n/log n), the critical percolation constant satisfies p_c < 1. More precisely, the conjecture claims that there is p < 1 − ε such that p-percolation on G has a connected component of size > n/2 with probability at least δ, where the constants ε, δ > 0 depend on the constant implied by the O(*) notation, but not on n. Here by “p-percolation” we mean a random subgraph of G obtained by keeping each edge with probability p and deleting it with probability 1 − p, independently for all edges of G.
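To make the definition concrete, here is a toy simulation sketch (my own illustration; I take G to be the n × n torus, a Cayley graph of Zn × Zn) estimating the probability that p-percolation leaves a component on more than half the vertices:

```python
import random

def percolation_giant(n, p, trials=100, seed=0):
    """Estimate the probability that p-percolation on the n-by-n torus
    leaves a connected component on more than half of the n^2 vertices.
    Each edge is kept independently with probability p."""
    rng = random.Random(seed)
    verts = [(i, j) for i in range(n) for j in range(n)]

    def find(parent, v):               # union-find with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    hits = 0
    for _ in range(trials):
        parent = {v: v for v in verts}
        for i in range(n):
            for j in range(n):
                # the two "positive direction" torus edges at (i, j)
                for nb in (((i + 1) % n, j), (i, (j + 1) % n)):
                    if rng.random() < p:          # keep this edge
                        parent[find(parent, (i, j))] = find(parent, nb)
        sizes = {}
        for v in verts:
            r = find(parent, v)
            sizes[r] = sizes.get(r, 0) + 1
        hits += max(sizes.values()) > n * n / 2
    return hits / trials
```

Running this over a range of p shows the sharp emergence of a giant component that the conjecture is concerned with; of course, the interesting (and hard) regime is graphs of large diameter, not the torus.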

Now, Itai Benjamini is a fantastic conjecture maker of the best kind, whose conjectures are both insightful and well motivated. Despite the somewhat technical claim, this conjecture is quite remarkable, as it suggested a finite version of the “p_c < 1” phenomenon for infinite groups of superlinear growth. The latter is the famous Benjamini–Schramm conjecture (1996), which was recently proved in a remarkable breakthrough by Hugo Duminil-Copin, Subhajit Goswami, Aran Raoufi, Franco Severo and Ariel Yadin. While I always believed in that conjecture and even proved a tiny special case of it, finite versions tend to be much harder in my experience.

In any event, I thought a bit about the Benjamini conjecture and talked to Itai about it. He convinced me to work on it. Together with Chris Malon, we wrote a paper proving the claim for some Cayley graphs of abelian groups and for some more general classes of groups. Despite our best efforts, we could not prove the conjecture even for Cayley graphs of abelian groups in full generality. Benjamini noted that the conjecture is tight for products of two cyclic groups, but that justification did not sit well with me. There seemed to be no obvious way to prove the conjecture even for the Cayley graph of Sn generated by a transposition and a long cycle, despite its very small O(n^2) diameter. So we wrote in the introduction: “In this paper we present a number of positive results toward this unexpected, and, perhaps, overly optimistic conjecture.”

As it turns out, it was we who were being overly pessimistic, even if we never actually stated that we believed the conjecture to be false. Most recently, in an amazing development, Tom Hutchcroft and Matthew Tointon proved a slightly weaker version of the conjecture by adapting the methods of Duminil-Copin et al. They assume an O(n/(log n)^c) upper bound on the diameter, for some universal constant c > 1, which they prove to be sufficient. They also extend our approach with Malon to prove the conjecture for all Cayley graphs of abelian groups. So while the Benjamini conjecture is not completely resolved, my objections to it are no longer valid.

#### Final words on this

All in all, it looks like I was never formally wrong even if I was a little dour occasionally (Yay!?). Turns out, some conjectures are actually true or at least likely to hold. While I continue to maintain that not enough effort is spent on trying to disprove the conjectures, it is very exciting when they are proved. Congratulations to Harald, Alejandro, Joel, Tom and Matthew, and posthumous congratulations to Ákos for their terrific achievements!

## The Unity of Combinatorics

April 10, 2021

I just finished my very first book review for the Notices of the AMS. The authors are Ezra Brown and Richard Guy, and the book’s title is the same as this blog post’s. I had mixed feelings when I accepted the assignment to write this. I knew this would take a lot of work (I was wrong — it took a huge amount of work). But the reason I accepted is that I strongly suspected that there is no “unity of combinatorics”, so I wanted to be proved wrong. Here is how the book begins:

One reason why Combinatorics has been slow to become accepted as part of mainstream Mathematics is the common belief that it consists of a bag of isolated tricks, a number of areas: [very long list – IP] with little or no connection between them. We shall see that they have numerous threads weaving them together into a beautifully patterned tapestry.

Having read the book, I continue to maintain that there is no unity. The book review became a balancing act — how do you write a somewhat positive review if you don’t believe in the mission of the book? Here is the first paragraph of the portion of the review where I touch upon themes very familiar to readers of this blog:

As I see it, the whole idea of combinatorics as a “slow to become accepted” field feels like a throwback to a long-forgotten era. This attitude was unfair but reasonably common back in 1970, outright insulting and relatively uncommon in 1995, and utterly preposterous in 2020.

After a lengthy explanation I conclude:

To finish this line of thought, it gives me no pleasure to conclude that the case for the unity of combinatorics is too weak to be taken seriously. Perhaps the unity of mathematics as a whole is an easier claim to establish, as evident from [Stanley’s] quotes. On the other hand, this lack of unity is not necessarily a bad thing, as we would be worse off without the rich diversity of cultures, languages, open problems, tools and applications of different areas.

Enjoy the full review! And please comment on the post with your own views on this alleged “unity”.

Ezra “Bud” Brown gave a talk on the book illustrating many of the connections I discuss in the review. This was at a memorial conference celebrating Richard Guy’s legacy. I was not aware of the video until now. Watch the whole talk.

## How to tell a good mathematical story

As I mentioned in my previous blog post, I was asked to contribute to the Early Career Collection in the Notices of the AMS. The paper is not up on their website yet, but I have already submitted the proofs. So if you can’t wait — the short article is available here. I admit that it takes a bit of chutzpah to teach people how to write, so take it as you will.

Like my previous “how to write” article (see also my blog post), this article is mildly opinionated, but hopefully not so much as to stop being useful. It is again aimed at the novice writer. There is a major difference between the way fiction is written and the way math is written, and I am trying to capture it somehow. To give you some flavor, here is a quote:

What kind of a story? Imagine a non-technical and non-detailed version of the abstract of your paper. It should be short, to the point, and straightforward enough to be a tweet, yet interesting enough for one person to want to tell it, and for the listener to be curious enough to ask for details. Sounds difficult, if not impossible? You are probably thinking that because distilled products always lack flavor compared to the real thing. I hear you, but let me give you some examples.

Take Aesop’s fable “The Tortoise and the Hare” written over 2500 years ago. The story would be “A creature born with a gift procrastinated one day, and was overtaken by a very diligent creature born with a severe handicap.” The names of these animals and the manner in which one lost to another are less relevant to the point, so the story is very dry. But there are enough hints to make some readers curious to look up the full story.

Now take “The Terminator”, the original 1984 movie. The story here is (spoiler alert!) “A man and a machine come from another world to fight in this world over the future of the other world; the man kills the machine but dies at the end.” If you are like me, you probably have many questions about the details, which are in many ways much more exciting than the dry story above. But you see my point – this story is a bit like an extended tagline, yet interesting enough to be discussed even if you know the ending.

## What if they are all wrong?

Conjectures are a staple of mathematics. They are everywhere, permeating every area, subarea and subsubarea. They are diverse enough to avoid a single general adjective. They come in all shapes and sizes. Some of them are famous, classical, general, important, inspirational, far-reaching, audacious, exciting or popular, while others are speculative, narrow, technical, imprecise, far-fetched, misleading or recreational. That’s a lot of beliefs about unproven claims, yet we persist in dispensing them, inadvertently revealing our experience, intuition and biases.

The conjectures also vary in attitude. Like finish line ribbons they all appear equally vulnerable to an outsider, but in fact they differ widely from race to race. Some are eminently reachable, the only question being who will get there first (think 100 meter dash). Others are barely on the horizon, requiring great effort, a variety of tools, and an extended time commitment (think ironman triathlon). The most celebrated, third type are like those Sci-Fi space expeditions requiring multigenerational commitments of hundreds of years, often losing contact with the civilization they left behind. And we can’t forget the romantic fourth type — like the North Star, no one actually wants to reach them, as they are largely used for navigation, to find a direction in uncharted waters.

Now, conjectures famously provide a foundation of the scientific method, but that’s not at all how we actually think of them in mathematics. I argued back in this pointed blog post that citations are the most crucial for day-to-day math development, so one should take utmost care in making references. While this claim is largely uncontroversial and serves as a raison d’être for most GoogleScholar profiles, conjectures provide a convenient idealistic way out. Thus, it’s much more noble and virtuous to say “I dedicated my life to the study of the XYZ Conjecture” (even if one never publishes anything), than “I am working hard writing so many papers to gain the respect of my peers, get a promotion, and provide for my family“. Right. Obviously…

But given this apparent (true or perceived) importance of conjectures, are you sure you are using them right? What if some or many of these conjectures are actually wrong, what then? Should you be flying that starship if there is no there there? An idealist would argue something like “it’s a journey, not a destination“, but I strongly disagree. Getting closer to the truth is actually kind of important, both as a public policy and on an individual level. It is thus pretty important to make sure we are going in the right direction.

#### What are conjectures in mathematics?

That’s a stupid question, right? Conjectures are mathematical claims whose validity we are trying to ascertain. Is that all? Well, yes, if you don’t care whether anyone will actually work on the conjecture. In other words, something about the conjecture needs to be interesting and inspiring.

#### What makes a conjecture interesting?

This is a hard question to answer because it is as much psychological as it is mathematical. A typical answer would be “oh, because it’s old/famous/beautiful/etc.” Uhm, ok, but let’s try to be a little more formal.

One typically argues “oh, that’s because this conjecture would imply [a list of interesting claims and known results]”. Well, ok, but this is self-referential. We already know all those “known results”, so no need to prove them again. And these “claims” are simply other conjectures, so this is really an argument of the type “this conjecture would imply that conjecture”, which is not universally convincing. One can argue: “look, this conjecture has so many interesting consequences”. But this is both subjective and unintuitive. Shouldn’t having so many interesting conjectural consequences suggest that perhaps the conjecture is too strong and likely false? And if the conjecture is likely to be false, shouldn’t this make it uninteresting?

Also, wouldn’t it be interesting if you disproved a conjecture everyone believes to be true? In some sense, wouldn’t it be even more interesting if until now everyone was simply wrong?

None of these are new ideas, of course. For example, faced with the need to justify the “great” BC conjecture, or rather a 123-page survey on the subject (which is quite interesting and doesn’t really need to be justified), the authors suddenly turned reflective. Mindful of the self-referential approach, which they quickly discard, they chose a different tactic:

We believe that the interest of a conjecture lies in the feeling of unity of mathematics that it entails. [M.P. Gomez Aparicio, P. Julg and A. Valette, “The Baum-Connes conjecture“, 2019]

Huh? Shouldn’t math be about absolute truths, not feelings? Also, in my previous blog post, I mentioned Noga Alon‘s quote that mathematics is already “one unit“. If it is, why does it need a new “feeling of unity“? Or is that like one of those new age ideas which stop being true if you don’t reinforce them on every occasion?

If you are confused at this point, welcome to the club! There is no objective way to argue what makes certain conjectures interesting. It’s all in our imagination. Nikolay Konstantinov once told me that “mathematics is a boring subject because every statement is equivalent to saying that some set is empty.” He meant to be provocative rather than uninspiring. But the problem he is highlighting is quite serious.

#### What makes us believe a conjecture is true?

We already established that in order to argue that a conjecture is interesting, we need to argue it’s also true, or at least we want to believe it to be true so as to have all those consequences. Note, however, that we argue that a conjecture is true in exactly the same way we argue it’s interesting: by showing that it holds in some special cases, and that it would imply other conjectures which are believed to be true because they too have been checked in various special cases. So in essence, this gives “true = interesting” in most cases. Right?

This is where it gets complicated. Say you are working on the “abc conjecture”, which may or may not be open. You claim that it has many consequences, which makes it both likely true and interesting. One of them is the negative solution to the Erdős–Ulam problem about the existence of a dense set in the plane with rational pairwise distances. But a positive solution to the E-U problem implies Harborth’s conjecture (aka the “integral Fáry problem“) that every graph can be drawn in the plane with rational edge lengths. So, counterintuitively, if you follow the logic above, shouldn’t you be working on a positive solution to Erdős–Ulam, since it would both imply one conjecture and give a counterexample to another? For the record, I wouldn’t do that; I am just making a polemical point.

I am really hoping you see where I am going. Since there is no objective way to tell if a conjecture is true or not, and what exactly is so interesting about it, shouldn’t we discard our biases and also work towards disproving the conjecture just as hard as trying to prove it?

#### What do people say?

It’s worth starting with a general (if slightly poetic) modern description:

In mathematics, [..] great conjectures [are] sharply formulated statements that are most likely true but for which no conclusive proof has yet been found. These conjectures have deep roots and wide ramifications. The search for their solution guides a large part of mathematics. Eternal fame awaits those who conquer them first. Remarkably, mathematics has elevated the formulation of a conjecture into high art. [..] A well-chosen but unproven statement can make its author world-famous, sometimes even more so than the person providing the ultimate proof. [Robbert Dijkgraaf, The Subtle Art of the Mathematical Conjecture, 2019]

Karl Popper thought that conjectures are foundational to science, even if he somewhat idealized the efforts to disprove them:

[Great scientists] are men of bold ideas, but highly critical of their own ideas: they try to find whether their ideas are right by trying first to find whether they are not perhaps wrong. They work with bold conjectures and severe attempts at refuting their own conjectures. [Karl Popper, Heroic Science, 1974]

Here is how he reconciled somewhat the apparent contradiction:

On the pre-scientific level we hate the very idea that we may be mistaken. So we cling dogmatically to our conjectures, as long as possible. On the scientific level, we systematically search for our mistakes. [Karl Popper, quoted by Bryan Magee, 1971]

Paul Erdős was, of course, a champion of conjectures and open problems. He joked that the purpose of life is “proof and conjecture”, and this theme is repeatedly echoed when people write about him. It is hard to overestimate his output, which included hundreds of talks titled “My favorite problems“. He wrote over 180 papers with collections of conjectures and open problems (nicely assembled by Zbl. Math.).

Peter Sarnak has a somewhat opposite point of view, as he believes one should be extremely cautious about stating a conjecture so people don’t waste time working on it. He said once, only half-jokingly:

Since we reward people for making a right conjecture, maybe we should punish those who make a wrong conjecture. Say, cut off their fingers. [Peter Sarnak, UCLA, c. 2012]

This is not an exact quote — I am paraphrasing from memory. Needless to say, I disagree. I don’t know how many fingers he wished Erdős to lose, since some of Erdős’s conjectures were definitely disproved: one, two, three, four, five, and six. This is not me gloating; quite the opposite, in fact. When you state hundreds of conjectures over a span of almost 50 years, having only a handful disproved is an amazing batting average. It would, however, make me happy if Sarnak’s conjecture were disproved someday.

Finally, there is a bit of a controversy whether conjectures are worth as much as theorems. This is aptly summarized in this quote about yet another champion of conjectures:

Louis J. Mordell [in his book review] questioned Hardy‘s assessment that Ramanujan was a man whose native talent was equal to that of Euler or Jacobi. Mordell [..] claims that one should judge a mathematician by what he has actually done, by which Mordell seems to mean, the theorems he has proved. Mordell’s assessment seems quite wrong to me. I think that a felicitous but unproved conjecture may be of much more consequence for mathematics than the proof of many a respectable theorem. [Atle Selberg, “Reflections Around the Ramanujan Centenary“, 1988]

#### So, what’s the problem?

Well, the way I see it, the efforts made towards proving vs. disproving conjectures are greatly out of balance. Despite all of Popper’s high-minded claims about “severe attempts at refuting their own conjectures“, I don’t think there is much truth to that in modern math sciences. This does not mean that disproofs of famous conjectures aren’t celebrated. Sometimes they are, see below. But it’s clear to me that proofs are celebrated more frequently, and to a much greater degree. I have only anecdotal evidence to support my claim, but bear with me.

Take prizes. Famously, the Clay Math Institute offers $1 million for a solution of any of these major open problems. But look closely at the rules. According to item 5b, except for the P vs. NP problem and the Navier–Stokes Equation problem, it gives nothing ($0) for a disproof of these problems. Why, oh why?? Let’s look into CMI’s “primary objectives and purposes“:

To recognize extraordinary achievements and advances in mathematical research.

So it sounds like CMI does not think that disproving the Riemann Hypothesis needs to be rewarded, because this wouldn’t “advance mathematical research”. Surely you are joking? Whatever happened to “the opposite of a profound truth may well be another profound truth“? Why does the CMI want to put its thumb on the scale and support only one side? Do they not want to find out the solution, whatever it is? Shouldn’t they be eager to dispense with a “wrong conjecture” so as to save numerous researchers from “advances to nowhere“?

I am sure you can see that my blood is boiling, but let’s proceed to the P vs. NP problem. What if it’s independent of ZFC? Clearly, CMI wouldn’t pay for proving that. Why not? It’s not like this kind of thing never happened before (see the obligatory link to CH). Some people believe that (or at least they did in 2012), and some people like Scott Aaronson take this seriously enough. Wouldn’t this be a great result, worthy of an award as much as a proof that P ≠ NP, or at least a nonconstructive proof that P = NP?

If your head is not spinning hard enough, here is another amusing quote:

Of course, it’s possible that P vs. NP is unprovable, but that that fact itself will forever elude proof: indeed, maybe the question of the independence of P vs. NP is itself independent of set theory, and so on ad infinitum! But one can at least say that, if P vs. NP (or for that matter, the Riemann hypothesis, Goldbach’s conjecture, etc.) were proven independent of ZF, it would be an unprecedented development. [Scott Aaronson, P vs. NP, 2016].

Speaking of Goldbach’s Conjecture, the most talked about and the most intuitively correct statement in Number Theory that I know.  In a publicity stunt, for two years there was a $1 million prize by a publishing house for the proof of the conjecture.  Why just for the proof?  I have never heard of anyone who doubts the conjecture.  If I were the insurance underwriter for the prize (I bet they had one), I would have allowed them to use “for the proof or disproof” for a mere extra $100 in premium.  For another $50 I would have let them use “or independent of ZF” — it’s free money, so why not?  It’s such a pernicious idea to reward only one kind of research outcome!

Curiously, even for Goldbach’s Conjecture there is a mild divergence of POVs on what the future holds.  For example, Popper writes (twice in the same book!) that:

[On whether Goldbach’s Conjecture is ‘demonstrable’]  We don’t know: perhaps we may never know, and perhaps we can never know.  [Karl Popper, Conjectures and Refutations, 1963]

Ugh.  Perhaps.  I suppose anything can happen…  For example, our civilization can “perhaps” die out in the next 200 years.  But is that likely?  Shouldn’t the gloomy past be a warning, not a prediction of the future?  The only thing more outrageously pessimistic is this theological gem of a quote:

Not even God knows the number of permutations of 1000 avoiding the 1324 pattern.  [Doron Zeilberger, quoted here, 2005]

Thanks, Doron!  What a way to encourage everyone!  Since we know from numerical estimates that this number is ≈ 3.7 × 10^1017 (see this paper and this follow-up), Zeilberger is suggesting that large pattern avoidance numbers are impossibly hard to compute precisely, already in the range of only about 1018 digits.  I really hope he is proved wrong in his lifetime.

But I digress.  What I mean to emphasize is that there are many ways a problem can be resolved, yet some outcomes are considered more valuable than others.  Shouldn’t the research achievements be rewarded, not the desired outcome?
Here is yet another colorful opinion on this:

Given a conjecture, the best thing is to prove it.  The second best thing is to disprove it.  The third best thing is to prove that it is not possible to disprove it, since it will tell you not to waste your time trying to disprove it.  That’s what Gödel did for the Continuum Hypothesis.  [Saharon Shelah, Rutgers Univ. Colloquium, 2001]

#### Why do I care?

For one thing, disproving conjectures is part of what I do.  Sometimes people are a little shy to state them unambiguously as formal conjectures, so they phrase them as questions or open problems, but then clarify that they believe the answer is positive.  This is a distinction without a difference, or at least I don’t see any (maybe they are afraid of Sarnak’s wrath?).  Regardless, proving their beliefs wrong is still what I do.

For example, here is my old blog post on my disproof of the Noonan–Zeilberger Conjecture (joint with Scott Garrabrant).  And in this recent paper (joint with Danny Nguyen), we disprove in one big swoosh Barvinok’s Problem, Kannan’s Problem, and Woods’s Conjecture.

Just this year I disproved three conjectures:

1. The Kirillov–Klyachko Conjecture (2004) that the reduced Kronecker coefficients satisfy the saturation property (this paper, joint with Greta Panova).
2. The Brandolini et al. Conjecture (2019) that concrete lattice polytopes can multitile the space (this paper, joint with Alexey Garber).
3. Kenyon’s Problem (c. 2005) that every integral curve in R^3 is a boundary of a PL surface comprised of unit triangles (this paper, joint with Alexey Glazyrin).

On top of that, just two months ago in this paper (joint with Han Lyu), we showed that the remarkable independence heuristic by I. J. Good for the number of contingency tables fails badly even for nearly all uniform marginals.  This is not exactly the disproof of a conjecture, but it’s close, since the heuristic was introduced back in 1950 and continues to work well in practice.
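For the curious, Good’s heuristic is easy to test in tiny cases.  Here is a minimal sketch (my own code and indexing, not taken from the paper): one common form of the estimate multiplies the counts for row and column margins taken independently, then divides by the count of tables with only the grand total fixed, and it can be compared against an exponential brute-force count.

```python
from itertools import product
from math import comb, prod

def good_estimate(rows, cols):
    # Good's independence heuristic (one common form): multiply the number
    # of ways to realize the rows and the columns independently, then
    # divide by the number of tables with only the grand total N fixed.
    m, n, N = len(rows), len(cols), sum(rows)
    top = prod(comb(r + n - 1, n - 1) for r in rows)
    top *= prod(comb(c + m - 1, m - 1) for c in cols)
    return top / comb(N + m * n - 1, m * n - 1)

def exact_count(rows, cols):
    # Brute-force count of nonnegative integer matrices with the given
    # row and column sums (exponential -- tiny cases only).
    if not rows:
        return 1 if all(c == 0 for c in cols) else 0
    first, rest = rows[0], rows[1:]
    return sum(
        exact_count(rest, [c - x for c, x in zip(cols, row)])
        for row in product(*(range(min(first, c) + 1) for c in cols))
        if sum(row) == first
    )

# 2x2 tables with all margins equal to 2: the exact count is 3,
# while the heuristic gives 81/35, about 2.31.
print(exact_count([2, 2], [2, 2]), good_estimate([2, 2], [2, 2]))
```

Already at this size the heuristic is off by a constant factor; the point of the paper is that the discrepancy becomes dramatic for large nearly uniform margins.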
In addition, I am currently working on disproving two more old conjectures, which will remain unnamed until the time we actually resolve them (which might never happen, of course).  In summary, I am deeply vested in disproving conjectures.  The reasons why are somewhat complicated (see some of them below).  But whatever my reasons, I demand and naively fully expect that my disproofs be treated on par with proofs, regardless of whether this expectation bears any relation to reality.

#### My favorite disproofs and counterexamples:

There are many.  Here are just a few, some famous and some not-so-famous, in historical order:

1. Fermat‘s conjecture (letter to Pascal, 1640) on primality of Fermat numbers, disproved by Euler (1747)
2. Tait’s conjecture (1884) on Hamiltonicity of graphs of simple 3-polytopes, disproved by W. T. Tutte (1946)
3. General Burnside Problem (1902) on finiteness of periodic groups, resolved negatively by E. S. Golod (1964)
4. Keller’s conjecture (1930) on tilings with unit hypercubes, disproved by Jeff Lagarias and Peter Shor (1992)
5. Borsuk’s Conjecture (1932) on partitions of convex sets into parts of smaller diameter, disproved by Jeff Kahn and Gil Kalai (1993)
6. Hirsch Conjecture (1957) on the diameter of graphs of convex polytopes, disproved by Paco Santos (2010)
7. Woods’s conjecture (1972) on the covering radius of certain lattices, disproved by Oded Regev, Uri Shapira and Barak Weiss (2017)
8. Connes embedding problem (1976), resolved negatively by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen (2020)

In all these cases, the disproofs and counterexamples didn’t stop the research.  On the contrary, they gave a push to further (sometimes numerous) developments in the area.

#### Why should you disprove conjectures?

There are three reasons, of different nature and importance.

First, disproving conjectures is opportunistic.  As mentioned above, people seem to try proving much harder than they try disproving.
This creates niches of opportunity for an open-minded mathematician.

Second, disproving conjectures is beautiful.  Let me explain.  Conjectures tend to be rigid, as in “objects of the type pqr satisfy property abc.”  People like me believe in the idea of “universality“.  Some might call it “completeness” or even “Murphy’s law“, but the general principle is always the same.  Namely: it is not sufficient that one wishes all pqr to satisfy abc in order to actually believe in the implication; rather, there has to be a strong reason why abc should hold.  Barring that, pqr can possibly be almost anything, so in particular non-abc.  While some would argue that non-abc objects are “ugly” or at least “not as nice” as abc ones, the idea of universality means that your objects can be of every color of the rainbow — nice colors, ugly colors, startling colors, quiet colors, etc.  That kind of palette has its own sense of beauty, but it’s an acquired taste, I suppose.

Third, disproving conjectures is constructive.  It depends on the nature of the conjecture, of course, but one is often faced with the necessity to construct a counterexample.  Think of this as an engineering problem of building some pqr which at the same time is not abc.  Such a construction, if at all possible, might be difficult, time consuming and computer assisted.  But so what?  What would you rather do: build a mile-high skyscraper (none exist yet) or prove that this is impossible?  Curiously, in CS Theory both algorithms and (many) complexity results are constructive (you need gadgets).  Even GCT is partially constructive, although explaining that would take us a while.

#### What should the institutions do?

If you are an institution which awards prizes, stop with the legal nonsense: “We award […] only for a publication of a proof in a top journal”.  You need to set up a scientific committee anyway, since otherwise it’s sometimes hard to tell whether someone deserves a prize.  With mathematicians you can expect anything anyway.
Some would post two arXiv preprints, give a few lectures and then stop answering emails.  Others would publish only in a journal where they are Editor-in-Chief.  It’s stranger than fiction, really.

What you should do is say in the official rules: “We have [this much money] and an independent scientific committee which will award any progress on [this problem] partially or in full as they see fit.”  Then a disproof or an independence result will receive just as much as the proof (what’s done is done, what else are you going to do with the money?).  This would also allow some flexibility for partial solutions.  Say, somebody proves Goldbach’s Conjecture for all integers > exp(exp(10^100000)), way way beyond any computational power to check the remaining integers.  I would give this person at least 50% of the prize money, leaving the rest for future developments by possibly many people improving on the bound.  However, under the old prize rules such a person gets bupkes for their breakthrough.

#### What should the journals do?

In short, become more open to results of a computational and experimental nature.  If this sounds familiar, that’s because it’s a summary of Zeilberger’s Opinions, viewed charitably.  He is correct on this.  This includes publishing results of the type “Based on computational evidence we believe in the following UVW conjecture” or “We develop a new algorithm which confirms the UVW conjecture for n < 13″.  These are still contributions to mathematics, and the journals should learn to recognize them as such.

To put this in the context of our theme, it is clear that a lot more effort has been placed on proofs than on finding counterexamples.  However, in many areas of mathematics there are no small counterexamples, so a heavy computational effort is crucial for any hope of finding one.  Such work may not be as glamorous as traditional papers.
But really, when it comes to standards, if a journal is willing to publish the study of something like the “null graphs“, the ship has sailed for you…

Let me give you a concrete example where a computational effort is indispensable.  The curious Lovász conjecture states that every finite connected vertex-transitive graph contains a Hamiltonian path.  This conjecture got to be false.  It hits every red flag — there is really no reason why pqr = “vertex-transitive” should imply abc = “Hamiltonian”.  The best lower bound for the length of the longest (self-avoiding) path is only about the square root of the number of vertices.  In fact, even the original wording by Lovász shows he didn’t believe the conjecture is true (also, I asked him and he confirmed).

Unfortunately, proving that some potential counterexample is not Hamiltonian is computationally difficult.  I once had an idea for one (a nice cubic Cayley graph on “only” 3600 vertices), but Bill Cook quickly found a Hamiltonian cycle, dashing my hopes (it was kind of him to look into this problem).  Maybe someday, when the TSP solvers are fast enough on much larger graphs, it will be time to return to this problem and thoroughly test it on large Cayley graphs.  But say, despite long odds, I succeed and find a counterexample.  Would a top journal publish such a paper?

#### Editor’s dilemma

There are three real criteria for a journal’s evaluation of a solution of an open problem:

1. Is this an old, famous, or well-studied problem?
2. Are the tools interesting or innovative enough to be helpful in future studies?
3. Are the implications of the solution for other problems important enough?

Now let’s make a hypothetical experiment.  Let’s say a paper is submitted to a top math journal which solves a famous open problem in Combinatorics.  Further, let’s say somebody already proved it is equivalent to a major problem in TCS.  This checks criteria 1 and 3.
Until not long ago it would be rejected regardless, so let’s assume this is happening relatively recently.  Now imagine two parallel worlds: in the first world the conjecture is proved on 2 pages using beautiful but elementary linear algebra, while in the second world the conjecture is disproved in a 2 page long summary of a detailed computational search.  So in neither world do we have much to satisfy criterion 2.  Now, a quiz: in which world will the paper be published?

You may have recognized that the first world is the story of Hao Huang‘s elegant proof of the induced subgraphs of hypercubes conjecture, which implies the sensitivity conjecture.  The Annals published it, I am happy to learn, in a welcome break with the past.  But unless we are talking about some 200 year old famous conjecture, I can’t imagine the Annals accepting a short computational paper in the second world.  Indeed, it took a bit of a scandal to accept even the 400 year old Kepler’s conjecture, which was proved in a remarkable computational work.  Now think about this.  Is any of that fair?  Shouldn’t we do better as a community on this issue?

#### What do other people do?

Over the years I asked a number of people about the uncertainty created by the conjectures and what they do about it.  The answers surprised me.  Here I am paraphrasing them:

Some were dumbfounded: “What do you mean this conjecture could be false?  It has to be true, otherwise nothing I am doing makes much sense.”

Others were simplistic: “It’s an important conjecture.  Famous people said it’s true.  It’s my job to prove it.”

Third were defensive: “Do you really think this conjecture could be wrong?  Why don’t you try to disprove it then?  We’ll see who is right.”

Fourth were biblical: “I tend to work 6 days a week towards the proof and one day towards the disproof.”

Fifth were practical: “I work on the proof until I hit a wall.  I use the idea of this obstacle to try constructing potential counterexamples.
When I find an approach to discard such counterexamples, I try to generalize the approach to continue working on the proof.  Continue until either side wins.”

If the last two seem sensible to you, that’s because they are.  However, I bet the fourth are just grandstanding — no way they actually do that.  The fifth sound great when this is possible, but that’s exceedingly rare, in my opinion.  We live in a technical age when proving new results often requires a great deal of effort and technology.  You likely have tools and intuition to work in only one direction.  Why would you want to waste time working in another?

#### What should you do?

First, remember to make conjectures.  Every time you write a paper, tell a story of what you proved.  Then tell a story of what you wanted to prove but couldn’t.  State it in the form of a conjecture.  Don’t be afraid of being wrong, or of oversharing your ideas.  It’s a downside, sure.  But the upside is that your conjecture might prove very useful to others, especially young researchers.  It might advance the area, or help you find a collaborator to resolve it.

Second, learn to check your conjectures computationally in many small cases.  It’s important to give supporting evidence so that others take your conjectures seriously.

Third, learn to make experiments, to explore the area computationally.  That’s how you make new conjectures.

Fourth, understand yourself.  Your skills, your tools.  Your abilities, like problem solving, absorbing information from the literature, or making bridges to other fields.  Faced with a conjecture, use this knowledge to understand whether at least in principle you might be able to prove or disprove it.

Fifth, actively look for collaborators.  Those who have the skills, tools, or abilities you are missing.  More importantly, they might have a different POV on the validity of the conjecture and on how one might want to attack it.  Argue with them and learn from them.

Sixth, be brave and optimistic!
Whether you decide to prove or disprove a conjecture, or simply state a new conjecture, go for it!  Ignore the judgements by the likes of Sarnak and Zeilberger.  Trust me — they don’t really mean it.

## Some good news

April 17, 2019

Two of my former Ph.D. students won major prizes recently — Matjaž Konvalinka and Danny Nguyen.  Matjaž is an Associate Professor at the University of Ljubljana, and Danny is a Lewis Research Assistant Professor at the University of Michigan, Ann Arbor.  Congratulations to both of them!

(1)  The 2019 Robbins Prize is awarded to Roger Behrend, Ilse Fischer and Matjaž Konvalinka for their paper “Diagonally and antidiagonally symmetric alternating sign matrices of odd order”.  The Robbins Prize, given in Combinatorics and related areas of interest, is named after the late David P. Robbins and is awarded once every 3 years by the AMS and the MAA.

In many ways, this paper completes the long project of enumerating alternating sign matrices (ASMs) initiated by William Mills, David Robbins, and Howard Rumsey in the early 1980s.  The original #ASM(n) = #TSSCPP(n) conjecture follows from Andrews’s proof of the conjectured product formula for #TSSCPP(n), and Zeilberger’s 84 page computer assisted proof of the same conjectured product formula for #ASM(n).  This led to a long series of remarkable developments, which include Kuperberg’s proof using the Izergin–Korepin determinant for the six vertex model, the Cantini–Sportiello proof of the Razumov–Stroganov conjecture, and a recent self-contained determinantal proof of the formula for the number of ASMs by Fischer.  Bressoud’s book (and these talk slides) is a good introduction.  But the full story is yet to be written.

(2)  The 2018 Sacks Prize is awarded to Danny Nguyen for his UCLA Ph.D. dissertation on the complexity of short formulas in Presburger Arithmetic (PA) and many related works (some joint with me, some with others).  See also the UCLA announcement.
The Sacks Prize is given by the international Association for Symbolic Logic for “the most outstanding doctoral dissertation in mathematical logic“.  It is sometimes shared between two awardees, and sometimes not given at all.  This year Danny is the sole winner of the prize.

Danny’s dissertation is a compilation of eight (!) papers Danny wrote during his graduate studies, all on the same or closely related subjects.  These papers advance and mostly finish off the long program of understanding the boundary of what’s feasible in PA.  The most important of these is our joint FOCS paper, which basically says that Integer Programming and Parametric Integer Programming are all that’s left in P, while all longer formulas are NP-hard.  See the Featured MathSciNet Review by Sasha Barvinok and an overlapping blog post by Gil Kalai discussing these results.  See also Danny’s FOCS talk video and my MSRI talk video presenting this work.

## ICM Paper

March 14, 2018

Well, I finally finished my ICM paper.  It’s only 30 pp, but it took many sleepless nights to write and maybe about 10 years to understand what exactly I want to say.  The published version will be a bit shorter – I had to cut section 4 to satisfy their page limitations.

Basically, I give a survey of various recent and not-so-recent results in Enumerative Combinatorics around three major questions: (1) What is a formula?  (2) What is a good bijection?  (3) What is a combinatorial interpretation?  Not that I answer these questions; rather, I explain how one could answer them from a computational complexity point of view.  I tried to cover as much ground as I could without overwhelming the reader.  Clearly, I had to make a lot of choices, and a great deal of beautiful mathematics had to be omitted, sometimes in favor of the Computational Combinatorics approach.  Also, much of the survey surely reflects my own POV on the subject.  I sincerely apologize to everyone I slighted and who disagrees with my opinion!
Hope you still enjoy the reading.  Let me mention that I will wait for a bit before posting the paper on the arXiv.  I very much welcome all comments and suggestions!  Post them here or email me privately.

P.S.  In thinking of how to approach this paper, I read a large number of papers in previous ICM proceedings, e.g. papers by Noga Alon, Mireille Bousquet-Mélou, Paul Erdős, Philippe Flajolet, Marc Noy, János Pach, Richard Stanley, Benny Sudakov, and many others.  They are all terrific and worth reading, even if just to see how the field has been changing over the years.  I also greatly benefited from a short introductory article by Doron Zeilberger, which I strongly recommend.

## Fibonacci times Euler

November 5, 2016

Recall the Fibonacci numbers $F_n$ given by 1, 1, 2, 3, 5, 8, 13, 21…  There is no need to define them.  You all know.  Now take the Euler numbers (OEIS) $E_n$: 1, 1, 1, 2, 5, 16, 61, 272…  This is the number of alternating permutations in $S_n$, with the exponential generating function $\sum_{n=0}^\infty E_n t^n/n! = \tan(t)+\sec(t)$.  Both sequences are incredibly famous.  Less known are the connections between them.

(1)  Define the Fibonacci polytope $\Phi_n$ to be the convex hull of the 0/1 points in $\Bbb R^n$ with no two 1’s in a row.  Then $\Phi_n$ has $F_{n+1}$ vertices and vol$(\Phi_n)=E_n/n!$.  This is a nice exercise.

(2)  $F_n \cdot E_n \ge n!$ (by just a little).  For example, $F_4 \cdot E_4 = 5 \cdot 5 = 25 > 4! = 24$.  This follows from the fact that $F_n \sim \frac{1}{\sqrt{5}} \, \phi^{n+1}$ and $E_n\sim \frac{4}{\pi}\left(\frac{2}{\pi}\right)^{n} n!$, where $\phi=(1+\sqrt{5})/2$ is the golden ratio.  Thus, the product $F_n \cdot E_n \sim c\, n! \left(\frac{2\phi}{\pi}\right)^n$.  Since $\pi \approx 3.14$ and $2\phi \approx 3.24$, the inequality $F_n \cdot E_n \ge n!$ is easy to see, but it is still a bit surprising that the numbers are so close.

Together with Greta Panova and Alejandro Morales we wrote a little note “Why is π < 2φ?” which gives a combinatorial proof of (2) via a direct surjection.
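Both claims in (2) are easy to check numerically.  A quick sketch (my own code; the Euler numbers are computed with the standard boustrophedon recurrence, here with both sequences indexed from 0):

```python
from math import factorial

def fib(n):
    # F_0 = F_1 = 1, F_n = F_{n-1} + F_{n-2}: 1, 1, 2, 3, 5, 8, ...
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def euler(n):
    # Zigzag (Euler) numbers E_n: 1, 1, 1, 2, 5, 16, 61, 272, ...
    # computed row by row with the boustrophedon (Entringer) recurrence.
    row = [1]
    for _ in range(n):
        new = [0]
        for x in reversed(row):
            new.append(new[-1] + x)
        row = new
    return row[-1]

# Check (2): F_n * E_n >= n! in small cases.
for n in range(15):
    assert fib(n) * euler(n) >= factorial(n)
print(fib(4) * euler(4), factorial(4))  # 25 24
```

The ratio $F_n E_n / n! \sim c\,(2\phi/\pi)^n$ grows, but very slowly, which the small cases above already suggest.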
Thus we obtain an indirect proof of the inequality in the title.  The note is not a research article; rather, it is aimed at a general audience of college students.  We will not be posting it on the arXiv, so I figure this blog is a good place to advertise it.  The note also explains that the inequality (2) follows from Sidorenko’s theorem on complementary posets as well.

Let me briefly mention a connection between (1) and (2) which is not mentioned in the note.  I will assume you just spent 5 min and read the note at this point.  Following Stanley, the volume of $\Phi_n$ is equal to the volume of the chain polytope (= stable set polytope); see Two Poset Polytopes.  But the latter is exactly the polytope that Bollobás, Brightwell and Sidorenko used in their proof of the upper bound via polar duality.

## The power of negative thinking, part I. Pattern avoidance

May 26, 2015

In my latest paper with Scott Garrabrant we disprove the Noonan–Zeilberger Conjecture.  Let me informally explain what we did and why people should try to disprove conjectures more often.  This post is the first in a series.  Part II will appear shortly.

#### What did we do?

Let F ⊂ S_k be a finite set of permutations and let C_n(F) denote the number of permutations σ ∈ S_n avoiding the set of patterns F.  The Noonan–Zeilberger conjecture (1996) states that the sequence {C_n(F)} is always P-recursive.  We disprove this conjecture.

Roughly, we show that every Turing machine T can be simulated by a set of patterns F, so that the number a_n of paths of length n accepted by T is equal to C_n(F) mod 2.  I am oversimplifying things quite a bit, but that’s the gist.  What is left is to show how to construct a machine T such that {a_n} is not equal (mod 2) to any P-recursive sequence.  We have done this in our previous paper, where we give a negative answer to a question by Kontsevich.  There, we constructed a set of 19 generators of GL(4,Z) such that the sequence of return probabilities is not P-recursive.
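As an aside, for very small n the numbers C_n(F) can be computed by brute force.  A minimal sketch (my own code, not from the paper), which recovers the MacMahon–Knuth fact that a single pattern of length 3 gives the Catalan numbers:

```python
from itertools import combinations, permutations

def avoids(sigma, pattern):
    # True if sigma contains no subsequence order-isomorphic to pattern.
    k = len(pattern)
    for idx in combinations(range(len(sigma)), k):
        if all((sigma[idx[a]] < sigma[idx[b]]) == (pattern[a] < pattern[b])
               for a in range(k) for b in range(a + 1, k)):
            return False
    return True

def count_avoiders(n, patterns):
    # C_n(F): permutations of {1, ..., n} avoiding every pattern in F.
    return sum(all(avoids(s, p) for p in patterns)
               for s in permutations(range(1, n + 1)))

print([count_avoiders(n, [(1, 2, 3)]) for n in range(1, 7)])
# [1, 2, 5, 14, 42, 132] -- the Catalan numbers
```

Of course, this runs in time n! times polynomial, which is exactly why experimenting with patterns such as (1324) is so painful.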
When all things are put together, we obtain a set F of about 30,000 permutations in S_80 for which {C_n(F)} is non-P-recursive.  Yes, the construction is huge, but so what?  What’s a few thousand permutations between friends?  In fact, perhaps a single pattern (1324) is already non-P-recursive.  Let me explain the reasoning behind what we did and why our result is much stronger than it might seem.

#### Why we did what we did

First, a very brief history of the NZ-conjecture (see Kitaev’s book for a comprehensive history of the subject and vast references).  Traditionally, pattern avoidance dealt with exact and asymptotic counting of pattern avoiding permutations for small sets of patterns.  The subject was initiated by MacMahon (1915) and Knuth (1968), who showed that we get Catalan numbers for patterns of length 3.  The resulting combinatorics is often so beautiful, or at least so plentiful, that it’s hard to imagine how it can fail to be; thus the NZ-conjecture.  It was clearly very strong, but resisted all challenges until now.  Wilf reports that Richard Stanley disbelieved it (Richard confirmed this to me recently as well), but hundreds of papers seemed to confirm its validity in numerous special cases.

Curiously, the case of the (1324) pattern proved difficult early on.  It remains unresolved whether C_n(1324) is P-recursive or not.  This pattern broke Doron Zeilberger’s belief in the conjecture, and he proclaimed that it’s probably non-P-recursive, and thus the NZ-conjecture is probably false.  When I visited Doron last September he told me he no longer has a strong belief in either direction and encouraged me to work on the problem.  I took a train back to Manhattan looking over New Jersey’s famously scenic Amtrak route.  Somewhere near the Pulaski Skyway I called Scott and told him to drop everything, that we should start working on this problem.

You see, when it comes to pattern avoidance, things move from best to good to bad to awful.
When they are bad, they are so bad that it can be really hard to prove that they are bad.  But why bother – we can try to figure out something awful instead.  The set of patterns that we constructed in our paper is so awful that proving it is awful ain’t so bad.

#### Why is our result much stronger than it seems?

That’s because the proof extends to other results.  Essentially, we are saying that everything bad you can do with Turing machines, you can do with pattern avoidance (mod 2).  For example, why is (1324) so hard to analyze?  That’s because it’s hard to compute, both theoretically and experimentally – the existing algorithms are recursive and exponential in n.

Until our work, the existing hope for disproving the NZ-conjecture hinged on finding an appropriately bad set of patterns such that computing {C_n(F)} is easy.  Something like this sequence, which has a nice recurrence but is provably non-P-recursive.  Maybe.  But in our paper, we can do worse, a lot worse…

We can make a finite set of patterns F such that computing {C_n(F) mod 2} is “provably” non-polynomial (Th 1.4).  Well, we use quotes because of the complexity theory assumptions we must make.  The conclusion is much stronger than non-P-recursiveness, since every P-recursive sequence can be computed by a trivial algorithm polynomial in n.  But wait, it gets worse!  We prove that for two sets of patterns F and G, the problem “C_n(F) = C_n(G) mod 2 for all n” is undecidable (Th 1.3).  This is already a disaster, which takes time to sink in.  But then it gets even worse!  Take a look at our Corollary 8.1.  It says that there are two sets of patterns F and G such that you can never prove nor disprove that C_n(F) = C_n(G) mod 2.  Now that’s what I call truly awful.

#### What gives?

Well, the original intuition behind the NZ-conjecture was clearly wrong.  Many nice examples are not good enough evidence.  But the conjecture was so plausible!  Where did the intuition fail?
Well, I went to re-read Polya’s classic “Mathematics and Plausible Reasoning“, and it all seemed reasonable — both Polya’s arguments and the NZ-conjecture (if you don’t feel like reading the whole book, at least read Barry Mazur’s interesting and short followup).  Now think about Polya’s arguments from the point of view of complexity and computability theory.  Again, it sounds very “plausible” that large enough sets of patterns behave badly.  Why wouldn’t they?

Well, it’s complicated.  Consider this example.  If someone asks you whether every 3-connected planar cubic graph has a Hamiltonian cycle, this sounds plausible (this is Tait’s conjecture).  All small examples confirm it.  Planar cubic graphs do have a very special structure.  But if you think about the fact that Hamiltonicity is NP-complete even for planar graphs, it doesn’t sound plausible anymore.  The fact that Tutte found a counterexample is no longer surprising.  In fact, the decision problem was recently proved to be NP-complete in this case as well.  But then again, if you require 4-connectivity, then every planar graph has a Hamiltonian cycle.  Confused enough?

Back to the patterns.  Same story here.  When you look at many small cases, everything is P-recursive (or yet to be determined).  But compare this with Jacob Fox’s theorem that for a random single pattern π, the sequence {C_n(π)} grows much faster than originally expected (cf. Arratia’s Conjecture).  This suggests that small examples are not representative of the complexity of the problem.  Time to think about disproving ALL conjectures based on that kind of evidence.

If there is a moral in this story, it’s that what’s “plausible” is really hard to judge.  The more you know, the better you get.  Pay attention to small crumbs of evidence.  And think negative!

#### What’s wrong with being negative?

Well, conjectures tend to be optimistic – they are wishful thinking by definition.
Who would want to conjecture that for some large enough a, b, c and n, there exists a solution of a^n + b^n = c^n?  However, being so positive has a drawback – sometimes you get things badly wrong.  In fact, even polynomial Diophantine equations can be as complicated as one wishes.  Unfortunately, there is a strong bias in Mathematics against counterexamples.  For example, only two of the Clay Millennium Problems automatically pay $1 million for a counterexample.  That’s a pity.  I understand why they do this, I just disagree with the reasoning.  If anything, we should encourage thinking in the direction where there is not enough research, not in the direction where people are already super motivated to resolve the problem.

In general, it is always a good idea to keep an open mind.  Forget all this “power of positive thinking“, it’s not for math.  If you think a conjecture might be false, ignore everybody and just go for the disproof.  Even if it’s one of those famous unsolved conjectures in mathematics.  If you don’t end up disproving the conjecture, you might have a bit of trouble publishing the computational evidence.  There are some journals that do that, but not that many.  Hopefully, this will change soon…

#### Happy ending

When we were working on our paper, I wrote to Doron Zeilberger asking whether he ever offered a reward for the NZ-conjecture, and whether it was for the proof only or for the disproof as well.  He replied that the award was unusual: for the proof and the disproof in equal measure.  When we finished the paper I emailed Doron.  And he paid.  Nice… 🙂

## Computational combinatorics

July 25, 2012

Say, you have written a paper.  You want to submit it to a journal.  But in what field?  More often than not, the precise field/area designation for the paper is easy to determine, or at least it is easy to place it into some large category.  Even if the paper is in between fields, this is often a well regarded and understood situation; nothing wrong with that.  Say, the paper resolves a problem in field X with tools from field Y.  Submit to an X-journal, unless the application is routine and the crux of the innovation is in refining the tools — then submit to a Y-journal.

However, when it comes to CS, things are often less clear.  This is in part because of the novelty of the subject, and in part due to the situation in CS theory, which is in constant flux and in search of direction (the short Wikipedia article is rather vague and unhelpful, even more so than these generic WP articles tend to be).

The point of this post is to introduce/describe the area of “Computational Combinatorics“. Although Google returns 20K hits for this term (including experts, courses, textbooks), the meaning is either obscure or misleading. We want to clarify what we mean, critique everyone else, and stake a claim to the term!

1) What I want computational combinatorics to mean is “theoretical CS aspects of combinatorics” (and to a lesser extent “practical…”), which is essentially a part of combinatorics whose tools and statements use computer science terminology (for a concise description of the complexity aspects, see the dated but excellent survey by David Shmoys and Eva Tardos). I will give a recent example below, but basically if you want to prove a negative result in combinatorics (as in “one should not expect a nice formula for the number of 3-colorings or perfect matchings of a general graph”), then CS language (and basic tools) is the way to go. When people use “computational combinatorics” to mean “basic results in combinatorics that are useful for further studies of computer science”, they are being misleading. A proper name for such a course is “Introduction to Combinatorics” or “Combinatorics for Computer Scientists”, etc.

2) In two recent papers, Jed Yang and I proved several complexity results on tilings. To explain them, let me start with the following beautiful result by Éric Rémila, built on earlier papers by Thurston, Conway & Lagarias, and Kenyon & Kenyon:

Tileability of a simply connected region in the plane with two types of rectangles can be decided in polynomial time.

First, we show that when the number of rectangles is sufficiently large (originally about $10^6$, later somewhat decreased), one should not expect such a result. Formally, we prove that tileability is NP-hard in this case. We then show that in three dimensions the topology of the region gives no advantage. Among other results, we prove that tileability of contractible regions with $2\times 2\times 1$ slabs is NP-complete, and that counting $2\times 1\times 1$ domino tilings of contractible regions is #P-complete.

Now, the CS Theory point of view on these types of results has changed drastically over time. Roughly, 30 years ago they were mainstream. About 20 years ago they were still of interest, but no longer important. Nowadays they are marginal at best – the field has moved on. My point is that the results are of interest in Combinatorics and Combinatorics only. Indeed, it has long been observed that applying combinatorial group theory to tilings (as done by Thurston, Rémila, etc.) is more of an art than a science. Although we believe that already for three general rectangles in the plane the problem is intractable, proving such a result is exceedingly difficult. Our various results solve weak versions of this problem.

3) The ontology (classification) in mathematics has always been a mess (this is not unusual). For example, combinatorial enumeration is the same as enumerative combinatorics. On the other hand, as far as I can tell, analytic geometry has nothing to do with geometric analysis. There is also no “monotonicity” to speak of: even though group theory is a part of algebra, geometric group theory is neither a part of geometric algebra, nor of algebraic geometry, although it traditionally contains combinatorial group theory. Distressingly, there are two completely different (competing) notions of “algebraic combinatorics” (see here and there), along with algebraic graph theory, which is remarkably connected to both of these. The list goes on.

4) So, why name a field at all, given the mess we have? That’s mostly because we really want to incorporate the CS aspects of combinatorics as a legitimate branch of mathematics. Theoretical CS is already combinatorial over the top (check out the number of people who believe that P=?NP will be resolved with combinatorics), but when a problem arises in combinatorics from within, this part of combinatorics needs a name to call home. I propose using the term computational combinatorics, in line with computational group theory, computational geometry, computational topology, etc., as a part of the loosely defined computational mathematics. I feel that the adjective “computational” is broad and flexible enough to incorporate both theoretical/complexity aspects as well as some experimental work and combinatorial software development (as in WZ theory), compared to other adjectives, such as “algorithmic”, “computable”, “effective”, “computer-sciency”, etc. So, please, AMS, next time you revise your MSC, consider adding “Computational Combinatorics” as 05Fxx.

P.S. A well known petition asks for graph theory to have its own MSC code (specifically, 07), due to the heavy imbalance in the number of graph theory papers vs. the rest of combinatorics. Without venturing an opinion, let me mention that perhaps adding a top level “computational combinatorics” subfield of combinatorics would remedy this as well – surely some papers would migrate there from graph theory. Just a thought…

## A lost bijection

July 11, 2012

One can argue that some proofs are from the book, while others are not. Some such proofs are short but non-elementary, others are elementary but slightly tedious, yet others are short but mysterious, etc. (see here for these examples). BTW, can one result have two or more “proofs from the book”?

However, very occasionally you come across a proof that is short, elementary, and completely straightforward. One would call such a proof trivial, if not for the fact that it’s brand new. I propose a new term for these – let’s call them lost proofs, loosely defined as proofs which should have been discovered decades or centuries ago, but evaded this fate through whatever accidental historical circumstances (as in lost world, get it?). And when you find such a proof, you sort of can’t believe it. Really? This is true? This is new? Really? Really?!?

Let me describe one such lost proof. This story started in 1857 when Arthur Cayley wrote “On a problem in the partition of numbers”, with the following curious result:

The number of integer sequences $(a_1,\ldots,a_n)$ such that $1\le a_1 \le 2$, and $1\le a_{i+1} \le 2 a_i$ for $1\le i < n$, is equal to the total number of partitions of integers $N \in \{0,1,\ldots,2^{n}-1\}$ into parts $1,2,4,\ldots,2^{n-1}$.

For example, for $n=2$ the first set, of sequences, is $\{(1, 1), (1, 2), (2, 1), (2, 2), (2, 3), (2, 4)\}$, while the second, of partitions, is $\{21, 2, 1^3, 1^2, 1, \varnothing\}$, both with six elements.
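Cayley’s identity is easy to check by brute force for small $n$. Here is a minimal sketch in Python (the function names are mine, not from any of the papers) that enumerates both sets and compares their sizes:

```python
def cayley_sequences(n):
    """Sequences (a_1,...,a_n) with 1 <= a_1 <= 2 and 1 <= a_{i+1} <= 2*a_i."""
    seqs = [(1,), (2,)]
    for _ in range(n - 1):
        seqs = [s + (b,) for s in seqs for b in range(1, 2 * s[-1] + 1)]
    return seqs

def partitions_into_powers(N, n):
    """Partitions of N into parts 1, 2, 4, ..., 2^(n-1), as multiplicity tuples."""
    parts = [2 ** k for k in range(n)]
    def rec(rem, i):
        if i < 0:
            return [()] if rem == 0 else []
        result = []
        for m in range(rem // parts[i] + 1):  # multiplicity of the part parts[i]
            for tail in rec(rem - m * parts[i], i - 1):
                result.append((m,) + tail)
        return result
    return rec(N, n - 1)

for n in range(1, 6):
    lhs = len(cayley_sequences(n))
    rhs = sum(len(partitions_into_powers(N, n)) for N in range(2 ** n))
    print(n, lhs, rhs)
```

For $n = 1, 2, 3$ both counts come out to 2, 6 and 26, in agreement with the $n=2$ example above.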

This result was discovered about 20-25 years too soon. In 1879-1882, while at Johns Hopkins University, Sylvester pioneered what he called a “constructive partition theory”, and had he seen his good friend’s older paper, he probably would have thought about finding a bijective proof. Apparently, he didn’t. In all fairness to everybody involved, Cayley had written over 900 papers.

Now, the problem was rediscovered by Minc (1959), and notably by Andrews, Paule, Riese and Strehl (2001) as a byproduct of computer experiments. Both Cayley’s and APRS’s proofs are analytic (using generating functions). More recent investigations by Corteel, Lee and Savage (2005 and 2007 papers), and Beck, Braun and Le (2011) proved various extensions, still using generating functions.

We are now ready for the “lost proof”, which is really a lost bijection. It’s given by just one formula, due to Matjaž Konvalinka and me:

$\Psi: (a_1,a_2,a_3,\ldots,a_n) \to [2^{n-1}]^{2-a_1} [2^{n-2}]^{2a_1-a_2}[2^{n-3}]^{2a_2-a_3} \ldots 1^{2a_{n-1}-a_n}$

For example, for $n=2$ we get the following bijection:

$\Psi: (1,1) \to 21, \ (1,2) \to 2, \ (2,1) \to 1^3, \ (2,2) \to 1^2, \ (2,3) \to 1, \ (2,4) \to \varnothing.$

Of course, once the bijection is found, the proof of Cayley’s theorem is completely straightforward. Also, once you have such an affine formula, many extensions become trivial.
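For the skeptical reader, the map $\Psi$ can be checked mechanically. Below is a quick Python sketch (names are mine) applying the affine formula, in which the exponent of the part $2^{n-i}$ is $2a_{i-1} - a_i$ (with the convention $a_0 = 1$); it verifies that on every Cayley sequence the exponents are nonnegative, the image is a partition of some $N \in \{0,\ldots,2^n-1\}$, and the map is injective, hence bijective by the counting result:

```python
def psi(a):
    """The affine map: a Cayley sequence -> multiplicities of parts 2^(n-1), ..., 2, 1."""
    n = len(a)
    return tuple([2 - a[0]] + [2 * a[i - 1] - a[i] for i in range(1, n)])

def cayley_sequences(n):
    """Sequences (a_1,...,a_n) with 1 <= a_1 <= 2 and 1 <= a_{i+1} <= 2*a_i."""
    seqs = [(1,), (2,)]
    for _ in range(n - 1):
        seqs = [s + (b,) for s in seqs for b in range(1, 2 * s[-1] + 1)]
    return seqs

for n in range(1, 6):
    seqs = cayley_sequences(n)
    images = set()
    for a in seqs:
        e = psi(a)
        assert all(m >= 0 for m in e)                     # valid multiplicities
        N = sum(m * 2 ** (n - 1 - i) for i, m in enumerate(e))
        assert 0 <= N <= 2 ** n - 1                       # partition of some N in {0,...,2^n - 1}
        images.add(e)
    assert len(images) == len(seqs)                       # injective, hence bijective
```

For $n=2$ this reproduces the six pairs listed above, e.g. $(1,1) \mapsto (1,1)$, i.e. one part 2 and one part 1, which is the partition $21$.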

One wonders how one can come up with such a bijection. The answer is: simply compute it assuming there is an affine map. It tends to be unique. Also, we have done this before (for convex partitions and LR-coefficients). There is a reason why this bijection is so similar to Sylvie Corteel’s “brilliant human-generated one-line proof”, in the words of Doron Zeilberger. So it’s just amazing that this simple proof has been “lost” for over 150 years, until now…

See our paper (Konvalinka and Pak, “Cayley compositions, partitions, polytopes, and geometric bijections”, 2012) for applications of this “lost bijection” and my survey (Pak, “Partition Bijections, a Survey”, 2006) for more on partition bijections.

Categories: Mathematics, New papers