## What math stories to tell and not to tell?

*Storytelling* can be surprisingly powerful. When a story is skillfully told, you get an almost magical feeling of being a part of it, making you care deeply about its protagonists. Even if under ordinary circumstances you have zero empathy for Civil War era outlaws or the emperor penguins of Antarctica, you may suddenly find yourself engrossed in their fortunes. This is a difficult skill to master, but the effects are visible even when beginners use it in earnest.

Recently I started thinking about the kind of stories mathematicians should be telling. This was triggered by Angela Gibney‘s kind invitation to contribute an article on math writing to the *Early Career Collection* in the *Notices of the AMS*. So I looked at a few older articles and found them just **wonderful**. I am not the target audience for some of them, but I just kept reading them all one after another until I exhausted the whole collection.

My general advice — read the collection! Read a few pieces by some famous people or some people you know. If you like them, keep on reading. As I wrote in this blog post, you rarely get an insight into a mathematician’s thinking unless they happen to write an autobiography or give an interview. While this is more of a “*how to*” genre, most pieces are written in the first person and do tell some interesting stories or offer some curious points of view.

It is possible I am the last person to find out about the collection. I am not a member of the *AMS*, I don’t read the *Notices*, and it’s been a long time since anyone considered me “early career”. I found a few articles a little self-centered (but who am I to judge), and I would quibble with some advice (see below). But even those articles I found compelling and thought-provoking.

Having read the collection, I decided to write about *mathematical storytelling*. This is not something that comes naturally to most people in the field. Math stories (as opposed to stories about mathematicians) tend to be rather dry and unexciting, especially in the early years of studying. I will blog my own article *some other time*, but for now let me address the question in the title.

#### Stories to tell

With a few notable exceptions, just about all stories are worth telling. Whether in your autobiography or in your personal blog, as long as they are interesting to *somebody* — it’s all good. Given the lack of good stories, or any math stories really, it’s a good bet *somebody* will find your stories interesting. Let me expound on that.

Basically, anything **personal** works. To give examples from the collection, see e.g. the stories by Mark Andrea de Cataldo, Alicia Prieto-Langarica, Terry Tao and John Urschel. Most autobiographies are written in this style, but a short blog post is also great. Overcoming the embarrassment caused by such public disclosure can be difficult, which makes it all the more valuable to readers.

Anything **historical** works, from full-length monographs on the history of math to short point of view pieces. Niche and off the beaten path stories are especially valuable. I personally like the classical *History of Mathematical Notations* by Florian Cajori, and *Combinatorics: Ancient & Modern*, a nice collection edited by Robin Wilson and John Watkins, with several articles authored by names you will recognize. Note that oral history can also be very valuable; see the kind of stories discussed by László Lovász and Endre Szemerédi mentioned in this blog post and *Dynkin’s interviews* I discussed here.

Anything **juicy** works. I mean, if you have a story of some famous mathematician doing something unusual (good or bad, or just plain weird), that attracts attention. This was the style of Steven Krantz’s two *Mathematical Apocrypha* books, with many revealing and embarrassing anecdotes giving a sense of a bygone era.

Anything **inspirational** works. A beautiful example of this style is Francis Su’s *Farewell Address* as *MAA* President and parts of his moving follow-up book (the book has other interesting material as well). From the collection, let me single out *Finding Your Reward* by Skip Garibaldi, which also aims to inspire. Yet another example is Bill Thurston‘s must-read MO answer “*What’s a mathematician to do?*“

Any **off the beaten path** math style is great. Think of “*The Strong Law of Small Numbers*” by Richard Guy, or many conjectures Terry Tao discusses in his blog. Think of “*Missed opportunities*” by Freeman Dyson, “*Tilings of space by knotted tiles*” by Colin Adams, or “*One sentence proof…* ” by Don Zagier (see also a short discussion here) — these are all remarkable and memorable pieces of writing that don’t conform to the usual peer review paradigm.

Finally, anything **philosophical** or **metamathematical** finds an audience. I am thinking of “*Is it plausible?*” by Barry Mazur, “*Theorems for a Price*” by Doron Zeilberger, “*You and Your Research*” by Richard Hamming, “*Mathematics as Metaphor*” by Yuri Manin, or even “*Prime Numbers and the Search for Extraterrestrial Intelligence*” by Carl Pomerance. We are all in search of some kind of answers, I suppose, so reading others thinking aloud about these deep questions always helps.

#### Practice makes perfect

Before I move to the other side, here is some simple advice on how to write a good story. Write as much as possible! There is no way around this. Absolutely no substitute, really. I’ve given this advice plenty of times, and so has everyone else. Let me conclude with this quote by Don Knuth, which is a bit similar to Robert Lazarsfeld‘s advice. It makes my point much better and with more authority than I can ever provide:

Of equal importance to solving a problem is the communication of that solution to others. The best way to improve your writing skills is to practice, practice, practice.

Seize every opportunity to write mini-essays about the theoretical work you are doing. Compose a blog for your friends, or even for yourself. When you write programs, write literate programs.

One of the best strategies to follow while doing PhD research is to prepare weekly reports of exactly what you are doing. What questions did you pursue that week? What positive answers did you get? What negative answers did you get? What are the major stumbling blocks that seem to be present at the moment? What related work are you reading?

Donald Knuth, *On Writing up Research* (posted by Omer Reingold), Theory Dish, Feb 26, 2018

#### Don’t be a journalist

In this interesting article in the same collection, Jordan Ellenberg writes:

Why don’t journalists talk about math as it really is? Because they don’t know how it really is. We do. And if we want the public discourse about math to be richer, broader, and deeper, we need to tell our own stories.

He goes on to suggest that one should start writing a blog and then pitch some articles to real newspapers and news magazines. He gives his own bio as one example (among others) of pitching and publishing in mainstream publications such as *Slate* and the *New York Times*. Obviously, I agree with the first (blog) part (duh!), but I am rather negative on the second part. I know, I know, this sounds discouraging, but hear me out.

**First**, what Jordan is not telling you is how hard he had to work on his craft before getting to the point of being acceptable to a general audience. This started with him earning a Summa Cum Laude A.B. from Harvard in both Math *and* English (if I recall correctly), and then publishing a well-received novel, all *before* starting his regular *Slate* column. Very few math people have this kind of background on which to build popular appeal.

**Second**, this takes away jobs from real journalists. Like every highly competitive intellectual profession, journalism requires years of study and practice. It has its own principles and traditions, graduate schools, etc. Call it chutzpah or the Dunning–Kruger effect, but just because you are excellent at *harmonic analysis* doesn’t mean you can do even a mediocre job as a writer. Again — some people *can* do both, but most cannot. If anything, I suspect a negative correlation between math and writing skills.

Here is another way to think about this. Most people do realize that they don’t need to email their pretty iPhone pictures of a Machu Picchu sunrise to be published by *National Geographic*. Or that their family cobbler recipe may not be exactly what *Gourmet Magazine* is looking for. Why would you think that writing is so much easier, then?

**Third**, this cheapens our profession to some degree. You really don’t need a Ph.D. in *algebraic number theory* and two perfect scores at the IMO to write about *Powerball* or *baseball*. You need an M.S. in *statistics* and really good writing skills. There are plenty of media sites which do that now, such as 538. There is even a whole DDJ specialization with many practitioners and a handful of Pulitzer prizes. Using quantitative methods is now mainstream, so what exactly are *you* bringing to the table?

**Fourth**, it helps to be honest. Jordan writes: “Editors like an angle. If there’s a math angle to a story in the news, pitch it! As someone with a degree in math, you have something to offer that most writers don’t.” This is true in the rare instances when, say, a *Fields Medal* in your area is awarded, or something like that. But if the story is in an area far away from yours, then, uhm, you’ve got nothing over many thousands of other people.

Now, please don’t take this as “don’t comment on current affairs” advice. No, no — please do! Comment away on your blog or on your podcast. Just don’t take jobs away from journalists. Help them instead! Write them emails, correct their mistakes. Let them interview you as an “expert”, whatever. Part of the reason math related articles are so poor is mathematicians’ apathy and frequent disdain for the media, not that we don’t write newspaper articles — it’s really not our job.

Let me conclude with an anecdote about me reaching out to a newspaper. Once upon a time, long ago, flights used to distribute real newspapers to passengers. I was sitting in the back and got a *Wall Street Journal*, which I read out of boredom during takeoff. There was an article discussing the EU expansion and the fact that by some EU rules, the headquarters needs a translator from every language to every other language. The article predicted dark days ahead, since it’s basically impossible to find people who can translate between some smaller languages, say from Maltese to Lithuanian. The article provided a helpful graph showing the number of translators needed as a function of the number of countries and claimed **exponential growth**.

I was not amused, cut out the article, and emailed the author upon arrival, saying that with all my authority as an assistant professor at MIT, I promise that *n*(*n*−1) grows polynomially, not exponentially. I got back a surprisingly apologetic reply. The author confessed he was a math major in college, but was using the word without thinking. I don’t know if the *WSJ* ever published a correction, but I bet the author will not use this word so casually anymore, and if he ever advances to an editorial position, he will propagate this knowledge to others. So there — that’s *my* personal contribution to improving public discourse…
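The arithmetic behind my complaint is easy to check: with *n* languages there are *n*(*n*−1) ordered (source, target) pairs, which grows quadratically, nowhere near exponentially. A minimal sketch (the value 27 below assumes the post-2007 EU membership count, purely for illustration):

```python
def translator_pairs(n: int) -> int:
    """Translators needed among n languages: one per ordered
    (source, target) pair, i.e. n*(n-1) -- quadratic growth."""
    return n * (n - 1)

# Compare the quadratic count with actual exponential growth 2**n.
for n in (10, 20, 27):
    print(n, translator_pairs(n), 2 ** n)
```

Even at *n* = 27 the quadratic count (702) is dwarfed by 2^27 ≈ 134 million; the two curves are not remotely alike.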

#### Don’t be an apologist

In another beautifully written article in the Early Career collection, Izzet Coskun gives “advice on how to communicate mathematics quickly in informal settings”. He writes:

Whether before a promotion committee, at a party where one might meet future politicians or future parents of future colleagues, in the elevator on the way up to tea, or in the dean’s office at a job interview, we often have the opportunity to explain our work to a general audience. The time we have is usually short [..] Our audience will not be familiar with our terminology. Communicating mathematics in such settings is challenging.

He then gives a lot of very useful practical advice on how to prepare for such a “math under a minute” conversation, how to be engaging, accessible, etc. It’s all around good advice. However, I disagree. Here is my simple advice: **Don’t Do It**! If it’s a dean and this is a job interview, feel free to use any math jargon you want — it’s not your fault your field is technical, and the dean of sciences is used to it anyway. Otherwise, just say **NO**.

It’s true that sometimes your audience is friendly and sincere in their interest in your work. In that case, no matter what you say, you will disappoint them. There is a *really good chance* they can’t understand a word of what you say. They just think they can, and you are about to disillusion them.

But more often than not, the audience is actually not friendly, as was the case at the party Izzet described in his article. Many people harbor either a low regard for or an outright resentment towards math, stemming from their school years or some kind of “life experience”. These folks simply want to reinforce their views, and no matter what you say it will be taken as “you see, math is hard, boring and useless”.

One should not confuse the unfriendlies with stupid or uneducated people. On the contrary, a lot of educated people think this way. A prime example is Amy Wax with her inimitable quote:

If we got rid of ninety percent of the math Ph.D. programs, would we really be worse off in any material respect? I think that’s a serious question.

I discussed this quote at length in this blog post. There, I tried to answer her question. But after a few back-and-forth emails (which I didn’t make public), it became clear that she is completely uninterested in actually learning *what math is* and *what it does*. She just wants to have her own answer validated by some practitioners in the area. Oh, well…

So here is the real reason why I think answering such people is pointless. No matter what you say, you come across as an **apologist for the field**. If people really want to understand what math is for, there are plenty of sources. In fact, I have several bookshelves with extremely well written book-length answers. But it’s not your job to educate them! Worse, it is completely unreasonable to expect you to answer in “under one minute”.

Think about the reactions of people when they meet other professionals. Someone says “I develop new DNA based cancer treatments”, or “I work on improving VLSI architecture”, or “I devise new option pricing strategies”. Is there a follow up request to explain it in “under one minute”? Not really. Let me give you a multiple choice. Is that because people think that:

a) these professions are boring compared to math and they would rather hear about the latter?

b) they know exactly what these professionals do, but math is so darn mysterious?

c) they know these professions are technical and hard to understand, but even children can understand math, so how hard can *that *be?

d) these professions are clearly useful, but what do math people do — solve quadratic equations all day?

If you answered a) or b) you have more faith in humanity than I do. If you answered c) you never spoke to anyone about math at a party. So d) is the only acceptable answer, even if it’s an exaggeration. Some people (mostly under 7) think that I “add numbers all day”, some people (mostly in social sciences) think that I “take derivatives all day”, etc., you get the point. My advice — don’t correct them. This makes them unhappy. Doesn’t matter if they are 7 or 77 — when you correct them the unhappiness is real and visible…

So here is a **summary** of how I deal with such questions. If people ask what I do, I answer “*I do math research and I teach*“. If they ask what kind of research I say “*advanced math*“. If they ask for details I tell them “*it’s complicated*“. If they ask why, I tell them “*because it takes many years of study to even understand the math lingo, so if I tell you what I do this sounds like I am speaking a foreign language*“.

If they ask what are the applications of my research (and they always do), I tell them “*teaching graduate classes*“. If they ask for “*practical*” applications, whatever that means, I tell them “*this puts money into my Wells Fargo account*“. At this point they move on exhausted by the questions. On the one hand I didn’t lie except in the last answer. On the other — nobody cares if I even have a WF account (I don’t, but it’s none of their business either).

One can ask — why do I care so much? What’s so special about my work that I am so apprehensive? In truth, nothing really. There are other aspects of my identity I also find difficult to discuss in public. The most relevant is “*What is Combinatorics?*”, which for some reason is asked over and over as if there were a good answer (see this blog post for my own answer and this Wikipedia article I wrote). When I hear people explaining what it is, half the time it sounds like they are apologizing for something they didn’t do…

There are other questions relevant to my complex identity that I am completely uninterested in discussing. Like “*What do you think of the Russian President?*” or “*Who is a Jew?*“, or “*Are you a Zionist?*” It’s not that my answers are somehow novel, interesting or controversial (they are not). It’s more like I am afraid to hear responses from the people who asked me these questions. More often than not I find *their *answers unfortunate or plain offensive, and I would rather not know *that*.

Let me conclude on a positive note, by telling a party story of my own. Once, during hors d’oeuvres (remember those?), one lady, a well known LA lawyer, walked up to me and said: “*I hear you are a math professor at UCLA. This is so fascinating! Can you tell me what you do? Just WOW me!*” I politely declined along the lines above. She insisted: “*There has to be *something* that I can understand!*” I relented: “*Ok, there is one theorem I can tell you. In fact, this result landed me tenure.*” She was all ears.

I continued: “*Do you know what’s a square-root-of-two?*” She nodded. “*Well, I proved that this number can never be a ratio of two integers, for example it’s not equal to 17/12 or anything like that.*” “*Oh, shut-the-F-up!*” she exclaimed. “*Are you serious? You can prove **that**?*” — she was clearly suspicious. “*Yes, I can*”, I confirmed vigorously, “*in fact, two Russian newspapers even printed headlines about that back a few years ago. We love math over there, you know.*”

“*But of course!*”, she said, “*American media never writes about math. It’s such a shame! That’s why I never heard of your work. My son is much too young for this, but I must tell my nieces — they love science!*” I nodded approvingly. She drifted away very happy, holding a small plate of meat stuffed potato croquettes, enriched with this newly acquired knowledge. I do hope her nieces liked that theorem — it is cool indeed. And the proof is so super neat…
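For the record, here is the classic parity argument (not the tenure-winning version from the anecdote, of course, just the standard textbook proof), sketched in LaTeX:

```latex
\begin{theorem}
$\sqrt{2}$ is irrational.
\end{theorem}
\begin{proof}
Suppose $\sqrt{2} = p/q$ for coprime integers $p, q$. Squaring gives
$p^2 = 2q^2$, so $p^2$ is even, hence $p$ is even, say $p = 2r$.
Substituting, $4r^2 = 2q^2$, so $q^2 = 2r^2$ and $q$ is even too,
contradicting the coprimality of $p$ and $q$. In particular
$\sqrt{2} \neq 17/12$, even though $(17/12)^2 = 289/144$ is quite
close to $2$.
\end{proof}
```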

## What if they are all wrong?

*Conjectures* are a staple of mathematics. They are everywhere, permeating every area, subarea and subsubarea. They are diverse enough to avoid a single general adjective. They come in all shapes and sizes. Some of them are famous, classical, general, important, inspirational, far-reaching, audacious, exciting or popular, while others are speculative, narrow, technical, imprecise, far-fetched, misleading or recreational. That’s a lot of beliefs about unproven claims, yet we persist in dispensing them, inadvertently revealing our experience, intuition and biases.

Conjectures also vary in attitude. Like a finish-line ribbon they all appear equally vulnerable to an outsider, but in fact the races differ widely. *Some* are eminently reachable, the only question being who will get there first (think 100 meter dash). *Others* are barely on the horizon, requiring great effort, a variety of tools, and an extended time commitment (think ironman triathlon). The most celebrated *third type* are like those Sci-Fi space expeditions requiring multigenerational commitments of hundreds of years, often losing contact with the civilization left behind. And we can’t forget the romantic *fourth type*: like the North Star, no one actually wants to reach them, as they are largely used for navigation, to find a direction in uncharted waters.

Now, conjectures famously provide a foundation of the *scientific method*, but that’s not at all how we actually think of them in mathematics. I argued back in this pointed blog post that *citations* are the most crucial for day to day math development, so one should take utmost care in making references. While this claim is largely uncontroversial and serves as a raison d’être for most *GoogleScholar* profiles, conjectures provide a convenient idealistic way out. Thus, it’s much more noble and virtuous to say “*I dedicated my life to the study of the XYZ Conjecture*” (even if one never publishes anything) than “*I am working hard writing so many papers to gain the respect of my peers, get a promotion, and provide for my family*“. Right. Obviously…

But given this apparent (true or perceived) importance of conjectures, are you sure you are using them right? *What if some/many of these conjectures are actually wrong, what then?* Should you be flying that starship if *there is no there there*? An idealist would argue something like “*it’s a journey, not a destination*“, but I strongly disagree. Getting closer to the truth is actually kind of important, both as a matter of public policy and on an individual level. It is thus pretty important to get right where we are going.

#### What *are *conjectures in mathematics?

That’s a stupid question, right? Conjectures are mathematical claims whose validity we are trying to ascertain. Is that all? Well, yes, if you don’t care whether anyone will actually work on the conjecture. In other words, *something* about the conjecture needs to be *interesting* and *inspiring*.

#### What makes a conjecture interesting?

This is a hard question to answer because it is as much psychological as it is mathematical. A typical answer would be “oh, because it’s old/famous/beautiful/etc.” Uhm, ok, but let’s try to be a little more formal.

One typically argues “oh, that’s because this conjecture would imply [a list of interesting claims and known results]”. Well, ok, but this is *self-referential*. We already know all those “known results”, so no need to prove them again. And these “claims” are simply other conjectures, so this is really an argument of the type “this conjecture would imply that conjecture”, which is not universally convincing. One can argue: “look, this conjecture has so many interesting consequences”. But this is both subjective and unintuitive. Shouldn’t having so many interesting conjectural consequences suggest that perhaps the conjecture is too strong and likely false? And if the conjecture is likely to be false, shouldn’t this make it *uninteresting*?

Also, wouldn’t it be *interesting* if you disproved a conjecture everyone believes to be true? In some sense, wouldn’t it be even more interesting to learn that until now everyone was simply wrong?

None of these are new ideas, of course. For example, faced with the need to justify the “great” *BC conjecture*, or rather a 123-page survey on the subject (which is quite interesting and doesn’t really need to be justified), the authors suddenly turned reflective. Mindful of the self-referential approach, which they quickly discard, they chose a different tactic:

We believe that the interest of a conjecture lies in the feeling of unity of mathematics that it entails. [M.P. Gomez Aparicio, P. Julg and A. Valette, “*The Baum–Connes conjecture*“, 2019]

Huh? Shouldn’t math be about absolute truths, not feelings? Also, in my previous blog post, I mentioned Noga Alon‘s quote that Mathematics is already “*one unit*“. If it is, why does it need a new “*feeling of unity*“? Or is that like one of those new age ideas which stop being true if you don’t reinforce them at every occasion?

If you are confused at this point, welcome to the club! There is no objective way to argue what makes certain conjectures interesting. It’s all in our imagination. Nikolay Konstantinov once told me that “*mathematics is a boring subject because every statement is equivalent to saying that some set is empty*.” He meant to be provocative rather than uninspiring. But the problem he is pointing at is quite serious.

#### What makes us believe a conjecture is true?

We already established that in order to argue that a conjecture is interesting, we need to argue that it’s also true, or at least that we want to believe it to be true so as to have all those consequences. Note, however, that we argue that a conjecture is *true* in exactly the same way we argue it’s *interesting*: by showing that it holds in some special cases, and that it would imply other conjectures which are believed to be true because they too are checked in various special cases. So in essence, this gives “true = interesting” in most cases. Right?

This is where it gets complicated. Say, you are working on the “*abc conjecture*“, which may or may not be open. You claim that it has many consequences, which makes it both likely true and interesting. One of them is the negative solution to the *Erdős–Ulam problem* about the existence of a dense set in the plane with rational pairwise distances. But a positive solution to the E–U problem implies *Harborth’s conjecture* (aka the “*integral Fáry problem*“) that every planar graph can be drawn in the plane with rational edge lengths. So, counterintuitively, if you follow the logic above, shouldn’t you be working on a *positive solution* to Erdős–Ulam, since it would both imply one conjecture and give a counterexample to another? For the record, I wouldn’t do that; I am just making a polemical point.

I am really hoping you see where I am going. Since there is no objective way to tell if a conjecture is true or not, and what exactly is so interesting about it, shouldn’t we discard our biases and also work towards disproving the conjecture just as hard as trying to prove it?

#### What do people say?

It’s worth starting with a general (if slightly poetic) modern description:

In mathematics, [..] great conjectures [are] sharply formulated statements that are most likely true but for which no conclusive proof has yet been found. These conjectures have deep roots and wide ramifications. The search for their solution guides a large part of mathematics. Eternal fame awaits those who conquer them first. Remarkably, mathematics has elevated the formulation of a conjecture into high art. [..] A well-chosen but unproven statement can make its author world-famous, sometimes even more so than the person providing the ultimate proof. [Robbert Dijkgraaf, *The Subtle Art of the Mathematical Conjecture*, 2019]

Karl Popper thought that conjectures are foundational to science, even if he somewhat idealized the efforts to disprove them:

[Great scientists] are men of bold ideas, but highly critical of their own ideas: they try to find whether their ideas are right by trying first to find whether they are not perhaps wrong. They work with bold conjectures and severe attempts at refuting their own conjectures. [Karl Popper, *Heroic Science*, 1974]

Here is how he somewhat reconciled the apparent contradiction:

On the pre-scientific level we hate the very idea that we may be mistaken. So we cling dogmatically to our conjectures, as long as possible. On the scientific level, we systematically search for our mistakes. [Karl Popper, quoted by Bryan Magee, 1971]

Paul Erdős was, of course, a champion of conjectures and open problems. He joked that the purpose of life is “*proof and conjecture*” and this theme is repeatedly echoed when people write about him. It is hard to overestimate his output, which included hundreds of talks titled “*My favorite problems*“. He wrote over 180 papers with collections of conjectures and open problems (nicely assembled by *Zbl. Math*.)

Peter Sarnak has a somewhat opposite point of view, as he believes one should be extremely cautious about stating a conjecture so people don’t waste time working on it. He said once, only half-jokingly:

Since we reward people for making a right conjecture, maybe we should punish those who make a wrong conjecture. Say, cut off their fingers. [Peter Sarnak, UCLA, c. 2012]

This is not an exact quote — I am paraphrasing from memory. Needless to say, I disagree. I don’t know how many fingers he wished Erdős to lose, since some of Erdős’s conjectures were definitely disproved: one, two, three, four, five, and six. This is not me gloating; quite the opposite, in fact. When you state hundreds of conjectures over a span of almost 50 years, having only a handful disproved is an amazing batting average. It would, however, make me happy if *Sarnak’s conjecture* were disproved someday.

Finally, there is a bit of a controversy whether conjectures are worth as much as theorems. This is aptly summarized in this quote about yet another champion of conjectures:

Louis J. Mordell [in his book review] questioned Hardy‘s assessment that Ramanujan was a man whose native talent was equal to that of Euler or Jacobi. Mordell [..] claims that one should judge a mathematician by what he has actually done, by which Mordell seems to mean, the theorems he has proved. Mordell’s assessment seems quite wrong to me. I think that a felicitous but unproved conjecture may be of much more consequence for mathematics than the proof of many a respectable theorem. [Atle Selberg, “*Reflections Around the Ramanujan Centenary*“, 1988]

#### So, what’s the problem?

Well, the way I see it, the efforts made towards proving vs. disproving conjectures are greatly out of balance. Despite all of Popper’s high-minded claims about “*severe attempts at refuting their own conjectures*“, I don’t think there is much truth to that in modern math sciences. This does not mean that disproofs of famous conjectures aren’t celebrated. Sometimes they are, see below. But it’s clear to me that proofs are celebrated more frequently, and to a much greater degree. I have only anecdotal evidence to support my claim, but bear with me.

Take prizes. Famously, the Clay Math Institute gives **$1 million** for a solution of any of these major open problems. But look closely at the rules. According to item 5b, except for the *P vs. NP problem* and the *Navier–Stokes Equation problem*, it gives *nothing* (**$0**) for a disproof of these problems. Why, oh why?? Let’s look into CMI’s “*primary objectives and purposes*“:

To recognize extraordinary achievements and advances in mathematical research.

So it sounds like CMI does not think that disproving the *Riemann Hypothesis* needs to be rewarded, because this wouldn’t “advance mathematical research”. Surely, you are joking? Whatever happened to “*the opposite of a profound truth may well be another profound truth*“? Why does the CMI want to put its thumb on the scale and support only one side? Do they not want to find out the solution, whatever it is? Shouldn’t they be eager to dispense with a “wrong conjecture” so as to save numerous researchers from “*advances to nowhere*“?

I am sure you can see that my blood is boiling, but let’s proceed to the *P vs. NP problem*. What if it’s *independent of ZFC*? Clearly, CMI wouldn’t pay for proving that. Why not? It’s not like this kind of thing never happened before (see the obligatory link to CH). Some people believe that (or at least they did in 2012), and some people like Scott Aaronson take this seriously enough. Wouldn’t this be a great result, worthy of an award as much as a proof that **P=NP**, or at least a *nonconstructive proof* that **P=NP**?

If your head is not spinning hard enough, here is another amusing quote:

> Of course, it's possible that *P vs. NP* is unprovable, but that that fact itself will forever elude proof: indeed, maybe the question of the independence of *P vs. NP* is itself independent of set theory, and so on ad infinitum! But one can at least say that, if *P vs. NP* (or for that matter, the *Riemann hypothesis*, *Goldbach's conjecture*, etc.) were proven independent of ZF, it would be an unprecedented development. [Scott Aaronson, *P vs. NP*, 2016]

Speaking of *Goldbach's Conjecture*, the most talked about and the most intuitively correct statement in Number Theory that I know: in a publicity stunt, for two years a publishing house offered a **$1 million** prize for a *proof of the conjecture*. Why just for the proof? I have never heard of anyone not believing the conjecture. If I were the insurance underwriter for the prize (I bet they had one), I would allow them to use "for the proof or disproof" for a mere extra **$100** in premium. For another **$50** I would let them add "or independence from ZF": it's free money, so why not? It's such a pernicious idea, rewarding only one kind of research outcome!
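Since we are preaching computational evidence anyway, here is the kind of five-minute check that makes the underwriting so safe; the bound of 10,000 and the function name are mine, and the actual verification record extends far beyond this, past 4 × 10^{18}.

```python
# Sieve of Eratosthenes up to N, then check Goldbach's conjecture
# (every even n >= 4 is a sum of two primes) for all even n <= N.
N = 10_000
is_prime = [False, False] + [True] * (N - 1)
for p in range(2, int(N ** 0.5) + 1):
    if is_prime[p]:
        for q in range(p * p, N + 1, p):
            is_prime[q] = False

def goldbach_pairs(n):
    """All unordered prime pairs (p, n - p) with p <= n - p."""
    return [(p, n - p) for p in range(2, n // 2 + 1)
            if is_prime[p] and is_prime[n - p]]

# every even number in range has at least one representation
assert all(goldbach_pairs(n) for n in range(4, N + 1, 2))
```

For instance, `goldbach_pairs(100)` returns six pairs, from (3, 97) to (47, 53). Of course, no finite check settles anything; it only explains why nobody is losing sleep over the disproof side of the policy.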

Curiously, even for *Goldbach's Conjecture*, there is a mild divergence of POVs on what the future holds. For example, Popper writes (twice in the same book!):

> [On whether *Goldbach's Conjecture* is 'demonstrable'] We don't know: perhaps we may never know, and perhaps we can never know. [Karl Popper, *Conjectures and Refutations*, 1963]

Ugh. Perhaps. I suppose *anything* can happen… For example, our civilization can "perhaps" die out in the next 200 years. But is that likely? Shouldn't the gloomy past be a warning, not a prediction of the future? The only thing more outrageously pessimistic is this theological gem of a quote:

> Not even God knows the number of permutations of 1000 avoiding the 1324 pattern. [Doron Zeilberger, quoted here, 2005]

Thanks, Doron! What a way to encourage everyone! Since we know from numerical estimates that this number is ≈ 3.7 × 10^{1017} (see this paper and this follow-up), Zeilberger is suggesting that large pattern avoidance numbers are impossibly hard to compute *precisely*, already in the range of only about 1018 digits. I really hope he is proved wrong in his lifetime.
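If you want to feel the difficulty firsthand, here is a brute-force counter; it is exponential in *n* (the only kind of algorithm currently available for 1324), and the helper names are mine.

```python
from itertools import combinations, permutations

def contains(perm, pattern):
    """True if perm contains pattern, i.e. some subsequence of perm
    is order-isomorphic to pattern."""
    k = len(pattern)
    for idx in combinations(range(len(perm)), k):
        sub = [perm[i] for i in idx]
        ranks = sorted(sub)
        # relative order of the subsequence, as a pattern in 1..k
        if tuple(ranks.index(v) + 1 for v in sub) == tuple(pattern):
            return True
    return False

def count_avoiders(n, pattern=(1, 3, 2, 4)):
    """Number of permutations in S_n avoiding the given pattern
    (default: 1324), by plain enumeration."""
    return sum(1 for p in permutations(range(1, n + 1))
               if not contains(p, pattern))
```

For instance, `count_avoiders(4)` returns 23 (only 1324 itself contains the pattern at length 4), and `count_avoiders(5, (1, 2, 3))` returns the Catalan number 42. Already around *n* = 15 the enumeration is hopeless, which is precisely the point.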

But I digress. What I mean to emphasize is that there are many ways a problem can be resolved. Yet some outcomes are considered more valuable than others. Shouldn't the research achievement be rewarded, rather than the desired outcome? Here is yet another colorful opinion on this:

> Given a conjecture, the best thing is to prove it. The second best thing is to disprove it. The third best thing is to prove that it is not possible to disprove it, since it will tell you not to waste your time trying to disprove it. That's what Gödel did for the Continuum Hypothesis. [Saharon Shelah, Rutgers Univ. Colloquium, 2001]

#### Why do I care?

For one thing, disproving conjectures is part of what I do. Sometimes people are a little shy to unambiguously state them as formal conjectures, so they phrase them as *questions* or *open problems*, but then clarify that they believe the answer is positive. This is a distinction without a difference, or at least I don't see any (maybe they are afraid of Sarnak's wrath?). Regardless, proving their beliefs wrong is still what I do.

For example, here is my old blog post on my disproof of the *Noonan–Zeilberger Conjecture* (joint with Scott Garrabrant). And in this recent paper (joint with Danny Nguyen), we disprove in one big swoosh *Barvinok's Problem*, *Kannan's Problem*, and *Woods's Conjecture*. Just this year I disproved three conjectures:

- The *Kirillov–Klyachko Conjecture* (2004) that the *reduced Kronecker coefficients* satisfy the saturation property (this paper, joint with Greta Panova).
- The *Brandolini et al. Conjecture* (2019) that concrete lattice polytopes can multitile the space (this paper, joint with Alexey Garber).
- *Kenyon's Problem* (c. 2005) that every integral curve in **R**^{3} is a boundary of a PL surface comprised of unit triangles (this paper, joint with Alexey Glazyrin).

On top of that, just two months ago in this paper (joint with Han Lyu), we showed that the remarkable *independence heuristic* by I. J. Good for the number of *contingency tables* fails badly even for nearly all uniform marginals. This is not exactly a disproof of a conjecture, but it's close, since the heuristic was introduced back in 1950 and continues to work well in practice.
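For the curious, here is the heuristic in its simplest form, checked against brute-force enumeration on a tiny example. The formula is the version of Good's estimate as I recall it (products of multiset coefficients for the margins, divided by one for the whole table), and the function names are mine, so treat this as a sketch rather than the setup of the paper.

```python
from itertools import product
from math import comb

def exact_count(rows, cols):
    """Brute-force number of nonnegative integer matrices with the
    given row and column sums (feasible only for tiny margins)."""
    m, n = len(rows), len(cols)
    bounds = [range(rows[i] + 1) for i in range(m) for _ in range(n)]
    total = 0
    for flat in product(*bounds):
        M = [flat[i * n:(i + 1) * n] for i in range(m)]
        if (all(sum(M[i]) == rows[i] for i in range(m)) and
                all(sum(row[j] for row in M) == cols[j] for j in range(n))):
            total += 1
    return total

def independence_heuristic(rows, cols):
    """Good's independence heuristic, in the form I recall:
    multiset coefficients for each margin over one big multiset
    coefficient for the whole m x n table."""
    m, n, N = len(rows), len(cols), sum(rows)
    est = 1.0
    for r in rows:
        est *= comb(r + n - 1, n - 1)
    for c in cols:
        est *= comb(c + m - 1, m - 1)
    return est / comb(N + m * n - 1, m * n - 1)
```

For the 2 × 2 margins (2, 2), (2, 2) the exact count is 3 while the heuristic gives 81/35 ≈ 2.31; already here the two disagree, and the point of the paper is that for large uniform margins the disagreement becomes catastrophic.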

In addition, I am currently working on disproving two more old conjectures, which will remain unnamed until the time we actually resolve them (which might never happen, of course). In summary, I am deeply invested in disproving conjectures. The reasons why are somewhat complicated (see some of them below). But whatever my reasons, I demand and naively fully expect that my disproofs be treated on par with proofs, regardless of whether this expectation bears any relation to reality.

#### My favorite disproofs and counterexamples:

There are many. Here are just a few, some famous and some not-so-famous, in historical order:

- *Fermat's conjecture* (letter to Pascal, 1640) on primality of *Fermat numbers*, disproved by Euler (1747)
- *Tait's conjecture* (1884) on Hamiltonicity of graphs of simple 3-polytopes, disproved by W. T. Tutte (1946)
- *General Burnside Problem* (1902) on finiteness of periodic groups, resolved negatively by E. S. Golod (1964)
- *Keller's conjecture* (1930) on tilings with unit hypercubes, disproved by Jeff Lagarias and Peter Shor (1992)
- *Borsuk's Conjecture* (1932) on partitions of convex sets into parts of smaller diameter, disproved by Jeff Kahn and Gil Kalai (1993)
- *Hirsch Conjecture* (1957) on the diameter of graphs of convex polytopes, disproved by Paco Santos (2010)
- *Woods's conjecture* (1972) on the covering radius of certain lattices, disproved by Oded Regev, Uri Shapira and Barak Weiss (2017)
- *Connes embedding problem* (1976), resolved negatively by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen (2020)

In all these cases, the disproofs and counterexamples didn’t stop the research. On the contrary, they gave a push to further (sometimes numerous) developments in the area.

#### Why should you disprove conjectures?

There are three reasons, of different nature and importance.

**First**, disproving conjectures is *opportunistic*. As mentioned above, people seem to try proving much harder than they try disproving. This creates niches of opportunity for an open-minded mathematician.

**Second**, disproving conjectures is *beautiful*. Let me explain. Conjectures tend to be *rigid*, as in "objects of the type *pqr* satisfy property *abc*." People like me believe in the idea of "*universality*". Some might call it "*completeness*" or even "*Murphy's law*", but the general principle is always the same. Namely: it is not sufficient that one **wishes** that all *pqr* satisfy *abc* to actually believe in the implication; rather, there has to be a **strong reason** why *abc* should hold. Barring that, *pqr* can possibly be almost anything, so in particular *non-abc*. While some would argue that *non-abc* objects are "ugly" or at least "not as nice" as *abc*, the idea of *universality* means that your objects can be of *every color of the rainbow*: nice color, ugly color, startling color, quiet color, etc. That kind of palette has its own *sense of beauty*, but it's an acquired taste I suppose.

**Third**, disproving conjectures is *constructive*. It depends on the nature of the conjecture, of course, but one is often faced with the necessity to *construct* a counterexample. Think of this as an engineering problem of building some *pqr* which at the same time is not *abc*. Such a construction, if at all possible, might be difficult, time consuming and computer assisted. But so what? What would you rather do: build a mile-high skyscraper (none exist yet) or prove that this is impossible? Curiously, in CS Theory both algorithms and (many) complexity results are constructive (you need gadgets). Even the GCT is partially constructive, although explaining that would take us awhile.

#### What should the institutions do?

If you are an *institution which awards prizes*, stop with the legal nonsense: "We award […] only for a publication of a proof in a top journal". You need to set up a scientific committee anyway, since otherwise it's sometimes hard to tell whether someone deserves a prize. With mathematicians, you can expect anything. Some will post two arXiv preprints, give a few lectures and then stop answering emails. Others will publish only in a journal where they are Editor-in-Chief. It's stranger than fiction, really.

What you should do is say in the official rules: "We have [**this much money**] and an independent scientific committee which will reward any progress on [**this problem**], partially or in full, as they see fit." Then a disproof or an independence result will receive just as much as a proof (what's done is done, what else are you going to do with the money?). This would also allow some flexibility for partial solutions. Say, somebody proves *Goldbach's Conjecture* for integers > exp(exp(10^{100000})), way, way beyond the computational power needed to check the remaining integers. I would give this person at least 50% of the prize money, leaving the rest for future developments by possibly many people improving on the bound. However, under the old prize rules such a person gets bupkes for their breakthrough.

#### What should the journals do?

In short, become more open to results of a computational and experimental nature. If this sounds familiar, that's because it's a summary of *Zeilberger's Opinions*, viewed charitably. He is correct on this. This includes publishing results of the type "Based on computational evidence we believe in the following *UVW* conjecture" or "We develop a new algorithm which confirms the *UVW* conjecture for n < 13". These are still contributions to mathematics, and the journals should learn to recognize them as such.

To put this in the context of our theme, it is clear that a lot more effort has been placed on proofs than on finding counterexamples. However, in many areas of mathematics there are no *small* counterexamples, so a heavy computational effort is crucial for any hope of finding one. Such work may not be as glamorous as traditional papers. But really, when it comes to standards, if a journal is willing to publish the study of something like the "*null graphs*", that ship has sailed…

Let me give you a concrete example where a computational effort is indispensable. The curious *Lovász conjecture* states that every finite connected vertex-transitive graph contains a Hamiltonian path. This conjecture has got to be false. It hits every red flag: there is really no reason why *pqr* = "vertex-transitive" should imply *abc* = "Hamiltonian". The best lower bound for the length of the longest (self-avoiding) path is only about the square root of the number of vertices. In fact, even the original wording by Lovász suggests he didn't believe the conjecture is true (also, I asked him and he confirmed).
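To illustrate what such a computation involves, here is a minimal backtracking search on the Petersen graph, the classic vertex-transitive example which has a Hamiltonian path but no Hamiltonian cycle; the code and names are a sketch of mine, nothing like Bill Cook's TSP machinery.

```python
def petersen():
    """Adjacency sets of the Petersen graph: outer 5-cycle on 0..4,
    inner 5-cycle (pentagram) on 5..9, spokes i -- i+5."""
    adj = {v: set() for v in range(10)}
    def add(u, v):
        adj[u].add(v)
        adj[v].add(u)
    for i in range(5):
        add(i, (i + 1) % 5)            # outer cycle
        add(5 + i, 5 + (i + 2) % 5)    # inner pentagram
        add(i, 5 + i)                  # spokes
    return adj

def extend(adj, path, close):
    """Backtracking search for a Hamiltonian path (close=False)
    or Hamiltonian cycle (close=True) extending the partial path."""
    if len(path) == len(adj):
        return path[0] in adj[path[-1]] if close else True
    return any(extend(adj, path + [w], close)
               for w in adj[path[-1]] if w not in path)

adj = petersen()
# by vertex-transitivity it suffices to start the search at vertex 0
has_cycle = extend(adj, [0], close=True)
has_path = extend(adj, [0], close=False)
```

Here `has_cycle` comes out `False` and `has_path` comes out `True`, as expected. The same search on a cubic Cayley graph with a few thousand vertices is already out of reach for naive backtracking, which is why serious TSP solvers enter the picture.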

Unfortunately, proving that some potential counterexample is not Hamiltonian is computationally difficult. I once had a candidate (a nice cubic Cayley graph on "only" 3600 vertices), but Bill Cook quickly found a Hamiltonian cycle, dashing my hopes (it was kind of him to look into this problem). Maybe someday, when the TSP solvers are fast enough on much larger graphs, it will be time to return to this problem and thoroughly test it on large Cayley graphs. But say, despite long odds, I succeed and find a counterexample. Would a top journal publish such a paper?

#### Editor’s dilemma

There are three real criteria by which a journal evaluates the solution of an open problem:

- Is this an old, famous, or well-studied problem?
- Are the tools interesting or innovative enough to be helpful in future studies?
- Are the implications of the solution to other problems important enough?

Now let's run a hypothetical experiment. Say a paper is submitted to a top math journal which solves a famous open problem in Combinatorics. Further, say somebody already proved it is equivalent to a major problem in TCS. This checks criteria 1 and 3. Until not long ago it would be rejected regardless, so let's assume this is happening relatively recently.

Now imagine two parallel worlds: in the first, the conjecture is *proved* in 2 pages using beautiful but elementary linear algebra; in the second, the conjecture is *disproved* in a 2-page summary of a detailed computational search. So in neither world do we have much to satisfy criterion 2. Now, a quiz: in which world will the paper be published?

You may have recognized the first world as the story of Hao Huang's elegant proof of the *induced subgraphs of hypercubes conjecture*, which implies the *sensitivity conjecture*. The *Annals* published it, I am happy to learn, in a welcome break with the past. But unless we are talking about some 200-year-old famous conjecture, I can't imagine the *Annals* accepting a short computational paper in the second world. Indeed, it took a bit of a scandal to accept even the 400-year-old *Kepler conjecture*, which was **proved** in a remarkable computational work.
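Incidentally, the first world's theorem (every induced subgraph of the *n*-cube on 2^{n-1} + 1 vertices has a vertex of degree at least √n) can itself be checked by brute force in tiny cases. This is a sketch of mine, feasible only for n ≤ 4 or so; the point is how cheap a sanity check is next to the actual proof.

```python
from itertools import combinations

def min_max_degree(n):
    """Minimum, over all induced subgraphs of the n-dimensional
    hypercube on 2**(n-1) + 1 vertices, of the subgraph's maximum
    degree.  Huang's theorem asserts this is at least sqrt(n)."""
    best = n  # degrees in the n-cube never exceed n
    for S in combinations(range(2 ** n), 2 ** (n - 1) + 1):
        Sset = set(S)
        # degree of v inside S: neighbors differ in exactly one bit
        maxdeg = max(sum((v ^ (1 << b)) in Sset for b in range(n))
                     for v in S)
        best = min(best, maxdeg)
    return best
```

For n = 3 the answer is exactly 2 (an induced 5-vertex path in the cube attains it, and √3 > 1 forces at least 2); for n = 4 the brute force over all 11,440 subsets confirms the bound as well.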

Now think about this. Is any of that fair? Shouldn’t we do better as a community on this issue?

#### What do other people do?

Over the years I have asked a number of people about the uncertainty created by conjectures and what they do about it. The answers surprised me. Here I am paraphrasing them:

**Some** were *dumbfounded*: "What do you mean this conjecture could be false? It has to be true, otherwise nothing I am doing makes much sense."

**Others** were *simplistic*: "It's an important conjecture. Famous people said it's true. It's my job to prove it."

**Third** were *defensive*: "Do you really think this conjecture could be wrong? Why don't you try to disprove it then? We'll see who is right."

**Fourth** were *biblical*: "I tend to work 6 days a week towards the proof and one day towards the disproof."

**Fifth** were *practical*: "I work on the proof until I hit a wall. I use the idea of this obstacle to try constructing potential counterexamples. When I find an approach to discard such counterexamples, I try to generalize the approach to continue working on the proof. Continue until either side wins."

If the last two seem sensible to you, that's because they are. However, I bet the *fourth* are just grandstanding: no way they actually do that. The *fifth* sounds great when this is possible, but that's exceedingly rare, in my opinion. We live in a technical age when proving new results often requires a great deal of effort and technology. You likely have the tools and intuition to work in only one direction. Why would you want to waste time working in the other?

#### What should you do?

**First**, remember to *make conjectures*. Every time you write a paper, tell a story of what you proved. Then tell a story of what you wanted to prove but couldn't. State it in the form of a conjecture. Don't be afraid of being wrong, or of being right but oversharing your ideas. That's a downside, sure. But the upside is that your conjecture might prove very useful to others, especially young researchers. It might advance the area, or help you find a collaborator to resolve it.

**Second**, learn to *check your conjectures* computationally in many small cases. It’s important to give supporting evidence so that others take your conjectures seriously.

**Third**, learn to *make experiments*, explore the area computationally. That’s how you make new conjectures.

**Fourth**, *understand yourself*. Your skills, your tools. Your abilities, like problem solving, absorbing information from the literature, or making bridges to other fields. Faced with a conjecture, use this knowledge to understand whether, at least in principle, you might be able to prove or disprove it.

**Fifth**, actively *look for collaborators*. Those who have skills, tools, or abilities you are missing. More importantly, they might have a different POV on the validity of the conjecture and how one might want to attack it. Argue with them and learn from them.

**Sixth**, *be brave* and *optimistic*! Whether you decide to prove, disprove a conjecture, or simply state a new conjecture, go for it! Ignore the judgements by the likes of Sarnak and Zeilberger. Trust me — they don’t really mean it.

## The power of negative thinking, part I. Pattern avoidance

In my latest paper with Scott Garrabrant we disprove the *Noonan-Zeilberger Conjecture*. Let me informally explain what we did and why people should try to disprove conjectures more often. This post is the first in a series. Part II will appear shortly.

#### What did we do?

Let *F ⊂ S_{k}* be a finite set of permutations and let *C_{n}(F)* denote the number of permutations *σ ∈ S_{n}* avoiding the set of patterns *F*. The *Noonan–Zeilberger conjecture* (1996) states that the sequence {*C_{n}(F)*} is always *P-recursive*. We disprove this conjecture. Roughly, we show that every Turing machine *T* can be simulated by a set of patterns *F*, so that the number *a_{n}* of paths of length *n* accepted by *T* is equal to *C_{n}(F)* mod 2. I am oversimplifying things quite a bit, but that's the gist.

What is left is to show how to construct a machine *T* such that {*a_{n}*} is not equal (mod 2) to **any** P-recursive sequence. We did this in our previous paper, where we gave a negative answer to a question by Kontsevich. There, we constructed a set of 19 generators of *GL(4,**Z**)* such that the probability-of-return sequence is not P-recursive.

When all things are put together, we obtain a set *F* of about 30,000 permutations in *S_{80}* for which {*C_{n}(F)*} is non-P-recursive. Yes, the construction is huge, but so what? What's a few thousand permutations between friends? In fact, perhaps a single pattern (1324) is already non-P-recursive. Let me explain the reasoning behind what we did and why our result is much stronger than it might seem.

#### Why we did what we did

First, a very brief history of the NZ-conjecture (see Kitaev's book for a comprehensive history of the subject and vast references). Traditionally, pattern avoidance dealt with exact and asymptotic counting of pattern-avoiding permutations for small sets of patterns. The subject was initiated by MacMahon (1915) and Knuth (1968), who showed that we get Catalan numbers for patterns of length 3. The resulting combinatorics is often so beautiful, or at least so plentiful, that it's hard to imagine how it could not be; thus the NZ-conjecture. It was clearly very strong, but resisted all challenges until now. Wilf reports that Richard Stanley disbelieved it (Richard confirmed this to me recently as well), but hundreds of papers seemed to confirm its validity in numerous special cases.

Curiously, the case of the (1324) pattern proved difficult early on. It remains unresolved whether *C_{n}(1324)* is P-recursive or not. This pattern broke Doron Zeilberger's belief in the conjecture, and he proclaimed that it's probably non-P-recursive and thus the NZ-conjecture is probably false. When I visited Doron last September he told me he no longer had a strong belief in either direction and encouraged me to work on the problem. I took a train back to Manhattan along New Jersey's famously scenic Amtrak route. Somewhere near the Pulaski Skyway I called Scott to say we should drop everything and start working on this problem.

You see, when it comes to pattern avoidance, things move from best to good to bad to awful. When they are bad, they are so bad, it can be really hard to prove that they are bad. But why bother – we can try to figure out something awful. The set of patterns that we constructed in our paper is so awful, that proving it is awful ain’t so bad.

#### Why is our result much stronger than it seems?

That's because the proof extends to other results. Essentially, we are saying that everything bad you can do with Turing machines, you can do with pattern avoidance (mod 2). For example, why is (1324) so hard to analyze? Because it's hard to compute both theoretically and experimentally: the existing algorithms are recursive and exponential in *n*. Until our work, the main hope for disproving the NZ-conjecture hinged on finding an appropriately bad set of patterns *F* such that computing {*C_{n}(F)*} is easy. Something like this sequence, which has a nice recurrence but is provably non-P-recursive. Maybe. But in our paper, we can do worse, a lot worse…

We can make a finite set of patterns *F* such that computing {*C_{n}(F)* mod 2} is "provably" non-polynomial (Theorem 1.4). Well, we use quotes because of the complexity-theoretic assumptions we must make. The conclusion is much stronger than non-P-recursiveness, since every P-recursive sequence has a trivial polynomial-in-*n* algorithm computing it. But wait, it gets worse!

We prove that for two sets of patterns *F* and *G*, the problem "*C_{n}(F) = C_{n}(G)* mod 2 for all *n*" is undecidable (Theorem 1.3). This is already a disaster, which takes time to sink in. But then it gets even worse! Take a look at our Corollary 8.1. It says that there are two sets of patterns *F* and *G* such that you can never prove nor disprove that *C_{n}(F) = C_{n}(G)* mod 2. Now that's what I call truly awful.

#### What gives?

Well, the original intuition behind the NZ-conjecture was clearly wrong. Many nice examples are not good enough evidence. But the conjecture was so plausible! Where did the intuition fail? Well, I went to re-read Polya's classic "*Mathematics and Plausible Reasoning*", and it all seemed reasonable: both Polya's arguments and the NZ-conjecture (if you don't feel like reading the whole book, at least read Barry Mazur's interesting and short follow-up).

Now think about Polya's arguments from the point of view of complexity and computability theory. Again, it sounds very "plausible" that large enough sets of patterns behave badly. Why wouldn't they? Well, it's complicated. Consider this example. If someone asks you whether every 3-connected planar cubic graph has a Hamiltonian cycle, this sounds plausible (this is Tait's conjecture). All small examples confirm it. Planar cubic graphs do have very special structure. But if you remember that Hamiltonicity is NP-complete even for planar graphs, it doesn't sound plausible anymore, and the fact that Tutte found a counterexample is no longer surprising. In fact, the decision problem was recently proved to be NP-complete in this case as well. But then again, if you require 4-connectivity, then *every* planar graph has a Hamiltonian cycle. Confused enough?

Back to the patterns. Same story here. When you look at many small cases, everything is P-recursive (or yet to be determined). But compare this with Jacob Fox's theorem that for a random single pattern π, the sequence {*C_{n}(π)*} grows much faster than originally expected (cf. Arratia's Conjecture). This suggests that small examples are not representative of the complexity of the problem. Time to think about disproving ALL conjectures based on that kind of evidence.

If there is a moral in this story, it’s that what’s “plausible” is really hard to judge. The more you know, the better you get. Pay attention to small crumbs of evidence. And think negative!

#### What’s wrong with being negative?

Well, conjectures tend to be optimistic: they are wishful thinking by definition. Who would want to conjecture that for some large enough *a*, *b*, *c* and *n* there exists a solution of *a*^{n} + *b*^{n} = *c*^{n}? However, being so positive has a drawback: sometimes you get things badly wrong. In fact, even polynomial Diophantine equations can be as complicated as one wishes. Unfortunately, there is a strong bias in Mathematics against counterexamples. For example, only two of the Clay Millennium Problems automatically pay $1 million for a counterexample. That's a pity. I understand why they do this; I just disagree with the reasoning. If anything, we should encourage thinking in the direction where there is not enough research, not in the direction where people are already super motivated to resolve the problem.

In general, it is always a good idea to keep an open mind. Forget all this "power of positive thinking"; it's not for math. If you think a conjecture might be false, ignore everybody and just go for the disproof. Even if it's one of those famous unsolved conjectures in mathematics. If you don't end up disproving the conjecture, you might have a bit of trouble publishing your computational evidence. There are some journals that publish such results, but not many. Hopefully, this will change soon…

#### Happy ending

When we were working on our paper, I wrote to Doron Zeilberger asking whether he had ever offered a reward for the NZ-conjecture, and whether it was for the proof only or for a disproof as well. He replied that he had offered an unusual award: for the proof and the disproof, in equal measure. When we finished the paper I emailed Doron. And he paid. Nice… 🙂