It could have been worse! Academic lessons of 2020
Well, this year sure was interesting, and not in a good way. Back in 2015, I wrote a blog post discussing how video talks are here to stay, and how we should all agree to start giving them and embrace watching them, whether we like it or not. I was right about that, I suppose. OTOH, I sort of envisioned a gradual acceptance of this practice, not the shock therapy of a phase transition. So, what happened? It’s time to summarize the lessons and roll out some new predictions.
Note: this post is about the academic life which is undergoing some changes. The changes in real life are much more profound, but are well discussed elsewhere.
Teaching
This was probably the bleakest part of academic life, much commented upon by the media. Good thing there is more to academia than teaching, no matter what the ignorant critics think. I personally haven’t heard anyone say, post-March 2020, that online education is an improvement. If you are like me, you probably spent much more time preparing and delivering your lectures. The quality probably suffered a little. The students probably didn’t learn as much. Neither party probably enjoyed the experience too much. They also probably cheated quite a bit more. Oh, well…
Let’s count the silver linings. First, it will all be over some time next year. At UCLA, not before the end of Summer. Maybe in the Fall… Second, it could’ve been worse. Much worse. Had this happened in an earlier year, we would have faced different issues. Back in 1990, we would all be furloughed for a year, living off our savings. In 2000, most families had just one personal computer (and no smartphones, obviously). Let the implications of that sink in. But even in 2010 we would have had giant technical issues teaching on Skype (right?) by pointing our laptop cameras at blackboards, with dismal effect. The infrastructure which allows good quality streaming was also not widespread (people were still using Redbox, remember?)
Third, online technology somewhat mitigated the total disaster of studying during the pandemic. Students who are stuck in faraway countries or busy with family life can watch stored videos of lectures at their convenience. Educational and grading software allows students to submit homeworks and exams online, and instructors to grade them. There are many other small things not worth listing, but worth being thankful for.
Fourth, the accelerated embrace of the educational technology could be a good thing long term, even when things go back to normal. No more emails with scanned late homeworks, no more canceled/moved office hours while away at conferences. This can all help us become better at teaching.
Finally, the long-declared “death of MOOCs” is no longer controversial. As a long-time (closeted) opponent of online education, I am overjoyed that MOOCs are no longer viewed as a positive experience for university students, more like something to suffer through. Here in CA we learned this a while ago, as the eagerness of the current Gov. Newsom (back then Lt. Gov.) to embrace online courses did not work out well at all. Back in 2013, he said that the whole UC system needs to embrace online education, pronto: “If this doesn’t wake up the U.C. [..] I don’t know what will.” Well, now you know, Governor! I guess, in 2020, I don’t have to hide my feelings on this anymore…
Research
I always thought that mathematicians can work from anywhere with a good WiFi connection. True, but not really – this year was a mixed experience, as lonely introverts largely prospered research-wise, while busy family people and extroverts clearly suffered. Some day we will know how much research suffered in 2020, but for me personally it wasn’t bad at all (see e.g. some of my results described in my previous blog post).
Seminars
I am not even sure we should be using the same word to describe research seminars during the pandemic, as the experience of giving and watching math lectures online is so drastically different from what we are used to. Let’s count the differences, both positive and negative.
- Personal interactions suffer. Online, people are much shier about interrupting, following up with questions after the talk, etc. The usual pre- or post-seminar meals allow the speaker to meet (often junior) colleagues who might be more open to asking questions in an informal setting. This is all bad.
- Being online, seminars opened up to a worldwide audience. This is just terrific, as people from remote locations across the globe now have the same access to seminars at leading universities. What the arXiv did to math papers, covid did to math seminars.
- Again, being online, seminars are no longer restricted to local speakers, nor do they have to make travel arrangements for out-of-town speakers. Some UCLA seminars this year had many European speakers, something which would have been prohibitively expensive just a year ago.
- Many seminars are now recorded with videos and slides posted online, like we do at the UCLA Combinatorics and LA Combinatorics and Complexity seminars I am co-organizing. The viewers can watch them later, can fast forward, come back and re-watch them, etc. All the good features of watching videos I extolled back in 2015. This is all good.
- On a minor negative side, the audience is no longer stable, as it varies from seminar to seminar, further diminishing personal interactions and making the level of the audience somewhat unpredictable and hard to aim for.
- As a seminar organizer, I make it a personal quest to encourage people to turn on their cameras at seminars, by saying hello only to those whose faces I see. When the speaker doesn’t see the faces, whether nodding or quizzical, they have no idea whether they are being clear, too fast or too slow, etc. Stopping to ask for questions no longer works well, especially if the seminar is being recorded. This invariably leads to worse presentations, as speakers can misjudge the audience’s reactions.
- Unfortunately, not everyone is capable of handling technological challenges equally well. I have seen remarkably well-presented talks, as well as some talks of extremely poor quality. The ability to mute yourself and hide behind your avatar is the only saving grace in such cases.
- Even the true haters of online education are now at least semi-on-board. Back in May, I wrote to Chris Schaberg, dubbed by the insufferable Rebecca Schuman as “vehemently opposed to the practice”. He replied that he is no longer that opposed to teaching online, and that he is now in the “it’s really complicated!” camp. Small miracles…
Conferences
The changes in conferences are largely positive. Unfortunately, some conferences from the Spring and Summer of 2020 were canceled or moved, somewhat optimistically, to 2021. Looking back, they should all have been held in the online format, which opens them to participants from around the world. Let’s count the upsides and downsides:
- No need for travel, long time commitments, or financial expenses. Some conferences continue charging fees for online participation, which seems weird to me. I realize that some conferences are vehicles to support various research centers and societies. Whatever; this is unsustainable, as online conferences will likely survive the pandemic. These organizations should figure out some other income sources or die.
- The conferences are now truly global, so the emphasis is purely on mathematical areas rather than on geographic proximity. This suggests that the (until recently) very popular AMS meetings should probably die, making the AMS even more of a publisher than it is now. I am especially looking forward to the death of the “joint meetings” in January, which in my opinion have outlived their usefulness as some kind of math extravaganza bringing everyone together. In fact, Zoom simply can’t bring five thousand people together, just forget about it…
- The conferences are now open to people in other areas. This might seem minor — they were always open. However, given the time/money constraints, a mathematician is likely to go only to conferences in their area. Besides, since they rarely get invited to speak at conferences in other areas, travel to such conferences is even harder to justify. This often leads to groupthink as the same people meet year after year at conferences on narrow subjects. Now that this is no longer an obstacle, we might see more interactions between the fields.
- On the negative side, the best kind of conferences are small informal workshops (think of Oberwolfach, AIM, Banff, etc.), where the lectures are advanced and the interactions are intense. I miss those and hope they come back, as they are really irreplaceable in the online setting. If all goes well, these are the only conferences which should definitely survive, and perhaps even expand in number.
Books and journals
A short summary is that in math, everything should be electronic, instantly downloadable and completely free. Cut off from libraries, thousands of mathematicians were instantly left to the perils of their university library’s electronic subscriptions and their personal book collections. Some fared better than others, in part thanks to the arXiv, non-free journals offering old issues free to download, and some ethically dubious foreign websites.
I have been writing about my copyleft views for a long time (see here, there and most recently there). It gets more and more depressing every time. Just when you think there is some hope, the resilience of paid publishing and the community’s reluctance to change keep the unfortunate status quo. You would think everyone would be screaming about the lack of access to books/journals, but I guess everyone is busy doing something else. Still, there are some lessons worth noting.
- You really must have all your papers freely available online. Yes, copyrighted or not, publishers are ok with authors posting their papers on their personal websites. They are not ok when others post your papers on their websites, so free access to your papers is on you and your coauthors (if any). Unless you have already done so, do this asap! Yes, this applies even to papers accessible online by subscription to selected libraries. For example, many libraries, including the entire UC system, no longer have access to Elsevier journals. Please help both us and yourself! How hard is it to put a paper on the arXiv or your personal website? If people like Noga Alon and Richard Stanley found time to put hundreds of their papers online, so can you. I make a point of emailing people asking them to do this every time I come across a reference which I cannot access. They rarely do, and usually just email me the paper. Oh, well, at least I tried…
- Learn to use databases like MathSciNet and Zentralblatt. Maintain your own website, adding slides and video links as well as all your papers. Make sure to clean up your Google Scholar profile and keep it up to date. When left unattended it can get overrun with random papers by other people, random non-research files you authored, separate items for the same paper, etc. Deal with all that – it’s easy and takes just a few minutes (also, some people judge these profiles). When people are struggling to do research from home, every bit of help counts.
- If you are signing a book contract, be nice to online readers. Make sure you keep the right to display a public copy on your website. We all owe a great deal of gratitude to authors who did this. Here is my favorite, now supplemented with high quality free online lectures. Be like that! Don’t be like one author (who will remain unnamed) who refused to email me a copy of a short 5-page section from his recent book. I wanted to teach the section in my graduate class on posets this Fall. Instead, the author suggested I buy a paper copy. His loss — I ended up teaching some other material instead. Later on, I discovered that the book is already available on one of those ethically compromised websites. He was fighting a battle he had already lost!
Home computing
Different people will take different conclusions from 2020, but I don’t think anyone would dispute the importance of having a good home computing setup. There is a refreshing variety of ways in which people do this, and it’s unclear to me what the optimal setup is. With a vaccine on the horizon, people might be reluctant to invest further in new computing equipment (or video cameras, lights, whiteboards, etc.), but the holiday break is actually a good time to marinate on what worked out well and what didn’t.
Read your evaluations and take them to heart. Make changes when you see there are problems. I know it’s unfair; your department might never compensate you for all this stuff. Still, it’s a small price to pay for having a safe academic job in a time of widespread anxiety.
Predictions for the future
- Very briefly: I think online seminars and conferences are here to stay. Local seminars and small workshops will also survive. The enormous AMS meetings and expensive Theory CS meetings will play with the format, but eventually turn online for good or die an untimely death.
- Online teaching will continue to be offered by every undergraduate math program, to reach students across the spectrum of personal circumstances. A small minority of courses, but still: maybe one section each of calculus, linear algebra, intro probability, discrete math, etc. Some faculty might actually prefer this format, to stay away from the office for a semester. Perhaps, in place of a sabbatical, they could ask for permission to spend a semester at some other campus, maybe in another state or country, while they continue teaching, holding seminars, supervising students, etc. This could be a perk of academic life to compete with the “remote work” that many businesses are starting to offer on a permanent basis. Universities would have to redefine what they mean by the “residence” requirement for both faculty and students.
- More university libraries will play hardball and unsubscribe from major for-profit publishers. This might sound hopeful, but it will not gain a snowball effect for at least the next 10 years.
- There will be some standardization of online teaching requirements across the country. Online cheating will remain widespread. Courts will repeatedly rule that businesses and institutions can discount or completely ignore all 2020 grades as unreliable, in large part because of the cheating scandals.
Final recommendations
- Be nice to your junior colleagues. In the winner-take-all no-limits online era, established and well-known mathematicians get invited over and over, while their junior colleagues get overlooked, just at the time when they really need help (the job market might be tough this year). So please go out of your way to invite them to give talks at your seminars. Help them with papers and application materials. At least reply to their emails! Yes, even small things count…
- Do more organizing if you are in a position to do so. In the absence of physical contact, many people are too shy and shell-shocked to reach out. Seminars, conferences, workshops, etc. make academic life seem somewhat normal, and the breaks definitely allow for more interactions. Given the apparent abundance of online events, one may be forgiven for thinking that no more are needed. But locally focused online events are actually important to help your communities. These can prove critical until everything is back to normal.
Good luck everybody! Hope 2021 will be better for us all!
What if they are all wrong?
Conjectures are a staple of mathematics. They are everywhere, permeating every area, subarea and subsubarea. They are diverse enough to avoid a single general adjective. They come in all shapes and sizes. Some of them are famous, classical, general, important, inspirational, far-reaching, audacious, exciting or popular, while others are speculative, narrow, technical, imprecise, far-fetched, misleading or recreational. That’s a lot of beliefs about unproven claims, yet we persist in dispensing them, inadvertently revealing our experience, intuition and biases.
Conjectures also vary in attitude. Like finish line ribbons, they all appear equally vulnerable to an outsider, but in fact differ widely from race to race. Some are eminently reachable, the only question being who will get there first (think 100 meter dash). Others are barely on the horizon, requiring great effort, a variety of tools, and an extended time commitment (think ironman triathlon). The most celebrated third type are like those Sci-Fi space expeditions requiring multigenerational commitments spanning hundreds of years, often losing contact with the civilization they left behind. And we can’t forget the romantic fourth type — like the North Star, no one actually wants to reach them, as they are largely used for navigation, to find a direction in uncharted waters.
Now, conjectures famously provide a foundation of the scientific method, but that’s not at all how we actually think of them in mathematics. I argued back in this pointed blog post that citations are crucial for day-to-day math development, so one should take utmost care in making references. While this claim is largely uncontroversial and serves as a raison d’être for most GoogleScholar profiles, conjectures provide a convenient idealistic way out. Thus, it’s much more noble and virtuous to say “I dedicated my life to the study of the XYZ Conjecture” (even if one never publishes anything), than “I am working hard writing so many papers to gain the respect of my peers, get a promotion, and provide for my family”. Right. Obviously…
But given this apparent (true or perceived) importance of conjectures, are you sure you are using them right? What if some or many of these conjectures are actually wrong? What then? Should you be flying that starship if there is no there there? An idealist would argue something like “it’s a journey, not a destination”, but I strongly disagree. Getting closer to the truth is actually kind of important, both as a public policy and on an individual level. It is thus pretty important to get right where we are going.
What are conjectures in mathematics?
That’s a stupid question, right? Conjectures are mathematical claims whose validity we are trying to ascertain. Is that all? Well, yes, if you don’t care whether anyone will actually work on the conjecture. In other words, something about the conjecture needs to be interesting and inspiring.
What makes a conjecture interesting?
This is a hard question to answer because it is as much psychological as it is mathematical. A typical answer would be “oh, because it’s old/famous/beautiful/etc.” Uhm, ok, but let’s try to be a little more formal.
One typically argues “oh, that’s because this conjecture would imply [a list of interesting claims and known results]”. Well, ok, but this is self-referential. We already know all those “known results”, so there is no need to prove them again. And those “claims” are simply other conjectures, so this is really an argument of the type “this conjecture would imply that conjecture”, which is not universally convincing. One can argue: “look, this conjecture has so many interesting consequences”. But this is both subjective and unintuitive. Shouldn’t having so many interesting conjectural consequences suggest that perhaps the conjecture is too strong and likely false? And if the conjecture is likely to be false, shouldn’t this make it uninteresting?
Also, wouldn’t it be interesting if you disproved a conjecture everyone believed to be true? In some sense, wouldn’t it be even more interesting if, until now, everyone was simply wrong?
None of these are new ideas, of course. For example, faced with the need to justify the “great” BC conjecture, or rather a 123-page survey on the subject (which is quite interesting and doesn’t really need to be justified), the authors suddenly turned reflective. Mindful of the self-referential approach, which they quickly discard, they chose a different tactic:
We believe that the interest of a conjecture lies in the feeling of unity of mathematics that it entails. [M.P. Gomez Aparicio, P. Julg and A. Valette, “The Baum-Connes conjecture“, 2019]
Huh? Shouldn’t math be about absolute truths, not feelings? Also, in my previous blog post, I mentioned Noga Alon’s quote that mathematics is already “one unit”. If it is, why does it need a new “feeling of unity”? Or is that like one of those new age ideas which stop being true if you don’t reinforce them on every occasion?
If you are confused at this point, welcome to the club! There is no objective way to argue what makes certain conjectures interesting. It’s all in our imagination. Nikolay Konstantinov once told me that “mathematics is a boring subject because every statement is equivalent to saying that some set is empty.” He meant to be provocative rather than uninspiring. But the problem he is pointing to is quite serious.
What makes us believe a conjecture is true?
We already established that in order to argue that a conjecture is interesting, we need to argue that it’s also true, or at least that we want to believe it to be true so as to have all those consequences. Note, however, that we argue that a conjecture is true in exactly the same way we argue it’s interesting: by showing that it holds in some special cases, and that it would imply other conjectures which are believed to be true because they have also been checked in various special cases. So in essence, this gives “true = interesting” in most cases. Right?
This is where it gets complicated. Say, you are working on the “abc conjecture”, which may or may not be open. You claim that it has many consequences, which makes it both likely true and interesting. One of them is the negative solution to the Erdős–Ulam problem about the existence of a dense set in the plane with rational pairwise distances. But a positive solution to the E-U problem implies Harborth’s conjecture (aka the “integral Fáry problem”) that every planar graph can be drawn in the plane with rational edge lengths. So, counterintuitively, if you follow the logic above, shouldn’t you be working on a positive solution to Erdős–Ulam, since it would both imply one conjecture and give a counterexample to another? For the record, I wouldn’t do that; I am just making a polemical point.
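In symbols, the implication chain above (my own rendering) reads:

$$ abc \;\Longrightarrow\; \neg\,\text{(Erdős–Ulam)}, \qquad \text{(Erdős–Ulam)} \;\Longrightarrow\; \text{(Harborth)}, $$

so a positive solution to Erdős–Ulam would simultaneously prove Harborth’s conjecture and refute abc.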
I am really hoping you see where I am going. Since there is no objective way to tell if a conjecture is true or not, and what exactly is so interesting about it, shouldn’t we discard our biases and also work towards disproving the conjecture just as hard as trying to prove it?
What do people say?
It’s worth starting with a general (if slightly poetic) modern description:
In mathematics, [..] great conjectures [are] sharply formulated statements that are most likely true but for which no conclusive proof has yet been found. These conjectures have deep roots and wide ramifications. The search for their solution guides a large part of mathematics. Eternal fame awaits those who conquer them first. Remarkably, mathematics has elevated the formulation of a conjecture into high art. [..] A well-chosen but unproven statement can make its author world-famous, sometimes even more so than the person providing the ultimate proof. [Robbert Dijkgraaf, The Subtle Art of the Mathematical Conjecture, 2019]
Karl Popper thought that conjectures are foundational to science, even if he somewhat idealized the efforts to disprove them:
[Great scientists] are men of bold ideas, but highly critical of their own ideas: they try to find whether their ideas are right by trying first to find whether they are not perhaps wrong. They work with bold conjectures and severe attempts at refuting their own conjectures. [Karl Popper, Heroic Science, 1974]
Here is how he somewhat reconciled the apparent contradiction:
On the pre-scientific level we hate the very idea that we may be mistaken. So we cling dogmatically to our conjectures, as long as possible. On the scientific level, we systematically search for our mistakes. [Karl Popper, quoted by Bryan Magee, 1971]
Paul Erdős was, of course, a champion of conjectures and open problems. He joked that the purpose of life is “proof and conjecture”, and this theme is repeatedly echoed when people write about him. It is hard to overestimate his output, which included hundreds of talks titled “My favorite problems”. He wrote over 180 papers with collections of conjectures and open problems (nicely assembled by Zbl. Math.)
Peter Sarnak has a somewhat opposite point of view, as he believes one should be extremely cautious about stating a conjecture so people don’t waste time working on it. He said once, only half-jokingly:
Since we reward people for making a right conjecture, maybe we should punish those who make a wrong conjecture. Say, cut off their fingers. [Peter Sarnak, UCLA, c. 2012]
This is not an exact quote — I am paraphrasing from memory. Needless to say, I disagree. I don’t know how many fingers he wished Erdős would lose, since some of Erdős’s conjectures were definitely disproved: one, two, three, four, five, and six. This is not me gloating; the opposite, in fact. When you state hundreds of conjectures over the span of almost 50 years, having only a handful disproved is an amazing batting average. It would, however, make me happy if Sarnak’s conjecture were disproved someday.
Finally, there is a bit of a controversy whether conjectures are worth as much as theorems. This is aptly summarized in this quote about yet another champion of conjectures:
Louis J. Mordell [in his book review] questioned Hardy‘s assessment that Ramanujan was a man whose native talent was equal to that of Euler or Jacobi. Mordell [..] claims that one should judge a mathematician by what he has actually done, by which Mordell seems to mean, the theorems he has proved. Mordell’s assessment seems quite wrong to me. I think that a felicitous but unproved conjecture may be of much more consequence for mathematics than the proof of many a respectable theorem. [Atle Selberg, “Reflections Around the Ramanujan Centenary“, 1988]
So, what’s the problem?
Well, the way I see it, the efforts made towards proving vs. disproving conjectures are greatly out of balance. Despite all of Popper’s high-minded claims about “severe attempts at refuting their own conjectures”, I don’t think there is much truth to that in the modern mathematical sciences. This does not mean that disproofs of famous conjectures aren’t celebrated. Sometimes they are; see below. But it’s clear to me that proofs are celebrated more frequently, and to a much greater degree. I have only anecdotal evidence to support my claim, but bear with me.
Take prizes. Famously, the Clay Math Institute gives $1 million for a solution of any of these major open problems. But look closely at the rules. According to item 5b, except for the P vs. NP problem and the Navier–Stokes Equation problem, it gives nothing ($0) for a disproof of these problems. Why, oh why?? Let’s look into CMI’s “primary objectives and purposes”:
To recognize extraordinary achievements and advances in mathematical research.
So it sounds like the CMI does not think that disproving the Riemann Hypothesis should be rewarded, because this wouldn’t “advance mathematical research”. Surely, you are joking? Whatever happened to “the opposite of a profound truth may well be another profound truth”? Why does the CMI want to put its thumb on the scale and support only one side? Do they not want to find out the solution, whatever it is? Shouldn’t they be eager to dispense with a “wrong conjecture” so as to save numerous researchers from “advances to nowhere”?
I am sure you can see that my blood is boiling, but let’s proceed to the P vs. NP problem. What if it’s independent of ZFC? Clearly, the CMI wouldn’t pay for proving that. Why not? It’s not like this kind of thing has never happened before (see the obligatory link to CH). Some people believe that it is (or at least they did in 2012), and some people like Scott Aaronson take this possibility seriously. Wouldn’t this be a great result, worthy of an award as much as the proof that P=NP, or at least a nonconstructive proof that P=NP?
If your head is not spinning hard enough, here is another amusing quote:
Of course, it’s possible that P vs. NP is unprovable, but that that fact itself will forever elude proof: indeed, maybe the question of the independence of P vs. NP is itself independent of set theory, and so on ad infinitum! But one can at least say that, if P vs. NP (or for that matter, the Riemann hypothesis, Goldbach’s conjecture, etc.) were proven independent of ZF, it would be an unprecedented development. [Scott Aaronson, P vs. NP, 2016].
Speaking of Goldbach’s Conjecture: it is the most talked about and most intuitively correct statement in Number Theory that I know. In a publicity stunt, for two years there was a $1 million prize, offered by a publishing house, for a proof of the conjecture. Why just for the proof? I have never heard of anyone not believing the conjecture. If I were the insurance underwriter for the prize (I bet they had one), I would allow them to use “for the proof or disproof” for a mere extra $100 in premium. For another $50 I would let them use “or independent of ZF” — it’s free money, so why not? It’s such a pernicious idea to reward only one kind of research outcome!
Curiously, even for Goldbach’s Conjecture, there is a mild divergence of POVs on what the future holds. For example, Popper writes (twice in the same book!) that:
[On whether Goldbach’s Conjecture is ‘demonstrable’] We don’t know: perhaps we may never know, and perhaps we can never know. [Karl Popper, Conjectures and Refutations, 1963]
Ugh. Perhaps. I suppose anything can happen… For example, our civilization can “perhaps” die out in the next 200 years. But is that likely? Shouldn’t the gloomy past be a warning, not a prediction of the future? The only thing more outrageously pessimistic is this theological gem of a quote:
Not even God knows the number of permutations of 1000 avoiding the 1324 pattern. [Doron Zeilberger, quoted here, 2005]
Thanks, Doron! What a way to encourage everyone! Since we know from numerical estimates that this number is ≈ 3.7 × 10^1017 (see this paper and this follow-up), Zeilberger is suggesting that large pattern avoidance numbers are impossibly hard to compute precisely, already in the range of only about 1018 digits. I really hope he is proved wrong in his lifetime.
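To get a feel for the scale, here is a minimal brute-force sketch (my own illustration, not from the papers cited above) that counts 1324-avoiding permutations for small n. It reproduces the known initial values, and makes it obvious why anything beyond small n requires real algorithmic ingenuity rather than raw enumeration:

```python
# Count permutations of {0,...,n-1} avoiding the pattern 1324, by brute force.
# Feasible only for tiny n: the search space grows like n! * C(n,4).
from itertools import combinations, permutations

def avoids_1324(perm):
    # A 1324 pattern is a subsequence (a, b, c, d) with a < c < b < d.
    for i, j, k, l in combinations(range(len(perm)), 4):
        a, b, c, d = perm[i], perm[j], perm[k], perm[l]
        if a < c < b < d:
            return False
    return True

for n in range(1, 9):
    count = sum(avoids_1324(p) for p in permutations(range(n)))
    print(n, count)   # prints 1, 2, 6, 23, 103, 513, 2762, 15793
```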
But I digress. What I mean to emphasize is that there are many ways a problem can be resolved, yet some outcomes are considered more valuable than others. Shouldn’t the research achievement be rewarded, rather than the desired outcome? Here is yet another colorful opinion on this:
Given a conjecture, the best thing is to prove it. The second best thing is to disprove it. The third best thing is to prove that it is not possible to disprove it, since it will tell you not to waste your time trying to disprove it. That’s what Gödel did for the Continuum Hypothesis. [Saharon Shelah, Rutgers Univ. Colloquium, 2001]
Why do I care?
For one thing, disproving conjectures is part of what I do. Sometimes people are a little shy about unambiguously stating them as formal conjectures, so they phrase them as questions or open problems, but then clarify that they believe the answer is positive. This is a distinction without a difference, or at least I don’t see one (maybe they are afraid of Sarnak’s wrath?). Regardless, proving their beliefs wrong is still what I do.
For example, here is my old blog post on my disproof of the Noonan–Zeilberger Conjecture (joint with Scott Garrabrant). And in this recent paper (joint with Danny Nguyen), we disproved in one big swoosh Barvinok’s Problem, Kannan’s Problem, and Woods’ Conjecture. Just this year I disproved three conjectures:
- The Kirillov–Klyachko Conjecture (2004) that the reduced Kronecker coefficients satisfy the saturation property (this paper, joint with Greta Panova).
- The Brandolini et al. Conjecture (2019) that concrete lattice polytopes can multitile the space (this paper, joint with Alexey Garber).
- Kenyon’s Problem (c. 2005) that every integral curve in R^3 is a boundary of a PL surface comprised of unit triangles (this paper, joint with Alexey Glazyrin).
On top of that, just two months ago, in this paper (joint with Han Lyu), we showed that the remarkable independence heuristic by I. J. Good for the number of contingency tables fails badly even for nearly all uniform marginals. This is not exactly the disproof of a conjecture, but it’s close, since the heuristic was introduced back in 1950 and continues to work well in practice.
In addition, I am currently working on disproving two more old conjectures, which will remain unnamed until we actually resolve them (which might never happen, of course). In summary, I am deeply invested in disproving conjectures. The reasons why are somewhat complicated (see some of them below). But whatever my reasons, I demand, and naively fully expect, that my disproofs be treated on par with proofs, regardless of whether this expectation bears any relation to reality.
My favorite disproofs and counterexamples:
There are many. Here are just a few, some famous and some not-so-famous, in historical order:
- Fermat’s conjecture (letter to Pascal, 1640) on primality of Fermat numbers, disproved by Euler (1747)
- Tait’s conjecture (1884) on hamiltonicity of graphs of simple 3-polytopes, disproved by W.T. Tutte (1946)
- General Burnside Problem (1902) on finiteness of periodic groups, resolved negatively by E.S. Golod (1964)
- Keller’s conjecture (1930) on tilings with unit hypercubes, disproved by Jeff Lagarias and Peter Shor (1992)
- Borsuk’s Conjecture (1932) on partitions of convex sets into parts of smaller diameter, disproved by Jeff Kahn and Gil Kalai (1993)
- Hirsch Conjecture (1957) on the diameter of graphs of convex polytopes, disproved by Paco Santos (2010)
- Woods’s conjecture (1972) on the covering radius of certain lattices, disproved by Oded Regev, Uri Shapira and Barak Weiss (2017)
- Connes embedding problem (1976), resolved negatively by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen (2020)
In all these cases, the disproofs and counterexamples didn’t stop the research. On the contrary, they gave a push to further (sometimes numerous) developments in the area.
Why should you disprove conjectures?
There are three reasons, of different nature and importance.
First, disproving conjectures is opportunistic. As mentioned above, people seem to try proving much harder than they try disproving. This creates niches of opportunity for an open-minded mathematician.
Second, disproving conjectures is beautiful. Let me explain. Conjectures tend to be rigid, as in “objects of type pqr satisfy property abc.” People like me believe in the idea of “universality”. Some might call it “completeness” or even “Murphy’s law”, but the general principle is always the same. Namely: it is not sufficient to wish that all pqr satisfy abc in order to actually believe in the implication; rather, there has to be a strong reason why abc should hold. Barring that, pqr can possibly be almost anything, so in particular non-abc. While some would argue that non-abc objects are “ugly”, or at least “not as nice” as abc objects, the idea of universality means that your objects can be of every color of the rainbow — nice colors, ugly colors, startling colors, quiet colors, etc. That kind of palette has its own sense of beauty, but it’s an acquired taste, I suppose.
Third, disproving conjectures is constructive. It depends on the nature of the conjecture, of course, but one is often faced with the necessity of constructing a counterexample. Think of this as an engineering problem: building some pqr which at the same time is not abc. Such a construction, if at all possible, might be difficult, time consuming and computer assisted. But so what? What would you rather do: build a mile-high skyscraper (none exist yet) or prove that this is impossible? Curiously, in CS Theory both algorithms and (many) complexity results are constructive (you need gadgets). Even GCT is partially constructive, although explaining that would take us a while.
What should the institutions do?
If you are an institution which awards prizes, stop with the legal nonsense: “We award […] only for a publication of a proof in a top journal”. You need to set up a scientific committee anyway, since otherwise it’s sometimes hard to tell if someone deserves a prize. With mathematicians, you can expect anything anyway. Some will post two arXiv preprints, give a few lectures and then stop answering emails. Others will publish only in a journal where they are Editor-in-Chief. It’s stranger than fiction, really.
What you should do is say in the official rules: “We have [this much money] and an independent scientific committee which will award any progress on [this problem], partially or in full, as they see fit.” Then a disproof or an independence result will receive just as much as a proof (what’s done is done; what else are you going to do with the money?). This would also allow some flexibility for partial solutions. Say, somebody proves Goldbach’s Conjecture for integers > exp(exp(10^100000)), way, way beyond the range where the remaining integers could be checked computationally. I would give this person at least 50% of the prize money, leaving the rest for future developments by possibly many people improving on the bound. However, under the old prize rules such a person gets bupkes for their breakthrough.
What should the journals do?
In short, become more open to results of a computational and experimental nature. If this sounds familiar, that’s because it’s a summary of Zeilberger’s Opinions, viewed charitably. He is correct on this. This includes publishing results of the type “Based on computational evidence we believe in the following UVW conjecture” or “We develop a new algorithm which confirms the UVW conjecture for n < 13”. These are still contributions to mathematics, and the journals should learn to recognize them as such.
To put this in the context of our theme: it is clear that a lot more effort has been placed on proofs than on finding counterexamples. However, in many areas of mathematics there are no small counterexamples, so a heavy computational effort is crucial for any hope of finding one. Such work is not as glamorous as traditional papers. But really, when it comes to standards, if a journal is willing to publish the study of something like the “null graphs”, the ship has sailed for you…
Let me give you a concrete example where a computational effort is indispensable. The curious Lovász conjecture states that every finite connected vertex-transitive graph contains a Hamiltonian path. This conjecture has got to be false. It hits every red flag — there is really no reason why pqr = “vertex-transitive” should imply abc = “Hamiltonian”. The best lower bound for the length of the longest (self-avoiding) path is only about the square root of the number of vertices. In fact, even the original wording by Lovász shows he didn’t believe the conjecture is true (also, I asked him and he confirmed).
Unfortunately, proving that some potential counterexample is not Hamiltonian is computationally difficult. I once had a candidate in mind (a nice cubic Cayley graph on “only” 3600 vertices), but Bill Cook quickly found a Hamiltonian cycle, dashing my hopes (it was kind of him to look into this problem). Maybe someday, when TSP solvers are fast enough on much larger graphs, it will be time to return to this problem and thoroughly test it on large Cayley graphs. But say, despite long odds, I succeed and find a counterexample. Would a top journal publish such a paper?
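To make the kind of search concrete, here is a toy sketch (my own illustration; the 3600-vertex candidate above would need a real TSP solver, not this). It builds the Cayley graph of S_4 generated by a transposition and a 4-cycle, then looks for a Hamiltonian cycle by plain backtracking:

```python
# Toy search: Hamiltonian cycle in the Cayley graph of S_4 (24 vertices)
# with a transposition and a 4-cycle as generators. Plain backtracking
# suffices at this size; serious candidates need industrial TSP solvers.
from itertools import permutations

def compose(p, q):
    # permutation composition: (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

gens = [(1, 0, 2, 3), (1, 2, 3, 0)]                          # transposition, 4-cycle
gens += [tuple(g.index(i) for i in range(4)) for g in gens]  # their inverses
verts = list(permutations(range(4)))
adj = {v: {compose(v, g) for g in gens} for v in verts}      # right Cayley graph

def search(path, seen):
    # depth-first search extending path to a Hamiltonian cycle
    if len(path) == len(verts):
        return path if path[0] in adj[path[-1]] else None
    for w in adj[path[-1]]:
        if w not in seen:
            seen.add(w)
            found = search(path + [w], seen)
            if found:
                return found
            seen.remove(w)
    return None

start = verts[0]
cycle = search([start], {start})
print("Hamiltonian cycle found" if cycle else "no Hamiltonian cycle")
```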
Editor’s dilemma
There are three real criteria for evaluating the solution of an open problem by a journal:
- Is this an old, famous, or well-studied problem?
- Are the tools interesting or innovative enough to be helpful in future studies?
- Are the implications of the solution to other problems important enough?
Now let’s run a hypothetical experiment. Say a paper is submitted to a top math journal which solves a famous open problem in Combinatorics. Further, say somebody has already proved it is equivalent to a major problem in TCS. This checks criteria 1 and 3. Until not long ago it would have been rejected regardless, so let’s assume this is happening relatively recently.
Now imagine two parallel worlds: in the first world the conjecture is proved on 2 pages using beautiful but elementary linear algebra, and in the second world the conjecture is disproved in a 2-page summary of a detailed computational search. So in neither world do we have much to satisfy criterion 2. Now, a quiz: in which world will the paper be published?
You may have recognized the first world: it is the story of Hao Huang’s elegant proof of the induced subgraphs of hypercubes conjecture, which implies the sensitivity conjecture. The Annals published it, I am happy to report, in a welcome break with the past. But unless we are talking about some 200-year-old famous conjecture, I can’t imagine the Annals accepting a short computational paper in the second world. Indeed, it took a bit of a scandal to accept even the 400-year-old Kepler conjecture, which was proved in a remarkable computational work.
Now think about this. Is any of that fair? Shouldn’t we do better as a community on this issue?
What do other people do?
Over the years I have asked a number of people about the uncertainty created by conjectures and what they do about it. The answers surprised me. Here I am paraphrasing them:
Some were dumbfounded: “What do you mean this conjecture could be false? It has to be true; otherwise nothing I am doing makes much sense.”
Others were simplistic: “It’s an important conjecture. Famous people said it’s true. It’s my job to prove it.”
A third group was defensive: “Do you really think this conjecture could be wrong? Why don’t you try to disprove it then? We’ll see who is right.”
A fourth was biblical: “I tend to work 6 days a week towards the proof and one day towards the disproof.”
A fifth was practical: “I work on the proof until I hit a wall. I use the idea of this obstacle to try to construct potential counterexamples. When I find an approach to discard such counterexamples, I try to generalize the approach to continue working on the proof. Continue until either side wins.”
If the last two seem sensible to you, that’s because they are. However, I bet the fourth group is just grandstanding — no way they actually do that. The fifth approach sounds great when it is possible, but that’s exceedingly rare, in my opinion. We live in a technical age, when proving new results often requires a great deal of effort and technology. You likely have the tools and intuition to work in only one direction. Why would you want to waste time working in the other?
What should you do?
First, remember to make conjectures. Every time you write a paper, tell the story of what you proved. Then tell the story of what you wanted to prove but couldn’t. State it in the form of a conjecture. Don’t be afraid of being wrong, or of being right but oversharing your ideas. That’s a downside, sure. But the upside is that your conjecture might prove very useful to others, especially young researchers. It might advance the area, or help you find a collaborator to resolve it.
Second, learn to check your conjectures computationally in many small cases. It’s important to give supporting evidence so that others take your conjectures seriously.
Third, learn to make experiments and explore the area computationally. That’s how you make new conjectures.
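For instance, here is a minimal sketch (my own illustration) of what such a computational check might look like, verifying Goldbach’s Conjecture, mentioned earlier, in many small cases:

```python
# Check Goldbach's conjecture (every even n >= 4 is a sum of two primes)
# for all even numbers up to a small bound N.
def prime_sieve(n):
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            is_prime[p * p :: p] = [False] * len(is_prime[p * p :: p])
    return is_prime

N = 10 ** 5
is_prime = prime_sieve(N)
for n in range(4, N + 1, 2):
    # short-circuits quickly: the smallest working prime p is typically tiny
    if not any(is_prime[p] and is_prime[n - p] for p in range(2, n // 2 + 1)):
        print("Counterexample:", n)
        break
else:
    print("Goldbach's conjecture verified for all even n up to", N)
```

Of course, for Goldbach this only re-confirms what has been checked far beyond 10^18, but the same pattern (enumerate the small cases, then search for a violation) applies to your own freshly minted conjectures.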
Fourth, understand yourself: your skills, your tools, your abilities, such as problem solving, absorbing information from the literature, or making bridges to other fields. Faced with a conjecture, use this knowledge to understand whether, at least in principle, you might be able to prove or disprove it.
Fifth, actively look for collaborators. Those who have skills, tools, or abilities you are missing. More importantly, they might have a different POV on the validity of the conjecture and how one might want to attack it. Argue with them and learn from them.
Sixth, be brave and optimistic! Whether you decide to prove, disprove a conjecture, or simply state a new conjecture, go for it! Ignore the judgements by the likes of Sarnak and Zeilberger. Trust me — they don’t really mean it.