Archive

Archive for the ‘Mathematics Journals’ Category

My interview

March 9, 2021 1 comment

Readers of this blog will remember my strong advocacy for taking interviews. In a surprising turn of events, Toufik Mansour interviewed me for the journal Enumerative Combinatorics and Applications (ECA). Here is that interview. Not sure if I am the right person to be interviewed, but if you want to see Toufik’s other interviews — click here (I mentioned some of them earlier). I am looking forward to reading interviews of many more people in ECA and other journals.

P.S. The interview also asks about this blog, so it seems fitting to mention it here.

Corrections: (March 11, 2021) 1. I misread the question “What three results do you consider the most influential in combinatorics during the last thirty years?” as asking about three of my own results specifically in combinatorics. Ugh. As to the original question, none of my results would make that list. 2. In the pattern avoidance question, I misstated the last condition: I am asking for ec(Π) to be non-algebraic. Sorry everyone for all the confusion!

How to tell a good mathematical story

March 4, 2021 Leave a comment

As I mentioned in my previous blog post, I was asked to contribute to the Early Career Collection in the Notices of the AMS. The paper is not up on their website yet, but I have already submitted the proofs. So if you can’t wait — the short article is available here. I admit that it takes a bit of chutzpah to teach people how to write, so take it as you will.

Like my previous “how to write” article (see also my blog post), this article is mildly opinionated, but hopefully not so much as to stop being useful. It is again aimed at a novice writer. There is a major difference between the way fiction is written and the way math is written, and I am trying to capture it somehow. To give you some flavor, here is a quote:

What kind of a story? Imagine a non-technical and non-detailed version of the abstract of your paper. It should be short, to the point, and straightforward enough to be a tweet, yet interesting enough for one person to want to tell it, and for the listener to be curious enough to ask for details. Sounds difficult, if not impossible? You are probably thinking that way because distilled products always lack flavor compared to the real thing. I hear you, but let me give you some examples.

Take Aesop’s fable “The Tortoise and the Hare” written over 2500 years ago. The story would be “A creature born with a gift procrastinated one day, and was overtaken by a very diligent creature born with a severe handicap.” The names of these animals and the manner in which one lost to another are less relevant to the point, so the story is very dry. But there are enough hints to make some readers curious to look up the full story.

Now take “The Terminator”, the original 1984 movie. The story here is (spoiler alert!) “A man and a machine come from another world to fight in this world over the future of the other world; the man kills the machine but dies at the end.” If you are like me, you probably have many questions about the details, which are in many ways much more exciting than the dry story above. But you see my point – this story is a bit like an extended tagline, yet interesting enough to be discussed even if you know the ending.

It could have been worse! Academic lessons of 2020

December 20, 2020 3 comments

Well, this year sure was interesting, and not in a good way. Back in 2015, I wrote a blog post discussing how video talks are here to stay, and how we should all agree to start giving them and embrace watching them, whether we like it or not. I was right about that, I suppose. OTOH, I sort of envisioned a gradual acceptance of this practice, not the shock therapy of a phase transition. So, what happened? It’s time to summarize the lessons and roll out some new predictions.

Note: this post is about academic life, which is undergoing some changes. The changes in real life are much more profound, but they are well discussed elsewhere.

Teaching

This was probably the bleakest part of academic life, much commented upon by the media. Good thing there is more to academia than teaching, no matter what the ignorant critics think. I personally haven’t heard anyone say, post-March 2020, that online education is an improvement. If you are like me, you probably spent much more time preparing and delivering your lectures. The quality probably suffered a little. The students probably didn’t learn as much. Neither party probably enjoyed the experience too much. They also probably cheated quite a bit more. Oh, well…

Let’s count the silver linings. First, it will all be over some time next year. At UCLA, not before the end of Summer. Maybe in the Fall… Second, it could’ve been worse. Much worse. Depending on the year, we would have had different issues. Back in 1990, we would all be furloughed for a year, living off our savings. In 2000, most families had just one personal computer (and no smartphones, obviously). Let the implications of that sink in. But even in 2010 we would have had giant technical issues teaching on Skype (right?) by pointing our laptop cameras at blackboards, with dismal effect. The infrastructure that allows good quality streaming was also not widespread (people were still using Redbox, remember?)

Third, the online technology somewhat mitigated the total disaster of studying in the pandemic time. Students who are stuck in faraway countries or busy with family life can watch stored videos of lectures at their convenience. Educational and grading software allows students to submit homeworks and exams online, and instructors to grade them. Many other small things not worth listing, but worth being thankful for.

Fourth, the accelerated embrace of the educational technology could be a good thing long term, even when things go back to normal. No more emails with scanned late homeworks, no more canceled/moved office hours while away at conferences. This can all help us become better at teaching.

Finally, the long-declared “death of MOOCs” is no longer controversial. As a long-time (closeted) opponent of online education, I am overjoyed that MOOCs are no longer viewed as a positive experience for university students, more like something to suffer through. Here in CA we learned this a while ago, as the eagerness of the current Gov. Newsom (back then Lt. Gov.) to embrace online courses did not work out well at all. Back in 2013, he said that the whole UC system needs to embrace online education, pronto: “If this doesn’t wake up the U.C. [..] I don’t know what will.” Well, now you know, Governor! I guess, in 2020, I don’t have to hide my feelings on this anymore…

Research

I always thought that mathematicians can work from anywhere with a good WiFi connection. True, but not really – this year was a mixed experience, as lonely introverts largely prospered research-wise, while busy family people and extroverts clearly suffered. Some day we will know how much research suffered in 2020, but for me personally it wasn’t bad at all (see e.g. some of my results described in my previous blog post).

Seminars

I am not even sure we should be using the same word to describe research seminars during the pandemic, as the experience of giving and watching math lectures online is so drastically different from what we are used to. Let’s count the differences, which are both positive and negative.

  1. The personal interactions suffer. Online people are much more shy to interrupt, follow up with questions after the talk, etc. The usual pre- or post-seminar meals allow the speaker to meet the (often junior) colleagues who might be more open to ask questions in an informal setting. This is all bad.
  2. Being online, the seminar opened to a worldwide audience. This is just terrific as people from remote locations across the globe now have the same access to seminars at leading universities. What arXiv did to math papers, covid did to math seminars.
  3. Again, being online, seminars are no longer restricted to local speakers, nor do they have to make travel arrangements for out-of-town speakers. Some UCLA seminars this year had many European speakers, something which would have been prohibitively expensive just a year ago.
  4. Many seminars are now recorded with videos and slides posted online, like we do at the UCLA Combinatorics and LA Combinatorics and Complexity seminars I am co-organizing. The viewers can watch them later, can fast forward, come back and re-watch them, etc. All the good features of watching videos I extolled back in 2015. This is all good.
  5. On a minor negative side, the audience is no longer stable, as it varies from seminar to seminar, further diminishing personal interactions and making the level of the audience somewhat unpredictable and hard to aim for.
  6. As a seminar organizer, I make it a personal quest to encourage people to turn on their cameras at the seminars by saying hello only to those whose faces I see. When the speaker doesn’t see the faces, whether nodding or puzzled, they have no idea whether they are being clear, too fast, or too slow, etc. Stopping to ask for questions no longer works well, especially if the seminar is being recorded. This invariably leads to worse presentations, as the speakers can misjudge the audience’s reactions.
  7. Unfortunately, not everyone is capable of handling technology challenges equally well. I have seen remarkably well presented talks, as well as some talks of extremely poor quality. The ability to mute yourself and hide behind your avatar is the only saving grace in such cases.
  8. Even the true haters of online education are now at least semi-on-board. Back in May, I wrote to Chris Schaberg, dubbed by the insufferable Rebecca Schuman as “vehemently opposed to the practice“. He replied that he is no longer that opposed to teaching online, and that he is now in the “it’s really complicated!” camp. Small miracles…

Conferences

The changes in conferences are largely positive. Unfortunately, some conferences from the Spring and Summer of 2020 were canceled and moved, somewhat optimistically, to 2021. Looking back, they should all have been held in the online format, which opens them to participants from around the world. Let’s count upsides and downsides:

  1. No need for travel, long time commitments, or financial expenses. Some conferences continue charging fees for online participation. This seems weird to me. I realize that some conferences are vehicles to support various research centers and societies. Whatever, this is unsustainable, as online conferences will likely survive the pandemic. These organizations should figure out some other income sources or die.
  2. The conferences are now truly global, so the emphasis is on mathematical areas rather than on geographic proximity. This suggests that the (until recently) very popular AMS meetings should probably die, making AMS even more of a publisher than it is now. I am especially looking forward to the death of “joint meetings” in January, which in my opinion have outlived their usefulness as some kind of math extravaganza bringing everyone together. In fact, Zoom simply can’t bring five thousand people together, just forget about it…
  3. The conferences are now open to people in other areas. This might seem minor — they were always open. However, given the time/money constraints, a mathematician is likely to go only to conferences in their area. Besides, since they rarely get invited to speak at conferences in other areas, travel to such conferences is even harder to justify. This often leads to groupthink as the same people meet year after year at conferences on narrow subjects. Now that this is no longer an obstacle, we might see more interactions between the fields.
  4. On the negative side, the best kind of conferences are small informal workshops (think Oberwolfach, AIM, Banff, etc.), where the lectures are advanced and the interactions are intense. I miss those and hope they come back, as they are really irreplaceable in the online setting. If all goes well, these are the only conferences which should definitely survive, and perhaps even expand in numbers.

Books and journals

A short summary is that in math, everything should be electronic, instantly downloadable and completely free. Cut off from libraries, thousands of mathematicians were instantly left to the perils of their university library’s electronic subscriptions and their personal book collections. Some fared better than others, in part thanks to the arXiv, non-free journals offering old issues free to download, and some ethically dubious foreign websites.

I have been writing about my copyleft views for a long time (see here, there and most recently there). It gets more and more depressing every time. Just when you think there is some hope, the resilience of paid publishing and the community’s reluctance to change keep the unfortunate status quo in place. You would think everyone would be screaming about the lack of access to books/journals, but I guess everyone is busy doing something else. Still, there are some lessons worth noting.

  1. You really must have all your papers freely available online. Yes, copyrighted or not, the publishers are ok with authors posting their papers on their personal websites. They are not ok when others post your papers on their websites, so free access to your papers is on you and your coauthors (if any). Unless you have already done so, do this asap! Yes, this applies even to papers accessible online by subscription to selected libraries. For example, many libraries, including the entire UC system, no longer have access to Elsevier journals. Please help both us and yourself! How hard is it to put a paper on the arXiv or your personal website? If people like Noga Alon and Richard Stanley found time to put hundreds of their papers online, so can you. I make a point of emailing people to ask them to do this every time I come across a reference which I cannot access. They rarely do, and usually just email me the paper. Oh well, at least I tried…
  2. Learn to use databases like MathSciNet and Zentralblatt. Maintain your own website, adding slides and video links as well as all your papers. Make sure to clean up your Google Scholar profile and keep it up to date. When left unattended, it can get overrun with random papers by other people, random non-research files you authored, separate items for the same paper, etc. Deal with all that – it’s easy and takes just a few minutes (also, some people judge them). When people are struggling trying to do research from home, every bit of help counts.
  3. If you are signing a book contract, be nice to online readers. Make sure you keep the right to display a public copy on your website. We all owe a great deal of gratitude to authors who did this. Here is my favorite, now supplemented with high quality free online lectures. Be like that! Don’t be like the one author (who will remain unnamed) who refused to email me a copy of a short 5-page section from his recent book. I wanted to teach that section in my graduate class on posets this Fall. Instead, the author suggested I buy a paper copy. His loss — I ended up teaching some other material instead. Later on, I discovered that the book is already available on one of those ethically compromised websites. He was fighting a battle he had already lost!

Home computing

Different people can draw different conclusions from 2020, but I don’t think anyone would argue against the importance of having good home computing. There is a refreshing variety of ways in which people do this, and it’s unclear to me what the optimal setup is. With a vaccine on the horizon, people might be reluctant to invest further in new computing equipment (or video cameras, lights, whiteboards, etc.), but the holiday break is actually a good time to marinate on what worked out well and what didn’t.

Read your evaluations and take them to heart. Make changes when you see there are problems. I know, it’s unfair, your department might never compensate you for all this stuff. Still, it’s a small price to pay for having a safe academic job in the time of widespread anxiety.

Predictions for the future

  1. Very briefly: I think online seminars and conferences are here to stay. Local seminars and small workshops will also survive. The enormous AMS meetings and expensive Theory CS meetings will play with the format, but eventually turn online for good or die an untimely death.
  2. Online teaching will continue to be offered by every undergraduate math program, to reach students across the spectrum of personal circumstances. A small minority of courses, but still. Maybe one section each of calculus, linear algebra, intro probability, discrete math, etc. Some faculty might actually prefer this format, to stay away from the office for a semester. Perhaps, in place of a sabbatical, they can ask for permission to spend a semester at some other campus, maybe in another state or country, while they continue teaching, holding seminars, supervising students, etc. This could be a perk of academic life to compete with the “remote work” that many businesses are starting to offer on a permanent basis. Universities would have to redefine what they mean by the “residence” requirement for both faculty and students.
  3. More university libraries will play hardball and unsubscribe from major for-profit publishers. This again sounds hopeful, but it will not snowball for at least the next 10 years.
  4. There will be some standardization of online teaching requirements across the country. Online cheating will remain widespread. Courts will repeatedly rule that business and institutions can discount or completely ignore all 2020 grades as unreliable in large part because of the cheating scandals.

Final recommendations

  1. Be nice to your junior colleagues. In the winner-take-all no-limits online era, the established and well-known mathematicians get invited over and over, while their junior colleagues get overlooked, just when they really need help (the job market might be tough this year). So please go out of your way to invite them to give talks at your seminars. Help them with papers and application materials. At least reply to their emails! Yes, even small things count…
  2. Do more organizing if you are in a position to do so. In the absence of physical contact, many people are too shy and shell-shocked to reach out. Seminars, conferences, workshops, etc. make academic life seem somewhat normal, and the breaks definitely allow for more interactions. Given the apparent abundance of online events, one may be forgiven for thinking that no more are needed. But more locally focused online events are actually important to help your communities. These can prove critical until everything is back to normal.

Good luck everybody! Hope 2021 will be better for us all!

What if they are all wrong?

December 10, 2020 4 comments

Conjectures are a staple of mathematics. They are everywhere, permeating every area, subarea and subsubarea. They are diverse enough to avoid a single general adjective. They come in all shapes and sizes. Some of them are famous, classical, general, important, inspirational, far-reaching, audacious, exciting or popular, while others are speculative, narrow, technical, imprecise, far-fetched, misleading or recreational. That’s a lot of beliefs about unproven claims, yet we persist in dispensing them, inadvertently revealing our experience, intuition and biases.

The conjectures also vary in attitude. Like a finish line ribbon, they all appear equally vulnerable to an outsider, but in fact they differ widely from race to race. Some are eminently reachable, the only question being who will get there first (think 100 meter dash). Others are barely on the horizon, requiring great effort, a variety of tools, and an extended time commitment (think ironman triathlon). The most celebrated third type are like those Sci-Fi space expeditions, requiring multigenerational commitments spanning hundreds of years, often losing contact with the civilization they left behind. And we can’t forget the romantic fourth type — like the North Star, no one actually wants to reach them, as they are largely used for navigation, to find a direction in uncharted waters.

Now, conjectures famously provide a foundation of the scientific method, but that’s not at all how we actually think of them in mathematics. I argued back in this pointed blog post that citations are crucial for the day-to-day development of mathematics, so one should take utmost care in making references. While this claim is largely uncontroversial and serves as a raison d’être for most Google Scholar profiles, conjectures provide a convenient idealistic way out. Thus, it’s much more noble and virtuous to say “I dedicated my life to the study of the XYZ Conjecture” (even if one never publishes anything) than “I am working hard writing so many papers to gain the respect of my peers, get a promotion, and provide for my family“. Right. Obviously…

But given this apparent (true or perceived) importance of conjectures, are you sure you are using them right? What if some/many of these conjectures are actually wrong, what then? Should you be flying that starship if there is no there there? An idealist would argue something like “it’s a journey, not a destination“, but I strongly disagree. Getting closer to the truth is actually kind of important, both as a matter of public policy and on an individual level. It is thus pretty important to get right where we are going.

What are conjectures in mathematics?

That’s a stupid question, right? Conjectures are mathematical claims whose validity we are trying to ascertain. Is that all? Well, yes, if you don’t care whether anyone will actually work on the conjecture. In other words, something about the conjecture needs to be interesting and inspiring.

What makes a conjecture interesting?

This is a hard question to answer because it is as much psychological as it is mathematical. A typical answer would be “oh, because it’s old/famous/beautiful/etc.” Uhm, ok, but let’s try to be a little more formal.

One typically argues “oh, that’s because this conjecture would imply [a list of interesting claims and known results]”. Well, ok, but this is self-referential. We already know all those “known results”, so no need to prove them again. And these “claims” are simply other conjectures, so this is really an argument of the type “this conjecture would imply that conjecture”, so not universally convincing. One can argue: “look, this conjecture has so many interesting consequences”. But this is both subjective and unintuitive. Shouldn’t having so many interesting conjectural consequences suggest that perhaps the conjecture is too strong and likely false? And if the conjecture is likely to be false, shouldn’t this make it uninteresting?

Also, wouldn’t it be interesting if you disproved a conjecture everyone believes to be true? In some sense, wouldn’t it be even more interesting if until now everyone was simply wrong?

None of these are new ideas, of course. For example, faced with the need to justify the “great” Baum–Connes conjecture, or rather 123 pages of a survey on the subject (which is quite interesting and doesn’t really need to be justified), the authors suddenly turned reflective. Mindful of the self-referential approach, which they quickly discard, they chose a different tactic:

We believe that the interest of a conjecture lies in the feeling of unity of mathematics that it entails. [M.P. Gomez Aparicio, P. Julg and A. Valette, “The Baum-Connes conjecture“, 2019]

Huh? Shouldn’t math be about absolute truths, not feelings? Also, in my previous blog post, I mentioned Noga Alon‘s quote that Mathematics is already “one unit“. If it is, why does it need a new “feeling of unity“? Or is that like one of those new age ideas which stop being true if you don’t reinforce them at every occasion?

If you are confused at this point, welcome to the club! There is no objective way to argue what makes certain conjectures interesting. It’s all in our imagination. Nikolay Konstantinov once told me that “mathematics is a boring subject because every statement is equivalent to saying that some set is empty.” He meant to be provocative rather than uninspiring. But the problem he is pointing to is quite serious.

What makes us believe a conjecture is true?

We have already established that in order to argue that a conjecture is interesting, we need to argue that it is also true, or at least that we want to believe it to be true so as to have all those consequences. Note, however, that we argue that a conjecture is true in exactly the same way we argue it is interesting: by showing that it holds in some special cases, and that it would imply other conjectures which are believed to be true because they have also been checked in various special cases. So in essence, this gives “true = interesting” in most cases. Right?

This is where it gets complicated. Say you are working on the “abc conjecture”, which may or may not be open. You claim that it has many consequences, which makes it both likely true and interesting. One of them is a negative solution to the Erdős–Ulam problem about the existence of a dense set in the plane with rational pairwise distances. But a positive solution to the E-U problem implies Harborth’s conjecture (aka the “integral Fáry problem“) that every planar graph can be drawn in the plane with rational edge lengths. So, counterintuitively, if you follow the logic above, shouldn’t you be working on a positive solution to Erdős–Ulam, since it would both imply one conjecture and give a counterexample to another? For the record, I wouldn’t do that; I am just making a polemical point.

I am really hoping you see where I am going. Since there is no objective way to tell if a conjecture is true or not, and what exactly is so interesting about it, shouldn’t we discard our biases and also work towards disproving the conjecture just as hard as trying to prove it?

What do people say?

It’s worth starting with a general (if slightly poetic) modern description:

In mathematics, [..] great conjectures [are] sharply formulated statements that are most likely true but for which no conclusive proof has yet been found. These conjectures have deep roots and wide ramifications. The search for their solution guides a large part of mathematics. Eternal fame awaits those who conquer them first. Remarkably, mathematics has elevated the formulation of a conjecture into high art. [..] A well-chosen but unproven statement can make its author world-famous, sometimes even more so than the person providing the ultimate proof. [Robbert Dijkgraaf, The Subtle Art of the Mathematical Conjecture, 2019]

Karl Popper thought that conjectures are foundational to science, even if he somewhat idealized the efforts to disprove them:

[Great scientists] are men of bold ideas, but highly critical of their own ideas: they try to find whether their ideas are right by trying first to find whether they are not perhaps wrong. They work with bold conjectures and severe attempts at refuting their own conjectures. [Karl Popper, Heroic Science, 1974]

Here is how he reconciled somewhat the apparent contradiction:

On the pre-scientific level we hate the very idea that we may be mistaken. So we cling dogmatically to our conjectures, as long as possible. On the scientific level, we systematically search for our mistakes. [Karl Popper, quoted by Bryan Magee, 1971]

Paul Erdős was, of course, a champion of conjectures and open problems. He joked that the purpose of life is “proof and conjecture”, and this theme is repeatedly echoed when people write about him. It is hard to overestimate his output, which included hundreds of talks titled “My favorite problems“. He wrote over 180 papers with collections of conjectures and open problems (nicely assembled by Zbl. Math.).

Peter Sarnak has a somewhat opposite point of view, as he believes one should be extremely cautious about stating a conjecture so people don’t waste time working on it. He said once, only half-jokingly:

Since we reward people for making a right conjecture, maybe we should punish those who make a wrong conjecture. Say, cut off their fingers. [Peter Sarnak, UCLA, c. 2012]

This is not an exact quote — I am paraphrasing from memory. Needless to say, I disagree. I don’t know how many fingers he wished Erdős to lose, since some of his conjectures were definitely disproved: one, two, three, four, five, and six. This is not me gloating; quite the opposite, in fact. When you are stating hundreds of conjectures over the span of almost 50 years, having only a handful of them disproved is an amazing batting average. It would, however, make me happy if Sarnak’s conjecture were disproved someday.

Finally, there is a bit of a controversy whether conjectures are worth as much as theorems. This is aptly summarized in this quote about yet another champion of conjectures:

Louis J. Mordell [in his book review] questioned Hardy‘s assessment that Ramanujan was a man whose native talent was equal to that of Euler or Jacobi. Mordell [..] claims that one should judge a mathematician by what he has actually done, by which Mordell seems to mean, the theorems he has proved. Mordell’s assessment seems quite wrong to me. I think that a felicitous but unproved conjecture may be of much more consequence for mathematics than the proof of many a respectable theorem. [Atle Selberg, “Reflections Around the Ramanujan Centenary“, 1988]

So, what’s the problem?

Well, the way I see it, the efforts made towards proving vs. disproving conjectures are greatly out of balance. Despite all of Popper’s high-minded claims about “severe attempts at refuting their own conjectures“, I don’t think there is much truth to that in the modern mathematical sciences. This does not mean that disproofs of famous conjectures aren’t celebrated. Sometimes they are; see below. But it’s clear to me that proofs are celebrated more frequently, and to a much greater degree. I have only anecdotal evidence to support my claim, but bear with me.

Take prizes. Famously, the Clay Math Institute gives $1 million for a solution of any of these major open problems. But look closely at the rules. According to item 5b, except for the P vs. NP problem and the Navier–Stokes Equation problem, it gives nothing ($0) for a disproof of these problems. Why, oh why?? Let’s look at CMI’s “primary objectives and purposes“:

To recognize extraordinary achievements and advances in mathematical research.

So it sounds like CMI does not think that disproving the Riemann Hypothesis needs to be rewarded, because this wouldn’t “advance mathematical research”. Surely you are joking? Whatever happened to “the opposite of a profound truth may well be another profound truth“? Why does the CMI want to put its thumb on the scale and support only one side? Do they not want to find out the solution, whatever it is? Shouldn’t they be eager to dispense with a “wrong conjecture” so as to save numerous researchers from “advances to nowhere“?

I am sure you can see that my blood is boiling, but let’s proceed to the P vs. NP problem. What if it’s independent of ZFC? Clearly, CMI wouldn’t pay for proving that. Why not? It’s not like this kind of thing never happened before (see obligatory link to CH). Some people believe that (or at least they did in 2012), and some people like Scott Aaronson take this seriously enough. Wouldn’t this be a great result worthy of an award as much as the proof that P=NP, or at least a nonconstructive proof that P=NP?

If your head is not spinning hard enough, here is another amusing quote:

Of course, it’s possible that P vs. NP is unprovable, but that that fact itself will forever elude proof: indeed, maybe the question of the independence of P vs. NP is itself independent of set theory, and so on ad infinitum! But one can at least say that, if P vs. NP (or for that matter, the Riemann hypothesis, Goldbach’s conjecture, etc.) were proven independent of ZF, it would be an unprecedented development. [Scott Aaronson, P vs. NP, 2016].

Speaking of Goldbach's Conjecture, the most talked about and most intuitively correct statement in Number Theory that I know of. In a publicity stunt, for two years a publishing house offered a $1 million prize for a proof of the conjecture. Why just for the proof? I have never heard of anyone not believing the conjecture. If I were the insurance underwriter for the prize (I bet they had one), I would allow them to use "for the proof or disproof" for a mere extra $100 in premium. For another $50 I would let them use "or independence from ZF" — it's free money, so why not? Rewarding only one kind of research outcome is such a pernicious idea!

Curiously, even for Goldbach’s Conjecture, there is a mild divergence of POVs on what the future holds. For example, Popper writes (twice in the same book!) that:

[On whether Goldbach’s Conjecture is ‘demonstrable’] We don’t know: perhaps we may never know, and perhaps we can never know. [Karl Popper, Conjectures and Refutations, 1963]

Ugh. Perhaps. I suppose anything can happen… For example, our civilization can "perhaps" die out in the next 200 years. But is that likely? Shouldn't the gloomy past be a warning rather than a prediction of the future? The only thing more outrageously pessimistic is this theological gem of a quote:

Not even God knows the number of permutations of 1000 avoiding the 1324 pattern. [Doron Zeilberger, quoted here, 2005]

Thanks, Doron! What a way to encourage everyone! Since we know from numerical estimates that this number is ≈ 3.7 × 10^1017 (see this paper and this follow-up), Zeilberger is suggesting that large pattern avoidance numbers are impossibly hard to compute precisely, already in the range of only about 1018 digits. I really hope he is proved wrong in his lifetime.
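
To make the scale of the problem concrete, here is a brute-force sketch (my own toy illustration, not taken from the papers linked above) that counts 1324-avoiding permutations for small n. Exhaustive search like this dies around n = 10; getting the exact value at n = 1000 is precisely the computation Zeilberger deems hopeless.

```python
from itertools import combinations, permutations

def contains_1324(perm):
    """True if perm contains 1324 as a classical pattern: positions
    i < j < k < l whose values come in the relative order 1, 3, 2, 4."""
    for i, j, k, l in combinations(range(len(perm)), 4):
        a, b, c, d = perm[i], perm[j], perm[k], perm[l]
        if a < c < b < d:  # value order: a smallest, then c, then b, then d
            return True
    return False

def av_1324(n):
    """Number of permutations of {0, ..., n-1} avoiding 1324 (brute force)."""
    return sum(1 for p in permutations(range(n)) if not contains_1324(p))

print([av_1324(n) for n in range(1, 8)])  # → [1, 2, 6, 23, 103, 513, 2762]
```

Note the n = 4 value: exactly one permutation of length 4 contains the pattern 1324, namely 1324 itself, hence 24 − 1 = 23 avoiders.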

But I digress. What I mean to emphasize is that there are many ways a problem can be resolved, yet some outcomes are considered more valuable than others. Shouldn't the research achievement be rewarded, not the desired outcome? Here is yet another colorful opinion on this:

Given a conjecture, the best thing is to prove it. The second best thing is to disprove it. The third best thing is to prove that it is not possible to disprove it, since it will tell you not to waste your time trying to disprove it. That's what Gödel did for the Continuum Hypothesis. [Saharon Shelah, Rutgers Univ. Colloquium, 2001]

Why do I care?

For one thing, disproving conjectures is part of what I do. Sometimes people are a little shy to unambiguously state them as formal conjectures, so they phrase them as questions or open problems, but then clarify that they believe the answer is positive. This is a distinction without a difference, or at least I don't see one (maybe they are afraid of Sarnak's wrath?). Regardless, proving their beliefs wrong is still what I do.

For example, here is my old blog post on my disproof of the Noonan–Zeilberger Conjecture (joint with Scott Garrabrant). And in this recent paper (joint with Danny Nguyen), we disprove in one big swoosh Barvinok's Problem, Kannan's Problem, and Woods' Conjecture. Just this year I disproved three conjectures:

  1. The Kirillov–Klyachko Conjecture (2004) that the reduced Kronecker coefficients satisfy the saturation property (this paper, joint with Greta Panova).
  2. The Brandolini et al. Conjecture (2019) that concrete lattice polytopes can multitile the space (this paper, joint with Alexey Garber).
  3. Kenyon's Problem (c. 2005) that every integral curve in R^3 is a boundary of a PL surface comprised of unit triangles (this paper, joint with Alexey Glazyrin).

On top of that, just two months ago in this paper (joint with Han Lyu), we showed that the remarkable independence heuristic by I. J. Good for the number of contingency tables fails badly even for nearly all uniform marginals. This is not exactly a disproof of a conjecture, but it's close, since the heuristic was introduced back in 1950 and continues to be used in practice.
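
For readers unfamiliar with the heuristic, here is a small self-contained sketch of the comparison (my own illustration; the formula in `good_estimate` is the standard statement of Good's independence heuristic as I recall it, so treat the exact form as an assumption and see the paper for the precise setup):

```python
from math import comb

def compositions(total, bounds):
    """All tuples x >= 0 with sum(x) == total and x[j] <= bounds[j]."""
    if len(bounds) == 1:
        if total <= bounds[0]:
            yield (total,)
        return
    for x in range(min(total, bounds[0]) + 1):
        for rest in compositions(total - x, bounds[1:]):
            yield (x,) + rest

def count_tables(rows, cols):
    """Exact number of nonnegative integer matrices with row sums `rows`
    and column sums `cols` (naive recursion over the first row)."""
    if not rows:
        return 1 if all(c == 0 for c in cols) else 0
    return sum(count_tables(rows[1:], tuple(c - x for c, x in zip(cols, row)))
               for row in compositions(rows[0], tuple(cols)))

def good_estimate(rows, cols):
    """Good's independence heuristic for the number of such tables."""
    m, n, N = len(rows), len(cols), sum(rows)
    num = 1
    for r in rows:
        num *= comb(r + n - 1, n - 1)
    for c in cols:
        num *= comb(c + m - 1, m - 1)
    return num / comb(N + m * n - 1, m * n - 1)

print(count_tables((2, 2), (2, 2)))   # → 3
print(good_estimate((2, 2), (2, 2)))  # 81/35, roughly 2.31
```

Even on this tiny 2×2 example the heuristic is off; the interesting question, which the paper addresses, is how badly it misbehaves as the tables grow.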

In addition, I am currently working on disproving two more old conjectures, which will remain unnamed until we actually resolve them (which might never happen, of course). In summary, I am deeply invested in disproving conjectures. The reasons why are somewhat complicated (see some of them below). But whatever my reasons, I demand and naively fully expect that my disproofs be treated on par with proofs, regardless of whether this expectation bears any relation to reality.

My favorite disproofs and counterexamples:

There are many. Here are just a few, some famous and some not-so-famous, in historical order:

  1. Fermat's conjecture (letter to Pascal, 1640) on the primality of Fermat numbers, disproved by Euler (1732)
  2. Tait’s conjecture (1884) on hamiltonicity of graphs of simple 3-polytopes, disproved by W.T. Tutte (1946)
  3. General Burnside Problem (1902) on finiteness of periodic groups, resolved negatively by E.S. Golod (1964)
  4. Keller’s conjecture (1930) on tilings with unit hypercubes, disproved by Jeff Lagarias and Peter Shor (1992)
  5. Borsuk’s Conjecture (1932) on partitions of convex sets into parts of smaller diameter, disproved by Jeff Kahn and Gil Kalai (1993)
  6. Hirsch Conjecture (1957) on the diameter of graphs of convex polytopes, disproved by Paco Santos (2010)
  7. Woods’s conjecture (1972) on the covering radius of certain lattices, disproved by Oded Regev, Uri Shapira and Barak Weiss (2017)
  8. Connes embedding problem (1976), resolved negatively by Zhengfeng Ji, Anand Natarajan, Thomas Vidick, John Wright and Henry Yuen (2020)

In all these cases, the disproofs and counterexamples didn’t stop the research. On the contrary, they gave a push to further (sometimes numerous) developments in the area.

Why should you disprove conjectures?

There are three reasons, of different nature and importance.

First, disproving conjectures is opportunistic. As mentioned above, people seem to try proving conjectures much harder than they try disproving them. This creates niches of opportunity for an open-minded mathematician.

Second, disproving conjectures is beautiful. Let me explain. Conjectures tend to be rigid, as in “objects of the type pqr satisfy property abc.” People like me believe in the idea of “universality“. Some might call it “completeness” or even “Murphy’s law“, but the general principle is always the same. Namely: it is not sufficient that one wishes that all pqr satisfy abc to actually believe in the implication; rather, there has to be a strong reason why abc should hold. Barring that, pqr can possibly be almost anything, so in particular non-abc. While some would argue that non-abc objects are “ugly” or at least “not as nice” as abc, the idea of universality means that your objects can be of every color of the rainbow — nice color, ugly color, startling color, quiet color, etc. That kind of palette has its own sense of beauty, but it’s an acquired taste I suppose.

Third, disproving conjectures is constructive. It depends on the nature of the conjecture, of course, but one is often faced with the necessity of constructing a counterexample. Think of this as an engineering problem: build some pqr which at the same time is not abc. Such a construction, if at all possible, might be difficult, time-consuming and computer-assisted. But so what? What would you rather do: build a mile-high skyscraper (none exist yet) or prove that this is impossible? Curiously, in CS Theory both algorithms and (many) complexity results are constructive (you need gadgets). Even GCT is partially constructive, although explaining that would take us a while.

What should the institutions do?

If you are an institution which awards prizes, stop with the legal nonsense: "We award […] only for a publication of a proof in a top journal". You need to set up a scientific committee anyway, since otherwise it's sometimes hard to tell if someone deserves a prize. With mathematicians you can expect anything. Some would post two arXiv preprints, give a few lectures, and then stop answering emails. Others would publish only in a journal where they are Editor-in-Chief. It's stranger than fiction, really.

What you should do is say in the official rules: "We have [this much money] and an independent scientific committee which will award any progress on [this problem] partially or in full as they see fit." Then a disproof or an independence result will receive just as much as the proof (what's done is done; what else are you going to do with the money?). This would also allow some flexibility for partial solutions. Say, somebody proves Goldbach's Conjecture for all integers > exp(exp(10^100000)), way, way beyond computational powers for the remaining integers to be checked. I would give this person at least 50% of the prize money, leaving the rest for future developments by possibly many people improving on the bound. However, under the old prize rules such a person gets bupkes for their breakthrough.

What should the journals do?

In short, become more open to results of a computational and experimental nature. If this sounds familiar, that's because it's a summary of Zeilberger's Opinions, viewed charitably. He is correct on this. This includes publishing results of the type "Based on computational evidence we believe in the following UVW conjecture", or "We develop a new algorithm which confirms the UVW conjecture for n < 13". These are still contributions to mathematics, and the journals should learn to recognize them as such.

To put this in the context of our theme: it is clear that a lot more effort has been placed on proofs than on finding counterexamples. However, in many areas of mathematics there are no small counterexamples, so a heavy computational effort is crucial for any hope of finding one. Such work is not as glamorous as traditional papers. But really, when it comes to standards, if a journal is willing to publish the study of something like the "null graphs", the ship has sailed for you…

Let me give you a concrete example where a computational effort is indispensable. The curious Lovász conjecture states that every finite connected vertex-transitive graph contains a Hamiltonian path. This conjecture has got to be false. It hits every red flag — there is really no reason why pqr = "vertex-transitive" should imply abc = "Hamiltonian". The best lower bound for the length of the longest (self-avoiding) path is only about the square root of the number of vertices. In fact, even the original wording by Lovász suggests he didn't believe the conjecture is true (also, I asked him and he confirmed).

Unfortunately, proving that some potential counterexample is not Hamiltonian is computationally difficult. I once had a candidate (a nice cubic Cayley graph on "only" 3600 vertices), but Bill Cook quickly found a Hamiltonian cycle, dashing my hopes (it was kind of him to look into this problem). Maybe someday, when TSP solvers are fast enough on much larger graphs, it will be time to return to this problem and thoroughly test it on large Cayley graphs. But say, despite long odds, I succeed and find a counterexample. Would a top journal publish such a paper?
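
To see why this is hard, here is a minimal backtracking sketch (my own toy code, nothing like what serious TSP solvers actually do). It tests the Petersen graph, the classic vertex-transitive example that has a Hamiltonian path but no Hamiltonian cycle; exhaustive search like this is already hopeless at a few dozen vertices, let alone 3600.

```python
def ham_extend(adj, n, path, visited, cycle):
    """Extend `path` to a Hamiltonian path (or cycle) by backtracking."""
    if len(path) == n:
        return (not cycle) or (path[0] in adj[path[-1]])
    for w in adj[path[-1]]:
        if w not in visited:
            visited.add(w)
            path.append(w)
            if ham_extend(adj, n, path, visited, cycle):
                return True
            path.pop()
            visited.remove(w)
    return False

def has_ham_path(adj):
    n = len(adj)
    return any(ham_extend(adj, n, [s], {s}, False) for s in adj)

def has_ham_cycle(adj):
    n = len(adj)
    return ham_extend(adj, n, [0], {0}, True)  # for cycles, fixing the start vertex suffices

# Petersen graph: outer 5-cycle, spokes, inner pentagram
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0),
         (0, 5), (1, 6), (2, 7), (3, 8), (4, 9),
         (5, 7), (7, 9), (9, 6), (6, 8), (8, 5)]
adj = {v: set() for v in range(10)}
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

print(has_ham_path(adj), has_ham_cycle(adj))  # → True False
```

The cycle search certifies non-Hamiltonicity only by exhausting the entire search tree, which is exactly the part that blows up exponentially on larger Cayley graphs.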

Editor’s dilemma

There are three real criteria for a journal evaluating the solution of an open problem:

  1. Is this an old, famous, or well-studied problem?
  2. Are the tools interesting or innovative enough to be helpful in future studies?
  3. Are the implications of the solution to other problems important enough?

Now let's run a hypothetical experiment. Say a paper is submitted to a top math journal which solves a famous open problem in Combinatorics. Further, say somebody has already proved it is equivalent to a major problem in TCS. This checks criteria 1 and 3. Until not long ago it would have been rejected regardless, so let's assume this is happening relatively recently.

Now imagine two parallel worlds: in the first, the conjecture is proved on 2 pages using beautiful but elementary linear algebra; in the second, the conjecture is disproved in a 2-page summary of a detailed computational search. So in neither world do we have much to satisfy criterion 2. Now, a quiz: in which world will the paper be published?

You may have recognized the first world as the story of Hao Huang's elegant proof of the induced subgraphs of hypercubes conjecture, which implies the sensitivity conjecture. The Annals published it, I am happy to report, in a welcome break with the past. But unless we are talking about some 200-year-old famous conjecture, I can't imagine the Annals accepting a short computational paper in the second world. Indeed, it took a bit of a scandal to accept even the 400-year-old Kepler's conjecture, which was proved in a remarkable computational work.
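
As an aside, the linear algebra in the first world is so elementary that its crux fits in a few lines of code. Below is my sketch (assuming I am recalling Huang's recursive construction correctly) of the signed adjacency matrix of the n-cube; its key property, A_n^2 = n·I, forces the eigenvalues ±√n that drive the whole 2-page proof.

```python
def signed_hypercube(n):
    """Huang-style signed adjacency matrix of the n-cube, built recursively:
    A_1 = [[0,1],[1,0]],  A_n = [[A_{n-1}, I], [I, -A_{n-1}]]."""
    A = [[0, 1], [1, 0]]
    for _ in range(n - 1):
        k = len(A)
        I = [[int(i == j) for j in range(k)] for i in range(k)]
        A = [A[i] + I[i] for i in range(k)] + \
            [I[i] + [-x for x in A[i]] for i in range(k)]
    return A

def mat_mult(A, B):
    """Multiply two square matrices given as lists of lists."""
    k = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(k)]
            for i in range(k)]

# Verify the key identity A_n^2 = n * I for small n
for n in range(1, 6):
    A = signed_hypercube(n)
    k = len(A)
    assert mat_mult(A, A) == [[n * int(i == j) for j in range(k)]
                              for i in range(k)]
print("A_n^2 = n*I verified for n = 1..5")
```

The identity follows by induction on the block structure, and it is exactly the sort of small-case check a referee (or a curious reader) can run in seconds.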

Now think about this. Is any of that fair? Shouldn’t we do better as a community on this issue?

What do other people do?

Over the years I have asked a number of people about the uncertainty created by conjectures and what they do about it. The answers surprised me. Here I am paraphrasing them:

Some were dumbfounded: "What do you mean this conjecture could be false? It has to be true, otherwise nothing I am doing makes much sense."

Others were simplistic: “It’s an important conjecture. Famous people said it’s true. It’s my job to prove it.”

Third were defensive: “Do you really think this conjecture could be wrong? Why don’t you try to disprove it then? We’ll see who is right.”

Fourth were biblical: “I tend to work 6 days a week towards the proof and one day towards the disproof.”

Fifth were practical: “I work on the proof until I hit a wall. I use the idea of this obstacle to try constructing potential counterexamples. When I find an approach to discard such counterexamples, I try to generalize the approach to continue working on the proof. Continue until either side wins.”

If the last two seem sensible to you, that's because they are. However, I bet the fourth group is just grandstanding — no way they actually do that. The fifth approach sounds great when it is possible, but that's exceedingly rare, in my opinion. We live in a technical age when proving new results often requires a great deal of effort and technology. You likely have the tools and intuition to work in only one direction. Why would you want to waste time working in the other?

What should you do?

First, remember to make conjectures. Every time you write a paper, tell a story of what you proved. Then tell a story of what you wanted to prove but couldn't. State it in the form of a conjecture. Don't be afraid of being wrong, or of being right but giving away your ideas. That's a downside, sure. But the upside is that your conjecture might prove very useful to others, especially young researchers. It might advance the area, or help you find a collaborator to resolve it.

Second, learn to check your conjectures computationally in many small cases. It’s important to give supporting evidence so that others take your conjectures seriously.
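
As a toy illustration of this advice (my own snippet, using Goldbach's Conjecture from earlier in the post): even a few lines of code provide the kind of small-case evidence that makes a stated conjecture credible.

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: is_prime table for 0..n."""
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for q in range(p * p, n + 1, p):
                is_prime[q] = False
    return is_prime

def goldbach_holds_up_to(limit):
    """Check that every even number 4..limit is a sum of two primes."""
    is_prime = primes_up_to(limit)
    for even in range(4, limit + 1, 2):
        if not any(is_prime[p] and is_prime[even - p]
                   for p in range(2, even // 2 + 1)):
            return False  # would be a counterexample (don't hold your breath)
    return True

print(goldbach_holds_up_to(10000))  # → True
```

Of course, this proves nothing; but shipping such a check along with a new conjecture is cheap, and it is exactly what makes others take the conjecture seriously.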

Third, learn to do experiments and explore the area computationally. That's how you make new conjectures.

Fourth, understand yourself: your skills, your tools, your abilities, like problem solving, absorbing information from the literature, or making bridges to other fields. Faced with a conjecture, use this knowledge to judge whether, at least in principle, you might be able to prove or disprove it.

Fifth, actively look for collaborators. Those who have skills, tools, or abilities you are missing. More importantly, they might have a different POV on the validity of the conjecture and how one might want to attack it. Argue with them and learn from them.

Sixth, be brave and optimistic! Whether you decide to prove, disprove a conjecture, or simply state a new conjecture, go for it! Ignore the judgements by the likes of Sarnak and Zeilberger. Trust me — they don’t really mean it.

The guest publishing scam

October 26, 2020 1 comment

For years, I have been a staunch opponent of the "special issues" which proliferate in many good journals. As an editor, when asked by the publisher if we should have some particular guest issue, I would always say no, only to be outvoted or overruled by the Editor-in-Chief. While I always believed there was some kind of scam going on, I never really thought about it. In fact, it's right on the surface for everyone to see…

What is so special about special issues?

Well, let me explain how this works. Imagine you organized an annual conference and you feel it was a success. Or you organized a birthday/memorial conference in honor of a senior colleague in the area and want to do more. You submit a proposal to a journal: please, please, can we become “guest editors” and publish a “special issue” of the journal? Look, our conference had so many terrific people, and the person we are honoring is such a great mathematician, so famous and so kind to everyone, how can you say no?

And the editors/publishers do say yes. Not always. Sometimes. If one journal refuses, the request is made to another journal. Eventually, as with paper submissions, some journal says "sure". The new guest editors quickly ask all or some conference speakers to submit papers. Some or many do. Most of these papers get accepted. Not a rule, just a social contract. As in, "how dare you reject this little paper by a favorite student of the honoree?"

The journal publishes them with an introductory article by the guest editors lauding the conference. A biographical article with reminiscences is also included, with multiple pictures from earlier conferences or from the family archive, always showing the light side of the great person. The paper version of the journal is then sent to all the authors, or is presented with pomp to the honoree at some retirement party, as a math version of a gold watch. None of them will ever open the volume. These issues will be recycled at best, as everyone will continue to use the online versions.

Sounds like a harmless effort, doesn't it? Nobody is acting dishonorably, mathematicians get to publish more papers, journals get submissions they wouldn't have otherwise, and the conference or the person gets honored. So, win-win-win, right? Well, hear me out.

Why do the journal editors do it?

We leave the publishers for last. For a journal editor-in-chief this is straightforward. If they work for leading for-profit publishers, they get paid. For a good reason, in fact — it's hard work. Now, say some friends ask to do part of your job for free, the proposal looks good, and the list of potential authors is perhaps pretty reasonable. You get to help yourself, your friends, and the area you favor, without anyone ever holding you responsible for the outcome. Low-level corruption issues aside, who wouldn't take this deal?

Why do the guest editors do it?

Well, this is the easiest question. Some want to promote the area, some to honor the honoree, some just want to pad their CVs. It’s all good as far as I am concerned. They are not the problem.

Why do the authors do it?

Well, for multiple reasons. Here are some possible scenarios based on my observations. Some are honorable, some are dishonorable, and some in between.

Some authors care deeply for the subject or the honoree. They send their best work to the invited issue. This is their way to give back. Most likely they could've published that paper in a much better journal. Nobody will ever appreciate their "sacrifice", but they often don't care; it makes them feel better, and they have a really good excuse anyway. From the journal's POV these are the best papers. Grade A.

Other authors think of these special issues completely differently and tailor their paper to the issue. For example, they write personal memoir-style reminiscences, as in "ideas from my conversations with X", or "the influence of X on my work". Other times they write nice surveys, as in "how X's work changed field ABC", or "recent progress on X's conjectures". The former are usually low on math content but mildly entertaining, even if not always appropriate for a traditional math journal (but why be constrained by old conventions?). The latter can be quite useful, in the way surveys often are in demand, even if the timing for these particular surveys can be a little forced. Also, both are quite appropriate for these specific issues. Anyway, Grade B.

Some authors are famous, write many papers a year, and have published in all the good and even not-so-good journals multiple times already, so they don't care where they submit next. Somebody asks them to honor somebody or something, they want to be nice, and they send their next paper whether it's good or bad, or even remotely related to the subject. And why not? Their name on the paper is what matters anyway, right? Or at least that's what they think. Grade C.

Some authors have problematic papers which they desperately want to publish. Look, doing research, writing papers and publishing is hard, I get it. Sometimes you aim to prove something big and almost nothing comes out, but you still want to report on your findings, just as a tribute to the time you spent on the problem. Or a paper was rejected from a couple of journals and you are close to typing up a stronger result, so you want to find a home for the paper asap, before it becomes unpublishable at your own hand! Or you haven't published for years, you're worried your department may refuse you a promotion, so you want to publish anything, anywhere, just to get a new line on your CV. So given a chance you submit, with the understanding that whatever you submit will likely get published. The temptation is just too strong to look away. I don't approve, if you can't tell… Grade D/F.

Why do the publishers do it?

That's where the scam is. Let me give you a short version before you quit reading, and expound on it below. Roughly: publishers' contracts with libraries require them to deliver a certain number of pages each year. But editorial boards are somewhat unruly, unpredictable and partly dysfunctional, like many math departments, I suppose. Sometimes they over-accept papers, creating large backlogs and lowering standards. Other times, they are on a quest to raise standards and start rejecting a lot of submissions. The journals are skittish about increasing, and especially about decreasing, the page numbers, which would lead to a loss of income, creating a desperate need for more pages, any pages they can publish and mail to the libraries. This vacuum is then happily filled with all those special issues.

What made me so upset that I decided to blog on this?

Look, there is always a last drop. In this case it was a reference to my paper, and not of a good kind. At some point Google Scholar informed me about a paper with a curious title citing a rather technical paper of mine. So I clicked. Here is the citation, in its full glory:

“Therefore, people need to think about the principles and methods of knowledge storage, management and application from a new perspective, and transform human knowledge into a form that can be understood and applied by machines at a higher level—the knowledge map, which is realized on the basis of information interconnection to change knowledge interconnection possible [27].”  

Visualization Analysis of Knowledge Network Research Based on Mapping Knowledge, by Hong Liu, Ying Jiang, Hua Fan, Xin Wang & Kang Zhao, Journal of Signal Processing Systems (2020)

And here is [27]: Pak, I., & Panova, G. (2017). On the complexity of computing Kronecker coefficients, Computational Complexity, 26, 1–36.

Now, I reread the above quote three times and understood nothing. Naturally, I know my paper [27] rather well. It is a technical result on the computational complexity of computing certain numbers which naturally arise in Algebraic Combinatorics, and our approach uses symmetric functions, Young tableaux combinatorics and Barvinok's algorithm. We definitely say nothing about "knowledge storage" or "interconnection" or "management" of any of that.

Confused, I let it go, but an unrelated Google search brought up the paper again. So I reread the quote three more times. Finally convinced this is pure nonsense, I googled the journal to see if it’s one of the numerous spam journals I hear about.

Turns out, the Journal of Signal Processing Systems (JSPS) is a serious journal in the area, with an impact factor around 1 and an H-index of 49. For comparison, the European Journal of Combinatorics has an impact factor around 0.9 and an H-index of 45.

Now, JSPS has three main editors: Sun-Yuan Kung from Princeton, Shuvra S. Bhattacharyya from the University of Maryland, College Park, and Jarmo Takala from Tampere University in Finland. All reputable people. For example, Kung has over 30K citations on Google Scholar, while Takala has over 400 published papers.

So, in my usual shy and unassuming way, I wrote them a short email on Sep 25, 2020, inquiring about the fateful citation:

Dear Editors,
I want to bring to your attention the following article recently published in the Journal of Signal Processing Systems.  I personally have neither knowledge nor expertise in your field, so I can’t tell you whether this is indeed a spam article.  However, I can tell when I see a bogus citation to my own work, which is used to justify some empty verbosity.  Please do keep me posted as to what actions you intend to take on the matter (if any). 
Best,  —  Igor Pak

Here is the reply that I got:

Dear Prof. Pak,
thank you for providing feedback about the citation in this article. The article is published in a special issue, where the papers have been selected by guest editors. We will have a discussion with the guest editors on this matter. Sincerely,
Jarmo Takala
Co-Editor-inChief J. Signal Processing Systems

Now you see what I mean? It's been over a month since my email. The paper is still there. Clearly going nowhere. The editors basically take no responsibility, as they did not oversee the guest issue. They have every incentive to blame someone else and drop the discussion, because this whole thing can only lead to embarrassment and a bad reputation. This trick is called "blame shifting".

Meanwhile, the guest editors have no incentive to actually do anything because they are not affiliated with the journal. In fact, you can't even tell from the Editors' email or from the paper who they are. So I still don't know who they are and have no way to reach out to them. The three Editors above never replied to my later email, so I guess we are stuck. All right then, maybe time will tell…

Explaining the trick in basic terms

I am not sure what the business term for this type of predatory behavior is, but let me give you some familiar examples so you get the idea.

(1) Say you are a large, very old liberal arts university located in Cambridge, MA. Yes, like Harvard. Almost exactly like Harvard. You have a fancy, very expensive college with a very low admission rate of less than 1 in 20. But you know you are a good brand, and every time you turn some rich kid away, your treasurer's heart is bleeding. So how do you make more money off the brand?

Well, you start an Extension School, which even gives Bachelor's and Master's degrees. And it's a moneymaker! It brings in over $500 million each year, about the same as the undergraduate and graduate tuitions combined! But wait, careful! You do give them "Harvard degrees", just not "Harvard College degrees". And, naturally, you would never include the Extension School students in the "average SAT score" or "income one year after graduation" stats you report to US News, because it's not Harvard College, don't you understand?

Occasionally this leads to confusion and even minor scandals, but who cares, right? We are talking a lot of money! A lot of people get after-hours adjunct jobs, rooms have a higher occupancy rate, helping to recoup building repairs (well, pre-pandemic), and a lot of people get educated and feel good about getting an education at Harvard. Win-win-win…

But you see where I am going: the same brand is split in two under one roof, selling two different, highly unequal, almost unrelated products, all for the benefit of a very rich private corporation.

(2) Now, here is a sweet completely made up example. You are a large corporation selling luxury dark chocolate candies made of very expensive cocoa beans. A new CEO comes up with a request. Cut candy weight to save on the beans without lowering candy box prices, and make it a PR campaign so that everyone feels great and rushes to buy these. You say impossible? Not at all!

Here is what you do. Say your luxury box of dark chocolate candies weighs 200 grams, so each of the 10 candies is 20 grams. You make each candy a little smaller, so the total weight is now 175 grams; the 2.5-gram difference per candy is barely noticeable. You make the candy box bigger and add two rather large 25-gram candies made of cheap white chocolate, in a visually different wrap. You sell them in one box. The new weight is 225 grams, i.e. larger than before. You advertise "now with two bonus candies at the same price!", and customers feel happy to get some "free stuff". In the end, they might not like the cheap candies, but who cares, they still get the same old 10 expensive candies, right?

Again, you see where I am going. They created an artificial confusion by selling a superior and an inferior product in the same box without an honest breakdown, so the customers are completely duped.

Back to publishers

They are playing just as unfairly as in the second example above. The librarians can't tell the difference in quality of the "special issues"; they only negotiate on the number of pages. The journal's reputation doesn't suffer from them: indeed, it is understood that they are, not always but often enough, of lower quality, but you can't really submit there unless you are in the loop. I don't know how the impact factor and H-index are calculated, but I bet the publishers work with Web of Science to exclude these special issues and report only the usual issues, akin to the Harvard example. Or not. Nobody cares about these indices anymore, right?

Some examples

Let me just show how chaotic the publishing of special issues is. Take Discrete Mathematics, an Elsevier journal where I was an editor for 8 years (and whose Wikipedia page I made myself). Here is a page with its Special Issues. There is no order to any of these conferences: there is the 8th French Combinatorial Conference, the Seventh Czech-Slovak International Symposium, the 23rd British Combinatorics Conference, huh? What happened to the previous 7, 6 and 22 proceedings, respectively? You notice a lot of special issues from before the journal was overhauled and very few in recent years. Clearly the journal is on the right track. Good for them!

Here are three special issues in JCTA, and here are two in JCTB (both Elsevier). Why these? Are the editors sure these have the same quality as the rest of these top-rated journals? Well, hopefully no longer top-rated in the case of JCTA. The Annals of Combinatorics (Springer) has literally a "Ten Years of BAD Math" special issue (yes, I know what BAD Math means, but the name is awful even if the papers are great). The European Journal of Combinatorics (Elsevier again) usually publishes 1-2 special issues per year. Why?? Not enough submissions? Same for Advances in Applied Mathematics (also Elsevier), although very few special issues in recent years (good!). I think one of my papers (of Grade B) is in one of the older special issues. Oops!

Now compare these with the Electronic Journal of Combinatorics, which stopped publishing special issues back in 2012. This journal is free online and has no page limitation, so it cares more about its reputation than about filling pages. Or take the extreme case of the Annals of Mathematics, which would laugh at the idea of a "special issue". Now you get it!

What gives?

It's simple, really. STOP publishing special issues! If you are an Editor-in-Chief, just refuse! Who really knows what kind of scam the guest editors or the publishers are running? But you know your journal, all papers go through you, and you are responsible for all accepted papers. Really, the journal editors are the only ones responsible for the journal's reputation and for the peer review!

Expensive for-profit publishers enjoying the side special-issue scam — I’ve been looking forward to your demise for a long while. More recently I have felt optimistic, since a lot of papers are now freely accessible. Now that we are all cut off from the libraries during the pandemic — can we all agree that these publishers bring virtually no added value??

If you are a potential guest editor who really wants to organize a special issue based on your conference, or to honor somebody, ask publishers to make a special book deal. They might. They do it all the time, even though this is a somewhat less lucrative business than journal publishing. Individual mathematicians don’t buy these volumes, but the libraries do. And they should.

If you are a potential contributor to a special issue — do what is listed above in Grade B (write a special-topic survey or personal reminiscences), which will be published as a book chapter. No seriously peer-reviewed research, though. That goes to journals.

And if you are one of those scam journal publishers who keep emailing me every week to become a special issue editor because you are so enthralled with my latest arXiv preprint — you go die in a ditch!

Final Disclaimer: None of these bad opinions are about any particular journal or special issue. There are numerous good papers published in special issues, and these issues are often dedicated to just wonderful mathematicians. I myself admit to publishing papers in several such special issues. Here I am making a general point which is hopefully clear.

The status quo of math publishing

March 18, 2019 2 comments

We all like the status quo.  It’s one of my favorite statuses…  The status quo is usually excellent or at least good enough.  It’s just so tempting to do nothing at all that we tend to just keep it.  For years and years which turn into decades.  Until finally the time has come to debate it…

Some say the status quo in math publishing is unsustainable.  That the publishers are much too greedy, that we do all the work and pay twice, that we should boycott the most outrageous of these publishers, that the recent decisions by the University of California, German, Hungarian, Norwegian and Swedish library systems are a watershed moment calling for action, etc.  My own institution (UCLA) is actually the leader in the movement.  While I totally agree with the sentiment, I mostly disagree with the boycott(s) as currently practiced and with other proposed measures.  They come from a position of weakness and require major changes to the status quo.

Having been thinking about all this for a while, I am now very optimistic.  In fact, there is a way we can use our natural position of strength to achieve all the goals we want while keeping the status quo.  It may seem hard to believe, but with a few simple measures we can get there in a span of a few years.  This post is a long explanation of how and why we should do this.

What IS the current status quo?

In mathematics, it’s pretty simple.  We, the mathematicians, do most of the work:  produce a decent looking .pdf file, perform a peer review on a largely volunteer basis (some editors do get paid occasionally), disseminate the results as best as we can, and lobby our libraries to buy the journal subscriptions.  The journals collect the copyright forms, make minor edits to the paper to conform to their favorite style, print papers on paper, mail them to the libraries, post the .pdf files on the internet accessible via library website, and charge libraries outrageous fees for these services.  They also have armies of managers, lawyers, shareholders, etc. to protect the status quo.

Is it all good or bad?  It’s mostly good, really.  We want all these basic services; we just disagree on the price.  There is an old Russian Jewish proverb: if a problem can be solved with money, it’s not a real problem but a business expense (here is a modern version).  So we should deal with predatory pricing as a business issue and not get emotional by boycotting selective journals or publishers.  We can argue for price decreases completely rationally, by showing that their product lost 90%, but not all, of its value, and that it’s in our common interest to devalue it, but not kill it.

Why keep the status quo?

This is easy.  We as a community tend to like our journals more than we hate them.  They compete for our papers.  We compete with each other to get published in the best places.  This means we as a community know which journals are good, better or best in every area, or in the whole field of mathematics.  It also means that each journal has composed the best editorial board it could.  It would be a waste to let these naturally formed structures go.

Now, in the past I strongly criticized top journals, the whole publishing industry, made fun of it, and more recently presented an ethical code of conduct for all journals.   Yet it’s clear that the cost of complete destruction of existing journal nomenclature is too high to pay and thus unlikely to happen.

Why is changing the status quo impractical?

Consider the alternatives.  Yes, editorial board resignations do happen, most recently at the Journal of Algebraic Combinatorics (JACO), whose board resigned en masse to form a journal named Algebraic Combinatorics (ALCO).  But despite laudations, the original journal exists and is doing fine, or at least OK.  To my dismay and mild disbelief, the new Editorial Board of JACO has some well-known and widely respected people.  Arguably, this is not the outcome the resigners aimed for (for the record, I published twice in JACO and recently had a paper accepted by ALCO).

Now, at first, starting new journals may seem like a great idea.  Unfortunately, by the conservative nature of academia, they always struggle to get off the ground.  Some survive, such as EJC and EJP, which have been pioneers in the area, but others are not doing so well.  The fine print is also an issue — the much-hyped Pi and Sigma charge $1000 per article for “processing”, whatever that entails.   Terry Tao wrote that these journals suggest “alternatives to the status quo”.  Maybe.  But how exactly is that an improvement?  (Again, for the record, I published in both EJC and EJP, and recently in Sigma.  No, I didn’t pay, but let me stay on point here — that story can wait for another time.)

Other alternatives are even less effective.  Boycotting selective publishers gives free rein to the others to charge a lot, at a time when we need systemic change.  I believe it gives all but the worst publishers the cover they need to survive, while the worst already have enough power to survive and remain in the lead.  There is a long argument here I am trying to avoid.  Having had it with Mark Wilson, I know it would overwhelm this post.  Let me not rebut it thoroughly point by point, but present my own vision.

What can we do?

Boycott them all!  I mean all non-free journals, at all times, at all costs.  By that I don’t mean everyone should avoid submitting, refereeing, or being on editorial boards.  Not at all, rather the opposite.  Please do NOT boycott anyone specifically; proceed with your work, keep the status quo.

What I mean is this.  Boycott all non-free journals as a consumer!  Do NOT download papers from journal websites.  I will give detailed suggestions below, after I explain my rationale.  In short, every time you download a paper from the journal website, it gives publishers leverage to claim they are indispensable, and gives libraries the fear of a faculty revolt if they unsubscribe.  They (both the publishers and the libraries) have no idea how little we need the paid journal websites.

Detailed advice on how to boycott all math journal publishers

Follow these simple rules.  On your side as an author, make every(!) paper you ever wrote freely accessible.  Not just the latest – all of them!  Put them on the arXiv, viXra, your own website, or anywhere you like, as long as the search engines can find them.  If you don’t know how, ask for help.  If you can read this WP blog post, you can also post your papers on some WP site.  If you are afraid of the copyright, snap out of it!  I do this routinely, of course.  Many greats have also done this for all their papers, e.g. Noga Alon and Richard Stanley.  Famously, all papers by Paul Erdős are online.  So my message for all of you reading this: if you don’t have all your papers free online, go ahead, just post them all!  Yes, that means right now!  Stop reading and come back when you are done.

Now, for reading papers the rules are more complicated.   Every time you need to download an article, don’t go to MathSciNet.  Instead, google it first.  Google Scholar usually gives you multiple options for the download location.  Choose the one on the arXiv or the author’s website.  Done.

If you fail, but feel the paper could be available from some nefarious copyright violating websites, consider using Yandex, DuckDuckGo, or other search engines which are less concerned about the copyright.

Now, suppose the only location is the journal website.  Often this happens when the paper is old or old-ish, i.e. outside the 4-year sliding window for Elsevier.  As far as I am concerned, this part of the publisher’s archive is “free”, since anyone in the world can download it without charge.  Make sure you download the paper without informing your campus library.  This is easy off campus — use any browser without remote access (VPN).  On campus, use a browser masking your IP address, e.g. Opera.

Now, suppose nothing works.  Say, the paper is recent but inaccessible for free.  Then email the authors and request a copy of the paper.  Shame them into putting the paper online while you are at it.   Forward them this blog post, perhaps.

Suppose now the paper is inaccessible for free, but the authors are non-responsive and unlikely to ever make the paper available.  Well, ok — download it from the journal website then via your library.  But then be a mensch.  Post the paper online.  Yes, in violation of copyright.  Yes, other people already do it.  Yes, everyone is downloading them and would be grateful.  No, they won’t fight us all.

Finally, suppose you create a course website.  Make sure all, or at least most, of your links are to free versions of the articles.  Download them all and repost them on your course website so the students can bypass the library redirect.  Every bit helps.

Why would this work?  I.  Shaming is powerful.

Well, in mathematics shaming is widespread and actually works, except in some extreme cases.  It’s routine, in fact, to shame authors for not filling gaps in their proofs, for not acknowledging priority, or for not retracting incorrect papers (when the authors refuse to do it, the journals can also be shamed).  Sometimes the shaming doesn’t work.  Here is my own example of a shaming fail (rather extreme, unfortunately), turned shaming success on the pages of this blog.

More broadly, public shaming is one of the key instruments in the 21st century.  Mathbabe (who is writing a book about shaming) notably shamed Mochizuki for not traveling around to defend his papers.   Harron famously shamed white cis men for working in academia.  Again, maybe not in all cases, but in general public shaming works rather well, and there is a lot of shaming happening everywhere.  

So think about it — what if we can shame every working mathematician into posting all their papers online?  We can then convince libraries that we don’t need to renew all our math journal subscriptions since we can function perfectly well without them.  Now, we would still want the journal to function, but are prepared to spend maybe 10-15% of the prices that Springer and Elsevier currently charge.  Just don’t renew the contract otherwise.  Use the savings to hire more postdocs, new faculty, give students more scholarships to travel to conferences, make new Summer research opportunities, etc.

Why would this work?  II.  Personal perspective.

About a year ago I bought a new laptop and decided to follow some of the rules above as an experiment.  The results were surprisingly good.  I had to download some old non-free papers from publisher sites maybe 4-5 times a month.  I went to the library about once every couple of months.  For new papers, I emailed the authors a total of about once every three months, getting the paper every time.  I feel I could have emailed more often, asking for old papers as well.

Only occasionally (maybe once a month) did I have to resort to overseas paper repositories, all out of laziness — it’s faster than walking to the library.  In summary — it’s already easy to be a research mathematician without paying for journals.  In the future, it will get even easier.

Why would this work?  III.  Librarian perspective.

Imagine you are a head librarian responsible for journal contracts and purchasing.   You have access to the download data, and you realize that many math journals continue to be useful and even popular.  The publishers bring you similar, or possibly more inflated, data showing their products in the best light.  Right now you have no evidence that the journals are largely useless, and you are worried about the backlash that would follow if you accidentally cut popular journals.  So you renew just about everything your library has always subscribed to, and skip subscribing to new journals unless you get special requests from the faculty, which you should honor.

Now imagine that in 2-3 years your data suggests rapidly decreasing popularity of the journals.  You make a projection that the downloads will decrease by a factor of 10 within a few more years.  That frees you from worrying about cancelling subscriptions and gives you strong leverage in negotiations.  Ironically, that also helps you keep the status quo — the publishers slash their prices, but you can keep most of the subscriptions.

Why would this work?  IV.  Historical perspective.

History is full of hard-fought battles which were made obsolete by cultural and technological changes.  The examples include the “war of the currents“, the “war” of three competing NYC subway systems, the same with multiple US railroads, the “long-distance price war“, the “browser war” and the “search engine war“.  They were all very different and were resolved in many different ways, but they have two things in common — they were ruthless at the time, and nobody cares anymore.  Even the airlines keep slashing prices, making services indistinguishably awful to the point of becoming near-utilities like electric and gas companies.

The same will happen to the journal publishing empires.  In fact, the necessary technology has been available for a while — it’s the culture that needs to change.  Eventually all existing print journals will become glorified versions of arXiv overlay publications with substantially scaled-down staff and technical production.  Not by choice, of course — there is just no money in it.  Just like airline travel — service will get worse, but much cheaper.

The publishers will continue to send print copies of journals to a few dozen libraries worldwide, which will immediately put them into off-campus, underground, bunker-like storage as an anti-apocalyptic measure, and because reader demand will be close to nonexistent.  The publishers will remain profitable by cutting costs everywhere, since apparently this is all we really care about.

The publishers already know that they are doomed, they just want to prolong the agony and extract as much rent as they can before turning into public utilities.  This is why the Elsevier refuses to budge with the UC and other systems.  They realize that publicly slashing prices for one customer today will lead to an avalanche of similar demands tomorrow, so they would rather forgo a few customers than start a revolution which would decimate their journal value in 5 years (duration of the Elsevier contract).

None of this is new, of course.  Odlyzko described it all back in 1997, in a remarkably prescient yet depressing article.  Unfortunately, we have been moving in the wrong direction.  Gowers is right that publishers cannot be shamed, but his efforts to shame people into boycotting Elsevier may be misplaced, as it continues going strong.  The shaming did lead to the continuing conversation, and to the above-mentioned four-year sliding window which is the key to my proposal.

What’s happening now?  Why is Elsevier not budging?

As everyone who ever asked for a discount knows, you should do this privately, not publicly.  Very quietly slashing the prices by a factor of 2, then trying to play the same trick again in 5 years would have been smarter and satisfied everybody.  To further help Elsevier hide the losses from shareholders and general public, the library could have used some bureaucratic gimmicks like paying the same for many journals but getting new books for free or something like that.  This would further confuse everybody except professional negotiators on behalf of other library systems, thus still helping to push the prices down.

But the UC system wanted to lead a revolution with their public demands, so here we are, breaking the status quo for no real reason.  There are no winners here.  Even my aunt Bella from Odessa, who used to take me regularly to Privoz Market to watch her bargain, could have told you that’s exactly what was going to happen…

Again, the result is bad for everybody — Elsevier would have been happier to get some money — less than the usual amount, but better than nothing, given the trivial marginal costs.  At the same time, we at UCLA still need occasional journal access during this difficult transition period.

AMS, please step up!

There is one more bad actor in the whole publishing drama whose role needs to change.  I am speaking about the AMS, which is essentially a giant publishing house with an army of volunteers and a side business of organizing professional meetings.  Let’s look at the numbers in the 2016 annual report (for some reason the last one available).  On p. 12 we read: of the $31.8 mil operating revenue, dues make up about 8%, meetings 4%, while publishing makes up a whopping 68%.  No wonder the AMS is not pushing for changes in the current journal pay structure — they are conflicted to the point of being complicit in preserving existing prices.

But let’s dig a little deeper.  On p. 16 we see that the journals are fantastically profitable!  They bring in $5.2 mil with $1.5 mil in operating expenses — a 247% profit margin relative to expenses.  With margins like that, who wants to rock the boat?  Compare this with the next item — books.  The AMS made $4.1 mil while spending $3.6 mil.  That’s a healthy 14% profit margin.  Nice, but nothing to write home about.  By its nature, the book market is highly competitive, as libraries and individuals have the option to buy or not on a per-title basis.  Thus, the competition.
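For the curious, here is a quick sanity check on where these percentages come from. A minimal sketch, assuming the margins above are computed as profit relative to operating expenses (not the textbook definition, which divides by revenue); the function name is my own:

```python
# Back-of-the-envelope check of the AMS 2016 figures quoted above.
# Assumption: "margin" here means profit as a percentage of operating
# expenses, which is what reproduces both numbers in the text.
def markup(revenue_mil: float, expenses_mil: float) -> float:
    """Profit expressed as a percentage of operating expenses."""
    return 100 * (revenue_mil - expenses_mil) / expenses_mil

journals = markup(5.2, 1.5)  # journals: $5.2 mil revenue, $1.5 mil expenses
books = markup(4.1, 3.6)     # books: $4.1 mil revenue, $3.6 mil expenses
print(f"journals: {journals:.0f}%, books: {books:.0f}%")
# journals: 247%, books: 14%
```

So both quoted figures are consistent, as long as one measures profit against costs rather than against revenue.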

If you think the AMS prices are lower than those of other publishers, that’s probably right.  This very dated page by Kirby is helpful.  For example, in 1996 PTRF (Springer) charged $2100, the Advances (Academic Press, now Elsevier) $1326, the Annals (Princeton Univ. Press) $200, while JAMS charged only $174.  Still…

What should be done?  Ideally, the AMS should sell its journal business to some university press and invest the sale profits long-term.  That would free it to pursue the widely popular efforts towards free publishing.  In reality that’s unlikely to happen, so perhaps there should be some sort of “Chinese wall” separating journal publishing from the AMS political activities.  This “wall” might already exist, I wouldn’t know.  I am open to suggestions.  Either way, I think the AMS members should brace themselves for a future where the AMS has a little less money.  But since MathSciNet alone brings in 1/3 of the revenue, and other successful products like MathJobs are also money makers, I think the AMS will be fine.

I do have one pet peeve.  MathSciNet, which is otherwise a good product, should have a “web search” button next to the “article” button.  The latter automatically takes you to the journal website, while the former would search for the article on Google Scholar (or Microsoft Academic, I suppose — let people choose a default).  This would help people circumvent the publishers by cutting down on clicks.

What gives?

I have always been a non-believer in boycotts of specific publishers, and I feel history has proved me more right than wrong.  People tend to avoid boycotts when they carry significant cost, and without overwhelming participation boycotts simply don’t work.  Asking people not to submit to, or referee for, the leading journals in their fields is like asking them to voluntarily pay higher taxes.  Some do this, of course, but most don’t, even those who generally agree that higher taxes are good public policy.

In fact, I always thought we need some kind of one-line bill in the US Congress requiring all research produced at publicly funded universities to be available for free online.  In my conspiratorial imagination, the AMS, being a large publisher, refused to bring this up in its lobbying efforts, thus nothing ever happened.  While I still think this bill is a good idea, I no longer think it’s a necessary step.

Now I am finally optimistic that the boycott I am proposing is going to succeed.  The (nearly) free publishing is coming!  Please spread the word, everybody!

UPDATE (March 19, 2019):  Mark Wilson has a blog post commenting and clarifying ALCO vs. JACO situation.

What we’ve got here is failure to communicate

September 14, 2018 21 comments

Here is a lengthy and somewhat detached follow-up discussion of the very unfortunate Hill affair, which has been much commented on by Tim Gowers, Terry Tao and many others (see e.g. links and comments on their blog posts).  While many seem to be universally distraught by the story and there are some clear disagreements on what happened, there are even deeper disagreements on what should have happened.  The latter question is the subject of this blog post.

Note:  Below we discuss both the ethical and moral aspects of the issue.  Be patient and hold your disagreements until you finish reading — there is a lengthy disclaimer at the end.

Review process:

  1. When the paper is submitted there is a very important email acknowledging receipt of the submission.  Large publishers have systems that send such emails automatically.  Until this email is received, the paper is not considered submitted.  For example, it is not unethical for the author to get tired of waiting to hear from the journal and submit elsewhere instead.  If the journal later comes back and says “sorry for the wait, here are the reports”, the author should just inform the journal that the paper is under consideration elsewhere and should be considered withdrawn (this happens sometimes).
  2. Similarly, there is a very important email acknowledging acceptance of the submission.  Until this point the editors can ethically do as they please, even reject the paper despite multiple positive reports.  The morality of the latter is in the eye of the beholder (cf. here), but there are absolutely no ethical issues here unless the editor violated the rules set up by the journal.  In principle, editors can and do make decisions based on informal discussions with others; this is totally fine.
  3. If a journal withdraws acceptance after the formal acceptance email is sent, this is potentially a serious violation of ethical standards.  Major exception: this is not unethical if the journal follows certain procedural steps (see the section below).  This should not be done except in some extreme circumstances, such as a last-minute discovery of a counterexample to the main result which the author refuses to recognize, and thus refuses to voluntarily withdraw the paper.   It is not immoral since, until the actual publication, no actual harm is done to the author.
  4. The next key event is the publication of the article, whether online or in print, usually/often coupled with the transfer of copyright.  If the journal officially “withdraws acceptance” after the paper is published, without deleting the paper, this is not immoral, but its ethics depend on the procedural steps as in the previous item.
  5. If a journal deletes the paper after the publication, online or otherwise, this is a gross violation of both moral and ethical standards.  The journals which do that should be ostracized, regardless of their reasoning for this act.  Major exception: the journal has legal grounds, e.g. the author violated copyright laws by lifting from another published article, as in the Dănuț Marcu case (see below).

Withdrawal process:

  1.  As we mentioned earlier, the withdrawal of an accepted or published article should be extremely rare, occurring only in extreme circumstances, such as a major math error in a not-yet-published article, or a gross ethical violation by the author or by the handling editor of a published article.
  2. For a published article with a major math error or which was later discovered to be known, the journal should not withdraw the article but instead work with the author to publish an erratum or an acknowledgement of priority.  Here an erratum can be either fixing/modifying the results, or a complete withdrawal of the main claim.  An example of the latter is an erratum by Daniel Biss.  Note that the journal can in principle publish a note authored by someone else (e.g. this note by Mnёv in the case of Biss), but this should be treated as a separate article and not a substitute for an erratum by the author.  A good example of acknowledgement of priority is this one by Lagarias and Moews.
  3. To withdraw the disputed article the journal’s editorial board should either follow the procedure set up by the publisher or set up a procedure for an ad hoc committee which would look into the paper and the submission circumstances.  Again, if the paper is already published, only non-math issues such as ethical violations by the author, referee(s) and/or handling editor can be taken into consideration.
  4. Typically, a decision to form an ad hoc committee or to call for a full editorial vote should be made by the editor in chief, at the request of (usually at least two) members of the editorial board.  It is totally fine to have a vote by the whole editorial board, even immediately after the issue is raised, but the threshold for a successful withdrawal motion should be set by the publisher or agreed upon by the editorial board before the particular issue arises.  Otherwise, the decision needs to be made by consensus, with both the handling editor and the editor in chief abstaining from the committee discussion and the vote.
  5. Examples of the various ways journals act on withdrawing/retracting published papers can be found in the case of the notorious plagiarist Dănuț Marcu.  For example, Geometriae Dedicata decided not to remove Marcu’s paper but simply issued a statement, which I personally find insufficient, as it is not a retraction in any formal sense.  Alternatively, SUBBI‘s apology is very radical, yet the reasoning is completely unexplained.  Finally, Soifer’s statement on behalf of Geombinatorics is very thorough, well narrated and quite decisive, but suffers from authoritarian decision making.
  6. In summary, if the process is set up in advance and is carefully followed, the withdrawal/retraction of accepted or published papers can be both appropriate and even desirable.  But when the process is not followed, such action can be considered unethical and should be avoided whenever possible.

Author’s rights and obligations:

  1. The author can withdraw the paper at any moment until publication.  It is also the author’s right not to agree to any discussion or rejoinder.  The journal, of course, is under no obligation to ask the author’s permission to publish a refutation of the article.
  2. If the acceptance is issued, the author has every right not to go along with a proposed quiet withdrawal of the article.  In this case the author might want to consider complaining to the editor in chief or the publisher, making the case that the editors are acting inappropriately.
  3. Until acceptance is issued, the author should not publicly disclose the journal where the paper is submitted, since doing so constitutes a (very minor) moral violation.  Many would disagree on this point, so let me elaborate.  Informing the public of the journal submission tempts people who are in competition, or who have a negative opinion of the paper, to interfere with the peer review process.  While virtually all people will act honorably virtually all the time and not contact the journal, such temptation is undesirable and easily avoidable.
  4. As soon as the acceptance or publication is issued, the author should make this public immediately, by similar reasoning of avoiding temptation by third parties (of a different kind).

Third party outreach:

  1.  If the paper is accepted but not yet published, reaching out to the editor in chief by a third party requesting to publish a rebuttal of some kind is totally fine.  Asking to withdraw the paper for mathematical reasons is also fine, but should come with clear formal math reasoning, as in “Lemma 3 is false because…”  The editor then has a choice, but not an obligation, to trigger the withdrawal process.
  2. Asking to withdraw a not-yet-published paper without providing math reasoning, but saying something like “this author is a crank” or “publishing this paper will do bad things for your reputation”, is akin to bullying and thus a minor ethical violation.  The reason it’s minor is that it is the journal’s obligation to ignore such emails.  A journal acting on such an email, with its rumors or unverified facts, commits an ethical violation of its own.
  3. If a third party learns about a publicly available paper which may or may not be an accepted submission, and disagrees with it for math or other reasons, it is ethical to contact the author directly.  In fact, in the case of math issues this is highly desirable.
  4. If a third party learns about a paper submission to a journal without being contacted to review it, and the paper is not yet accepted, then contacting the journal is a strong ethical violation.  Typically, the journal where the paper is submitted is not known to the public, so the third party is acting on information it should not have.  Every such email can be considered an act of bullying, no matter the content.
  5. In the unlikely case that everything is as above but the journal’s name is publicly available, the third party can contact the journal.  Whether this is ethical or not depends on the wording of the email.  I can imagine some plausible circumstances when, e.g., the third party knows that the author is the Dănuț Marcu mentioned earlier.  In these rare cases the third party should make every effort to CC the email to everyone even remotely involved, such as all authors of the paper, the publisher, the editor in chief, and perhaps all members of the editorial board.  If the third party feels constrained by the necessity of this broad outreach, then the case is not egregious enough, and such an email is still bullying and thus unethical.
  6. Once the paper is published, anyone can contact the journal for any reason, since there is little the journal can do beyond what’s described above.  For example, on two different occasions I wrote to journals pointing out that their recently published results were not new, asking them to inform the authors while keeping my anonymity.  Both editors said they would.  One of the journals later published an acknowledgement of priority.  The other did not.

Editor’s rights and obligations:

  1. Editors have every right to encourage submissions of papers to the journal; in fact, it’s part of their job.  It is absolutely ethical to encourage submissions from colleagues, close relatives, political friends, etc.  The publisher should set up a clear and unobtrusive conflict-of-interest directive, so that if the editor is too close to the author or the subject, he or she should transfer the paper to the editor in chief, who will choose a different handling editor.
  2. The journal should have a clear scope worked out by the publisher in cooperation with the editorial board.  If a paper is outside of the scope it should be rejected regardless of its mathematical merit.  When I was an editor of Discrete Mathematics, I would reject some “proofs” of the Goldbach conjecture and similar results on exactly that ground.  If a paper prompts the journal to re-evaluate its scope, that's fine, but the discussion should involve the whole editorial board and be conducted independently of the paper in question.  Presumably, some editors would not want to remain on the board if the journal starts changing direction.
  3. If an accepted but not yet published paper seems to fall outside of the journal's scope, other editors can request that the editor in chief initiate the withdrawal process discussed above.  The wording of the request is crucial here – if the issue is neither the scope nor major math errors, but rather the weakness of the results, then the request is inappropriate.
  4. If the issue is the possibly unethical behavior of the handling editor, then the withdrawal may or may not be appropriate depending on the behavior, I suppose.  But if the author was acting ethically and the unethical behavior is solely by the handling editor, I say proceed to publish the paper and then issue a formal retraction while keeping the paper published, of course.

Complaining to universities:

  1. While perfectly ethical, contacting the university administration to initiate a formal investigation of a faculty member is an extremely serious step which should be avoided if at all possible.  Except for the egregious cases of verifiable formal violations of the university code of conduct (such as academic dishonesty), this action in itself is akin to bullying and thus immoral.
  2. The code of conduct is usually available on the university website – the complainer would do well to consult it before filing a complaint.  In particular, the complaint would typically be addressed to the university senate committee on faculty affairs, the office of academic integrity and/or the dean of the faculty.  Whether the university president is in math or even in the same area is completely irrelevant, as the president plays no role in the workings of the committee.  In fact, when this is the case, the president is likely to recuse herself or himself from any part of the investigation and sever all contact with the complainer to avoid the appearance of impropriety.
  3. When a formal complaint is received, the university is usually compelled to initiate an investigation and set up an ad hoc subcommittee of the faculty senate, which thoroughly examines the issue.  The faculty member's tenure and livelihood are on the line.  They can be asked to retain legal representation and can be prohibited from discussing the matters of the case with outsiders without university lawyers and/or PR people signing off on every communication.  Once the investigation is complete, the findings are kept private, except for administrative decisions such as firing, suspension, etc.  In summary, if the author seeks information rather than punishment, this route is counterproductive.

Complaining to institutions:

  1. I don’t know what to make of the alleged NSF request, which could be ethical and appropriate, or even common.  Then so would be complaining to the NSF about a publicly available research product supported by the agency.  The issue is the opposite of that with the journals — the NSF is part of the Federal Government and is thus subject to a large number of regulations and code of conduct rules, which may explain its request.  We in mathematics are rather fortunate that our theorems tend to lack any political implications in the real world.  But perhaps researchers in Political Science and Sociology have different experiences with granting agencies; I wouldn’t know.
  2. Contacting the AMS can in fact be rather useful, even though it currently has no way to conduct an appropriate investigation.  Put bluntly, all parties in the conflict can simply ignore the AMS’s requests for documents.  But maybe this should change in the future.  I am not a member of the AMS, so I have no standing to tell it what to do, but I do have some thoughts on the subject.  I will try to write them up at some point.

Public discourse:

  1. Many commenters on the case opined that while deleting a published paper is bad (I am paraphrasing), the paper itself is also bad for whatever reason (politics, lack of strong math, editor’s behavior, being out of scope, etc.).  This is very unfortunate.  Let me explain.
  2. Of course, discussing the math in the paper is perfectly ethical: academics can discuss any paper they like; this can be considered part of the job.  The same goes for discussing the scope of the paper and the verifiable actions of the journal and other parties.
  3. Publicly discussing the personalities and motivations of the editors publishing or not publishing, of third parties contacting editors in chief, etc., is arguably unethical and can be perceived as borderline bullying.  It is also of questionable morality, as no complete set of facts is known.
  4. So while making a judgement on the journal’s conduct next to a judgement on the math in the paper is ethical, it seems somewhat immoral to me.  When you write “yes, the journal’s actions are disturbing, but the math in the paper is poor,” we all understand that while formally these are two separate discussions, the negative judgement in the second part can provide an excuse for the misbehavior in the first part.  So here is my new rule:  If you would not be discussing the math in the paper without the pretext of its submission history, you should not be discussing it at all.

In summary:

I argue that for all issues related to submissions, withdrawal, etc. there is a well understood ethical code of conduct.  Decisions on who behaved unethically hinge on formal details of each case.  Until these formalities are clarified, making judgements is both premature and unhelpful.

Part of the problem is the lack of clarity about procedural rules at the journals, as discussed above.  While large institutions such as major universities and long-established journal publishers do have such rules set up, most journals tend not to disclose them, unfortunately.  Even worse, many new, independent and/or electronic journals have no such rules at all.  In such an environment we are reduced to saying that this is all a failure to communicate.

Lengthy disclaimer:

  1. I have no special knowledge of what actually happened to Hill’s submission.  I outlined what I think should have happened in different scenarios if all participants acted morally and ethically (there are no legal issues here that I am aware of).  I am not trying to blame anyone and in fact, it is possible that none of these theoretical scenarios are applicable.  Yet I do think such a general discussion is useful as it distills the arguments.
  2. I have not read Hill’s paper as I think its content is irrelevant to the discussion and since I am deeply uninterested in the subject.  I am, however, interested in mathematical publishing and all academia related matters.
  3. What’s ethical and what’s moral are not exactly the same.  As far as this post is concerned, ethical issues cover all math research/university/academic related stuff.  Moral issues are more personal and community related, thus less universal perhaps.  In other words, I am presenting my own POV everywhere here.
  4. To give specific examples of the difference: if you stole your officemate’s lunch, you acted immorally.  If you submitted your paper to two journals simultaneously, you acted unethically.  And if you published a paper based on your officemate’s ideas that she told you in secret, you acted both immorally and unethically.  Note that in the last example I am making a moral judgement since I equate this with stealing, while others might think it’s just unethical but morally OK.
  5. There is very little black & white about immoral/unethical acts, and one always needs to assign a relative measure of the perceived violation.  This is similar to criminal acts, which can be a misdemeanor, a gross misdemeanor, a felony, etc.

 

How NOT to reference papers

September 12, 2014 Leave a comment

In this post, I am going to tell the story of one paper whose authors misrepresented my paper and refused to acknowledge the fact. It’s also a story about the section editor of the Journal of Algebra, which published that paper and then ignored my complaints. In my usual wordy manner, I do not get to the point right away, and cover some basics first. If you want to read only the juicy parts, just scroll down…

What’s the deal with the references?

First, let’s talk about something obvious. Why do we do what we do? I mean, why do we study for many years how to do research in mathematics, read dozens or hundreds of papers, and think long thoughts until we eventually figure out a good question? We then work hard, by trial and error, to eventually figure out a solution. Sometimes we do this in a matter of hours and sometimes it takes years, but we persevere. Then we write up the solution and submit it to a journal; sometimes it gets rejected (who knew this was solved 20 years ago?), and sometimes it is sent back for revision with various lemmas to fix. We then revise the paper, and if all goes well it gets accepted. And published. Eventually.

So, why do we do all of that? For the opportunity to teach at a good university and draw a reasonable salary? Yes, sure, to some degree. But mostly because we like doing this. And we like having our work appreciated. We like going to conferences to present it. We like it when people read our paper and enjoy it or simply find it useful. We like it when our little papers form building blocks of bigger work, perhaps eventually helping to resolve an old open problem. All this gives us purpose, a sense of accomplishment, “social capital” if you like fancy terms.

But all this hinges on a tiny little thing we call citations. They tend to come at the end, sometimes in footnote size, yet they are the primary vehicle for our goal. If we are uncited and ignored, all hope is lost. But even if we are cited, it matters how our work is cited. The context in which it was referenced is critically important. Sometimes our results are substantially used in the proof; those are GOOD references.

Yet often our papers are mentioned in a sentence “See [..] for the related results.” Sometimes this happens out of politeness or collegiality between authors, sometimes for the benefit of the reader (it can be hard navigating a field), and sometimes the authors are being self-serving (as in “look, all these cool people wrote good papers on this subject, so my work must also be good/important/publishable”). There are NEUTRAL references – they might help others, but not the authors.

Finally, there are BAD references. Those which refer derogatively to your work, or simply as a low benchmark which the new paper easily improved. Those which say “our bound is terribly weak, but it’s certainly better than Pak’s.” But the WORST references are those which misstate what you did, which diminish and undermine your work.

So for anyone out there who thinks the references are in the back because they are not so important – think again. They are of utmost importance – they are what makes the system work.

The story of our paper

This was in June 1997. My high school friend Sergey Bratus and I had an idea for recognizing the symmetric group S_n using the Goldbach conjecture. The idea was nice, and the algorithm was short and worked really fast in practice. We quickly typed it up and submitted it to the Journal of Symbolic Computation in September 1997. The journal gave us a lot of grief. First, they refused to seriously consider it, since the Goldbach conjecture in the referee’s words is “not like the Riemann hypothesis“, so we could not use it. Never mind that it had been checked for n < 10^14, covering all possible values where such an algorithm could possibly be useful. So we rewrote the paper, adding a variation based on the ternary Goldbach conjecture, which was known for large enough values (and has now been proved in full).

The paper had no errors and resolved an open problem, but the referees were unhappy. One of them requested we change the algorithm to also work for the alternating group. We did. In the next round the same or another referee requested we cover the case of unknown n. We did. In the next round one referee requested we make a new implementation of the algorithm, now in GAP, and report the results. We did. Clearly, the referees did not want our paper to get published, but did not know how to say it. Yet we persevered. After 4 back-and-forth revisions the paper more than doubled in size (completely unnecessarily). This took two years, almost to the day, but the paper did get accepted and published. Within a year or two, it became a standard routine in both the GAP and MAGMA libraries.

[0] Sergey Bratus and Igor Pak, Fast constructive recognition of a black box group isomorphic to S_n or A_n using Goldbach’s Conjecture, J. Symbolic Comput. 29 (2000), 33–57.

Until a few days ago I never knew what problem the referees had with our paper. Why did a short, correct, and elegant paper need to become long and include cumbersome extensions of the original material for the journal to accept it? I was simply too inexperienced to know that this was not a difference in culture (CS vs. math). Read on to find out what I now realize.
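To give a flavor of the number-theoretic ingredient (a toy illustration of my own, not code from [0]): the algorithm relies on writing n as a sum of two distinct odd primes p + q, since, very roughly, an element of S_n that is a product of a p-cycle and a q-cycle has order pq, a property one can detect in a black-box group. A minimal sketch of the Goldbach decomposition step:

```python
def is_prime(m):
    """Deterministic trial division; fine for the modest n arising here."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    d = 3
    while d * d <= m:
        if m % d == 0:
            return False
        d += 2
    return True

def goldbach_pair(n):
    """Return distinct odd primes (p, q) with p + q = n, or None.

    For even n, Goldbach's conjecture (verified far beyond any degree
    where such a recognition algorithm would run) guarantees a pair;
    requiring p != q only rules out tiny cases like n = 6 = 3 + 3.
    """
    for p in range(3, n // 2 + 1, 2):
        q = n - p
        if p != q and is_prime(p) and is_prime(q):
            return (p, q)
    return None
```

For example, goldbach_pair(100) returns (3, 97). How the resulting cycle types are then found and exploited among random black-box elements is the substance of [0]; the above shows only the arithmetic step.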

Our competition

After we wrote our paper, submitted it, and publicized it on our websites and at various conferences, I started noticing strange things. In paper after paper in Computational Group Theory, roughly half would not reference our paper, but would cite another paper by 5 people in the field which apparently did the same or similar things. I recall I wrote to the authors of this competing paper, but they wrote back that the paper was not written yet. To say I was annoyed would be an understatement.

In one notable instance, I confronted Bill Kantor (by email), who had helped us with good advice earlier. He gave an ICM talk on the subject and cited the competing paper but not ours, even though I had personally shown him the submitted preprint of [0] back in 1997 and explained our algorithm. He replied that he did not recall whether we had sent him the paper. I found and forwarded him my email to him with that paper. He replied that he probably never read the email. I forwarded him back his reply to my original email. Out of excuses, Kantor simply did not reply. You see, the calf can never beat the oak tree.

Eventually, the competing paper was published 3 years after ours:

[1] Robert Beals, Charles Leedham-Green, Alice Niemeyer, Cheryl Praeger, Ákos Seress, A black-box group algorithm for recognizing finite symmetric and alternating groups. I, Trans. AMS 355 (2003), 2097–2113.

The paper claims that a sequel II by the same authors is forthcoming, but it has yet to appear. It was supposed to cover the case of unknown n, which [0] was required to cover, but I guess the same rules do not apply to [1]. Or maybe JSC is more selective than TAMS; one never knows… The never-coming sequel II will later play a crucial part in our story.

Anyhow, it turns out the final result in [1] is roughly the same as in [0]. Although the details are quite different, it wasn’t really worth the long wait. I quote from [1]:

The running time of constructive recognition in [0] is about the same.

The authors then show incredible dexterity in an effort to claim that their result is somehow better, by finding minor points of difference between the algorithms and claiming their importance. For example, take a look at this passage:

The paper [0] describes the case G = S_n, and sketches the necessary modifications for the case G = A_n. In this paper, we present a complete argument which works for both cases. The case G = A_n is more complicated, and it is the more important one in applications.

Let me untangle this. First, what’s more “important” in applications is never justified, and no sources are cited. Second, this says that BLNPS either haven’t read [0] or are intentionally misleading, as the case of A_n there is essentially the same as S_n, and the timing differs by a constant. On the other hand, this suggests that [1] treats A_n in a substantively more complicated way than S_n. Shouldn’t that be an argument in favor of [0] over [1], not the other way around? I could go on with other similarly dubious claims.

The aftermath

From this point on, multiple papers either ignored [0] in favor of [1] or cited [0] pro forma, emphasizing [1] as somehow the best result. For example, the following paper, with 3 of the 5 coauthors of [1], goes to great lengths touting [1] and never even mentions [0].

[2] Alice Niemeyer, Cheryl Praeger, Ákos Seress, Estimation Problems and Randomised Group Algorithms, Lecture Notes in Math. 2070 (2013), 35–82.

When I asked Niemeyer how this could have happened, she apologized and explained: “The chapter was written under great time pressure.”

For an example of a more egregious kind, consider this paper:

[3] Robert Beals, Charles Leedham-Green, Alice Niemeyer, Cheryl Praeger, Ákos Seress, Constructive recognition of finite alternating and symmetric groups acting as matrix groups on their natural permutation modules, J. Algebra 292 (2005), 4–46.

They unambiguously claim:

The asymptotically most efficient black-box recognition algorithm known for A_n and S_n is in [1].

Our paper [0] is not mentioned anywhere nearby, and is cited pro forma for other reasons. But just two years earlier, the exact same 5 authors stated in [1] that the timing is “about the same”. So, what happened to our algorithm in the intervening two years? Did it slow down? Did the one in [1] get faster? Or, more plausibly, did BLNPS simply realize that they could get away with more misleading referencing at JOA than TAMS would ever allow?

Again, I could go on with a dozen other examples of this phenomenon. But you get the idea…

My boiling point: the 2013 JOA paper

For years, I held my tongue, thinking that in the age of Google Scholar these self-serving passages were not fooling anybody, that anyone interested in the facts was just a couple of clicks away from our paper. But I was naive. This strategy of ignoring and undermining [0] eventually paid off in this paper:

[4] Sebastian Jambor, Martin Leuner, Alice Niemeyer, Wilhelm Plesken, Fast recognition of alternating groups of unknown degree, J. Algebra 392 (2013), 315–335.

The abstract says it all:

We present a constructive recognition algorithm to decide whether a given black-box group is isomorphic to an alternating or a symmetric group without prior knowledge of the degree. This eliminates the major gap in known algorithms, as they require the degree as additional input.

And just to drive the point home, here is the passage from the first paragraph in the introduction.

For the important infinite family of alternating groups, the present black-box algorithms [0], [1] can only test whether a given black-box group is isomorphic to an alternating or a symmetric group of a particular degree, provided as additional input to the algorithm.

Ugh… But wait, our paper [0], which they are citing, already HAS such a test! And it’s not as if it is hidden in the paper somehow – Section 9 is titled “What to do if n is not known?” Are the authors JLNP blind, intentionally misleading, or did they simply never read [0]? Or is it the “great time pressure” argument again? What could possibly justify such an outrageous error?

Well, I wrote to JLNP, but none of them answered. Nor acknowledged our priority. Nor updated the arXiv posting to reflect the error. I don’t blame them – people without academic integrity simply don’t see the need for that.

My disastrous battle with JOA

Once I realized that JLNP were not interested in acknowledging our priority, I wrote to the Journal of Algebra asking “what can be done?” Here is a copy of my email. I did not request a correction, and was unbelievably surprised to hear the following from Gerhard Hiss, the Editor of the Computational Algebra Section of the Journal of Algebra:

[..] the authors were indeed careless in this attribution.

In my opinion, the inaccuracies in the paper “Fast recognition of alternating groups of unknown degree” are not sufficiently serious to make it appropriate for the journal to publish a correction.

Although there is some reason for you to be mildly aggrieved, the correction you ask for appears to be inappropriate. This is also the judgment of the other editors of the Computational Algebra Section, who have been involved in this discussion.

I have talked to the authors of the paper Niemeyer et al. and they confirmed that the [sic.] did not intend to disregard your contributions to the matter.

Thus I very much regret this unpleasent [sic.] situation and I ask you, in particular with regard to the two young authors of the paper, to leave it at that.

This email left me floored. So, I was graciously permitted by the JOA to be “mildly aggrieved“, but not more? Basically, Hiss is saying that the answer to my question “What can be done?” is NOTHING. Really?? And I should stop asking for just treatment by the JOA out of “regard to the two young authors”? Are you serious??? It’s hard to know where to begin…

As often happens in such cases, an unpleasant email exchange ensued. In my complaint to Michel Broué, he responded that Gerhard Hiss is a “respectable man” and that I should search for justice elsewhere.

In all fairness to JOA, one editor did behave honorably. Derek Holt wrote to me directly. He admitted that he was the handling editor for [1]. He writes:

Although I did not referee the paper myself, I did read through it, and I really should have spotted the completely false statement in the paper that you had not described any algorithm for determining the degree n of A_n or S_n in your paper with Bratus. So I would like to apologise now to you and Bratus for not spotting that. I almost wrote to you back in January when this discussion first started, but I was dissuaded from doing so by the other editors involved in the discussion.

Let me parse this, just in case. Holt is the person who implemented the Bratus–Pak algorithm in Magma. Clearly, he read the paper. He admits the error and our priority, and says he wanted to admit it publicly, but other unnamed editors stopped him. Now, what about the alleged unanimity of the editorial board? What am I missing? Ugh…

What really happened? My speculation, part I. The community.

As I understand it, Computational Group Theory is a small, close-knit community, which as a result has a pervasive groupthink. Here is a passage from Niemeyer’s email to me:

We would also like to take this opportunity to mention how we came about our algorithm. Charles Leedham-Green was visiting UWA in 1996 and he worked with us on a first version of the algorithm. I talked about that in Oberwolfach in mid 1997 (abstract on OW Web site).

The last part is true indeed. The workshop abstracts are here. Niemeyer’s abstract mentioned neither Leedham-Green nor anyone else she could have meant by “us” (from the context – Niemeyer and Praeger), but let’s not quibble. The 1996 date is somewhat more dubious. It is contradicted by Niemeyer and Praeger themselves, who clarified the timeline in the talk they gave in Oberwolfach in mid-2001 (see the abstract here):

This work was initiated by intense discussions of the speakers and their colleagues at the Computational Groups Week at Oberwolfach in 1997.

Anyhow, let us accept that both algorithms were obtained independently, in mid-1997. It’s just that we finished our paper [0] in 3 months, while it took BLNPS about 4 years until [1] was submitted in 2001.

Next quote from Niemeyer’s email:

So our work was independent of yours. We are more than happy to acknowledge that you and Sergey [Bratus] were the first to come up with a polynomial time algorithm to solve the problem [..].

The second statement is untrue in many ways, nor is it our grievance: we claim only that [0] has an algorithm practically superior and theoretically comparable to that in [1], so there is no reason at all to single out [1] over [0], as is commonly done in the field. In fact, here is a quote from [1] flatly contradicting Niemeyer’s claim:

The first polynomial-time constructive recognition algorithm for symmetric and alternating groups was described by Beals and Babai.

Now, note that Hiss, Holt, Kantor, and all 5 authors BLNPS were at both the 1997 and the 2001 Oberwolfach workshops (neither Bratus nor I were invited). We believe that the whole community operates by “they staked a claim on this problem” and “what hasn’t happened at Oberwolfach, hasn’t happened.” Such principles make it easier for members of the community to treat BLNPS as pioneers of this problem and to reference only them, even though our paper was published before [1] was submitted. Of course, such attitudes also remove the competitive pressure to write a paper quickly – where else in math, and especially in CS, do people take 4–5 years(!) to write a technically elementary paper? (This last part was true also of [0], which is why we could write it in under 3 months.)

In 2012, Niemeyer decided to finally finish the long-announced part II of [1]. She did not bother to check what’s in our paper [0]. Indeed, why should she – everyone in the community already “knows” that she is an original (co-)author of the idea, so [4] can also be written as if [0] never happened. Fortunately for her, she was correct on this point, as neither the referees, nor the handling editor, nor the section editor contradicted the false statements right in the abstract and the introduction.

My speculation, part II. Why the JOA rebuke?

Let’s look at the timing. In the fall of 2012, Niemeyer visited Aachen. She started collaborating with Professor Plesken of RWTH Aachen and his two graduate students, Jambor and Leuner. The paper was submitted to JOA on December 21, 2012, and the published version lists the affiliations of all but Jambor as Aachen (Jambor moved to Auckland, NZ before publication).

Now, Gerhard Hiss is a Professor at RWTH Aachen, working in the field. To repeat, he is the Section Editor of JOA on Computational Algebra. Let me note that [4] was submitted to JOA three days before Christmas 2012 – on the same day (according to a comment I received from Eamonn O’Brien of the JOA editorial board) on which Hiss and Niemeyer apparently attended a department Christmas party.

My questions: is it fair for a section editor to make a decision contesting results by a colleague (Plesken), two graduate students (Jambor and Leuner), and a friend (Niemeyer), all currently or recently from his department? Wouldn’t immediate recusal by Editor Hiss and an investigation by an independent editor have been a more appropriate course of action under the circumstances? In fact, this is the general Elsevier guideline, if I understand it correctly.

What now?

Well, I am at the end of the line on this issue. Public shaming is the only thing that can really work against groupthink. To spread the word, please LIKE this post, REPOST it – here on WP, on FB, on G+ – forward it by email, or do whatever you think appropriate. Let’s make sure that whenever somebody googles these names, this post comes up at the top of the search results.

P.S. Full disclosure: I have one paper in the Journal of Algebra, on an unrelated subject. Also, I am an editor of Discrete Mathematics, which together with JOA is owned by the same parent company Elsevier.

UPDATE (September 17, 2014): I am disallowing all comments on this post as some submitted comments were crude and/or offensive. I am, however, taking some helpful criticism on board. Some claimed that I crossed the line with certain personal speculations, so I removed a paragraph. Also, Eamonn O’Brien clarified for me the inner workings of the JOA editorial board, so I removed my incorrect speculations on that point. Neither is germane to my two main complaints: that [0] is repeatedly mistreated in the area, most notably in [4], and that Editor Hiss should have recused himself from handling my formal complaint about [4].

UPDATE (October 14, 2014): In the past month, over 11K people viewed this post (according to the WP stat tools). This is a simply astonishing number for an inactive blog. Thank you all for spreading the word, whether supportive or otherwise! Special thanks to those of you in the field, who wrote heartfelt emails, also some apologetic and some critical – this was all very helpful.