Innovation anxiety
I am on record as liking the status quo of math publishing. It's very far from ideal, as I repeatedly discuss on this blog; see e.g. my posts on the elitism, the invited issues, the non-free aspect of it in the electronic era, and especially the pay-to-publish corruption. But overall it's ok. I give it a B+. It took us about two centuries to get where we are now. It may take us a while to get to an A.
Given that there is room for improvement, it's unsurprising that some people make an effort. The problem is that their efforts may be moving us in the wrong direction. I am talking specifically about two ideas that frequently come up, both pushed by people with the best intentions: abolishing peer review and anonymizing the authors' names at the review stage. The former is radical, detrimental to our well-being, and unlikely to take hold in the near future. The latter is already here and is simply misguided.
Before I take on both issues, let me take a bit of a rhetorical detour to make a rather obvious point. I will be quick, I promise!
Don’t steal!
Well, this is obvious, right? But why not? Let’s set all moral and legal issues aside and discuss it as adults. Why should a person X be upset if Y stole an object A from Z? Especially if X doesn’t know either Y or Z, and doesn’t really care who A should belong to. Ah, I see you really don’t want to engage with the issue — just like me you already know that this is appalling (and criminal, obviously).
However, if you look objectively at the society we live in, there is clearly some gray area. Indeed, some people think that taxation is a form of theft ("taking money by force", you see). Millions of people think that illegally downloading movies is not stealing. My university administration thinks stealing my time by making me fill out all kinds of forms is totally kosher. The country where I grew up was very proud of the many ways it stole my parents' rights to liberty and the pursuit of happiness (so that they could keep their lives). The very same country thinks it's ok to invade and steal territory from a neighboring country. Apparently many people in the world are ok with this (as in "not my problem"). I am not comparing any of these, just challenging the "isn't it obvious" premise.
Let me give a purely American answer to the "why not" question. Not the most interesting or innovative argument perhaps, but the most relevant to the peer review discussion. Back in September 1789, Thomas Jefferson was worried about constitutional precommitment. Why not, he wondered, have a revolution every 19 years, as a way not to burden future generations with rigid ideas from the past?
In February 1790, James Madison painted a grim picture of what would happen: "most of the rights of property would become absolutely defunct and the most violent struggles be generated" between property haves and have-nots, making the remedy worse than the disease. In particular, allowing theft would be detrimental to the continuing peaceful existence of the community (duh!).
In summary: a fairly minor change in the core part of the moral code can lead to drastic consequences.
Everyone hates peer review!
Indeed, I don't know anyone who succeeded in academia without a great deal of frustration over referee reports, without many baseless rejections from journals, or without having to spend many hours (days, weeks) writing their own referee reports. It's all part of the job. Not the best part. The part well hidden from outside observers who think that professors mostly teach or emulate a drug cartel otherwise.
Well, help is on the way! Every now and then somebody notable comes along and proposes to abolish the whole thing. Here is one, two, three just in the last few years. Enough? I guess not. Here is the most recent one, by Adam Mastroianni, tweeted by Marc Andreessen to his 1.1 million followers.
This is all laughable, right? Well, hold on. Over the past two weeks I spoke to several well known people who think that abolishing peer review would make the community more equitable and would likely foster innovation. So let's address these objections seriously, point by point, straight from Mastroianni's article.
(1) “If scientists cared a lot about peer review, when their papers got reviewed and rejected, they would listen to the feedback, do more experiments, rewrite the paper, etc. Instead, they usually just submit the same paper to another journal.” Huh? The same level journal? I wish…
(2) "Nobody cares to find out what the reviewers said or how the authors edited their paper in response." Oh yes, they do! Hence the multiple rounds of review, sometimes over several years. Hence a lot of frustration. Hence the occasional rejections after many rounds, if the issue turns out to be unfixable. That's the point.
(3) "Scientists take unreviewed work seriously without thinking twice." Sure, why not? Especially if they can understand the details. Occasionally they give well known people the benefit of the doubt, at least for a while. But then they email you and ask "Is this paper ok? Why isn't it published yet? Are there any problems with the proof?" Or sometimes some real scrutiny happens outside of the peer review.
(4) "A little bit of vetting is better than none at all, right? I say: no way." Huh? In math this is plainly ridiculous, but the author is moving in another direction. He supports this outrageous claim by saying that in biomedical sciences peer review "fools people into thinking they're safe when they're not. That's what our current system of peer review does, and it's dangerous." Uhm. So apparently Adam Mastroianni thinks that if you can't get 100% certainty, it's better to have none. I feel like I've heard the same sentiment from my anti-masking relatives.
Obviously, I wouldn’t know and honestly couldn’t care less about how biomedical academics do research. Simply put, I trust experts in other fields and don’t think I know better than them what they do, should do or shouldn’t do. Mastroianni uses “nobody” 11 times in his blog post — must be great to have such a vast knowledge of everyone’s behavior. In any event, I do know that modern medical advances are nothing short of spectacular overall. Sounds like their system works really well, so maybe let them be…
The author concludes by arguing that it’s so much better to just post papers on the arXiv. He did that with one paper, put some jokes in it and people wrote him nice emails. We are all so happy for you, Adam! But wait, who says you can’t do this with all your papers in parallel with journal submissions? That’s what everyone in math does, at least the arXiv part. And if the journals where you publish don’t allow you to do that, that’s a problem with these specific journals, not with the whole peer review.
As for the jokes — I guess I am a mini-expert. Many of my papers have at least one joke. Some are obscure. Some are not funny. Some are both. After all, "what's life without whimsy"? The journals tend to be ok with them, although some make me work for it. For example, in this recent paper, the referee asked me to specifically explain in the acknowledgements why I am thankful to Jane Austen. So I did as requested — she was the inspiration behind the first sentence (it's on my long list of starters in my previous blog post). Anyway, you can do this, Adam! I believe in you!
Everyone needs peer review!
Let's now try to imagine what would happen if peer review were abolished. I know, this is obvious. But let's game it out, post-apocalyptic style.
(1) All papers will be posted on the arXiv. In a few curious cases an informal discussion will emerge, like this one about this recent proof of the four color theorem. Most papers will be ignored, just like they are ignored now.
(2) Without a neutral vetting process the journals will turn to publishing "who you know", meaning the best known and best connected people in the area, as "safe bets" whose work was repeatedly peer reviewed in the past. Junior mathematicians will have no way to get published in leading journals other than collaboration (i.e. writing "joint papers") with top people in the area.
(3) Knowing that their papers won't be refereed, people will start taking shortcuts in their arguments. Soon enough some fraction of these will turn out to be unsalvageably incorrect. Embarrassments like the ones discussed on this page will become a common occurrence. Eventually, Atiyah-style proofs of famous theorems will become widespread, confusing anyone and everyone.
(4) Granting agencies will start giving grants only to the best known people in the area who have the most papers in the best known journals (if you can't peer review papers, you can't expect to peer review grant proposals, right?). Eventually they will just stop, opting to give more money to the best universities and institutions, in effect outsourcing their work.
(5) Universities will eventually abolish tenure as we know it, because if anyone is free to work on whatever they want, without real rewards or accountability, what's the point of tenure protection? With no objective standards, in university hiring the letters will play the ultimate role, along with the many biases and random preferences of the hiring committees.
(6) People who work in deeper areas will be spending an extraordinary amount of time reading and verifying earlier papers in the area. Faced with these difficulties, graduate students will stay away from such areas, opting for more shallow ones. Eventually these areas will diminish to the point of near-extinction. If you think this is unlikely, look into the post-1980 history of finite group theory.
(7) In shallow areas, junior mathematicians will become increasingly more "innovative": rather than reading the older literature, they will try to come up with a completely new question or a new theory which can be at least partially resolved in 10 pages. They will start running unrefereed competitive conferences where they will exhibit their little papers as works of modern art. The whole of math will become subjective and susceptible to fashion trends, not unlike some parts of theoretical computer science (TCS).
(8) Eventually people in other fields will start saying that math is trivial and useless, that everything we do can be done by an advanced high schooler in 15 minutes. We've seen this all before: think of the candid comments by Richard Feynman, or these uneducated proclamations by this blog's old villain Amy Wax. In regards to combinatorics, such views were prevalent until relatively recently; see my "What is combinatorics" with some truly disparaging quotations, and this interview with László Lovász. Soon after, everyone (physics, economics, engineering, etc.) will start developing their own kind of math, which will be the end of the whole field as we know it.
…
(100) In the distant future, after the human civilization dies and rises up again, historians will look at the ruins of this civilization and wonder what happened. They will never learn that it all started with Adam Mastroianni when he proclaimed that "science must be free".
Less catastrophic scenarios
If abolishing peer review does seem a little far-fetched, consider the following less drastic measures to change or "improve" peer review.
(i) Say, you allow simultaneous submissions to multiple journals: whichever accepts first gets the paper. Currently the waiting time is terribly long, so one can argue this would be an improvement. In support of this idea, one can note that in journalism pitching a story to multiple editors is routine, that job applications are sent concurrently to all universities, etc. In fact, there is even an algorithm to resolve these kinds of situations successfully (see the sketch at the end of this item). Let's game out this fantasy.
The first thing that would happen is that journals would be overwhelmed with submissions. Referees are already hard to find. After the change, they would start refusing all requests, since they too would be overwhelmed and it would be unclear whether their report would even be useful. The editors would refuse all but a few selected papers from leading mathematicians. Chat rooms would emerge in the style of "who is refereeing which paper" (cf. PubPeer), either to collaborate or at least to avoid redundant effort. But since it's hard to trust anonymous claims like "I checked and there are no issues with Lemma 2 in that paper" (could that be the author?), these chats would either show real names, leading to other complications (see below), or cease to exist.
Eventually the publishers would start asking for a signed official copyright transfer "conditional on acceptance" (some already do that), and those in violation would be hit with lawsuits. Universities would change their faculty codes of conduct to include such copyright violations as cause for dismissal, including tenure removal. That's when the practice would stop and things would go back to normal, at great cost obviously.
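For what it's worth, here is a minimal sketch of the kind of algorithm alluded to in item (i): presumably deferred acceptance in the style of Gale and Shapley, the standard way to resolve concurrent applications (think of the residency match). The papers and journals below are hypothetical stand-ins, an illustration of the matching idea rather than a claim about how any journal actually operates.

```python
# A minimal sketch of "deferred acceptance" (Gale-Shapley).
# Papers/journals and their preference lists are invented for illustration.

def stable_match(paper_prefs, journal_prefs):
    """paper_prefs: {paper: [journals, best first]},
    journal_prefs: {journal: [papers, best first]}.
    Returns a stable assignment {journal: paper}."""
    rank = {j: {p: i for i, p in enumerate(prefs)}
            for j, prefs in journal_prefs.items()}
    free = list(paper_prefs)                # papers with no tentative home
    next_try = {p: 0 for p in paper_prefs}  # index of next journal to try
    held = {}                               # journal -> tentatively held paper
    while free:
        p = free.pop()
        j = paper_prefs[p][next_try[p]]     # p "submits" to its best untried journal
        next_try[p] += 1
        if j not in held:
            held[j] = p                     # tentative acceptance
        elif rank[j][p] < rank[j][held[j]]:
            free.append(held[j])            # journal trades up; old paper is freed
            held[j] = p
        else:
            free.append(p)                  # rejection; p will try the next journal
    return held

papers = {"P1": ["Annals", "JAMS"], "P2": ["Annals", "JAMS"]}
journals = {"Annals": ["P2", "P1"], "JAMS": ["P1", "P2"]}
print(stable_match(papers, journals))       # {'Annals': 'P2', 'JAMS': 'P1'}
```

Note that no paper and journal would prefer each other to their assigned match; this is exactly the "successful resolution" such an algorithm promises, which is also why it works for jobs and residencies but, as argued above, would not survive contact with refereeing.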
(ii) De-anonymizing referees is another perennial idea. Wouldn't it be great if the referees got some credit for all the work that they do (so they could list it on their CVs)? Even better if their referee reports were available for the general public to read and scrutinize, etc. Win-win-win, right?
No, of course not. Many specialized sub-areas are small, so it is hard to find a referee. For the authors, it's relatively easy to guess who the referees are, at least if you have some experience. But there is still this crucial ambiguity, as in "you have a guess but you don't know for sure", which helps maintain friendship, or at least collegiality, with those who have written a negative referee report. Take away this ambiguity, and everyone will start refusing refereeing requests. Refereeing is hard already; there is really no need to risk collegial relationships over it, especially if you are both going to be working in the area for years or even decades to come.
(iii) Let's pay the referees! This is similar to but different from (ii). Think about it — the referees are hard to find, so we need to reward them. Everyone knows that when you pay for something, everyone takes it more seriously, right? Ugh. I guess I have some news for you…
Think it over. You got a technical 30-page paper to referee. How much would you want to get paid? You start doing a mental calculation. Say, at a very modest $100/hr it would take you maybe 10-20 hours to write a thorough referee report. That's $1-2K. Some people suggest $50/hr, but that was before the current inflation. While I do my own share of refereeing, personally I would charge more per hour, as I can get paid better doing something else (say, teaching our Summer school). For a traditional journal, paying this kind of money per paper is simply insane. Their budgets are relatively small; let me spare you the details.
Now, who can afford that kind of money? Right — we are back to the open access journals, which would pass the cost to the authors in the form of an APC. That's when the story turns from bad to awful. For that kind of money the journals would want a positive referee report, since rejected authors don't pay. If you are not willing to play ball and give them a positive report, they will stop inviting you to referee, leading to even more of the pay-to-publish corruption these journals already have.
You can probably imagine that this won't end well. Just talk to medical or biological scientists who grudgingly pay Nature or Science about $3K from their grants (which are much larger than ours). They pay because they have to, of course, and if they balk they might not get a new grant, setting back their careers.
Double blind refereeing
In math, this means that the authors' names are hidden from referees to avoid biases. The names are visible to the editors, obviously, to prevent "please referee your own paper" requests. The authors are allowed to post their papers on their websites or the arXiv, where they can easily be found by title, so the authors don't suffer from anxieties about their careers or competitive pressures.
Now, in contrast with other “let’s improve the peer review” ideas, this is already happening. In other fields this has been happening for years. Closer to home, conferences in TCS have long resisted going double blind, but recently FOCS 2022, SODA 2023 and STOC 2023 all made the switch. Apparently they found Boaz Barak’s arguments unpersuasive. Well, good to know.
Even closer to home, a leading journal in my own area, Combinatorial Theory, turned double blind. This is not a happy turn of events, at least not from my perspective. I published 11 papers in JCTA before the editorial board broke off and started CT, and I have one paper accepted at CT which had to undergo the new double blind process. In total, this is 3 times as many as in any other journal where I have published. This was by far my favorite math journal.
Let’s hear from the journal why they did it (original emphasis):
The philosophy behind doubly anonymous refereeing is to reduce the effect of initial impressions and biases that may come from knowing the identity of authors. Our goal is to work together as a combinatorics community to select the most impactful, interesting, and well written mathematical papers within the scope of Combinatorial Theory.
Oh, sure. Terrific goal. I did not know my area had a bias problem (especially compared to many other areas), but of course how would I know?
Now, surely the journal didn't think this change would be free? The editors must have compared the pluses and minuses, and decided that on balance the benefits outweigh the costs, right? The journal is mum on that. If any serious discussion was conducted (as I was told), there is no public record of it. Here is what the journal says about how the change is implemented:
As a referee, you are not disqualified to evaluate a paper if you think you know an author’s identity (unless you have a conflict of interest, such as being the author’s advisor or student). The journal asks you not to do additional research to identify the authors.
Right. So let me try to understand this. The referee is asked to decide whether to spend upwards of 10-20 hours on the basis of a first impression of the paper, without knowledge of the authors' identity. They are asked not to google the authors' names, but it's ok if they do, because the journal can't enforce this ethical guideline anyway. So let's think this over.
Double take on double blind
(1) The idea is so old in other sciences that there is plenty of research on its relative benefits. See e.g. here, there or there. From my cursory reading, it seems there is clear evidence of a persistent bias based on the reputation of the educational institution. Other biases as well, to a lesser degree. This is beyond unfortunate. Collectively, we have to do better.
(2) Peer review takes very different forms in different sciences. What works in some would not necessarily work in others. For example, TCS conferences never really had a proper refereeing process. The referees are given 3 weeks to write an opinion of the paper based on the first 10 pages. They can read the proofs beyond the 10 pages, but don't have to. They write "honest" opinions to the program committee (invisible to the authors) and whatever they think is "helpful" to the authors. Those of you outside of TCS can't even imagine the quality and biases of these fully anonymous opinions. In recent years, the top conferences introduced a rebuttal stage, which is probably helpful to avoid random superficial nitpicking at lengthy technical arguments.
In this large scale, superficial setting with rapid turnover, double blind refereeing is probably doing more good than harm by helping avoid biases. The authors who want to remain anonymous can simply not make their papers available for the roughly three months between the submission and decision dates. The conference submission date is a solid date stamp for them to stake the result, and three months are unlikely to make a major change to their career prospects. OTOH, the authors who want to stake their reputation on the validity of their technical arguments (which are unlikely to be fully read by the referees) can put their papers on the arXiv. All in all, this seems reasonable and workable.
(3) The journal process is quite a bit longer than the conference one, naturally. For example, our forthcoming CT paper was submitted on July 2, 2021 and accepted on November 3, 2022. That's 16 months, exactly 490 days, or about 20 days per page, including the references. This is all completely normal and is nobody's fault (definitely not the handling editor's). In the meantime my junior coauthor applied for a job, was interviewed, got an offer, accepted it and started a TT job. For this alone, it never crossed our minds not to put the paper on the arXiv right away.
Now, I have no doubt that the referee googled our paper, simply because in our arguments we frequently refer to our previous papers on the subject, to which this paper is a sequel (er… actually, we refer to some [CPP21a] and [CPP21b] papers). In such cases, if the referee knows that the paper under review is written by the same authors, there is clearly more confidence that we are aware of the intricate parts of our own technical details from the previous papers. That's a good thing.
Another good thing to have is the knowledge that our paper is surviving public scrutiny. Whenever issues arise, we fix them; whenever some conjectures are proved or refuted, we update the paper. That's normal academic behavior, no matter what Adam Mastroianni says. Our reputation and integrity are all we have, and one should make every effort to maintain them. But then the referee who has been procrastinating for a year can (and probably should) compare with the updated version. It's the right thing to do.
Who wants to hide their name?
Now that I offered you some reasons why looking for paper authors is a good thing (at least in some cases), let’s look for negatives. Under what circumstances might the authors prefer to stay anonymous and not make their paper public on the arXiv?
(a) Junior researchers who are afraid their low status can reduce their chances of acceptance. Right, like graduate students. This will hurt them both mathematically and job-wise. My biggest worry is that CT is encouraging more such cases.
(b) Serial submitters and self-plagiarists. Some people write many hundreds of papers. They will definitely benefit from anonymity. The editors know who they are and that their “average paper” has few if any citations outside of self-citations. But they are in a bind — they have to be neutral arbiters and judge each new paper independently of the past. Who knows, maybe this new submission is really good? The referees have no such obligation. On the contrary, they are explicitly asked to make a judgement. But if they have no name to judge the paper by, what are they supposed to do?
Now, this whole anonymity thing is unlikely to help serial submitters at CT, assuming that the journal standards remain high. Their papers will be rejected and they will move on, submitting down the line until they find an obscure enough journal that will bite. If other, somewhat less selective journals adopt the double blind review practice, this could improve their chances, however.
For CT, the difference is that in the anonymous case the referees (and the editors) will spend quite a bit more time per paper. For example, when I know that the author is a junior researcher from a university with limited access to modern literature and senior experts, I go out of my way to write a detailed referee report to help the authors, suggesting some literature they are missing or potential directions for their study. If this is a serial submitter, I don't. What's the point? I've tried this a few times, and got the very same paper from another journal the next week. They wouldn't even fix the typos that I pointed out, as if saying "who has the time for that?" This is where Mastroianni is right: why would their 234th paper be any different from the 233rd?
(c) Cranks, fraudsters and scammers. Anonymity is their defense mechanism. Say, you google the author and it's Dănuț Marcu, a serial plagiarist of 400+ math papers. Then you look for the paper he is plagiarizing from and, if successful, make an effort to ban him from your journal. But if the author is anonymous, you just try to referee. There is a very good chance you will accept, since he used to plagiarize good but old and somewhat obscure papers. So you see — the author's identity matters!
Same with the occasional zero-knowledge (ZK) aspirational provers whom I profiled at the end of this blog post. If you are an expert in the area and know of somebody who has tried for years to solve a major conjecture, producing one false or incomplete solution after another, what do you do when you see a new attempt? Now compare that with what you do if the paper is by an anonymous author. Are you going to spend the same effort working out the details of both papers? Wouldn't you, in the case of a ZK prover, stop when you find a mistake in the proof of Lemma 2, while in the case of a genuine new effort try to work it out?
In summary: as I explained in my post above, it’s the right thing to do to judge people by their past work and their academic integrity. When authors are anonymous and cannot be found, the losers are the most vulnerable, while the winners are the nefarious characters. Those who do post their work on the arXiv come out about even.
Small changes can make a major difference
If you are still reading, you probably think I am completely 100% opposed to changes in peer review. That’s not true. I am only opposed to large changes. The stakes are just too high. We’ve been doing peer review for a long time. Over the decades we found a workable model. As I tried to explain above, even modest changes can be detrimental.
On the other hand, very small changes can be helpful if implemented gradually and slowly. This is what TCS did with its double blind review and rebuttal process. They started experimenting with lesser known and lower-stakes conferences, and improved the process over the years. Eventually they worked out the kinks, like COI, and implemented the changes at the top conferences. If you had to make changes, why would you start with a top journal in the area??
Let me give one more example of a well meaning but ultimately misguided effort to make a change. My former Lt. Governor Gavin Newsom once decided that MOOCs are the answer to education woes and a way for CA to start giving $10K Bachelor's degrees. The thinking was — let's make a major change (a disruption!) to the old technology (teaching), in the style of Google, Uber and Theranos!
Lo and behold, California spent millions and went nowhere. Our collective teaching experience during COVID shows that this was not an accident or mismanagement. My current Governor, the very same Gavin Newsom, dropped this idea like a rock, limiting it to cosmetic changes. Note that this isn’t to say that online education is hopeless. In fact, see this old blog post where I offer some suggestions.
My modest proposal
The following suggestions are limited to pure math. Other fields and sciences are much too foreign for me to judge.
(i) Introduce a very clearly defined quick opinion window of about 3-4 weeks. The referees asked for quick opinions can either decline or agree within 48 hours. It will only take them about 10-20 minutes to form an opinion based on the introduction, so give them a week to respond with 1-2 paragraphs. Collect 2-3 quick opinions. If, as an editor, you feel you need more, you are probably biased against the paper or the area, and are fishing for a negative opinion to justify a "quick reject". This is a bit similar to the way Nature, Science, etc. deal with their submissions.
(ii) Make quick opinion requests anonymous. Ask the reviewers to assess how the paper fits the journal (better, worse, on point, best submitted to another area or to journals X, Y or Z, etc.). Adopt the practice of returning these opinions to the authors. Proceed to the second stage by mutual agreement. This is a bit similar to TCS, where authors use the feedback from the conference to make decisions about journal or other conference submissions.
(iii) If the paper is rejected or withdrawn after the quick opinion stage, adopt the practice of sending the quick opinions to the next journal where the paper is resubmitted. Don't communicate the names of the reviewers — if the new editor has no trust in the first editor's qualifications, let them collect their own quick opinions. This would protect the reviewers from having their names go to multiple journals, which would make them semi-public.
(iv) The most selective journals should require that the paper not be available on the web during the quick opinion stage, and violators should be rejected without review. Anonymous for one — anonymous for all! The three week delay is unlikely to hurt anybody, and the journal submission email confirmation should serve as a solid certificate of priority if necessary. Some people will try to game the system, e.g. by giving a talk with the same title as the paper or writing a blog post. Then it's at the editor's discretion what to do.
(v) In the second (actual review) stage, the referees should get papers with authors’ names and proceed per usual practice.
Happy New Year everyone!
What to publish?
This might seem like a strange question. A snarky answer would be “everything!” But no, not really everything. Not all math deserves to be published, just like not all math needs to be done. Making this judgement is difficult and goes against the all too welcoming nature of the field. But if you want to succeed in math as a profession, you need to make some choices. This is a blog post about the choices we make and the choices we ought to make.
Bedtime questions
Suppose you tried to solve a major open problem. You failed. A lot of time is wasted. Maybe it’s false, after all, who knows. You are no longer confident. But you did manage to compute some nice examples, which can be turned into a mediocre little paper. Should you write it and post it on the arXiv? Should you submit it to a third rate journal? A mediocre paper is still a consolation prize, right? Better than nothing, no?
Or, perhaps, it is better not to show how little you proved? Wouldn’t people judge you as an “average” of all published papers on your CV? Wouldn’t this paper have negative impact on your job search next year? Maybe it’s better to just keep it to yourself for now and hope you can make a breakthrough next year? Or some day?
But wait, other people in the area have a lot more papers. Some are also going to be on the job market next year. Shouldn't you try to catch up and publish every little thing you have? People at other universities do look at the numbers, right? Maybe nobody will notice this little paper. If you have more stuff done by then, it will get lost in the middle of your CV, but it will help get the numbers up. Aren't you clever or what?
Oh, wait, maybe not! You do have to send your CV to your letter writers. They will look at all your papers. How would they react to a mediocre paper? Will they judge you badly? What in the world should you do?!?
Well, obviously I don't have one simple answer to that. But I do have some thoughts. And here is a quote from a famous 200-year-old Russian play about people who really cared how they were perceived:
Chatsky: I wonder who the judges are! […]
Famusov: My goodness! What will countess Marya Aleksevna say to this?
[Alexander Griboyedov, Woe from Wit, 1823, abridged.]
You would think our society had advanced at least a little…
Who are the champions?
If we want to find the answers to our questions, it’s worth looking at the leaders of the field. Let’s take a few steps back and simply ask — Who are the best mathematicians? Ridiculous questions always get many ridiculous answers, so here is a random ranking by some internet person: Newton, Archimedes, Gauss, Euler, etc. Well, ok — these are all pretty dead and probably never had to deal with a bad referee report (I am assuming).
Here is another random list, from a well named website research.com. Lots of living people finally: Barry Simon, Noga Alon, Gilbert Laporte, S.T. Yau, etc. Sure, why not? But consider this recent entrant: Ravi P. Agarwal is at number 20, comfortably ahead of Paul Erdős at number 25. Uhm, why?
[screenshot: research.com mathematician rankings]
Or consider Theodore E. Simos who is apparently the “Best Russian Mathematician” according to research.com, and number 31 in the world ranking:
[screenshot: research.com profile of Theodore E. Simos]
Uhm, I know MANY Russian mathematicians. Some of them are truly excellent. Who is this famous Simos whom I have never heard of? How come he is so far ahead of Vladimir Arnold, who is at number 829 on the list?
[screenshot: research.com entry for Vladimir Arnold]
Of course, you already guessed the answer. It’s obvious from the pictures above. In their infinite wisdom, research.com judges mathematicians by the weighted average of the numbers of papers and citations. Arnold is doing well on citations, but published so little! Only 157 papers!
Numbers rule the world
To dig a little deeper into this citation phenomenon, take a look at the following curious table from a recent article “Extremal mathematicians“ by Carlos Alfaro:
[table from "Extremal mathematicians" by Carlos Alfaro: mathematicians with the most published papers]
If you've been in the field for a while, you are probably staring at this in disbelief. How do you physically write so many papers?? Is this even true???
Yes, you know how Paul Erdős did it — he was amazing and he had a lot of coauthors. No, you don’t know how Saharon Shelah does it. But he is a legend, and you are ok with that. But here we meet again our hero Ravi P. Agarwal, the only human mathematician with more papers than Erdős. Who is he? Here is what the MathSciNet says:
[screenshot: MathSciNet author profile of Ravi P. Agarwal]
Note that Ravi is still going strong — in less than 3 years he added 125 papers. Of these 1727 papers, 645 are with his favorite coauthor Donal O’Regan, number 3 on the list above. Huh? What is going on??
What’s in a number?
If the number of papers is what's causing you to worry, let's talk about it. Yes, there is also the number of citations, the h-index (which boils down to the number of citations anyway; see the sketch at the end of this section), and maybe other awful measurements of research productivity. But the number of papers is what you have total control over. So here are a few strategies for inflating that number, which I learned from a close examination of the publishing practices of some of the "extremal mathematicians". They are best employed in combination:
(a) Form a clique. Over the years, build a group of 5-8 close collaborators. Keep writing papers with different subsets of 3-5 of them. This is easier than it sounds, since each member gets many papers while writing only a fraction of each. Make sure each paper heavily cites the papers of all the other subsets of the clique. To the untrained eye of an editor, the clique members would appear to be experts able to referee the paper.
(b) Form a cartel. This is a stronger form of a clique. Invent an area and call yourselves collaborative researchers in that area. Make up a technical name, something like "analytic and algebraic topology of locally Euclidean metrizations of infinitely differentiable Riemannian manifolds". Apply for collaborative grants, organize conferences, publish conference proceedings, publish monographs, start your own journal. From the outside it looks like normal research activity, and who is to judge, after all?
(c) Publish in little known, not very selective or shady journals. For example, Ravi P. Agarwal published 26 papers in Mathematics, the MDPI journal that I discussed at length in this blog post. Side note: since Mathematics is not indexed by the MathSciNet, the numbers above undercount his total productivity.
(d) Organize special issues with these journals. For example, here is a list of 11 (!) special issues for which Agarwal served as a special editor with MDPI. Note the breadth of the collection:
[screenshot: list of MDPI special issues edited by Agarwal]
(e) Become an editor of an established but not well managed journal, and publish a lot there with all your collaborators. For example, T.E. Simos has a remarkable record of 150 (!) papers in the Journal of Mathematical Chemistry, where he is an editor. I feel that Springer should be ashamed of such poor oversight of this journal, but nothing can be done, I am sure, since the journal has a healthy 2.413 impact factor, and Simos's hard work surely contributed to its rise from just 1.056 in 2015. OTOH, maybe somebody can convince the MathSciNet to stop indexing this journal?
[screenshot: Simos's publication record in the Journal of Mathematical Chemistry]
Let me emphasize that nothing on the list above is unethical, at least in the way the AMS or the NAS define it (as do most universities, I think). The difference is quantitative, not qualitative. So these strategies should not be conflated with the various paper mill practices, such as those described in this article by Anna Abalkina.
Disclaimer: I strongly recommend you use none of these strategies. They abuse the system and have detrimental long-term effects on both your area and your reputation.
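As an aside on the h-index mentioned at the top of this section, here is a minimal sketch of how it is computed (the citation counts below are invented, purely for illustration). It makes plain why the index "boils down to" per-paper citation counts:

```python
# Minimal h-index computation: h is the largest number such that
# at least h of the papers have >= h citations each.
# The citation lists below are invented, purely for illustration.

def h_index(citations):
    citations = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(citations, start=1):
        if c >= i:
            h = i        # the top i papers all have >= i citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))   # 4: four papers with >= 4 citations each
print(h_index([100, 9, 2, 2, 2]))  # 2: one heavily cited paper barely moves h
```

In particular, a long tail of barely cited papers does nothing for the h-index, while a steady stream of mutually citing papers (see strategy (a)) inflates it nicely.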
Zero-knowledge publishing
In mathematics, there is another method of publishing that I want to describe. This one is borderline unethical at best, so I will refrain from naming names. You figure it out on your own!
Imagine you want to prove a major open problem in the area. More precisely, you want to become famous for doing that without actually having a proof. In math, you can't get there without publishing your "proof" in a leading area journal, better yet one of the top journals in mathematics. And if you do, it's a good bet the referees will examine your proof very carefully. Sounds like a foolproof system, right?
Think again! Here is an ingenious strategy that I recently happened to learn about. The strategy is modeled on the celebrated zero-knowledge proof technique, although the author I am thinking of might not be aware of that.
For simplicity, let's say the open problem is "A=?Z". Here is what you do, step by step (a toy sketch of how the pieces chain together follows the list).
- You come up with a large set of problems P,Q,R,S,T,U,V,W,X,Y which are all equivalent to Z. You then start a well publicized paper factory proving P=Q, W=X, X=Z, Q=Z, etc. All these papers are correct and give off the vibe of somebody who is working hard on the A=?Z problem. Make sure you have a lot of famous coauthors on these papers to further establish your credibility. Write them in haste, making the papers barely readable, so that the referees don't find any major mistakes but get exhausted by the end.
- Make another list of problems B,C,D,E,F,G which are equivalent to A. Keep these equivalences secret. Start writing new papers proving B=T, D=Y, E=X, etc. Write them all in a style similar to the first list: cumbersome, some missing details, errors in minor arguments, etc. No famous people as coauthors. Do try to involve many grad students and coauthors to generate good will (such a great mentor!). These papers will all be incorrect, but none of them will raise a flag, since by themselves they don't actually prove A=Z.
- Populate the arXiv with all these papers and submit them to different reputable journals in the area. Some referees or random readers will find mistakes, so you fix one incomprehensible detail with another and resubmit. If crucial problems in one paper persist, just drop it and keep going through the motions on all other papers. Take your time.
- Eventually one of these will get accepted, because the referees are human and they get tired. They will just assume that the paper they are handling is just like the papers on the first list – clumsily written but ultimately correct. And who wants to drag things down over some random reduction — the young researcher's career is on the line. Or perhaps the referee is a coauthor of some of the papers on the first list – in that case they are already conditioned to believe the claims, because that's what they learned from their experience on the joint paper.
- As soon as any paper from the second list is accepted, say E=X, take off the shelf the reduction you already know and make it public with great fanfare. For example, in this case quickly announce that A=E. Combined with the E=X breakthrough, and together with X=Z previously published in the first list, you can conclude that A=Z. Send it to the Annals. What are the referees going to do? Your newest A=E is inarguable, clearly true. How clever you are to have figured out the last piece so quickly! The other papers are all complicated and confusing, and they all raise questions, but somebody must have refereed them and accepted/published them. Congratulations on the solution of the A=Z problem! Well done!
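To see how the pieces chain together, here is a toy sketch using the labels above: treat every published equivalence as an edge, and the "solution" of A=Z amounts to A and Z falling into the same connected component. The code is purely illustrative, of course:

```python
# Toy illustration with the labels from the text: published equivalences are
# edges; "A=Z is solved" once A and Z land in one connected component.

class DSU:  # minimal union-find over problem labels
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

dsu = DSU()
first_list = [("P", "Q"), ("W", "X"), ("X", "Z"), ("Q", "Z")]  # correct, public
secret = [("A", "E")]                                          # kept off the record
for u, v in first_list + secret:
    dsu.union(u, v)
print(dsu.find("A") == dsu.find("Z"))   # False: A=Z is still open

dsu.union("E", "X")  # the erroneous E=X paper from the second list gets accepted
print(dsu.find("A") == dsu.find("Z"))   # True: now "A=Z" formally follows
```

Note that the single bad edge E=X is all it takes to merge the two components, which is exactly why no individual paper on the second list raises a flag.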
It might take years or even decades until the area reaches a consensus that one should simply ignore the erroneous E=X paper and restore "A=?Z" to its status as an open problem. The Annals will refuse to publish a retraction — technically they only published a correct A=E reduction, so it's all the other journals' fault. It will all be good again, back to normal. But soon after, new papers such as G=U and B=R will start to appear, and the agony will continue anew…
From math to art
Now that I have (hopefully) convinced you that a high number of publications is an achievable but ultimately futile goal, how should you judge the papers? Do they at least make a nonnegative contribution to one's CV? The answer to the latter question is "No". The contribution can be negative. One way to think about it is by invoking the high-end art market.
Any art historian would be happy to vouch that the worth of a painting hinges heavily on the identity of the artist. But why should it? If the whole purpose of a piece of art is to evoke some feelings, how does the artist figure into this formula? This is super naïve, obviously, and I am sure you all understand why. My point is that things are not so simple.
One way to see a pattern among famous artists is to realize that they don't just create "one off" paintings, but rather series. For example, Monet famously had his haystack and Rouen Cathedral series, Van Gogh had a sunflowers series, Mondrian had a distinctive style with his "tableau" and "composition" series, etc. Having a recognizable, very distinctive style is important, suggesting that paintings in a series are valued differently than those that are not, even if they are by the same artist.
Finally, scarcity is an issue. For example, Rodin's Thinker is one of the most recognizable sculptures in the world. So is the Celebration series by Jeff Koons. While the latter keeps fetching enormous prices at auctions, the latest sale of a Thinker couldn't get a fifth of the Yellow Balloon Dog price. It could be because balloon animals are so cool, but it could also be that there are 27 Thinkers in total, all made from the same cast. OTOH, there are only 5 balloon dogs, and they all have distinctly different colors, making them both instantly recognizable yet still unique. You get it now — it's complicated…
What papers to write
There isn't anything objective here, of course, but thinking of art helps. Let's figure this out by working backward. At the end, you need to be able to give a good colloquium-style talk about your work. What kind of papers should you write to give such a talk?
- You can solve a major open problem. The talk writes itself then. You discuss the background, many famous people’s attempts and partial solutions. Then state your result and give an idea of the proof. Done. No need to have a follow up or related work. Your theorem speaks for itself. This is analogous to the most famous paintings. There are no haystacks or sunflowers on that list.
- You can tell a good story. I already wrote about how to write a good story in a math paper, and this is related. You start your talk by telling what's the state of the sub-area, what the major open problems are, and how different aspects of your work fit into the picture. Then talk about how the technology that you developed over several papers positioned you to make a major advance in the area, which is your most recent work. This is analogous to a series of paintings.
- You can prove something small and nice, but be an amazing lecturer. You mesmerize the audience with your eloquence. For about 5 minutes after your talk they will keep thinking this little problem you solved is the most important result in all of mathematics. This feeling will fade, but good vibes will remain. They might still hire you — such talent is rare and teaching excellence is very valuable.
That's it. If you want to give a good job talk, there is no other way to do it. This is why writing many one-off little papers makes very little sense. A good talk is not a patchwork quilt – you can't make it out of disparate pieces. In fact, I have heard some talks where people tried to do that. They always have the coherence of a portrait gallery of different subjects by different artists.
Back to the bedtime questions — the answer should be easy to guess now. If your little paper fits the narrative, do write it and publish it. If it helps you tell a good story — that sounds great. People in the area will want to know that you are brave enough to make a push towards a difficult problem using the tools or results you previously developed. But if it’s a one-off thing, like you thought for some reason that you could solve a major open problem in another area — why tell anyone? If anything, this distracts from the story you want to tell about your main line of research.
How to judge other people’s papers
First, you do what you usually do. Read the paper, make a judgement on the validity and relative importance of the result. But then you supplement the judgement with what you know about the author, just like when you judge a painting.
This may seem controversial, but it’s not. We live in an era of thousands of math journals which publish in total over 130K papers a year (according to MathSciNet). The sheer amount of mathematical research is overwhelming and the expertise has fractured into tiny sub-sub-areas, many hundreds of them. Deciding if a paper is a useful contribution to the area is by definition a function of what the community thinks about the paper.
Clearly, you can't poll all members of the community, but you can ask a couple of people (usually called referees). And you can look at how previous papers by the author were received by the community. This is why in the art world they always write about recent sales: for what money and which museums or private collections bought the previous paintings, etc. Let me give you some math examples.
Say, you are an editor. Somebody submits a bijective proof of a binomial identity. The paper is short but nice. Clearly publishable. But then you check previous publications and discover the author has several/many other published papers with nice bijective proofs of other binomial identities, and all of them have mostly self-citations. Then you realize that in the ocean of binomial identities you can’t even check if this work has been done before. If somebody in the future wants to use this bijection, how would they go about looking for it? What will they be googling for? If you don’t have good answers to these questions, why would you accept such a paper then?
Say, you are hiring a postdoc. You see the files of two candidates in your area. Both have excellent, well written research proposals. One has 15 papers, the other just 5. The first is all over the place, and can do and solve anything. The second is studious and works towards building a theory. You only have time to read the proposals (nobody has time to read all 20 papers). You look at the best papers of each, and they are of similar quality. Who do you hire?
That depends on who you are looking for, obviously. If you are a fancy shmancy university where there are many grad students and postdocs all competing with each other, none working closely with their postdoc supervisor — probably the first one. Lots of random papers is a plus — the candidate clearly adapts well and will work with many others without need for supervision. There is even a chance that they prove something truly important; it's hard to say, right? Whether they get a good TT job afterwards, and what kind of job that would be, is really irrelevant — other postdocs will be coming in a steady flow anyway.
But if you want this new postdoc to work closely with a faculty member at your university, someone intent on building something valuable, so that they are able to give a nice job talk telling a good story at the end, hire the second one. The first is much too independent and will probably be unable to concentrate on anything specific. The amount of supervision tends to decrease, not increase, as people move up. Left to their own devices, these postdocs will give you more of the same, so the choice becomes easy.
Say, you are looking at a paper submitted to you as an editor of an obscure journal. You need a referee. You look at the previous papers by the authors and see lots of repeated names. Maybe it's a clique? Make sure your referees are not from this clique and are completely unrelated to it in any way.
Or, say, you are looking at a paper in your area which claims to have made an important step towards resolving a major conjecture. The first thing you do is look at previous papers by the same person. Have they said the same before? Was it the same or a different approach? Have any of their papers been retracted or major mistakes found? Do they have several parallel papers which prove not exactly related results towards the same goal? If the answer is Yes, this might be a zero-knowledge publishing attempt. Do nothing. But do tell everyone in the area to ignore this author until they publish one definitive paper proving all their claims. Or not, most likely…
P.S. I realize that many well meaning journals have double blind review. I understand where they are coming from, but I think that in the case of math this is misguided. This post is already much too long for me to talk about that — some other time, perhaps.
What we’ve got here is failure to communicate
Here is a lengthy and somewhat detached follow-up discussion of the very unfortunate Hill affair, which was much commented upon by Tim Gowers, Terry Tao and many others (see e.g. the links and comments on their blog posts). While many seem to be universally distraught by the story, and there are some clear disagreements on what happened, there are even deeper disagreements on what should have happened. The latter question is the subject of this blog post.
Note: Below we discuss both the ethical and moral aspects of the issue. Be patient and finish reading before commenting with your disagreements — there is a lengthy disclaimer at the end.
Review process:
- When the paper is submitted, there is a very important email acknowledging receipt of the submission. Large publishers have systems that send such emails automatically. Until this email is received, the paper is not considered submitted. For example, it is not unethical for the author to get tired of waiting to hear from the journal and submit elsewhere instead. If the journal later comes back and says "sorry for the wait, here are the reports", the author should just inform the journal that the paper is under consideration elsewhere and should be considered withdrawn (this happens sometimes).
- Similarly, there is a very important email acknowledging acceptance of the submission. Until this point the editors ethically can do as they please, even reject the paper with multiple positive reports. Morality of the latter is in the eye of the beholder (cf. here), but there are absolutely no ethical issues here unless the editor violated the rules set up by the journal. In principle, editors can and do make decisions based on informal discussions with others, this is totally fine.
- If a journal withdraws acceptance after the formal acceptance email is sent, this is potentially a serious violation of ethical standards. Major exception: this is not unethical if the journal follows certain procedural steps (see the section below). This should not be done except in some extreme circumstances, such as the last minute discovery of a counterexample to the main result, which the author refuses to recognize by voluntarily withdrawing the paper. It is not immoral, since until the actual publication no actual harm is done to the author.
- The next key event is the publication of the article, whether online or in print, usually coupled with the transfer of copyright. If the journal officially "withdraws acceptance" after the paper is published, without deleting the paper, this is not immoral, but whether it is ethical depends on the procedural steps, as in the previous item.
- If a journal deletes the paper after publication, online or otherwise, this is a gross violation of both moral and ethical standards. Journals which do that should be ostracized regardless of their reasoning for this act. Major exception: the journal has legal reasons, e.g. the author violated copyright laws by lifting from another published article, as in the Dănuț Marcu case (see below).
Withdrawal process:
- As we mentioned earlier, the withdrawal of an accepted or published article should be extremely rare, reserved for extreme circumstances such as a major math error in a not-yet-published article, or a gross ethical violation by the author or by the handling editor of a published article.
- For a published article with a major math error or which was later discovered to be known, the journal should not withdraw the article but instead work with the author to publish an erratum or an acknowledgement of priority. Here an erratum can be either fixing/modifying the results, or a complete withdrawal of the main claim. An example of the latter is an erratum by Daniel Biss. Note that the journal can in principle publish a note authored by someone else (e.g. this note by Mnёv in the case of Biss), but this should be treated as a separate article and not a substitute for an erratum by the author. A good example of acknowledgement of priority is this one by Lagarias and Moews.
- To withdraw the disputed article the journal’s editorial board should either follow the procedure set up by the publisher or set up a procedure for an ad hoc committee which would look into the paper and the submission circumstances. Again, if the paper is already published, only non-math issues such as ethical violations by the author, referee(s) and/or handling editor can be taken into consideration.
- Typically, a decision to form an ad hoc committee or to call for a full editorial vote should be made by the editor in chief, at the request of (usually at least two) members of the editorial board. It is totally fine to have a vote by the whole editorial board, even immediately after the issue is raised, but the threshold for a successful withdrawal motion should be set by the publisher or agreed on by the editorial board before the particular issue arises. Otherwise, the decision needs to be made by consensus, with both the handling editor and the editor in chief abstaining from the committee discussion and the vote.
- Examples of the various ways the journals act on withdrawing/retracting published papers can be found in the case of notorious plagiarist Dănuț Marcu. For example, Geometria Dedicata decided not to remove Marcu’s paper but simply issued a statement, which I personally find insufficient as it is not a retraction in any formal sense. Alternatively, SUBBI‘s apology is very radical yet the reasoning is completely unexplained. Finally, Soifer’s statement on behalf of Geombinatorics is very thorough, well narrated and quite decisive, but suffers from authoritarian decision making.
- In summary, if the process is set up in advance and is carefully followed, the withdrawal/retraction of accepted or published papers can be both appropriate and even desirable. But when the process is not followed, such action can be considered unethical and should be avoided whenever possible.
Author’s rights and obligations:
- The author can withdraw the paper at any moment until publication. It is also the author's right not to agree to any discussion or rejoinder. The journal, of course, is under no obligation to ask the author's permission to publish a refutation of the article.
- If the acceptance is issued, the author has every right not to go along with a proposed quiet withdrawal of the article. In this case the author might want to consider complaining to the editor in chief or the publisher, making the case that the editors are acting inappropriately.
- Until acceptance is issued, the author should not publicly disclose the journal where the paper is submitted, since doing so constitutes a (very minor) moral violation. Many would disagree on this point, so let me elaborate. Informing the public of the journal submission tempts people who are in competition or who have a negative opinion of the paper to interfere with the peer review process. While virtually all people virtually all the time will act honorably and not contact the journal, such temptation is undesirable and easily avoidable.
- As soon as the acceptance or publication is issued, the author should make this public immediately, by similar reasoning of avoiding temptation by third parties (of a different kind).
Third party outreach:
- If the paper is accepted but not yet published, a third party reaching out to the editor in chief requesting to publish a rebuttal of some kind is totally fine. Asking to withdraw the paper for mathematical reasons is also fine, but the request should include clear formal math reasoning, as in “Lemma 3 is false because…” The editor then has the option, but not the obligation, to trigger the withdrawal process.
- Asking to withdraw the not-yet-published paper without providing math reasoning, but saying something like “this author is a crank” or “publishing this paper will be bad for your reputation,” is akin to bullying and thus a minor ethical violation. The reason it’s minor is that it is the journal’s obligation to ignore such emails. A journal acting on such an email full of rumors or unverified facts commits an ethical violation of its own.
- If a third party learns about a publicly available paper, which may or may not be an accepted submission, with which they disagree for math or other reasons, it is ethical to contact the author directly. In fact, in the case of math issues this is highly desirable.
- If a third party learns about a paper submission to a journal without being contacted to review it, and the paper is not yet accepted, then contacting the journal is a strong ethical violation. Typically, the journal where the paper is submitted is not known to the public, so the third party is acting on information it should not have. Every such email can be considered an act of bullying, no matter the content.
- In the unlikely case where everything is as above but the name of the journal where the paper is submitted is publicly available, the third party can contact the journal. Whether this is ethical or not depends on the wording of the email. I can imagine some plausible circumstances when e.g. the third party knows that the author is the Dănuț Marcu mentioned earlier. In these rare cases the third party should make every effort to CC the email to everyone even remotely involved, such as all authors of the paper, the publisher, the editor in chief, and perhaps all members of the editorial board. If the third party feels constrained by the necessity of this broad outreach, then the case is not egregious enough, and such an email is still bullying and thus unethical.
- Once the paper is published, anyone can contact the journal for any reason, since there is little the journal can do beyond what’s described above. For example, on two different occasions I wrote to journals pointing out that their recently published results are not new and asking them to inform the authors while keeping my anonymity. Both editors said they would. One of the journals later published an acknowledgement of priority. The other did not.
Editor’s rights and obligations:
- Editors have every right to encourage submissions of papers to the journal; in fact, it’s part of their job. It is absolutely ethical to encourage submissions from colleagues, close relatives, political friends, etc. The publisher should set up a clear and unobtrusive conflict of interest directive, so that if the editor is too close to the author or the subject, he or she transfers the paper to the editor in chief, who will choose a different handling editor.
- The journal should have a clear scope, worked out by the publisher in cooperation with the editorial board. If the paper is outside the scope, it should be rejected regardless of its mathematical merit. When I was an editor of Discrete Mathematics, I would reject some “proofs” of the Goldbach conjecture and similar results on exactly that ground. If the paper prompts the journal to re-evaluate its scope, that’s fine, but the discussion should involve the whole editorial board and proceed irrespective of the paper in question. Presumably, some editors would not want to continue being on the board if the journal starts changing direction.
- If the accepted but not yet published paper seems to fall outside of the journal’s scope, other editors can request that the editor in chief initiate the withdrawal process discussed above. The wording of the request is crucial here – if the issue is neither the scope nor major math errors, but rather the weakness of the results, then this is inappropriate.
- If the issue is the possibly unethical behavior of the handling editor, then the withdrawal may or may not be appropriate depending on the behavior, I suppose. But if the author was acting ethically and the unethical behavior is solely by the handling editor, I say proceed to publish the paper and then issue a formal retraction while keeping the paper published, of course.
Complaining to universities:
- While perfectly ethical, contacting the university administration to initiate a formal investigation of a faculty member is an extremely serious step which should be avoided if at all possible. Except for the egregious cases of verifiable formal violations of the university code of conduct (such as academic dishonesty), this action in itself is akin to bullying and thus immoral.
- The code of conduct is usually available on the university website – the complainer would do well to consult it before filing a complaint. In particular, the complaint would typically be addressed to the university senate committee on faculty affairs, the office of academic integrity and/or the dean of the faculty. Whether the university president is in math or even in the same general area is completely irrelevant, as the president plays no role in the workings of the committee. In fact, when this is the case, the president is likely to recuse herself or himself from any part of the investigation and sever any contact with the complainer to avoid the appearance of impropriety.
- When a formal complaint is received, the university is usually compelled to initiate an investigation and set up an ad hoc subcommittee of the faculty senate which thoroughly examines the issue. The faculty member’s tenure and livelihood are on the line. They can be asked to retain legal representation and can be prohibited from discussing the matters of the case with outsiders without university lawyers and/or PR people signing off on every communication. Once the investigation is complete, the findings are kept private except for administrative decisions such as firing, suspension, etc. In summary, if the author seeks information rather than punishment, this route is counterproductive.
Complaining to institutions:
- I don’t know what to make of the alleged NSF request, which could be ethical and appropriate, or even common. Then so would be complaining to the NSF about a publicly available research product supported by the agency. The issue is the opposite of that with the journals: the NSF is part of the Federal Government and is thus subject to a large number of regulations and code of conduct rules. These can explain its request. We in mathematics are rather fortunate that our theorems tend to lack any political implications in the real world. But perhaps researchers in Political Science and Sociology have different experiences with granting agencies; I wouldn’t know.
- Contacting the AMS can in fact be rather useful, even though it currently has no way to conduct an appropriate investigation. Put bluntly, all parties in a conflict can simply ignore the AMS’s request for documents. But maybe this should change in the future. I am not a member of the AMS, so I have no standing to tell it what to do, but I do have some thoughts on the subject. I will try to write them up at some point.
Public discourse:
- Many commenters on the case opined that while deleting a published paper is bad (I am paraphrasing), the paper is also bad for whatever reason (politics, lack of strong math, editor’s behavior, being out of scope, etc.) This is very unfortunate. Let me explain.
- Of course, discussing the math in the paper is perfectly ethical: academics can discuss any paper they like; this can be considered part of the job. Same with discussing the scope of the paper and the verifiable actions of the journal and other parties.
- Publicly discussing the personalities and motivations of the editors who published or declined to publish, of the third parties contacting editors in chief, etc., is arguably unethical and can be perceived as borderline bullying. It is also of questionable morality, as no complete set of facts is known.
- So while making a judgement on the journal’s conduct next to a judgement on the math in the paper is ethical, it seems somewhat immoral to me. When you write “yes, the journal’s actions are disturbing, but the math in the paper is poor,” we all understand that while formally these are two separate discussions, the negative judgement in the second part can provide an excuse for the misbehavior in the first part. So here is my new rule: if you would not be discussing the math in the paper without the pretext of its submission history, you should not be discussing it at all.
In summary:
I argue that for all issues related to submissions, withdrawal, etc. there is a well understood ethical code of conduct. Decisions on who behaved unethically hinge on formal details of each case. Until these formalities are clarified, making judgements is both premature and unhelpful.
Part of the problem is the lack of clarity about procedural rules at the journals, as discussed above. While large institutions such as major universities and long-established journal publishers do have such rules set up, most journals tend not to disclose them, unfortunately. Even worse, many new, independent and/or electronic journals have no such rules at all. In such an environment we are reduced to saying that this is all a failure to communicate.
Lengthy disclaimer:
- I have no special knowledge of what actually happened to Hill’s submission. I outlined what I think should have happened in different scenarios if all participants acted morally and ethically (there are no legal issues here that I am aware of). I am not trying to blame anyone and in fact, it is possible that none of these theoretical scenarios are applicable. Yet I do think such a general discussion is useful as it distills the arguments.
- I have not read Hill’s paper as I think its content is irrelevant to the discussion and since I am deeply uninterested in the subject. I am, however, interested in mathematical publishing and all academia related matters.
- What’s ethical and what’s moral are not exactly the same. As far as this post is concerned, ethical issues cover all math research/university/academic related stuff. Moral issues are more personal and community related, thus less universal perhaps. In other words, I am presenting my own POV everywhere here.
- To give specific examples of the difference, if you stole your officemate’s lunch you acted immorally. If you submitted your paper to two journals simultaneously you acted unethically. And if you published a paper based on your officemate’s ideas she told you in secret, you acted both immorally and unethically. Note that in the last example I am making a moral judgement since I equate this with stealing, while others might think it’s just unethical but morally ok.
- There is very little black & white about immoral/unethical acts, and one always needs to assign a relative measure of the perceived violation. This is similar to criminal acts, which can be a misdemeanor, a gross misdemeanor, a felony, etc.
How NOT to reference papers
In this post, I am going to tell the story of one paper whose authors misrepresented my paper and refused to acknowledge the fact. It’s also a story about the section editor of the Journal of Algebra, which published that paper and then ignored my complaints. In my usual wordy manner, I do not get to the point right away, and cover some basics first. If you want to read only the juicy parts, just scroll down…
What’s the deal with the references?
First, let’s talk about something obvious. Why do we do what we do? I mean, why do we study for many years how to do research in mathematics, read dozens or hundreds of papers, and think long thoughts until we eventually land on a good question? We then work hard, by trial and error, to find a solution. Sometimes we do this in a matter of hours and sometimes it takes years, but we persevere. We then write up the solution and submit it to a journal; sometimes it gets rejected (who knew this was solved 20 years ago?), and sometimes it is sent back for revision with various lemmas to fix. We then revise the paper, and if all goes well it gets accepted. And published. Eventually.
So, why do we do all of that? For the opportunity to teach at a good university and derive a reasonable salary? Yes, sure, to some degree. But mostly because we like doing this. And we like having our work appreciated. We like going to conferences to present it. We like it when people read our paper and enjoy it or simply find it useful. We like it when our little papers form building blocks towards bigger work, perhaps eventually helping to resolve an old open problem. All this gives us purpose, a sense of accomplishment, “social capital” if you like fancy terms.
But all this hinges on a tiny little thing we call citations. They tend to come at the end, sometimes in footnote size, yet they are the primary vehicle for our goals. If we are uncited, ignored, all hope is lost. But even if we are cited, it matters how our work is cited. The context in which it is referenced is critically important. Sometimes our results are substantially used in the proof; those are GOOD references.
Yet often our papers are mentioned in a sentence like “See [..] for related results.” Sometimes this happens out of politeness or collegiality between authors, sometimes for the benefit of the reader (it can be hard navigating a field), and sometimes the authors are being self-serving (as in “look, all these cool people wrote good papers on this subject, so my work must also be good/important/publishable”). These are NEUTRAL references – they might help others, but not the authors.
Finally, there are BAD references. Those which refer derogatorily to your work, or use it simply as a low benchmark which the new paper easily improves upon. Those which say “our bound is terribly weak, but it’s certainly better than Pak’s.” But the WORST references are those which misstate what you did, which diminish and undermine your work.
So for anyone out there who thinks the references are in the back because they are not so important – think again. They are of utmost importance – they are what makes the system work.
The story of our paper
This was in June 1997. My high school friend Sergey Bratus and I had an idea for recognizing the symmetric group Sn using the Goldbach conjecture. The idea was nice, and the algorithm was short and worked really fast in practice. We quickly typed it up and submitted it to the Journal of Symbolic Computation in September 1997. The journal gave us a lot of grief. First, they refused to seriously consider it, since the Goldbach conjecture, in the referee’s words, is “not like the Riemann hypothesis”, so we could not use it. Never mind that it had been checked for n < 10^14, covering all possible values where such an algorithm could conceivably be useful. So we rewrote the paper by adding a variation based on the ternary Goldbach conjecture, which was known for large enough values (and has since been proved in full).
The paper had no errors and resolved an open problem, but the referees were unhappy. One of them requested we change the algorithm to also work for the alternating group. We did. In the next round the same or another referee requested we cover the case of unknown n. We did. In the next round one referee requested we make a new implementation of the algorithm, this time in GAP, and report the results. We did. Clearly, the referees did not want our paper to get published, but did not know how to say it. Yet we persevered. After four back-and-forth revisions the paper more than doubled in size (completely unnecessarily). This took two years, almost to the day, but the paper did get accepted and published. Within a year or two, it became a standard routine in both the GAP and MAGMA libraries.
[0] Sergey Bratus and Igor Pak, Fast constructive recognition of a black box group isomorphic to Sn or An using Goldbach’s Conjecture, J. Symbolic Comput. 29 (2000), 33–57.
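Since the Goldbach connection may sound mysterious, here is a minimal toy sketch. The usual caveats apply: this is emphatically not the black-box algorithm of [0], which works with abstract group elements and is far more involved; all function names below are mine, and sympy.isprime is just a stand-in for any primality test. The sketch only illustrates two elementary ingredients: every even n ≥ 8 can be written as p + q for distinct odd primes p and q (Goldbach), and exactly n!/(pq) elements of Sn are a disjoint product of a p-cycle and a q-cycle, so uniform sampling finds one after about pq tries on average.

```python
# A toy illustration only, NOT the algorithm of [0]; all names are mine.
import random

from sympy import isprime  # a stand-in for any primality test


def goldbach_pair(n):
    """Return distinct odd primes (p, q) with p + q = n, for even n >= 8.

    Goldbach's conjecture guarantees such a pair exists, and it has been
    verified far beyond any n for which recognition would ever be run.
    """
    for p in range(3, n // 2, 2):  # p < n/2 forces p != q
        q = n - p
        if isprime(p) and isprime(q):
            return p, q
    return None  # reaching this would contradict Goldbach's conjecture


def cycle_type(perm):
    """Sorted cycle lengths of a permutation given as a list, perm[i] = image of i."""
    seen, lengths = [False] * len(perm), []
    for i in range(len(perm)):
        if not seen[i]:
            j, length = i, 0
            while not seen[j]:
                seen[j], j, length = True, perm[j], length + 1
            lengths.append(length)
    return sorted(lengths)


def find_goldbach_element(n, max_tries=1000000):
    """Sample uniform elements of S_n until one is a disjoint product of a
    p-cycle and a q-cycle, where p + q = n and p, q are distinct primes.

    Exactly n!/(p*q) elements of S_n have this cycle type, so each sample
    succeeds with probability 1/(p*q) and about p*q tries suffice on
    average. The q-th power of such an element is a p-cycle, and vice versa.
    """
    p, q = goldbach_pair(n)
    target = sorted([p, q])
    for _ in range(max_tries):
        perm = list(range(n))
        random.shuffle(perm)  # uniform random element of S_n
        if cycle_type(perm) == target:
            return perm, p, q
    return None
```

In a genuine black-box group one cannot read off cycle types, of course; detecting such elements from the orders of group elements alone is exactly the kind of work done in [0], which is why the above is only a cartoon of the idea.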
Until a few days ago I never knew what problem the referees actually had with our paper. Why did a short, correct and elegant paper need to become long, with cumbersome extensions of the original material, for the journal to accept it? I was simply too inexperienced to realize that this was not a difference in culture (CS vs. math). Read on to find out what I now realized.
Our competition
After we wrote our paper, submitted it, and publicized it on our websites and at various conferences, I started noticing strange things. In paper after paper in Computational Group Theory, roughly half would not reference our paper, but would cite another paper by 5 people in the field which was apparently doing the same or similar things. I recall I wrote to the authors of this competition paper, but they wrote back that the paper was not written yet. To say I was annoyed would be to understate the feeling.
In one notable instance, I confronted Bill Kantor (by email), who had helped us with good advice earlier. He gave an ICM talk on the subject and cited the competition paper but not ours, even though I had personally shown him the submitted preprint of [0] back in 1997 and explained our algorithm. He replied that he did not recall whether we sent him the paper. I found my original email with the paper and forwarded it to him. He replied that he probably never read that email. I then forwarded him his own reply to that very email. Out of excuses, Kantor simply did not reply. You see, the calf can never beat the oak tree.
Eventually, the competition paper was published 3 years after our paper:
[1] Robert Beals, Charles Leedham-Green, Alice Niemeyer, Cheryl Praeger, Ákos Seress, A black-box group algorithm for recognizing finite symmetric and alternating groups. I, Trans. AMS 355 (2003), 2097–2113.
The paper claims that the sequel II by the same authors is forthcoming, but it has yet to appear. It was supposed to cover the case of unknown n, which [0] was required to cover, but I guess the same rules do not apply to [1]. Or maybe JSC is more selective than TAMS, one never knows… The never-coming sequel II will play a crucial part later in our story.
Anyhow, it turns out, the final result in [1] is roughly the same as in [0]. Although the details are quite different, it wasn’t really worth the long wait. I quote from [1]:
The running time of constructive recognition in [0] is about the same.
The authors then show incredible dexterity in an effort to claim that their result is somehow better, by finding minor points of difference between the algorithms and claiming their importance. For example, take a look at this passage:
The paper [0] describes the case G = Sn, and sketches the necessary modifications for the case G = An. In this paper, we present a complete argument which works for both cases. The case G = An is more complicated, and it is the more important one in applications.
Let me untangle this. First, which case is more “important” in applications is never justified, and no sources are cited. Second, this says that BLNPS either haven’t read [0] or are being intentionally misleading, as the case of An there is treated essentially the same way as Sn, with the timing differing only by a constant factor. On the other hand, this suggests that [1] treats An in a substantively more complicated way than Sn. Shouldn’t that be an argument in favor of [0] over [1], not the other way around? I could go on with other similarly dubious claims.
The aftermath
From this point on, multiple papers either ignored [0] in favor of [1] or cited [0] pro forma, somehow emphasizing [1] as the best result. For example, the following paper, with 3 out of 5 coauthors of [1], goes to great lengths touting [1] and never even mentions [0].
[2] Alice Niemeyer, Cheryl Praeger, Ákos Seress, Estimation Problems and Randomised Group Algorithms, Lecture Notes in Math. 2070 (2013), 35–82.
When I asked Niemeyer as to how this could have happened, she apologized and explained: “The chapter was written under great time pressure.”
For an example of a more egregious kind, consider this paper:
[3] Robert Beals, Charles Leedham-Green, Alice Niemeyer, Cheryl Praeger, Ákos Seress, Constructive recognition of finite alternating and symmetric groups acting as matrix groups on their natural permutation modules, J. Algebra 292 (2005), 4–46.
They unambiguously claim:
The asymptotically most efficient black-box recognition algorithm known for An and Sn is in [1].
Our paper [0] is not mentioned anywhere nearby, and is cited pro forma for other reasons. But just two years earlier, the exact same 5 authors stated in [1] that the timing is “about the same”. So, what happened to our algorithm in the intervening two years? It slowed down? Or perhaps the one in [1] got faster? Or, more plausibly, BLNPS simply realized that they could get away with more misleading referencing at JOA than TAMS would ever allow?
Again, I could go on with a dozen other examples of this phenomenon. But you get the idea…
My boiling point: the 2013 JOA paper
For years, I held my tongue, thinking that in the age of Google Scholar these self-serving passages are not fooling anybody, that anyone interested in the facts is just a couple of clicks away from our paper. But I was naive. This strategy of ignoring and undermining [0] eventually paid off in this paper:
[4] Sebastian Jambor, Martin Leuner, Alice Niemeyer, Wilhelm Plesken, Fast recognition of alternating groups of unknown degree, J. Algebra 392 (2013), 315–335.
The abstract says it all:
We present a constructive recognition algorithm to decide whether a given black-box group is isomorphic to an alternating or a symmetric group without prior knowledge of the degree. This eliminates the major gap in known algorithms, as they require the degree as additional input.
And just to drive the point home, here is the passage from the first paragraph in the introduction.
For the important infinite family of alternating groups, the present black-box algorithms [0], [1] can only test whether a given black-box group is isomorphic to an alternating or a symmetric group of a particular degree, provided as additional input to the algorithm.
Ugh… But wait, our paper [0], which they are citing, already HAS such an algorithm! And it’s not like it is hidden in the paper somehow – Section 9 is titled “What to do if n is not known?” Are the authors JLNP blind, intentionally misleading, or did they simply never read [0]? Or is it the “great time pressure” argument again? What could possibly justify such an outrageous error?
Well, I wrote to JLNP, but none of them answered. Nor acknowledged our priority. Nor updated the arXiv posting to reflect the error. I don’t blame them – people without academic integrity simply don’t see the need for that.
My disastrous battle with JOA
Once I realized that JLNP were not interested in acknowledging our priority, I wrote to the Journal of Algebra asking “what can be done?” Here is a copy of my email. I did not request a correction, and was unbelievably surprised to hear the following from Gerhard Hiss, the Editor of the Section on Computational Algebra of the Journal of Algebra:
[..] the authors were indeed careless in this attribution.
In my opinion, the inaccuracies in the paper “Fast recognition of alternating groups of unknown degree” are not sufficiently serious to make it appropriate for the journal to publish a correction.
Although there is some reason for you to be mildly aggrieved, the correction you ask for appears to be inappropriate. This is also the judgment of the other editors of the Computational Algebra Section, who have been involved in this discussion.
I have talked to the authors of the paper Niemeyer et al. and they confirmed that the [sic.] did not intend to disregard your contributions to the matter.
Thus I very much regret this unpleasent [sic.] situation and I ask you, in particular with regard to the two young authors of the paper, to leave it at that.
This email left me floored. So, I was graciously permitted by the JOA to be “mildly aggrieved“, but not more? Basically, Hiss is saying that the answer to my question “What can be done?” is NOTHING. Really?? And I should stop asking for just treatment by the JOA out of “regard to the two young authors”? Are you serious??? It’s hard to know where to begin…
As often happens in such cases, an unpleasant email exchange ensued. When I complained to Michel Broué, he responded that Gerhard Hiss is a “respectable man” and that I should search for justice elsewhere.
In all fairness to JOA, one editor did behave honorably. Derek Holt wrote to me directly. He admitted that he was the handling editor for [4]. He writes:
Although I did not referee the paper myself, I did read through it, and I really should have spotted the completely false statement in the paper that you had not described any algorithm for determining the degree n of An or Sn in your paper with Bratus. So I would like to apologise now to you and Bratus for not spotting that. I almost wrote to you back in January when this discussion first started, but I was dissuaded from doing so by the other editors involved in the discussion.
Let me parse this, just in case. Holt is the person who implemented the Bratus-Pak algorithm in Magma. Clearly, he read the paper. He admits the error and our priority, and says he wanted to admit it publicly but other unnamed editors stopped him. Now, what about this alleged unanimity of the editorial board? What am I missing? Ugh…
What really happened? My speculation, part I. The community.
As I understand it, Computational Group Theory is a small, close-knit community, which as a result suffers from pervasive groupthink. Here is a passage from Niemeyer’s email to me:
We would also like to take this opportunity to mention how we came about our algorithm. Charles Leedham-Green was visiting UWA in 1996 and he worked with us on a first version of the algorithm. I talked about that in Oberwolfach in mid 1997 (abstract on OW Web site).
The last part is indeed true. The workshop abstracts are here. Niemeyer’s abstract did not mention Leedham-Green or anyone else she could have meant by “us” (from the context – Niemeyer and Praeger), but let’s not quibble. The 1996 date is somewhat more dubious. It is contradicted by Niemeyer and Praeger themselves, who clarified the timeline in the talk they gave in Oberwolfach in mid-2001 (see the abstract here):
This work was initiated by intense discussions of the speakers and their colleagues at the Computational Groups Week at Oberwolfach in 1997.
Anyhow, we accept that both algorithms were obtained independently, in mid-1997. It’s just that we finished our paper [0] in 3 months, while it took BLNPS about 4 years until it was submitted in 2001.
Next quote from Niemeyer’s email:
So our work was independent of yours. We are more than happy to acknowledge that you and Sergey [Bratus] were the first to come up with a polynomial time algorithm to solve the problem [..].
The second statement is just not true, in many ways; nor is it our grievance, as we only claim that [0] contains an algorithm practically superior and theoretically comparable to that in [1], so there is no reason at all to single out [1] over [0], as is commonly done in the field. In fact, here is a quote from [1] flatly contradicting Niemeyer’s claim:
The first polynomial-time constructive recognition algorithm for symmetric and alternating groups was described by Beals and Babai.
Now, note that Hiss, Holt, Kantor and all 5 authors BLNPS were at both the 1997 and the 2001 Oberwolfach workshops (neither Bratus nor I was invited). We believe the whole community operates on the principles “they staked a claim on this problem” and “what hasn’t happened at Oberwolfach, hasn’t happened.” Such principles make it easier for members of the community to treat BLNPS as the pioneers of this problem, and to reference only them, even though our paper was published before [1] was even submitted. Of course, such attitudes also remove the competitive pressure to write the paper quickly – where else in math, and especially in CS, do people take 4–5 years(!) to write a technically elementary paper? (The elementary part was also true of [0], which is why we could write it in under 3 months.)
In 2012, Niemeyer decided to finally finish the long-announced part II of [1]. She did not bother to check what’s in our paper [0]. Indeed, why should she – everyone in the community already “knows” that she is an original (co-)author of the idea, so [4] can be written as if [0] never happened. Fortunately for her, she was right on this point, as neither the referees, nor the handling editor, nor the section editor contradicted the false statements right in the abstract and the introduction.
My speculation, part II. Why the JOA rebuke?
Let’s look at the timing. In the Fall of 2012, Niemeyer visited Aachen. She started collaborating with Professor Plesken of RWTH Aachen and his two graduate students, Jambor and Leuner. The paper was submitted to JOA on December 21, 2012, and the published version lists the affiliations of all authors but Jambor as Aachen (Jambor moved to Auckland, NZ, before publication).
Now, Gerhard Hiss is a Professor at RWTH Aachen, working in the field. To repeat, he is the Section Editor of JOA on Computational Algebra. Let me note that [4] was submitted to JOA three days before Christmas 2012, on the same day (according to a comment I received from Eamonn O’Brien of the JOA editorial board) on which Hiss and Niemeyer apparently attended a department Christmas party.
My questions: is it fair for a section editor to be making a decision contesting results by a colleague (Plesken), two graduate students (Jambor and Leuner), and a friend (Niemeyer), all currently or recently of his department? Wouldn’t immediate recusal by Editor Hiss and an investigation by an independent editor have been a more appropriate course of action under the circumstances? In fact, this is a general Elsevier guideline, if I understand it correctly.
What now?
Well, I am at the end of the line on this issue. Public shaming is the only thing that can really work against groupthink. To spread the word, please LIKE this post, REPOST it here on WP, on FB, on G+, forward it by email, or do whatever you think appropriate. Let’s make sure that whenever somebody googles these names, this post comes up at the top of the search results.
P.S. Full disclosure: I have one paper in the Journal of Algebra, on an unrelated subject. Also, I am an editor of Discrete Mathematics, which, like JOA, is owned by the parent company Elsevier.
UPDATE (September 17, 2014): I am disallowing all comments on this post, as some submitted comments were crude and/or offensive. I do, however, agree with some of the helpful criticism. Some claimed that I crossed the line with some personal speculations, so I removed a paragraph. Also, Eamonn O’Brien clarified for me the inner workings of the JOA editorial board, so I removed my incorrect speculations on that point. Neither change is germane to my two main complaints: that [0] has been repeatedly mistreated in the area, most notably in [4], and that Editor Hiss should have recused himself from handling my formal complaint about [4].
UPDATE (October 14, 2014): In the past month, over 11K people viewed this post (according to the WP stat tools). This is a simply astonishing number for an inactive blog. Thank you all for spreading the word, whether supportive or otherwise! Special thanks to those of you in the field, who wrote heartfelt emails, also some apologetic and some critical – this was all very helpful.