Archive

Archive for the ‘Journals’ Category

Innovation anxiety

December 28, 2022 3 comments

I am on record as liking the status quo of math publishing. It's very far from ideal, as I repeatedly discuss on this blog; see e.g. my posts on the elitism, the invited issues, the non-free aspect of it in the electronic era, and especially the pay-to-publish corruption. But overall it's ok. I give it a B+. It took us about two centuries to get where we are now. It may take us a while to get to an A.

Given that there is room for improvement, it's unsurprising that some people make an effort. The problem is that their efforts may be moving us in the wrong direction. I am talking specifically about two ideas that frequently come up, proposed by people with the best intentions: abolishing peer review and anonymizing the authors' names at the review stage. The former is radical, detrimental to our well-being, and unlikely to take hold in the near future. The latter is already here and is simply misguided.

Before I take on both issues, let me take a bit of a rhetorical detour to make a rather obvious point. I will be quick, I promise!

Don’t steal!

Well, this is obvious, right? But why not? Let’s set all moral and legal issues aside and discuss it as adults. Why should a person X be upset if Y stole an object A from Z? Especially if X doesn’t know either Y or Z, and doesn’t really care who A should belong to. Ah, I see you really don’t want to engage with the issue — just like me you already know that this is appalling (and criminal, obviously).

However, if you look objectively at the society we live in, there is clearly some gray area. Indeed, some people think that taxation is a form of theft ("taking money by force", you see). Millions of people think that illegally downloading movies is not stealing. My university administration thinks stealing my time by making me fill out all kinds of forms is totally kosher. The country where I grew up was very proud of the many ways it stole my parents' rights to liberty and the pursuit of happiness (so that they could keep their lives). The very same country thinks it's ok to invade and steal territory from a neighboring country. Apparently many people in the world are ok with this (as in "not my problem"). I am not comparing any of these, just challenging the "isn't it obvious" premise.

Let me give a purely American answer to the “why not” question. Not the most interesting or innovative argument perhaps, but most relevant to the peer review discussion. Back in September 1789, Thomas Jefferson was worried about the constitutional precommitment. Why not, he wondered, have a revolution every 19 years, as a way not to burden future generations with rigid ideas from the past?

In February 1790, James Madison painted a grim picture of what would happen: “most of the rights of property would become absolutely defunct and the most violent struggles be generated” between property haves and have-nots, making remedy worse than the disease. In particular, allowing theft would be detrimental to continuing peaceful existence of the community (duh!).

In summary: a fairly minor change in the core part of the moral code can lead to drastic consequences.

Everyone hates peer review!

Indeed, I don’t know anyone who succeeded in academia without a great deal of frustration over the referee reports, many baseless rejections from the journals, or without having to spend many hours (days, weeks) writing their own referee reports. It’s all part of the job. Not the best part. The part well hidden from outside observers who think that professors mostly teach or emulate a drug cartel otherwise.

Well, help is on the way! Every now and then somebody notable comes along and proposes to abolish the whole thing. Here is one, two, three just in the last few years. Enough? I guess not. Here is the most recent one, by Adam Mastroianni, tweeted by Marc Andreessen to his 1.1 million followers.

This is all laughable, right? Well, hold on. Over the past two weeks I spoke to several well known people who think that abolishing peer review would make the community more equitable and would likely foster the innovation. So let’s address these objections seriously, point by point, straight from Mastroianni’s article.

(1) “If scientists cared a lot about peer review, when their papers got reviewed and rejected, they would listen to the feedback, do more experiments, rewrite the paper, etc. Instead, they usually just submit the same paper to another journal.” Huh? The same level journal? I wish…

(2) “Nobody cares to find out what the reviewers said or how the authors edited their paper in response.” Oh yes, they do! Thus the multiple rounds of review, sometimes over several years. Thus a lot of frustration. Thus occasional rejections after many rounds if the issue turns out to be non-fixable. That’s the point.

(3) “Scientists take unreviewed work seriously without thinking twice.” Sure, why not? Especially if they can understand the details. Occasionally they give well known people the benefit of the doubt, at least for a while. But then they email you and ask “Is this paper ok? Why isn’t it published yet? Are there any problems with the proof?” Or sometimes some real scrutiny happens outside of the peer review.

(4) “A little bit of vetting is better than none at all, right? I say: no way.” Huh? In math this is plainly ridiculous, but the author is moving in another direction. He supports this outrageous claim by saying that in biomedical sciences the peer review “fools people into thinking they’re safe when they’re not. That’s what our current system of peer review does, and it’s dangerous.” Uhm. So apparently Adam Mastroianni thinks that if you can’t get 100% certainty, it’s better to have none. I feel like I’ve heard the same sentiment from my anti-masking relatives.

Obviously, I wouldn’t know and honestly couldn’t care less about how biomedical academics do research. Simply put, I trust experts in other fields and don’t think I know better than them what they do, should do or shouldn’t do. Mastroianni uses “nobody” 11 times in his blog post — must be great to have such a vast knowledge of everyone’s behavior. In any event, I do know that modern medical advances are nothing short of spectacular overall. Sounds like their system works really well, so maybe let them be…

The author concludes by arguing that it’s so much better to just post papers on the arXiv. He did that with one paper, put some jokes in it and people wrote him nice emails. We are all so happy for you, Adam! But wait, who says you can’t do this with all your papers in parallel with journal submissions? That’s what everyone in math does, at least the arXiv part. And if the journals where you publish don’t allow you to do that, that’s a problem with these specific journals, not with the whole peer review.

As for the jokes — I guess I am a mini-expert. Many of my papers have at least one joke. Some are obscure. Some are not funny. Some are both. After all, “what’s life without whimsy“? The journals tend to be ok with them, although some make me work for it. For example, in this recent paper, the referee asked me to explain in the acknowledgements specifically why I am thankful to Jane Austen. So I did as requested: she was the inspiration behind the first sentence (it’s on my long list of starters in my previous blog post). Anyway, you can do this, Adam! I believe in you!

Everyone needs peer review!

Let’s try to imagine now what would happen if peer review were abolished. I know, this is obvious. But let’s game it out, post-apocalyptic style.

(1) All papers will be posted on the arXiv. In a few curious cases an informal discussion will emerge, like this one about this recent proof of the four color theorem. Most papers will be ignored, just like they are ignored now.

(2) Without a neutral vetting process, journals will turn to publishing “who you know”, meaning the best known and best connected people in the area, as “safe bets” whose work was repeatedly peer reviewed in the past. Junior mathematicians will have no way to get published in leading journals other than collaborating (i.e. writing “joint papers”) with top people in the area.

(3) Knowing that their papers won’t be refereed, people will start taking shortcuts in their arguments. Soon enough some fraction of papers will turn out to be unsalvageably incorrect. Embarrassments like the ones discussed on this page will become a common occurrence. Eventually, Atiyah-style proofs of famous theorems will become widespread, confusing anyone and everyone.

(4) Granting agencies will start giving grants only to the best known people in the area who have the most papers in the best known journals (if you can’t peer review papers, you can’t expect to peer review grant proposals, right?). Eventually they will just stop, opting to give more money to the best universities and institutions, in effect outsourcing their work.

(5) Universities will eventually abolish tenure as we know it, because if anyone is free to work on whatever they want without real rewards or accountability, what’s the point of tenure protection? With no objective standards, in university hiring the letters will play the ultimate role, along with the many biases and random preferences of hiring committees.

(6) People who work in deeper areas will be spending an extraordinary amount of time reading and verifying earlier papers in the area. Faced with these difficulties, graduate students will stay away from such areas, opting for shallower ones. Eventually these areas will diminish to the point of near-extinction. If you think this is unlikely, look into the post-1980 history of finite group theory.

(7) In shallow areas, junior mathematicians will become increasingly innovative: to avoid reading the older literature, they will instead try to come up with a completely new question or a new theory which can be at least partially resolved in 10 pages. They will start running unrefereed competitive conferences where they will exhibit their little papers as works of modern art. The whole of math will become subjective and susceptible to fashion trends, not unlike some parts of theoretical computer science (TCS).

(8) Eventually people in other fields will start saying that math is trivial and useless, that everything mathematicians do can be done by an advanced high schooler in 15 min. We’ve seen this all before: think of the candid comments by Richard Feynman, or these uneducated proclamations by this blog’s old villain Amy Wax. In regards to combinatorics, such views were prevalent until relatively recently, see my “What is combinatorics” with some truly disparaging quotations, and this interview by László Lovász. Soon after, everyone (physics, economics, engineering, etc.) will start developing their own kind of math, which will be the end of the whole field as we know it.

(100) In the distant future, after human civilization dies and rises up again, historians will look at the ruins of this civilization and wonder what happened. They will never learn that it all started with Adam Mastroianni, when he proclaimed that “science must be free“.

Less catastrophic scenarios

If abolishing peer review seems a little far-fetched, consider the following less drastic measures to change or “improve” peer review.

(i) Say, you allow simultaneous submissions to multiple journals, and whichever accepts first gets the paper. Currently, the waiting time is terribly long, so one can argue this would be an improvement. In support of this idea, one can argue that in journalism pitching a story to multiple editors is routine, that job applications are concurrent to all universities, etc. In fact, there is even an algorithm to resolve these kinds of situations successfully. Let’s game out this fantasy.
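The algorithm alluded to is presumably deferred acceptance (Gale–Shapley), the one behind medical residency matching: proposers go down their preference lists while each receiver tentatively holds its best offer so far. Here is a minimal sketch of the paper-proposing version, with papers "submitting" to journals; all the preference data is, of course, made up for illustration:

```python
# Deferred-acceptance (Gale-Shapley) sketch: papers "propose" to journals
# in preference order; each journal tentatively holds the best proposal
# it has seen, releasing its previous one back into the pool.

def stable_match(paper_prefs, journal_prefs):
    """Return a stable paper -> journal assignment (one paper per journal)."""
    # rank[j][p]: how journal j ranks paper p (lower is better)
    rank = {j: {p: i for i, p in enumerate(prefs)}
            for j, prefs in journal_prefs.items()}
    free = list(paper_prefs)            # papers not currently held anywhere
    next_choice = {p: 0 for p in paper_prefs}
    held = {}                           # journal -> paper it currently holds
    while free:
        p = free.pop()
        j = paper_prefs[p][next_choice[p]]  # p's best journal not yet tried
        next_choice[p] += 1
        if j not in held:
            held[j] = p                 # journal was empty: tentatively accept
        elif rank[j][p] < rank[j][held[j]]:
            free.append(held[j])        # journal trades up; old paper re-enters
            held[j] = p
        else:
            free.append(p)              # rejected; p will try its next journal
    return {p: j for j, p in held.items()}

# Illustrative preferences (hypothetical, complete on both sides):
papers = {"A": ["Annals", "Duke", "CT"],
          "B": ["Annals", "CT", "Duke"],
          "C": ["Duke", "Annals", "CT"]}
journals = {"Annals": ["B", "A", "C"],
            "Duke":   ["A", "C", "B"],
            "CT":     ["C", "B", "A"]}
print(stable_match(papers, journals))   # prints {'A': 'Duke', 'B': 'Annals', 'C': 'CT'}
```

The classical guarantee is that the result is stable: no paper and journal would both rather be matched to each other than to their assigned partners. Whether real editors and authors would behave like truthful preference lists is, of course, another matter.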

The first thing that would happen is that journals would be overwhelmed with submissions. The referees are already hard to find. After the change, they would start refusing all requests, since they too would be overwhelmed, and it would be unclear whether any given report would even be used. The editors would refuse all but a few selected papers from leading mathematicians. Chat rooms would emerge in the style of “who is refereeing which paper” (cf. PubPeer), to either collaborate or at least avoid redundant effort. But since it’s hard to trust anonymous claims like “I checked and there are no issues with Lemma 2 in that paper” (could that be the author?), these chats would either show real names, leading to other complications (see below), or cease to exist.

Eventually the publishers will start asking for a signed official copyright transfer “conditional on acceptance” (some already do that), and those in violation will be hit with lawsuits. Universities will change their faculty codes of conduct to include such copyright violations as a cause for dismissal, including tenure removal. That’s when the practice will stop and things will be back to normal, at great cost obviously.

(ii) De-anonymizing referees is another perennial idea. Wouldn’t it be great if the referees got some credit for all the work that they do (so they could list it on their CVs)? Even better if their referee reports were available to the general public to read and scrutinize, etc. Win-win-win, right?

No, of course not. Many specialized sub-areas are small, so it is hard to find a referee. For the authors, it’s relatively easy to guess who the referees are, at least if you have some experience. But there is still this crucial ambiguity, as in “you have a guess but you don’t know for sure”, which helps maintain friendship, or at least collegiality, with those who have written a negative referee report. Take away this ambiguity, and everyone will start refusing refereeing requests. Refereeing is hard already; there is really no need to risk collegial relationships over it, especially if you are both going to be working in the area for years or even decades to come.

(iii) Let’s pay the referees! This is similar to but different from (ii). Think about it: the referees are hard to find, so we need to reward them. Everyone knows that when you pay for something, everyone takes it more seriously, right? Ugh. I guess I have some news for you…

Think it over. You get a technical 30-page paper to referee. How much would you want to get paid? You start doing a mental calculation. Say, at a very modest $100/hr, it would take you maybe 10-20 hours to write a thorough referee report. That’s $1-2K. Some people suggest $50/hr, but that was before the current inflation. While I do my own share of refereeing, personally I would charge more per hour, as I can get paid better doing something else (say, teaching our Summer school). For a traditional journal, paying this kind of money per paper is simply insane. Their budgets are relatively small; let me spare you the details.

Now, who can afford that kind of money? Right: we are back to the open access journals, who would pass the cost on to the authors in the form of an APC. That’s when the story turns from bad to awful. For that kind of money the journals would want a positive referee report, since rejected authors don’t pay. If you are not willing to play ball and give them a positive report, they will stop inviting you to referee, leading to even more of the corruption these journals already have in the form of pay-to-publish.

You can probably imagine that this won’t end well. Just talk to medical or biological scientists who grudgingly pay Nature or Science about $3K from their grants (which are much larger than ours). They pay because they have to, of course, and if they balk they might not get a new grant, setting back their careers.

Double blind refereeing

In math, this means that the authors’ names are hidden from referees to avoid biases. The names are visible to the editors, obviously, to prevent “please referee your own paper” requests. The authors are allowed to post their papers on their websites or the arXiv, where it could be easily found by the title, so they don’t suffer from anxieties about their career or competitive pressures.

Now, in contrast with other “let’s improve the peer review” ideas, this is already happening. In other fields this has been happening for years. Closer to home, conferences in TCS have long resisted going double blind, but recently FOCS 2022, SODA 2023 and STOC 2023 all made the switch. Apparently they found Boaz Barak’s arguments unpersuasive. Well, good to know.

Even closer to home, a leading journal in my own area, Combinatorial Theory, turned double blind. This is not a happy turn of events, at least not from my perspective. I published 11 papers in JCTA before the editorial board broke off and started CT, and I have one paper accepted at CT which had to undergo the new double blind process. In total, this is 3 times as many as in any other journal where I published. This was by far my favorite math journal.

Let’s hear from the journal why they did it (original emphasis):

The philosophy behind doubly anonymous refereeing is to reduce the effect of initial impressions and biases that may come from knowing the identity of authors. Our goal is to work together as a combinatorics community to select the most impactful, interesting, and well written mathematical papers within the scope of Combinatorial Theory.

Oh, sure. Terrific goal. I did not know my area has a bias problem (especially compared to many other areas), but of course how would I know?

Now, surely the journal didn’t think this change would be free? The editors must have compared pluses and minuses and decided that on balance the benefits outweigh the costs, right? The journal is mum on that. If any serious discussion was conducted (as I was told it was), there is no public record of it. Here is what the journal says about how the change is implemented:

As a referee, you are not disqualified to evaluate a paper if you think you know an author’s identity (unless you have a conflict of interest, such as being the author’s advisor or student). The journal asks you not to do additional research to identify the authors.

Right. So let me try to understand this. The referee is asked to decide whether to spend upwards of 10-20 hours on the basis of a first impression of the paper, without knowledge of the authors’ identity. They are asked not to google the authors’ names, but it’s ok if they do, because the journal can’t enforce this ethical guideline anyway. So let’s think this over.

Double take on double blind

(1) The idea is so old in other sciences that there is plenty of research on its relative benefits. See e.g. here, there or there. From my cursory reading, it seems there is clear evidence of a persistent bias based on the reputation of the educational institution, and of other biases as well, to a lesser degree. This is beyond unfortunate. Collectively, we have to do better.

(2) Peer review takes very different forms in different sciences. What works in some would not necessarily work in others. For example, TCS conferences never really had a proper refereeing process. The referees are given 3 weeks to write an opinion of the paper based on the first 10 pages. They can read the proofs beyond the 10 pages, but don’t have to. They write “honest” opinions to the program committee (invisible to the authors) and whatever they think is “helpful” to the authors. Those of you outside of TCS can’t even imagine the quality and biases of these fully anonymous opinions. In recent years, the top conferences introduced a rebuttal stage, which is probably helpful to avoid random superficial nitpicking at lengthy technical arguments.

In this large-scale superficial setting with rapid turnover, double blind refereeing is probably doing more good than bad by helping avoid biases. The authors who want to remain anonymous can simply not make their papers available for the roughly three months between the submission and the decision dates. The conference submission date is a solid date stamp for them to stake the result, and three months are unlikely to make a major change to their career prospects. OTOH, the authors who want to stake their reputation on the validity of their technical arguments (which are unlikely to be fully read by the referees) can put their papers on the arXiv. All in all, this seems reasonable and workable.

(3) The journal process is quite a bit longer than the conference one, naturally. For example, our forthcoming CT paper was submitted on July 2, 2021 and accepted on November 3, 2022. That’s 16 months, exactly 490 days, or about 20 days per page, including the references. This is all completely normal and is nobody’s fault (definitely not the handling editor’s). In the meantime my junior coauthor applied for a job, was interviewed, got an offer, accepted it and started a TT job. For this reason alone, it never crossed our minds not to put the paper on the arXiv right away.

Now, I have no doubt that the referee googled our paper, simply because in our arguments we frequently refer to our previous papers on the subject, to which this one was a sequel (er… actually we refer to some [CPP21a] and [CPP21b] papers). In such cases, if the referee knows that the paper under review is written by the same authors, there is clearly more confidence that we are aware of the intricate parts of our own technical details from the previous papers. That’s a good thing.

Another good thing to have is the knowledge that our paper is surviving public scrutiny. Whenever issues arise we fix them; whenever some conjectures are proved or refuted, we update the paper. That’s normal academic behavior, no matter what Adam Mastroianni says. Our reputation and integrity are all we have, and one should make every effort to maintain them. But then the referee who has been procrastinating for a year can (and probably should) compare with the updated version. It’s the right thing to do.

Who wants to hide their name?

Now that I have offered you some reasons why looking up paper authors is a good thing (at least in some cases), let’s look for negatives. Under what circumstances might the authors prefer to stay anonymous and not make their paper public on the arXiv?

(a) Junior researchers who are afraid their low status can reduce their chances of getting accepted. Right, like graduate students. This will hurt them both mathematically and job-wise. This is probably my biggest worry: that CT is encouraging more such cases.

(b) Serial submitters and self-plagiarists. Some people write many hundreds of papers. They will definitely benefit from anonymity. The editors know who they are and that their “average paper” has few if any citations outside of self-citations. But they are in a bind — they have to be neutral arbiters and judge each new paper independently of the past. Who knows, maybe this new submission is really good? The referees have no such obligation. On the contrary, they are explicitly asked to make a judgement. But if they have no name to judge the paper by, what are they supposed to do?

Now, this whole anonymity thing is unlikely to help serial submitters at CT, assuming that the journal’s standards remain high. Their papers will be rejected and they will move on, submitting down the line until they find an obscure enough journal that bites. If other, somewhat less selective journals adopt the double blind review practice, however, this could improve their chances.

For CT, the difference is that in the anonymous case the referees (and the editors) will spend quite a bit more time per paper. For example, when I know that the author is a junior researcher from a university with limited access to modern literature and senior experts, I go out of my way to write a detailed referee report to help them, suggesting some literature they are missing or potential directions for their study. If this is a serial submitter, I don’t. What’s the point? I’ve tried this a few times, and got the very same paper from another journal the next week. They wouldn’t even fix the typos that I pointed out, as if saying “who has the time for that?” This is where Mastroianni is right: why would their 234-th paper be any different from the 233-rd?

(c) Cranks, fraudsters and scammers. Anonymity is their defense mechanism. Say you google the author and it’s Dănuț Marcu, a serial plagiarist with 400+ math papers. Then you look for the paper he is plagiarizing from and, if successful, make an effort to ban him from your journal. But if the author is anonymous, you try to referee. There is a very good chance you will accept, since he used to plagiarize good but old and somewhat obscure papers. So you see: the author’s identity matters!

Same with the occasional zero-knowledge (ZK) aspirational provers whom I profiled at the end of this blog post. If you are an expert in the area and know of somebody who has tried for years to solve a major conjecture, producing one false or incomplete solution after another, what do you do when you see a new attempt? Now compare with what you do if the paper is by anonymous authors. Are you going to spend the same effort working out the details of both papers? In the case of a ZK prover, wouldn’t you stop when you find a mistake in the proof of Lemma 2, while in the case of a genuine new effort try to work it out?

In summary: as I explained in my post above, it’s the right thing to do to judge people by their past work and their academic integrity. When authors are anonymous and cannot be found, the losers are the most vulnerable, while the winners are the nefarious characters. Those who do post their work on the arXiv come out about even.

Small changes can make a major difference

If you are still reading, you probably think I am completely, 100% opposed to changes in peer review. That’s not true. I am only opposed to large changes. The stakes are just too high. We’ve been doing peer review for a long time, and over the decades we have found a workable model. As I tried to explain above, even modest changes can be detrimental.

On the other hand, very small changes can be helpful if implemented gradually and slowly. This is what TCS did with its double blind review and rebuttal process. They started experimenting with lesser known and low-stakes conferences, and improved the process over the years. Eventually they worked out the kinks, like COI handling, and implemented the changes at top conferences. If you had to make changes, why would you start with a top journal in the area??

Let me give one more example of a well-meaning but ultimately misguided effort to make a change. My former Lt. Governor Gavin Newsom once decided that MOOCs are the answer to education woes and a way for CA to start giving $10K Bachelor’s degrees. The thinking was: let’s make a major change (a disruption!) to the old technology (teaching), in the style of Google, Uber and Theranos!

Lo and behold, California spent millions and went nowhere. Our collective teaching experience during COVID shows that this was not an accident or mismanagement. My current Governor, the very same Gavin Newsom, dropped this idea like a rock, limiting it to cosmetic changes. Note that this isn’t to say that online education is hopeless. In fact, see this old blog post where I offer some suggestions.

My modest proposal

The following suggestions are limited to pure math. Other fields and sciences are much too foreign for me to judge.

(i) Introduce a very clearly defined quick opinion window of about 3-4 weeks. The referees asked for quick opinions can either decline or agree within 48 hours. It will only take them about 10-20 minutes to form an opinion based on the introduction, so give them a week to respond with 1-2 paragraphs. Collect 2-3 quick opinions. If as an editor you feel you need more, you are probably biased against the paper or the area, and are fishing for a negative opinion to justify a “quick reject“. This is a bit similar to the way Nature, Science, etc. deal with their submissions.

(ii) Make quick opinion requests anonymous. Request that the reviewers assess how the paper fits the journal (better, worse, on point, best submitted to journals X, Y or Z in another area, etc.). Adopt the practice of returning these opinions to the authors. Proceed to the second stage by mutual agreement. This is a bit similar to TCS, where the authors use the feedback from the conference to make decisions about journal or other conference submissions.

(iii) If the paper is rejected or withdrawn after the quick opinion stage, adopt the practice of sending the quick opinions to the next journal where the paper is resubmitted. Don’t communicate the names of the reviewers: if the new editor has no trust in the first editor’s qualifications, let them collect their own quick opinions. This would protect the reviewers from having their names go to multiple journals, which would make them semi-public.

(iv) The most selective journals should require that the paper not be available on the web during the quick opinion stage, and violators should be rejected without review. Anonymous for one — anonymous for all! The three-week delay is unlikely to hurt anybody, and the journal submission email confirmation should serve as a solid certificate of priority if necessary. Some people will try to game the system, say by giving a talk with the same title as the paper or writing a blog post. Then it’s at the editor’s discretion what to do.

(v) In the second (actual review) stage, the referees should get papers with authors’ names and proceed per usual practice.

Happy New Year everyone!

The insidious corruption of open access publishers

January 9, 2022 6 comments

The evil can be innovative. Highly innovative, in fact. It has to be, to survive. We wouldn’t even notice it otherwise. This is the lesson one repeatedly learns from foreign politics, where authoritarian or outright dictatorial regimes keep coming up with new and ingenious uses of technology to further corrupt and impoverish their own people. But this post is about Mathematics, the flagship MDPI journal.

What is MDPI?

It’s a for-profit publisher of online-only “open access” journals. Are they legitimate or predatory? That’s a good question. The academic world is a little perplexed on this issue, although maybe it shouldn’t be. It’s hard for me to give a broad answer, given that MDPI publishes over 200 journals, most of which have one-word wonder titles like Data, Diseases, Diversity, DNA, etc.

If “MDPI” doesn’t register, you probably haven’t checked your spam folder lately. I am pretty sure I got more emails inviting me to be a guest editor of various MDPI journals than from Nigerian princes. The invitations came in many fields (or are they?), from Sustainability to Symmetry, from Entropy to Axioms, etc. Over the years I even got some curious invites from such titles as Life and Languages. I can attest that at the time of this writing I am alive and can speak, which I suppose qualifies me to be a guest editor of both.

I checked my UCLA account, and the first email I got from MDPI was on Oct 5, 2009, inviting me to be a guest editor of an “Algorithms for Applied Mathematics” special issue of Algorithms. The most remarkable invitation came from a journal titled “J“, which may or may not have been inspired by the single-letter characters in the James Bond series, or perhaps by the Will Smith character in Men in Black — we’ll never know. While the brevity is commendable, the title serves the same purpose of creatively obscuring the subject as in all these cases.

While I have nothing to say about all MDPI journals, let me leave you with some links to people who took MDPI seriously and decided to wade in on the issue. Start with this 2012 Stack Exchange discussion on MDPI and move on to this Reddit discussion from 3 months ago. Confused enough? Then read the following:

  1. Christos Petrou, MDPI’s Remarkable Growth, The Scholarly Kitchen (August 10, 2020)
  2. Dan Brockington, MDPI Journals: 2015-2020 (March 29, 2021)
  3. Paolo Crosetto, Is MDPI a predatory publisher? (April 12, 2021)
  4. Ángeles Oviedo-García, Journal citation reports and the definition of a predatory journal: The case of MDPI, Research Evaluation (2021). See also this response by MDPI.

As you can see, there are issues with MDPI, and I am probably the last person to comment on them. We’ll get back to this.

What is Mathematics?

It’s one of the MDPI journals. It was founded in 2013 and as of this writing has published 7,610 articles. More importantly, it’s not reviewed by MathSciNet or zbMATH. Ordinarily that’s all you need to know in deciding whether to submit there, but let’s look at the impact factor. The numbers differ depending on which version you take, but the relative picture is the same: it suggests that Mathematics is a top 5-10 journal. Say, this comprehensive list gives 2.258 for Mathematics vs. 2.403 for Duke, 2.200 for Amer. Jour. Math, 2.197 for JEMS, 1.688 for Advances Math, and 1.412 for Trans. AMS. Huh?

And look at this nice IF growth. Projected forward, it will be the #1 journal in the whole field, just as the name would suggest. Time to jump on the bandwagon! Clearly somebody very clever is managing the journal, guiding it from obscurity to the top in just a few years…

Now, the Editorial Board has 11 “editors-in-chief” and 814 “editors”. Yes, you read that right — it’s 825 in total. Well, math is a broad subject, so what did you expect? For comparison, Trans. AMS has only about 25 people on its Editorial Board, so they can’t possibly cover all of mathematics, right? Uhm…

So, who are these people? I made an effort and read the whole list of these 825 chosen ones. At least two are well known and widely respected mathematicians, although neither lists being an editor of Mathematics on their extended CVs (I checked). Perhaps they are ashamed of the association, but not ashamed enough to ask MDPI to take their names off the list? Really?

I also found three people in my area (understood very broadly) whom I would consider serious professionals. One person is from my own university, albeit from a different department. One person is a colleague and a friend (this post might change that). Several people are my “Facebook or LinkedIn friends,” which means I never met them (who doesn’t have those?). That’s it! Slim pickings for someone who knows thousands of mathematicians…

Is Mathematics popular?

Yes, it is. No doubt about it. Just look at the self-reported graph below. That’s a lot of papers, almost all of them in the past few years. For comparison, Trans. AMS publishes about 300 papers a year, while Jour. AMS in the past few years has averaged about 25 papers a year.

The reasons for popularity are also transparent: they accept all kinds of nonsense.

To be fair, honest acceptance rates are hard to come by, so we really don’t know what happens at lower-tier math journals. I remember that when I became an editor of Discrete Math. it had an acceptance ratio of 30%, which I considered outrageously high. I personally aimed for 10-15%. But I imagine that the acceptance ratio is non-monotone as a function of “journal prestige”, since there is a lot of self-selection happening at the time of submission.

Note that the reason for self-selection (when it comes to top journals) is the high cost of waiting for a decision, which can often take upwards of a year. A couple of year-long rejections and a paper’s prospects are looking dim, as other papers start appearing (including your own) which prove stronger results by better/cleaner arguments. Now try explaining to the editor why your old weaker paper should be published in favor of all this new shining stuff…

This is yet another place where MDPI is innovative. They make a decision within days:

So the authors contemplating where to submit face a stark alternative: either their paper will be accepted with high probability within days, or — who knows… All these decisions are highly personal and depend on the particularities of the author’s country, university, career stage, etc., but overall it’s hard to blame them for sending their work to Mathematics.

What makes MDPI special?

Mostly the way it makes money. It forgoes the print subscription model altogether, and charges an 1,800 CHF (about $1,960) “article processing charge” (APC). This is not unusual per se; e.g., Trans. AMS, Ser. B charges a $2,750 APC, while Forum of Mathematics, Sigma charges $1,500, a deep discount from Cambridge’s “standard” $3,255 APC. What is unusual is the sheer volume of business MDPI does with these charges, essentially by selling air. They simply got ahead of competitors by being shameless. Indeed, why have high standards? That’s just missing out on so much revenue…

This journal is predatory, right?

Well, that’s what the MDPI links in items 1-4 are about (see above). When it comes to Mathematics, I say No, at least not in the sense that’s traditionally understood. However, this doesn’t make it a legitimate research publication, not for a second! It blurs the lines, it corrupts the peer review, it leeches off academia, and it collects rents by selling air. Now that I have made my views clear, let me explain it all.

What people seem to be hung up on is the idea that you can tell who is predatory by looking at the numbers: number of submissions, number of citations, acceptance percentage, number of special issues, average article charge, etc. These numbers can never prove that MDPI does anything wrong. Otherwise MDPI wouldn’t be posting them for everyone to see.

Reading MDPI’s response in item 4 is especially useful. They make a good point — there is no good definition of a “predatory journal”, since the traditional “pay-to-play” definition simply doesn’t apply. Because when you look at the stats, Mathematics looks like a run-of-the-mill generic publication with a high acceptance ratio, a huge number of ever-corrupting special issues, and very high APC revenue. Phrased differently and exaggerating a bit, they resemble Forum of Mathematics, Sigma or Trans. AMS, Ser. B in being freely accessible, combined with the publication speed and efficiency of Science or Nature, and the selectivity of the arXiv (which does in fact reject some papers).

How do you tell they are illegitimate then?

Well, it’s the same logic as when judging life under an authoritarian regime. On paper, they all look the same; there is nothing to see. Indeed, for every electoral irregularity or local scandal they respond with what-about-your-elections. That’s how it goes, everybody knows.

Instead, what you do is ask real people to tell their stories. The shiny facade of the regime quickly fades away when one reads these testimonials. For life in the Soviet Union, I recommend The Gulag Archipelago and Boys in Zinc which bookend that sordid history.

So I did something similar and completely unscientific. I wrote to about twenty authors of Mathematics papers from the past two years, asking them to tell their stories: whether their papers were invited or contributed, and whether they paid and how much. I knew none of them before writing, but over half of the authors kindly responded with some very revealing testimonials, which I will try to summarize below.

What exactly does Mathematics do?

(1) They spam everyone whom they consider “reputable” to be “guest editors” and run “special issues”. I wrote before about how corrupt those are, but this is corruption on steroids. The guest editors are induced by having their own APCs waived, along with those of essentially anyone they choose. The editors seem to be given a budget to play with. In fact, I couldn’t find anyone whose paper was invited (or who was an editor) and who paid anything, although I am sure there are many such people from universities whose libraries have budgeted for open access journals.

(2) They induce highly cited people to publish in their journal by waiving APCs. This is explicitly done in an effort to raise the impact factor, and Mathematics uses the h-index to formalize this. The idea seems to be that even a poor paper by a highly cited author will get many more citations than average, even if they are just self-citations. They are probably right about this. Curiously, one of my correspondents looked up my own h-index (33, as I just discovered), and apparently it passed the bar. So he quickly offered to help me publish my own paper in some special issue he was guest editing this month. Ugh…

(3) They spam junior researchers, asking them to submit to their numerous special issues and, in return, to accept their publishing model. Submissions are solicited with near-guarantees of acceptance and a quick timeline. Publish or perish, etc.

(4) They keep up appearances and do send each paper to referees, usually multiple referees, but require them to respond within two weeks. The paper thus avoids being carefully refereed, which allows a quick turnaround. Furthermore, the refereeing assignments are made more or less at random, to people in their database completely unfamiliar with the subject. They don’t need to be familiar, of course; all they need is to provide a superficial opinion. From what I hear, when the referee recommends rejection the journal doesn’t object — there are plenty of fish in the sea…

(5) Perhaps surprisingly, several people expressed great satisfaction with the way the refereeing was done. I attribute this to the superficial nature of the reports and to survivor bias. Indeed, nobody likes technical reports which make you deal with proof details, and all the people I emailed had their papers accepted (I wouldn’t know the names of people whose papers were rejected).

(6) Potential referees are induced to accept the assignment with 100 CHF vouchers which can be redeemed at any MDPI publication. Put crudely: accept many refereeing assignments, say Y/N more or less at random, and you can quickly publish your own paper (as long as it’s not complete garbage). One of my correspondents wrote that he exchanged six vouchers worth 600 CHF toward one APC worth 1,600 CHF at the time. He meant that this was a good deal, as the journal waived the rest, but from what I heard others got the same or a similar deal.

(7) Everyone else who has a university library willing to pay the APC is invited to submit, for the same reasons as in (4). And people do contribute. Happily, in fact. Why wouldn’t they — it’s not their money and they get a quick publication in a journal with a high IF. Many of my correspondents reported being so happy that they later published several other papers in various MDPI journals.

(8) According to my correspondents, other than the uncertain reputation, the main problem people faced was typesetting, especially when it came to references. Mathematics is clearly very big on that; it’s why they succeeded to begin with. One author reported that the journal made them write a sentence:

The first part of the bibliography […], numbered in chronological order from [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,….]

Several others reported long battles over the bibliography style, to the point of threatening to withdraw the paper, at which point, they all reported, the journal caved. But all in all, there were unusually few complaints, other than about a follow-up flood of random referee invitations.

(9) To conclude, the general impression of the authors seems to be crystallized in the following quote by one of them:

I think what happened is MDPI just puts out a ton of journals and is clearly just interested in profiting from them (as all publishers are, in a sense…) and some of their particular journals have become more established and reputed than others, some seem so obscure I think they really are just predatory, but others have risen above that, and Mathematics is somewhere in the middle of that spectrum.

What gives?

As I mentioned before, in my opinion Mathematics is not predatory. Rather, it’s parasitic. Predatory journals take people’s own cash to “publish” their paper in some random bogus online depository. The authors are duped out of cash with the promise of a plausible-looking claim of scientific recognition which they can use for their own advancement. On the other hand, Mathematics does nothing other journals don’t do, and the authors seem to be happy with the outcome.

The losers are the granting foundations and university libraries which shell out large amounts for a subpar product (compared to Trans. AMS, Ser. B, Forum Math. Sigma, etc.), as they either can’t tell the difference between these journals or are institutionally not allowed to do so. In the spirit of “the road to hell is paved with good intentions“, this is an unintended consequence of the Elsevier boycott, which brought money considerations out of the shadows and directly led to the founding of open access journals with their misguided budget model.

MDPI clearly found a niche allowing them to monetize mediocre papers while claiming high impact factors from a minority of papers by serious researchers. In essence it’s the same scam top journals are playing with invited issues (see my old blog post again), but in reverse — here the invited issues are pushing the average quality of the journal UP rather than DOWN.

As I see it, Mathematics corrupts the whole peer review process by monetizing it to the point that the APC becomes a primary consideration, rather than the mathematical contribution of the paper. In contrast with Elsevier, the harm MDPI does is on an intangible level — the full extent of it might never become clear, as just about all the papers Mathematics publishes will never be brought to public scrutiny (the same is true for most low-tier journals). All I know is that the money universities spend on Mathematics APCs would be better spent on just about anything else supporting actual research and education.

What happens to math journals in the future?

I already tried answering this eight years ago, with mixed success. MDPI shows that I was right about the move to an online model and non-geographical titles, but wrong in thinking that journals would further specialize. Journals like Mathematics, Algorithms, Symmetry, etc. are clear counterexamples. I guess I was much too optimistic about the future, without thinking through the corrupting nature of the money the system brings in.

So what now? I think the answer is clear, at least in mathematics. Libraries should stop paying for open access. Granting agencies should prohibit grants from being used to pay for publications. Mathematicians should simply run away any time someone brings up money. JUST SAY NO.

If this means that journals like Forum Math. would have to die or convert to another model — so be it. The right model, the arXiv overlay, is cheap and accessible. There is absolutely no need for a library to pay for a Trans. AMS, Ser. B publication if the paper is already freely available on the arXiv, as is the case with the vast majority of their papers. It’s hard to defend giving money to Cambridge Univ. Press or the AMS, but giving it to MDPI is just sinful.

Finally, if you are on the Mathematics editorial board, please resign and never tell anyone that you were there. You already got what you wanted: your paper is published, your name is on the cover of some special issue (they print them for the authors). I might be overly optimistic again, but when it comes to MDPI, shame might actually work…

What we’ve got here is failure to communicate

September 14, 2018 21 comments

Here is a lengthy and somewhat detached followup discussion of the very unfortunate Hill affair, which has been much commented on by Tim Gowers, Terry Tao and many others (see e.g. links and comments on their blog posts).  While many seem to be universally distraught by the story, and there are some clear disagreements on what happened, there are even deeper disagreements on what should have happened.  The latter question is the subject of this blog post.

Note:  Below we discuss both the ethical and moral aspects of the issue.  Be patient and hold your disagreements until you finish reading — there is a lengthy disclaimer at the end.

Review process:

  1. When the paper is submitted there is a very important email acknowledging receipt of the submission.  Large publishers have systems that send such emails automatically.  Until this email is received, the paper is not considered submitted.  For example, it is not unethical for the author to get tired of waiting to hear from the journal and submit elsewhere instead.  If the journal later comes back and says “sorry for the wait, here are the reports”, the author should just inform the journal that the paper is under consideration elsewhere and should be considered withdrawn (this happens sometimes).
  2. Similarly, there is a very important email acknowledging acceptance of the submission.  Until this point the editors ethically can do as they please, even reject the paper with multiple positive reports.  Morality of the latter is in the eye of the beholder (cf. here), but there are absolutely no ethical issues here unless the editor violated the rules set up by the journal.  In principle, editors can and do make decisions based on informal discussions with others, this is totally fine.
  3. If a journal withdraws acceptance after the formal acceptance email is sent, this is potentially a serious violation of ethical standards.  Major exception: this is not unethical if the journal follows certain procedural steps (see the section below).  This should not be done except in some extreme circumstances, such as a last-minute discovery of a counterexample to the main result which the author refuses to recognize and thus will not voluntarily withdraw the paper.  It is not immoral, since until the actual publication no actual harm is done to the author.
  4. The next key event is publication of the article, whether online or in print, usually coupled with the transfer of copyright.  If the journal officially “withdraws acceptance” after the paper is published without deleting the paper, this is not immoral, but its ethics depend on the procedural steps as in the previous item.
  5. If a journal deletes the paper after publication, online or otherwise, this is a gross violation of both moral and ethical standards.  Journals which do that should be ostracized regardless of their reasoning for this act.  Major exception: the journal has legal grounds, e.g. the author violated copyright laws by lifting from another published article, as in the Dănuț Marcu case (see below).

Withdrawal process:

  1.  As we mentioned earlier, the withdrawal of an accepted or published article should be extremely rare, happening only in extreme circumstances such as a major math error in a not-yet-published article or a gross ethical violation by the author or by the handling editor of a published article.
  2. For a published article with a major math error, or one whose result was later discovered to be known, the journal should not withdraw the article but instead work with the author to publish an erratum or an acknowledgement of priority.  Here an erratum can either fix/modify the results or completely withdraw the main claim.  An example of the latter is an erratum by Daniel Biss.  Note that the journal can in principle publish a note authored by someone else (e.g. this note by Mnёv in the case of Biss), but this should be treated as a separate article and not a substitute for an erratum by the author.  A good example of an acknowledgement of priority is this one by Lagarias and Moews.
  3. To withdraw the disputed article the journal’s editorial board should either follow the procedure set up by the publisher or set up a procedure for an ad hoc committee which would look into the paper and the submission circumstances.  Again, if the paper is already published, only non-math issues such as ethical violations by the author, referee(s) and/or handling editor can be taken into consideration.
  4. Typically, a decision to form an ad hoc committee or call for a full editorial vote should be made by the editor in chief, at the request of (usually at least two) members of the editorial board.  It is totally fine to have a vote by the whole editorial board, even immediately after the issue was raised, but the threshold for a successful withdrawal motion should be set by the publisher or agreed upon by the editorial board before the particular issue arises.  Otherwise, the decision needs to be made by consensus, with both the handling editor and the editor in chief abstaining from the committee discussion and the vote.
  5. Examples of the various ways the journals act on withdrawing/retracting published papers can be found in the case of notorious plagiarist Dănuț Marcu.  For example, Geometria Dedicata decided not to remove Marcu’s paper but simply issued a statement, which I personally find insufficient as it is not a retraction in any formal sense.  Alternatively, SUBBI‘s apology is very radical yet the reasoning is completely unexplained. Finally, Soifer’s statement on behalf of Geombinatorics is very thorough, well narrated and quite decisive, but suffers from authoritarian decision making.
  6. In summary, if the process is set up in advance and is carefully followed, the withdrawal/retraction of accepted or published papers can be both appropriate and even desirable.  But when the process is not followed, such action can be considered unethical and should be avoided whenever possible.

Author’s rights and obligations:

  1. The author can withdraw the paper at any moment until publication.  It is also the author’s right not to agree to any discussion or rejoinder.  The journal, of course, is under no obligation to ask the author’s permission to publish a refutation of the article.
  2. If the acceptance is issued, the author has every right not go along with the proposed quiet withdrawal of the article.  In this case the author might want to consider complaining to the editor in chief or the publisher making the case that the editors are acting inappropriately.
  3. Until acceptance is issued, the author should not publicly disclose the journal where the paper is submitted, since doing so constitutes a (very minor) moral violation.  Many would disagree on this point, so let me elaborate.  Informing the public of the journal submission tempts people who are in competition with the author, or who have a negative opinion of the paper, to interfere with the peer review process.  While virtually all people will virtually always act honorably and not contact the journal, such temptation is undesirable and easily avoidable.
  4. As soon as acceptance or publication happens, the author should make this public immediately, by similar reasoning of avoiding temptation by third parties (of a different kind).

Third party outreach:

  1.  If the paper is accepted but not yet published, reaching out to the editor in chief by a third party requesting to publish a rebuttal of some kind is totally fine.  Asking to withdraw the paper for mathematical reasons is also fine, but the request should provide clear formal math reasoning, as in “Lemma 3 is false because…”  The editor then has a choice, but not an obligation, to trigger the withdrawal process.
  2. Asking to withdraw the not-yet-published paper without providing math reasoning, but saying something like “this author is a crank” or “publishing this paper will be bad for your reputation”, is akin to bullying and thus a minor ethical violation.  The reason it’s minor is that it is the journal’s obligation to ignore such emails.  A journal acting on such an email with rumors or unverified facts commits an ethical violation of its own.
  3. If a third party learns about a publicly available paper, which may or may not be an accepted submission, with which they disagree for math or other reasons, it is ethical to contact the author directly.  In fact, in case of math issues this is highly desirable.
  4. If a third party learns about a paper submission to a journal without being contacted to review it, and the paper is not yet accepted, then contacting the journal is a strong ethical violation.  Typically, the journal where the paper is submitted is not known to the public, so the third party is acting on information it should not have.  Every such email can be considered an act of bullying, no matter the content.
  5. In the unlikely case that everything is as above but the journal’s name where the paper is submitted is publicly available, the third party can contact the journal.  Whether this is ethical or not depends on the wording of the email.  I can imagine some plausible circumstances when e.g. the third party knows that the author is the Dănuț Marcu mentioned earlier.  In these rare cases the third party should make every effort to CC the email to everyone even remotely involved, such as all authors of the paper, the publisher, the editor in chief, and perhaps all members of the editorial board.  If the third party feels constrained by the necessity of this broad outreach, then the case is not egregious enough, and such an email is still bullying and thus unethical.
  6. Once the paper is published, anyone can contact the journal for any reason, since there is little the journal can do beyond what’s described above.  For example, on two different occasions I wrote to journals pointing out that their recently published results are not new, asking them to inform the authors while keeping my anonymity.  Both editors said they would.  One of the journals later published an acknowledgement of priority.  The other did not.

Editor’s rights and obligations:

  1. Editors have every right to encourage submissions of papers to the journal; in fact, it’s part of their job.  It is absolutely ethical to encourage submissions from colleagues, close relatives, political friends, etc.  The publisher should set up a clear and unobtrusive conflict of interest directive, so that if the editor is too close to the author or the subject, he or she should transfer the paper to the editor in chief, who will choose a different handling editor.
  2. The journal should have a clear scope, worked out by the publisher in cooperation with the editorial board.  If the paper is outside of the scope it should be rejected regardless of its mathematical merit.  When I was an editor of Discrete Mathematics, I would reject some “proofs” of the Goldbach conjecture and similar results on that basis.  If the paper prompts the journal to re-evaluate its scope, that’s fine, but the discussion should involve the whole editorial board and be held irrespective of the paper in question.  Presumably, some editors would not want to continue being on the board if the journal starts changing direction.
  3. If an accepted but not yet published paper seems to fall outside of the journal’s scope, other editors can request that the editor in chief initiate the withdrawal process discussed above.  The wording of the request is crucial here: if the issue is neither the scope nor major math errors, but rather the weakness of the results, then this is inappropriate.
  4. If the issue is possibly unethical behavior by the handling editor, then the withdrawal may or may not be appropriate depending on the behavior, I suppose.  But if the author was acting ethically and the unethical behavior is solely the handling editor’s, I say proceed to publish the paper and then issue a formal retraction, while keeping the paper published, of course.

Complaining to universities:

  1. While perfectly ethical, contacting the university administration to initiate a formal investigation of a faculty member is an extremely serious step which should be avoided if at all possible.  Except for egregious cases of verifiable formal violations of the university code of conduct (such as academic dishonesty), this action is in itself akin to bullying and thus immoral.
  2. The code of conduct is usually available on the university website; the complainer would do well to consult it before filing a complaint.  In particular, the complaint would typically be addressed to the university senate committee on faculty affairs, the office of academic integrity and/or the dean of the faculty.  Whether the university president is in math or even the same area is completely irrelevant, as the president plays no role in the workings of the committee.  In fact, when this is the case, the president is likely to recuse herself or himself from any part of the investigation and sever any contact with the complainer to avoid the appearance of impropriety.
  3. When a formal complaint is received, the university is usually compelled to initiate an investigation and set up an ad hoc subcommittee of the faculty senate which thoroughly examines the issue.  The faculty member’s tenure and livelihood are on the line.  They can be asked to retain legal representation and can be prohibited from discussing the matters of the case with outsiders without university lawyers and/or PR people signing off on every communication.  Once the investigation is complete, the findings are kept private except for administrative decisions such as firing, suspension, etc.  In summary, if the author seeks information rather than punishment, this is counterproductive.

Complaining to institutions:

  1. I don’t know what to make of the alleged NSF request, which could be ethical and appropriate, or even common.  Then so would be complaining to the NSF about a publicly available research product supported by the agency.  The issue is the opposite of that with the journals — the NSF is a part of the Federal Government and is thus subject to a large number of regulations and code of conduct rules.  These can explain its request.  We in mathematics are rather fortunate that our theorems tend to lack political implications in the real world.  But perhaps researchers in Political Science and Sociology have different experiences with granting agencies; I wouldn’t know.
  2. Contacting the AMS can in fact be rather useful, even though it currently has no way to conduct an appropriate investigation.  Put bluntly, all parties in the conflict can simply ignore the AMS’s request for documents.  But maybe this should change in the future.  I am not a member of the AMS, so I have no standing to tell it what to do, but I do have some thoughts on the subject.  I will try to write them up at some point.

Public discourse:

  1. Many commenters on the case opined that while deleting a published paper is bad (I am paraphrasing), the paper is also bad for whatever reason (politics, lack of strong math, editor’s behavior, being out of scope, etc.).  This is very unfortunate.  Let me explain.
  2. Of course, discussing math in the paper is perfectly ethical: academics can discuss any paper they like, this can be considered as part of the job.  Same with discussing the scope of the paper and the verifiable journal and other party actions.
  3. Publicly discussing the personalities and motivations of the editors publishing or not publishing, of third parties contacting editors in chief, etc., is arguably unethical and can be perceived as borderline bullying.  It is also of questionable morality, as no complete set of facts is known.
  4. So while making a judgement on the journal’s conduct next to a judgement on the math in the paper is ethical, it seems somewhat immoral to me.  When you write “yes, the journal’s actions are disturbing, but the math in the paper is poor”, we all understand that while formally these are two separate discussions, the negative judgement in the second part can provide an excuse for the misbehavior in the first part.  So here is my new rule:  If you would not be discussing the math in the paper without the pretext of its submission history, you should not be discussing it at all.

In summary:

I argue that for all issues related to submissions, withdrawal, etc. there is a well understood ethical code of conduct.  Decisions on who behaved unethically hinge on formal details of each case.  Until these formalities are clarified, making judgements is both premature and unhelpful.

Part of the problem is the lack of clarity about procedural rules at the journals, as discussed above.  While large institutions such as major universities and long-established journal publishers do have such rules set up, most journals tend not to disclose them, unfortunately.  Even worse, many new, independent and/or electronic journals have no such rules at all.  In such an environment we are reduced to saying that this is all a failure to communicate.

Lengthy disclaimer:

  1. I have no special knowledge of what actually happened to Hill’s submission.  I outlined what I think should have happened in different scenarios if all participants acted morally and ethically (there are no legal issues here that I am aware of).  I am not trying to blame anyone and in fact, it is possible that none of these theoretical scenarios are applicable.  Yet I do think such a general discussion is useful as it distills the arguments.
  2. I have not read Hill’s paper as I think its content is irrelevant to the discussion and since I am deeply uninterested in the subject.  I am, however, interested in mathematical publishing and all academia related matters.
  3. What’s ethical and what’s moral are not exactly the same.  As far as this post is concerned, ethical issues cover all math research/university/academic related stuff.  Moral issues are more personal and community related, thus less universal perhaps.  In other words, I am presenting my own POV everywhere here.
  4. To give specific examples of the difference, if you stole your officemate’s lunch you acted immorally.  If you submitted your paper to two journals simultaneously you acted unethically.  And if you published a paper based on your officemate’s ideas she told you in secret, you acted both immorally and unethically.  Note that in the last example I am making a moral judgement since I equate this with stealing, while others might think it’s just unethical but morally ok.
  5. There is very little black & white about immoral/unethical acts, and one always needs to assign a relative measure of the perceived violation.  This is similar to criminal acts, which can be a misdemeanor, a gross misdemeanor, a felony, etc.

 

Combinatorial briefs

June 9, 2013 Leave a comment

I tend to write longish posts, in part for the sake of clarity, and in part because I can – it is easier to express yourself in a long form.  However, brevity has its own benefits, as it forces the author to give succinct summaries of often complex and nuanced views.  Conversely, the absence of such summaries gives critics plausible deniability about understanding the basic points you are making.

This is the second time I have been "inspired" by the Owl blogger, who wrote a tl;dr-style response to my blog post and to the rather lengthy list of remarkable quotations that I compiled.  So I decided to make the following Reader's Digest-style summaries of that list and of several blog posts.

1)  Combinatorics has been sneered at for decades and struggled to get established

In the absence of a History of Modern Combinatorics monograph, this is hard to prove.  So here are selected quotes from the above-mentioned quotation page.  Of course, one should read them in full to appreciate and understand the context, but for our purposes these will do.

Combinatorics is the slums of topology – Henry Whitehead

Scoffers regard combinatorics as a chaotic realm of binomial coefficients, graphs, and lattices, with a mixed bag of ad hoc tricks and techniques for investigating them. [..]  Another criticism of combinatorics is that it "lacks abstraction." The implication is that combinatorics is lacking in depth and all its results follow from trivial, though possibly elaborate, manipulations. This argument is extremely misleading and unfair. – Richard Stanley (1971)

The opinion of many first-class mathematicians about combinatorics is still in the pejorative. While accepting its interest and difficulty, they deny its depth. It is often forcefully stated that combinatorics is a collection of problems which may be interesting in themselves but are not linked and do not constitute a theory. – László Lovász (1979)

Combinatorics [is] a sort of glorified dicethrowing.  – Robert Kanigel (1991)

This prejudice, the view that combinatorics is quite different from ‘real mathematics’, was not uncommon in the twentieth century, among popular expositors as well as professionals.  –  Peter Cameron (2001)

Now that the readers can see where the "traditional sensitivities" come from, the following quote must come as a surprise.  Even more remarkable is that it has become conventional wisdom:

Like number theory before the 19th century, combinatorics before the 20th century was thought to be an elementary topic without much unity or depth. We now realize that, like number theory, combinatorics is infinitely deep and linked to all parts of mathematics.  – John Stillwell (2010)

Of course, the prejudice has never been limited to Combinatorics.  Imagine how an expert in Partition Theory and q-series must feel after reading this quote:

[In the context of Partition Theory]  Professor Littlewood, when he makes use of an algebraic identity, always saves himself the trouble of proving it; he maintains that an identity, if true, can be verified in a few lines by anybody obtuse enough to feel the need of verification.  – Freeman Dyson (1944), see here.

2)  Combinatorics papers have been often ostracized and ignored by many top math journals

This is a theme in this post about the Annals, this MO answer, and a smaller theme in this post (see Duke paragraph).  This bias against Combinatorics is still ongoing and hardly a secret.  I argue that on the one hand, the situation is (slowly) changing for the better.  On the other hand, if some journals keep the proud tradition of rejecting the field, that’s ok, really.  If only they were honest and clear about it!  To those harboring strong feelings on this, listening to some breakup music could be helpful.

3)  Despite inherent diversity, Combinatorics is one field

In this post, I discussed how I rewrote Combinatorics Wikipedia article, largely as a collection of links to its subfields.  In a more recent post mentioned earlier I argue why it is hard to define the field as a whole.  In many ways, Combinatorics resembles a modern nation, united by a language, culture and common history.  Although its borders are not easy to define, suggesting that it’s not a separate field of mathematics is an affront to its history and reality (see two sections above).  As any political scientist will argue, nation borders can be unhelpful, but are here for a reason.  Wishing borders away is a bit like French “race-ban”  – an imaginary approach to resolve real problems.

Gowers’s “two cultures” essay is an effort to describe and explain cultural differences between Combinatorics and other fields.  The author should be praised both for the remarkable essay, and for the bravery of raising the subject.  Finally, on the Owl’s attempt to divide Combinatorics into “conceptual” which “has no internal reasons to die in any foreseeable future” and the rest, which “will remain a collection of elementary tricks, [..] will die out and forgotten [sic].”  I am assuming the Owl meant here most of the “Hungarian combinatorics”, although to be fair, the blogger leaves some wiggle room there.  Either way, “First they came for Hungarian Combinatorics” is all that came to mind.

What do math journals do? What will become of them in the future?

May 28, 2013 5 comments

Recently, there has been plenty of discussion on math journals: their prices, behavior, technology and future.  I have been rather reluctant to join the discussion, in part due to my own connection to Elsevier, in part because things in Combinatorics are more complicated than in other areas of mathematics (see below), but also because I couldn't reconcile several somewhat conflicting thoughts that I had.  Should all existing editorial boards revolt and all journals become electronic?  Or perhaps should we move to a "pay-for-publishing" model?  Or even "crowd-source" the refereeing?  Well, now that the issue has cooled down a bit, I think I have figured out exactly what should happen to math journals.  Be patient – a long explanation is coming below.

Quick test questions

I would like to argue that the debate over the second question in the title stems from a general misunderstanding of the first.  In fact, I am pretty sure most mathematicians are quite a bit confused on this, for a good reason.  If you think this is easy, quickly answer the following three questions:

1)  A published paper has a technical mistake invalidating the main result.  Is this the fault of the author, the referee(s), the handling editor, the managing editor(s), the publisher, or all of the above?  If a reader finds such a mistake, whom should she/he contact?

2)  A published paper proves a special case of a known result published 20 years earlier in an obscure paper.  Same question.  Would the answer change if the author lists that paper in the references?

3)  A published paper is written in really poor English.  Sections are disorganized and the introduction is misleading.  Same question.

Now that you gave your answers, ask a colleague.  Don’t be surprised to hear a different point of view.  Or at least don’t be surprised when you hear mine.

What do referees do?

In theory, a lot.  In practice, that depends.  There are few official journal guides for referees, but there are several well-meaning guides (see also here, here, here, here §4.10, and a nice discussion by Don Knuth §15).  However, as any editor can tell you, you never know what exactly the referee did.  Some reply within 5 minutes, some after 2 years.  Some write one negative sentence, some 20 detailed pages.  Some give advice in the style "yeah, not a bad paper, cites me twice, why not publish it," while others give a brush-off: "not sure who this person is, and this problem is indeed strongly related to what I and my collaborators do, but of course our problems are much more interesting/important – rejection would be best."  The anonymity is so relaxing that doing a poor job is just too tempting.  The whole system hinges on shame, a sense of responsibility, and a personal relationship with the editor.

A slightly better question is "What do good referees do?"  The answer is: they don't just help the editor make the acceptance/rejection decision.  They help the authors.  They add background the authors don't know, look for missing references, improve the proofs, and critique the exposition and even the notation.  They do their best, much like what ideal advisors do for their graduate students who have just written an early draft of their first ever math paper.

In summary, you can’t blame the referees for anything.  They do what they can and as much work as they want.  To make a lame comparison, the referees are like wind and the editors are a bit like sailors.  While the wind is free, it often changes direction, sometimes completely disappears, and in general quite unreliable.  But sometimes it can really take you very far.  Of course, crowd sourcing refereeing is like democracy in the army – bad even in theory, and never tried in practice.

First interlude: refereeing war stories

I recall a curious story by Herb Wilf about how Don Knuth submitted a paper under an assumed name with an obscure college address, in order to get the full refereeing treatment (the paper was accepted and eventually published under Knuth's real name).  I tried this once, with an unexpected outcome (let me not name the journal or describe the stupendous effort I made to create a fake identity).  The referee wrote that the paper was correct and rather interesting, but "not quite good enough" for their allegedly excellent journal.  The editor was very sympathetic if a bit condescending, asking me not to lose hope, to work on my papers harder, and to submit them again.  So I tried submitting to a competing journal of equal stature, this time under my own name.  The paper was accepted in a matter of weeks.  You can judge for yourself the moral of this story.

A combinatorialist I know (who shall remain anonymous) had the following story with Duke J. Math.  A year and a half after submission, the paper was rejected with three (!) reports mostly describing typos.  The authors were dismayed and consulted a CS colleague.  That colleague noticed that the three .pdf reports had been made by cropping longer files.  It turns out that if the cropping is done straightforwardly, the cropped portions remain hidden in the files.  Using some hacking software, the top portions of the reports were uncovered.  The authors discovered that these were extremely positive, giving great praise to the paper.  Now the authors believe that the editor despised combinatorics (or their branch of combinatorics) and was fishing for a bad report.  After three tries, he gave up and sent them the cropped reports, lest they think somebody else considered their paper worthy of publishing in the grand old Duke (cf. what Zeilberger wrote about Duke).

Another one of my stories is with the  Journal of AMS.  A year after submission, one of my papers was rejected with the following remarkable referee report which I quote here in full:

The results are probably well known.  The authors should consult with experts.  

Needless to say, the results were new, and the paper was quickly published elsewhere.  As they say, “resistance is futile“.

What do associate/handling editors do?

Three little things, really.  They choose the referees, read their reports and make the decisions.  But they are responsible for everything.  And I mean everything: 1), 2) and 3).  If the referee wrote a poorly researched report, they should recognize this, ignore it, and request another one.  They should ensure they have more than one opinion on the paper, all of them highly informed and from good people.  If it seems the authors are not aware of the literature and the referee(s) are not helping, they should ensure this is fixed.  If the paper is not well written, the editors should ask the authors to rewrite it (or else).  At Discrete Mathematics, we use this page by Doug West as a default style guide for math grammar.  And if a reader finds a mistake, he/she should first contact the editor.  Contacting the author(s) is also a good idea, but sometimes anonymity is helpful – the editor can be trusted to bring the bad news and, if possible, request a correction.

B.H. Neumann described here how he thinks a journal should operate.  I wish his views were widely held today.  The book by Krantz, §5.5, is a good outline of the ideal editorial experience, and this paper outlines how to select referees.  However, this discussion (esp. Rick Durrett's "rambling") is more revealing.  Now, the reason most people are confused as to who is responsible for 1), 2) and 3) is the fact that while some journals have serious proactive editors, others do not, or their work is largely invisible.

What do managing editors and publishers do?

In theory, managing editors hire associate editors, provide logistical support, distribute paper load, etc.  In practice they also serve as handling editors for a large number of papers.  The publishers…  You know what the publishers do.  Most importantly, they either pay editors or they don’t.  They either charge libraries a lot, or they don’t.  Publishing is a business, after all…

Who wants free universal electronic publishing?

Good mathematicians.  Great mathematicians.  Mathematicians who write well and see no benefit in their papers being refereed.  Mathematicians who have many students and wish the publishing process was speedier and less cumbersome, so their students can get good jobs.  Mathematicians who do not value the editorial work and are annoyed when the paper they want to read is “by subscription only” and thus unavailable.  In general, these are people who see having to publish as an obstacle, not as a benefit.

Who does not want free universal electronic publishing?

Publishers (of course), libraries, university administrators.  These are people and establishments who see value in the existing order and don't want it destroyed.  Also: mediocre mathematicians, bad mathematicians, mathematicians from poor countries, mathematicians who don't have access to good libraries (perhaps paradoxically).  In general, people who need help with their papers.  People who don't want a quick brush-off "not good enough" or "probably well known," but who need advice on their references, on their English, on how their papers are structured and presented, and on what to do next.

So, who is right?

Everyone.  For some mathematicians, having all journals be electronic at virtually no cost is an overall benefit.  But at the very least, the "pro status quo" crowd has a case, in my view.  I don't mean that Elsevier's pricing policy is reasonable; I am talking about the big picture here.  In the long run, I think of journals as non-profit NGO's, some kind of nerdy versions of the Nobel Peace Prize-winning Médecins Sans Frontières.  While I imagine that in the future many excellent top-level journals will be electronic and free, I also think many mid-level journals in specific areas will be run by non-profit publishers, will not be free at all, and will employ a number of editorial and technical staff to help the authors, both of the papers they accept and of those they reject.  This is a public service we should strive to perform, both for the sake of those math papers, and for the development of mathematics in other countries.

Right now, the number of mathematicians in the world is already rather large and growing.  Free journals can do only so much.  Without high-quality editors paid by the publishers, and with a large influx of papers from the developing world, there is a chance we might lose the traditional high standards for published second-tier papers.  And I really don't want to think of a mathematics world where the peer review system is broken.  That's why I am not in the "free publishing camp" – in an effort to save money, we might lose something much more valuable: the system which gives a foundation and justification to our work.

Second interlude: journals vis-à-vis combinatorics

I already wrote about the fate of combinatorics papers in the Annals, especially in comparison with Number Theory.  My view was gloomy but mildly optimistic.  In fact, since that post was written, a couple more combinatorics papers have been accepted.  Good.  But let me give you a quiz.  Here are two comparable highly selective journals – Duke J. Math. and Compositio Math.  In the past 10 years Compositio published exactly one (!) paper in Combinatorics (defined as primary MSC=05), out of 631 total.  In the same period, Duke published 8 combinatorics papers out of 681 total.

Q: Which of the two (Compositio or Duke) treats combinatorics papers better?

A: Compositio, of course.

The reasoning is simple.  Forget the anecdotal evidence in the previous interlude.  Just look at the "aim and scope" of the journals vs. these numbers.  Here is what the Compositio website says, with refreshing honesty:

By tradition, the journal published by the foundation focuses on papers in the main stream of pure mathematics. This includes the fields of algebra, number theory, topology, algebraic and analytic geometry and (geometric) analysis. Papers on other topics are welcome if they are of interest not only to specialists.

Translation: combinatorics papers are not welcome (as are papers in many other fields).  I think this is totally fair.  Nothing wrong with that.  Clearly, there are journals which publish mostly in combinatorics, and where papers in none of these fields would be welcome.  In fact there is a good historical reason for that.  Compare this with what Duke says on its website:

Published by Duke University Press since its inception in 1935, the Duke Mathematical Journal is one of the world’s leading mathematical journals. Without specializing in a small number of subject areas, it emphasizes the most active and influential areas of current mathematics.

See the difference?  They don't name their favorite areas!  How are the authors supposed to guess which these are?  Clearly, Combinatorics with its puny 1% share of Duke papers is not a subject area that Duke "emphasizes".  Compare it with 104 papers in Number Theory (16%) and 124 papers in Algebraic Geometry (20%) over the same period.  Should we conclude that in the past 10 years Combinatorics was not among "the most active and influential" areas, or perhaps not "mathematics" at all?  (Yes, some people think so.)  I have my own answer to this question, and I bet so do you…

Note also that things used to be different at Duke.  For example, exactly 40 years earlier, in the period 1963-1973, Duke published 47 papers in combinatorics out of 972 total, even though the area was only in its first stages of development.  How come?  The reason is simple: Leonard Carlitz was Managing Editor at the time, and he welcomed papers from a number of prominent combinatorialists active during that period, such as Andrews, Gould, Moon, Riordan, Stanley, Subbarao, etc., as well as many of his own papers.

So, ideally, what will happen to math journals?

That’s actually easy.  Here are my few recommendations and predictions.

1)  We should stop with all these geography-based journals.  That's enough.  I understand the temptation for each country, or university, or geographical entity to have its own math journal, but nowadays this is counterproductive and a cause for humor.  This parochial patriotism is perhaps useful in sports (or not), but it is nonsense in mathematics.  New journals should emphasize new or rapidly growing areas of mathematics underserved by current journals, not new locales where printing presses are available.

2)  Existing for-profit publishers should realize that with the growth of the arXiv and free online competitors, their business model is unsustainable.  Eventually all these journals will reorganize into non-profit institutions or foundations.  This does not mean that the journals will become electronic or free.  While some probably will, others will remain expensive, have many paid employees (including editors), and will continue to provide services to the authors, all supported by library subscriptions.  These extra services are their raison d'être, and will need to be broadly advertised.  The authors would learn not to be surprised by a quick one-line report from free journals, and to expect a serious effort from "expensive" journals.

3)  The journals will need to rethink their structure and scope, and try to develop their own unique culture and identity.  If you have two similar-looking free electronic journals which do not add anything to the papers other than their .sty file, the only difference is the editorial board and the history of published papers.  This is not enough.  All journals, except for the very top few, will have to start limiting their scope to emphasize the areas of their strength, and be honest and clear in advertising these areas.  Alternatively, other journals will need to reorganize and split their editorial boards into clearly defined fields.  Think  Proc. LMS,  Trans. AMS, or the brand new  Sigma, which basically operate as dozens of independent journals, with one to three handling editors in each.  While highly efficient, in the long run this strategy is also unsustainable, as it leads to general confusion and divergence in the quality of these sub-journals.

4)  Even among top mathematicians, there is plenty of confusion about the quality of existing mathematics journals, some of which go back many decades.  See e.g. a section of Tim Gowers's post about his views of the quality of various Combinatorics journals, since then helpfully updated and corrected.  But at least those of us who have been in the area for a while have the memory of the fortunes of previously submitted papers, whether our own, or our students', or our colleagues'.  Circumstantial evidence is better than nothing.  For newcomers or outsiders, such distinctions between journals are a mystery.  The occasional rankings (impact factor or this, whatever this is) are more confusing than helpful.

What needs to happen is a new system of awards recognizing the achievements of individual journals and/or editors, in their efforts to improve the quality of their journals, attract top papers in the field, arrange fast refereeing, etc.   Think a mixture of the Pulitzer Prize and the J.D. Power and Associates awards – these would be a great help in understanding the quality of the journals.  For example, the editors of the Annals clearly hustled to referee within a month in this case (even if motivated by PR purposes).  It's an amazing speed for a technical 50+ page paper, and this effort deserves recognition.

Full disclosure:  Of the journals I singled out, I have published once in both  JAMS  and  Duke.  Neither paper is in Combinatorics, but both are in Discrete Mathematics, when understood broadly.

On triple crowns in mathematics and AMS badges

September 9, 2012 1 comment

As some of you figured out from the previous post, my recent paper (joint with Martin Kassabov) was accepted to the Annals of Mathematics.  This being one of my childhood dreams (well, a version of it), I was elated for a few days.  Then I thought – normal children don’t dream about this kind of stuff.  In fact, we as a mathematical community have only community awards (as in prizes, medals, etc.) and have very few “personal achievement” benchmarks.  But, of course, they are crucial for the “follow your dreams” approach to life (popularized famously in the Last Lecture).  How can we make it work in mathematics?

I propose we invent some new “badges/statistics” which can be “awarded” by AMS automatically, based on the list of publications, and noted in the MathSciNet Author’s Profile.  The awardees can then proudly mention them on the department websites, they can be included in Wikipedia entries of these mathematicians, etc.   Such statistics are crucial everywhere in sports, and most are individual achievements.  Some were even invented to showcase a particular athlete.   So I thought – we can also do this.  Here is my list of proposed awards. Ok, it’s not very serious…  Enjoy!

Triple Crown in Mathematics

A paper in each of Annals of Mathematics, Inventiones, and Journal of AMS.  What, you are saying that “triple crown” is about horse racing?  Not true.  There are triple crowns in everything, from bridge to golf, from hiking to motor racing.  Let’s add this one to the list.

Other Journal awards

Some (hopefully) amusing variations on the Triple Crown.  They are all meant to be great achievements, something to brag about.

Marathon – 300 papers

Ultramarathon – 900 papers

Iron Man – 5 triple crown awards

Big Ten – 10 papers in journals where “University” is part of the title

Americana – 5 papers in journals whose title may only include US cities (e.g. Houston), states (e.g. Illinois, Michigan, New York), or other parts of American geography (such as Rocky Mountains, Pacific Ocean)

Foreign lands – 5 papers in journals named after non-US cities (e.g. Bordeaux, Glasgow, Monte Carlo, Moscow), and five papers in journals named after foreign countries.

Around the world – 5 papers in journals whose titles have different continents (Antarctica Journal of Mathematics does not count, but Australasian Journal of Combinatorics can count for either continent).

What’s in a word – 5 papers in single word journals: (e.g. Astérisque, Complexity, Configurations, Constraints, Entropy, IntegersNonlinearity, Order, Positivity, Symmetry).

Decathlon – papers in 10 different journals beginning with “Journal of”.

Annals track – papers in 5 different journals beginning with “Annals of”.

I-heart-mathematicians – 5 papers in journals with names of mathematicians (e.g. Bernoulli, Fourier, Lie, Fibonacci, Ramanujan)

Publication badges

Now, imagine AMS awarded badges the same way MathOverflow does, i.e. in bulk and for both minor and major contributions.  People would just collect them in large numbers, and perhaps spark controversies.  But what would they look like?  Here is my take:

enthusiast (bronze) – published at least 1 paper a year, for 10 years (can be awarded every year when applicable)

fanatic (silver) – published at least 10 papers a year, for 20 years

obsessed (gold) – published at least 20 papers a year, for 30 years

nice paper (bronze) – paper has at least 2 citations

good paper (silver) – paper has at least 20 citations

great paper (gold) – paper has at least 200 citations

famous paper (platinum) – paper has at least 2000 citations

necromancer (silver) – cited a paper which has not been cited for 25 years

asleep at the wheel (silver) – published an erratum to own paper 10 years later

destroyer (silver) – disproved somebody’s published result by an explicit counterexample

peer pressure (silver) – retracted own paper, purchased and burned all copies, sent cease and desist letters to all websites which illegally host it

scholar (bronze) – at least one citation

supporter (bronze) – cited at least one paper

writer (bronze) – first paper

reviewer (bronze) – first MathSciNet review

self-learner (bronze) – solved own open problem in a later paper

self-citer (bronze) – first citation of own paper

self-fan (silver) – cited 5 own papers at least 5 times each

narcissist (gold) – cited 15 own papers at least 15 times each

enlightened rookie (silver) – first paper was cited at least 20 times

dry spell (bronze) – no papers for the past 3 years, but over 100 citations to older papers over the same period

remission (silver) – first published paper after a dry spell

soliloquy (bronze) – no citation other than self-citations for the past 5 years

drum shape whisperer (silver) – published two new objects with exactly same eigenvalues

neo-copernicus (silver) – found a coordinate system to die for

gaussian ingenuity (gold) – found eight proofs of the same law or theorem

fermatist (silver) – published paper has a proof sketched on the margins

pythagorist (gold) – penned an unpublished and publicly unavailable preprint with over 1000 citations

homologist (platinum) – has a (co)homology named after

dualist (platinum) – has a reciprocity or duality named after

ghost-writer (silver) – published with a person who has been dead for 10 years

prince of nerdom (silver) – wrote a paper joint with a computer

king of nerdom (gold) – had a computer write a joint paper

sequentialist (gold) – authored a sequel of five papers with the same title

prepositionist (gold) – ten papers which begin with a preposition “on”, “about”, “toward”, or “regarding” (prepositions at the end of the title are not counted, but sneered at).

luddite (bronze) – paper originally written NOT in TeX or LaTeX.

theorist (silver) – the implied constant in the O(.) notation in the main result is greater than 10^80.

conditionalist (silver) – main result is conditional on some known conjecture (not awarded in Crypto and Theory CS until the hierarchy of complexity classes is established)

ackermannist (gold) – main result used a function which grows greater than any finite tower of 2’s.
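Since the premise is that the AMS could award these automatically from publication records, here is a toy sketch of how the mechanical ones might be computed.  Everything below is my own invention for illustration: the record format, function names, and thresholds are hypothetical, not an actual MathSciNet interface.

```python
# Toy sketch of automatic badge awarding, in the spirit of the list above.
# The record format and helper names are hypothetical inventions.

def citation_badge(citations):
    """Return the highest citation badge a paper qualifies for, or None."""
    tiers = [(2000, "famous paper (platinum)"),
             (200, "great paper (gold)"),
             (20, "good paper (silver)"),
             (2, "nice paper (bronze)")]
    for threshold, badge in tiers:
        if citations >= threshold:
            return badge
    return None

def productivity_badges(papers_per_year):
    """Award enthusiast/fanatic/obsessed from a {year: paper_count} dict,
    by looking for a long enough run of consecutive productive years."""
    def longest_streak(min_papers):
        best = run = 0
        for year in range(min(papers_per_year), max(papers_per_year) + 1):
            run = run + 1 if papers_per_year.get(year, 0) >= min_papers else 0
            best = max(best, run)
        return best

    badges = []
    if longest_streak(1) >= 10:
        badges.append("enthusiast (bronze)")
    if longest_streak(10) >= 20:
        badges.append("fanatic (silver)")
    if longest_streak(20) >= 30:
        badges.append("obsessed (gold)")
    return badges
```

Of course, the fun badges (destroyer, drum shape whisperer, peer pressure) would still require human judgement, which is perhaps the point.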

What about you?  Do you have any suggestions? 🙂

How do you solve a problem like the Annals?

August 19, 2012 3 comments

The Annals of Mathematics has been on my mind in the past few days (I will explain why some other day). More precisely, I was wondering

Does the Annals publish articles in Combinatorics? If not, why not?  If yes, what changed?

What’s coming is a lengthy answer to this question, and a small suggestion.

The numbers

I decided to investigate by searching the MR database on MathSciNet (what else?).  For our purposes, Combinatorics is defined as "Primary MSC = 05".  For a control group, I used Number Theory ("Primary MSC = 11").  I chose the break-point date to be the year 2000, a plausible dividing line between the "old days" and "modern times".  I got the following numbers.

All MR papers:  about 2.8 mil, of which 1 mil after 2000.   In the Annals: 5422, of which 742 after 2000.

Combinatorics papers:  about 88k, of which 41k after 2000.  In the Annals: 18, of which 13 after 2000.

Number Theory papers:  about 58k, of which 29k after 2000.   In the Annals: 225, of which 129 after 2000.

So any way you slice it – as a plain number, as a percentage of all papers, before 2000, after 2000, or in total – NT has about 10 times as many papers in the Annals as Combinatorics does.  The bias seems transparent, no?

Well, there is another way to look at these numbers.  MR finds that about 3% of all papers are in Combinatorics (which includes Graph Theory, btw).  The percentage of Combinatorics in the Annals is about 0.3%.  Oops…  But the percentage in recent years has clearly picked up – since 2000, the 13 Combinatorics papers constitute about 1.7% of all Annals papers.  Given that there are over 50 major “areas” of mathematics (according to the MSC), and that Combinatorics accounts for about 4.1% of all papers published since 2000, this is slightly below average, but not bad at all.

So what exactly is going on?  Has Combinatorics finally reached the prominence it deserves?  It took me a while to figure this out, so let me tell it slowly.

The people

Let’s look at individual combinatorialists.  Leonard Carlitz authored about 1000 papers, none in the Annals.  George Andrews wrote over 300 papers and Ron Graham over 450, many of them classics.  Both are former presidents of the AMS.  Again, none in the Annals.  The list goes on:  W.T. Tutte, Gian-Carlo Rota, Richard Stanley, Don Knuth, Doron Zeilberger, Béla Bollobás, János Pach, etc. – all extremely prolific, and none published a single paper in the Annals.  These are just off the top of my head, and in no particular order.

The case of Paul Erdős is perhaps the most interesting.  Between 1937 and 1955, he published 25 papers in the Annals, in a variety of fields (Analysis, Number Theory, Probability, etc.).  Starting in 1956, over the span of 40 years, he published over 1000 papers, and none in the Annals.  What happened?  You see, in 1956 he coauthored a paper with Alfréd Rényi titled “On some combinatorical problems”, his very first paper with MSC=05.  Their pioneering paper “On the evolution of random graphs” came just four years later.  Nothing was ever the same again.  Good bye, Annals!  Coincidence?  Maybe a little.  But from what I know of Erdős’s biography, his interests did shift to Combinatorics around that time…

Now, in NT and other fields, things are clearly different.  After many trials, the two champions I found are Manjul Bhargava (6 of his 21 papers were published in the Annals) and Hassler Whitney (19 of 65), both with about a 30% rate.

The answer

In fact, it is easier to list those who have published Combinatorics papers in the Annals.  Here is the list of all 18 papers, as it really holds the clue to answering our initial question.  A close examination shows that the 13 papers since 2000 are quite diverse and well connected to other areas of mathematics.  Some, but not most, are solutions of major open problems.  Some, but not most, are in the popular area of extremal/probabilistic combinatorics, etc.  Overall, a good healthy mix, even if a bit small in number.

Note that in other fields things are different.  Check out Discrete Geometry (52C), a beautiful and rapidly growing area of mathematics.  Of the roughly 1800 papers since 2000, only three appeared in the Annals: one retracted (by Biss), and two solutions of centuries-old problems (by Hales and by Musin) – an impossibly high standard.  One can argue that this sample is too small.  But think about it – why is it so small??

In summary, the answer to the first question is YES, the Annals does now publish Combinatorics papers.  It may look much warmer toward NT, but that’s neither important nor the original question.  As for what caused the change: it seems Combinatorics has become just like any other field.  It is diverse in its problems, has a long history, and has a number of connections and applications to other fields.  It may fall short on the count of faculty at some leading research universities, but overall it became “normal”.  Critically, when it comes to Combinatorics, the Annals’s old over-the-top criterion (“must be a solution of a classical problem”) is no longer applied.  A really important contribution is good enough now.  Just like in NT, I would guess.

The moral

I grew up (mathematically) in a world where the Annals viewed Combinatorics much the same way it viewed Statistics – as a field foreign to mathematics, with its own set of journals (heck, even its own annals).  People rarely if ever submitted their papers to the Annals, because neither did the leaders of the field.  Things have clearly changed for the better.  Now the Annals does publish papers in Combinatorics, and will probably publish more if more are submitted.  The main difference with Statistics is obvious – statisticians worked very hard to separate themselves from Mathematics, to create a separate community with their own departments, journals, grants, etc.  They largely succeeded.  Combinatorialists, on the other hand, worked hard to become part of mainstream Mathematics, and succeeded as well, to some extent.  The change of attitude at the Annals is just a reflection of that.

The over-representation of NT is also easy to explain.  I argued on MO that there is a bit of a first-mover advantage going on – that some fields of mathematics feel grandfathered in and push new fields away.  While this is clearly true, let’s ask who benefits.  Not the people in the area, who then face higher expectations (as in “What? No paper in the Annals yet?”).  While it may seem that, as a result, an applicant in NT gets an unfair advantage over one in Combinatorics, the hiring committees know better.  This is bad for the Annals as well.  In these uncertain times of hundreds of mathematics journals (including some really strange ones), various journal controversies, often misused and barely reasonable impact factors, and new journals appearing every day, it is good to have some stability.  Mathematics clearly needs at least one journal with universally high standards, and giving preference to a particular field does not help anyone.

The suggestion

It seems combinatorialists, and perhaps people in other fields, have yet to realize that the Annals is gradually changing in response to the changing state of the field(s).  Some remain unflinching in their criticism.  Notably, Zeilberger started calling it “snooty” in 1995, and continues now: a “paragon of mathematical snootiness” that will “only publish hard-to-understand proofs” (2007), “high-brow, pretentious” (2010).  My suggestion is trivial – ignore all that.  Combinatorialists should try to send their best papers to the top journals in Math, not just in the field (of which there are plenty).  I realize that the (relative) reward may seem rather small, there is a lot of waiting involved, and the chances of rejection are high, but still – this is important for the field.  There is clearly a lot of anxiety about this among job applicants, so untenured mathematicians are off the hook.  But the rest of us really should do this with our best work.  I trust the editors will notice, and eventually more Combinatorics papers will get published.

P.S.  BTW, it is never too late.  Of the 100+ papers by Victor Zalgaller, his first paper in the Annals appeared in 2004, when he was 84, exactly 65 years after his very first paper appeared in Russia in 1939.