In this post, I am going to tell a story of one paper and its authors, who misrepresented my paper and refused to acknowledge the fact. It’s also a story about the section editor of the Journal of Algebra, which published that paper and then ignored my complaints. In my usual wordy manner, I do not get to the point right away, and cover some basics first. If you want to read only the juicy parts, just scroll down…
What’s the deal with the references?
First, let’s talk about something obvious. Why do we do what we do? I mean, why do we study for many years how to do research in mathematics, read dozens or hundreds of papers, and think long thoughts until we eventually arrive at a good question? We then work hard, by trial and error, until we find a solution. Sometimes we do this in a matter of hours and sometimes it takes years, but we persevere. Then we write up the solution and submit it to a journal. Sometimes it gets rejected (who knew this was solved 20 years ago?), and sometimes it is sent back for revision with various lemmas to fix. We then revise the paper, and if all goes well it gets accepted. And published. Eventually.
So, why do we do all of that? For the opportunity to teach at a good university and derive a reasonable salary? Yes, sure, to some degree. But mostly because we like doing this. And we like having our work appreciated. We like going to conferences to present it. We like it when people read our paper and enjoy it or simply find it useful. We like it when our little papers form building blocks of bigger work, perhaps eventually helping to resolve an old open problem. All this gives us purpose, a sense of accomplishment, a “social capital” if you like fancy terms.
But all this hinges on a tiny little thing we call citations. They tend to come at the end, sometimes in footnote size, yet they are the primary vehicle for our goal. If we are uncited, ignored, all hope is lost. But even if we are cited, it matters how our work is cited. The context in which it was referenced is critically important. Sometimes our results are substantially used in the proof; those are GOOD references.
Yet often our papers are mentioned in a sentence “See [..] for the related results.” Sometimes this happens out of politeness or collegiality between authors, sometimes for the benefit of the reader (it can be hard navigating a field), and sometimes the authors are being self-serving (as in “look, all these cool people wrote good papers on this subject, so my work must also be good/important/publishable”). There are NEUTRAL references – they might help others, but not the authors.
Finally, there are BAD references: those which refer derogatorily to your work, or use it merely as a low benchmark which the new paper easily improves. Those which say “our bound is terribly weak, but it’s certainly better than Pak’s.” But the WORST references are those which misstate what you did, which diminish and undermine your work.
So for anyone out there who thinks the references are in the back because they are not so important – think again. They are of utmost importance – they are what makes the system work.
The story of our paper
This was in June 1997. My high school friend Sergey Bratus and I had an idea of recognizing the symmetric group Sn using the Goldbach conjecture. The idea was nice, and the algorithm was short and worked really fast in practice. We quickly typed it up and submitted it to the Journal of Symbolic Computation in September 1997. The journal gave us a lot of grief. First, they refused to seriously consider it, since the Goldbach conjecture in the referee’s words is “not like the Riemann hypothesis“, so we could not use it. Never mind that it had been verified for all n < 10^14, covering all possible values where such an algorithm could possibly be useful. So we rewrote the paper, adding a variation based on the ternary Goldbach conjecture, which was known for large enough values (and has now been proved in full).
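The Goldbach step itself is elementary, and easy to play with. Here is a toy sketch of mine (not code from the paper): for even n, find distinct odd primes p and q with p + q = n. Roughly speaking, an element of Sn with one p-cycle and one q-cycle then has order pq, and suitable powers of it give cycles one can build an isomorphism from.

```python
def is_prime(m):
    """Trial division; fine for the small degrees n where this is used."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def goldbach_split(n):
    """Return distinct odd primes (p, q) with p + q = n, or None.

    The Goldbach conjecture guarantees such a pair for every even
    n >= 8, and it has been verified far beyond any degree for which
    a black-box recognition algorithm would ever run.
    """
    for p in range(3, n // 2 + 1, 2):
        q = n - p
        if p != q and is_prime(p) and is_prime(q):
            return p, q
    return None

print(goldbach_split(100))  # (3, 97)
```

The search is linear in n with trial-division primality testing, which is more than fast enough in the range where the recognition algorithm is practical.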
The paper had no errors and resolved an open problem, but the referees were unhappy. One of them requested we change the algorithm to also work for the alternating group. We did. In the next round the same or another referee requested we cover the case of unknown n. We did. In the next round one referee requested we make a new implementation of the algorithm, now in GAP, and report the results. We did. Clearly, the referees did not want our paper to get published, but did not know how to say it. Yet we persevered. After 4 rounds of back and forth revisions the paper more than doubled in size (completely unnecessarily). This took two years, almost to the day, but the paper did get accepted and published. Within a year or two, it became a standard routine in both the GAP and MAGMA libraries.
[BP] Sergey Bratus and Igor Pak, Fast constructive recognition of a black box group isomorphic to Sn or An using Goldbach’s Conjecture, J. Symbolic Comput. 29 (2000), 33–57.
Until a few days ago I never knew what problem the referees actually had with our paper. Why did a short, correct and elegant paper need to become long and include cumbersome extensions of the original material for the journal to accept it? I was simply too inexperienced to know that this was not a difference in culture (CS vs. math). Read on to find out what I now realize.
After we wrote our paper, submitted it and publicized it on our websites and at various conferences, I started noticing strange things. In paper after paper in Computational Group Theory, roughly a half would not reference our paper, but would cite another paper by 5 people in the field which apparently was doing the same or similar things. I recall I wrote to the authors of this competing paper, but they wrote back that the paper was not written yet. To say I was annoyed would be an understatement.
In one notable instance, I confronted Bill Kantor (by email), who had helped us with good advice earlier. He gave an ICM talk on the subject and cited the competing paper but not ours, even though I had personally shown him the submitted preprint of [BP] back in 1997 and explained our algorithm. He replied that he did not recall whether we sent him the paper. I found and forwarded him my email to him with that paper. He replied that he probably never read the email. I then forwarded him his own reply to my original email. Out of excuses, Kantor simply did not reply. You see, the calf can never beat the oak tree.
Eventually, the competing paper was published, 3 years after ours:
[BLNPS] Robert Beals, Charles Leedham-Green, Alice Niemeyer, Cheryl Praeger, Ákos Seress, A black-box group algorithm for recognizing finite symmetric and alternating groups. I, Trans. AMS 355 (2003), 2097–2113.
The paper claims that a sequel II by the same authors is forthcoming, but it has yet to appear. It was supposed to cover the case of unknown n, which [BP] was required to cover, but I guess the same rules do not apply to [BLNPS]. Or maybe JSC is more selective than TAMS, one never knows… The never-coming sequel II will later play a crucial part in our story.
Anyhow, it turns out that the final result in [BLNPS] is roughly the same as in [BP]. Although the details are quite different, it wasn’t really worth the long wait. I quote from [BLNPS]:
The running time of constructive recognition in [BP] is about the same.
The authors then show incredible dexterity in an effort to claim that their result is somehow better, by finding minor points of difference between the algorithms and claiming their importance. For example, take a look at this passage:
The paper [BP] describes the case G = Sn, and sketches the necessary modifications for the case G = An. In this paper, we present a complete argument which works for both cases. The case G = An is more complicated, and it is the more important one in applications.
Let me untangle this. First, what’s more “important” in applications is never justified, and no sources are cited. Second, this says that BLNPS either haven’t read [BP] or are intentionally misleading, as the case of An there is essentially the same as Sn, and the timing is off by a constant. On the other hand, this suggests that [BLNPS] treats An in a substantively more complicated way than Sn. Shouldn’t that be an argument in favor of [BP] over [BLNPS], not the other way around? I could go on with other similarly dubious claims.
From this point on, multiple papers either ignored [BP] in favor of [BLNPS] or cited [BP] pro forma, somehow emphasizing [BLNPS] as the best result. For example, the following paper, with 3 out of 5 coauthors of [BLNPS], goes to great lengths touting [BLNPS] and never even mentions [BP].
[NPS] Alice Niemeyer, Cheryl Praeger, Ákos Seress, Estimation Problems and Randomised Group Algorithms, Lecture Notes in Math. 2070 (2013), 35–82.
When I asked Niemeyer how this could have happened, she apologized and explained: “The chapter was written under great time pressure.”
For an example of a more egregious kind, consider this paper:
[BLNPS2] Robert Beals, Charles Leedham-Green, Alice Niemeyer, Cheryl Praeger, Ákos Seress, Constructive recognition of finite alternating and symmetric groups acting as matrix groups on their natural permutation modules, J. Algebra 292 (2005), 4–46.
They unambiguously claim:
The asymptotically most efficient black-box recognition algorithm known for An and Sn is in [BLNPS].
Our paper [BP] is not mentioned anywhere near this claim, and is cited pro forma for other reasons. But just two years earlier, the exact same 5 authors stated in [BLNPS] that the timing is “about the same”. So, what happened to our algorithm in the intervening two years? It slowed down? Or perhaps the one in [BLNPS] got faster? Or, more plausibly, BLNPS simply realized that they can get away with more misleading referencing at JOA than TAMS would ever allow?
Again, I could go on with a dozen other examples of this phenomenon. But you get the idea…
My boiling point: the 2013 JOA paper
For years, I held my tongue, thinking that in the age of Google Scholar these self-serving passages are not fooling anybody, that anyone interested in the facts is just a couple of clicks away from our paper. But I was naive. This strategy of ignoring and undermining [BP] eventually paid off in this paper:
[JLNP] Sebastian Jambor, Martin Leuner, Alice Niemeyer, Wilhelm Plesken, Fast recognition of alternating groups of unknown degree, J. Algebra 392 (2013), 315–335.
The abstract says it all:
We present a constructive recognition algorithm to decide whether a given black-box group is isomorphic to an alternating or a symmetric group without prior knowledge of the degree. This eliminates the major gap in known algorithms, as they require the degree as additional input.
And just to drive the point home, here is the passage from the first paragraph in the introduction.
For the important infinite family of alternating groups, the present black-box algorithms [BP], [BLNPS] can only test whether a given black-box group is isomorphic to an alternating or a symmetric group of a particular degree, provided as additional input to the algorithm.
Ugh… But wait, our paper [BP] which they are citing already HAS such a test! And it’s not like it is hidden in the paper somehow – Section 9 is titled “What to do if n is not known?” Are the authors JLNP blind, intentionally misleading, or did they simply never read [BP]? Or is it the “great time pressure” argument again? What could possibly justify such an outrageous error?
Well, I wrote to JLNP, but none of them answered. Nor acknowledged our priority. Nor updated the arXiv posting to reflect the error. I don’t blame them – people without academic integrity simply don’t see the need for that.
My disastrous battle with JOA
Once I realized that JLNP are not interested in acknowledging our priority, I wrote to the Journal of Algebra asking “what can be done?” Here is a copy of my email. I did not request a correction, and was unbelievably surprised to hear the following from Gerhard Hiss, the Editor of the Section on Computational Algebra of the Journal of Algebra:
[..] the authors were indeed careless in this attribution.
In my opinion, the inaccuracies in the paper “Fast recognition of alternating groups of unknown degree” are not sufficiently serious to make it appropriate for the journal to publish a correction.
Although there is some reason for you to be mildly aggrieved, the correction you ask for appears to be inappropriate. This is also the judgment of the other editors of the Computational Algebra Section, who have been involved in this discussion.
I have talked to the authors of the paper Niemeyer et al. and they confirmed that they did not intend to disregard your contributions to the matter.
Thus I very much regret this unpleasent [sic.] situation and I ask you, in particular with regard to the two young authors of the paper, to leave it at that.
This email left me floored. So, I was graciously permitted by the JOA to be “mildly aggrieved“, but not more? Basically, Hiss is saying that the answer to my question “What can be done?” is NOTHING. Really?? And I should stop asking for just treatment by the JOA out of “regard to the two young authors”? Are you serious??? It’s hard to know where to begin…
As often happens in such cases, an unpleasant email exchange ensued. When I complained to Michel Broué, he responded that Gerhard Hiss is a “respectable man” and that I should search for justice elsewhere.
In all fairness to JOA, one editor did behave honorably. Derek Holt wrote to me directly. He admitted that he was the handling editor for [JLNP]. He writes:
Although I did not referee the paper myself, I did read through it, and I really should have spotted the completely false statement in the paper that you had not described any algorithm for determining the degree n of An or Sn in your paper with Bratus. So I would like to apologise now to you and Bratus for not spotting that. I almost wrote to you back in January when this discussion first started, but I was dissuaded from doing so by the other editors involved in the discussion.
Let me parse this, just in case. Holt is the person who implemented the Bratus–Pak algorithm in Magma. Clearly, he read the paper. He admits the error and our priority, and says he wanted to admit it publicly but other unnamed editors stopped him. Now, what about the alleged unanimity of the editorial board? What am I missing? Ugh…
What really happened? My speculation, part I. The community.
As I understand it, Computational Group Theory is a small, close-knit community, which as a result has a pervasive groupthink. Here is a passage from Niemeyer’s email to me:
We would also like to take this opportunity to mention how we came about our algorithm. Charles Leedham-Green was visiting UWA in 1996 and he worked with us on a first version of the algorithm. I talked about that in Oberwolfach in mid 1997 (abstract on OW Web site).
The last part is true indeed. The workshop abstracts are here. Niemeyer’s abstract did not mention Leedham-Green nor anyone else she could have meant by “us” (from the context – Niemeyer and Praeger), but let’s not quibble. The 1996 date is somewhat more dubious. It is contradicted by Niemeyer and Praeger themselves, who clarified the timeline in the talk they gave in Oberwolfach in mid 2001 (see the abstract here):
This work was initiated by intense discussions of the speakers and their colleagues at the Computational Groups Week at Oberwolfach in 1997.
Anyhow, let us accept that both algorithms were obtained independently, in mid-1997. It’s just that we finished our paper [BP] in 3 months, while it took BLNPS about 4 years, until 2001, to submit theirs.
Next quote from Niemeyer’s email:
So our work was independent of yours. We are more than happy to acknowledge that you and Sergey [Bratus] were the first to come up with a polynomial time algorithm to solve the problem [..].
The second statement is just not true, in many ways, nor is it our grievance: we only claim that [BP] has a practically superior and theoretically comparable algorithm to that in [BLNPS], so there is no reason at all to single out [BLNPS] over [BP], as is commonly done in the field. In fact, here is a quote from [BLNPS] fully contradicting Niemeyer’s claim:
The first polynomial-time constructive recognition algorithm for symmetric and alternating groups was described by Beals and Babai.
Now, note that Hiss, Holt, Kantor and all 5 authors BLNPS were at both the 1997 and the 2001 Oberwolfach workshops (neither Bratus nor I were invited). We believe that the whole community operates by “they staked a claim on this problem” and “what hasn’t happened at Oberwolfach, hasn’t happened.” Such principles make it easier for members of the community to treat BLNPS as pioneers of this problem, and to reference only them, even though our paper was published before [BLNPS] was submitted. Of course, such attitudes also remove the competitive pressure to write the paper quickly – where else in math, and especially CS, do people take 4–5 years(!) to write a technically elementary paper? (This last part was true also for [BP], which is why we could write it in under 3 months.)
In 2012, Niemeyer decided to finally finish the long announced part II of [BLNPS]. She did not bother to check what’s in our paper [BP]. Indeed, why should she – everyone in the community already “knows” that she is the original (co-)author of the idea, so [JLNP] could be written as if [BP] never happened. Fortunately for her, she was correct on this point, as neither the referees, nor the handling editor, nor the section editor contradicted the false statements right in the abstract and the introduction.
My speculation, part II. Why the JOA rebuke?
Let’s look at the timing. In the Fall of 2012, Niemeyer visited Aachen. She started collaborating with Professor Plesken from RWTH Aachen and his two graduate students, Jambor and Leuner. The paper was submitted to JOA on December 21, 2012, and the published version lists the affiliations of all authors but Jambor as Aachen (Jambor moved to Auckland, NZ before publication).
Now, Gerhard Hiss is a Professor at RWTH Aachen, working in the field. To repeat, he is the Section Editor of JOA on Computational Algebra. Let me note that [JLNP] was submitted to JOA three days before Christmas 2012 – according to a comment I received from Eamonn O’Brien of the JOA editorial board, apparently the same day on which Hiss and Niemeyer attended a department Christmas party.
My questions: is it fair for a section editor to be making a decision contesting results by a colleague (Plesken), two graduate students (Jambor and Leuner), and a friend (Niemeyer), all currently or recently from his department? Wouldn’t the immediate recusal by Editor Hiss and investigation by an independent editor be a more appropriate course of action under the circumstances? In fact, this is a general Elsevier guideline if I understand it correctly.
Well, I am at the end of the line on this issue. Public shaming is the only thing that can really work against groupthink. To spread the word, please LIKE this post, REPOST it, here on WP, on FB, on G+, forward it by email, or do whatever you think appropriate. Let’s make sure that whenever somebody googles these names, this post comes up on top of the search results.
P.S. Full disclosure: I have one paper in the Journal of Algebra, on an unrelated subject. Also, I am an editor of Discrete Mathematics, which, like JOA, is owned by the parent company Elsevier.
UPDATE (September 17, 2014): I am disallowing all comments on this post, as some submitted comments were crude and/or offensive. I do, however, agree with some helpful criticism. Some claimed that I crossed the line with some personal speculations, so I removed a paragraph. Also, Eamonn O’Brien clarified for me the inner workings of the JOA editorial board, so I removed my incorrect speculations on that point. Neither is germane to my two main complaints: that [BP] is repeatedly mistreated in the area, most notably in [JLNP], and that Editor Hiss should have recused himself from handling my formal complaint about [JLNP].
The question. A year ago, on this blog, I investigated Who computed Catalan numbers. Short version: it’s Euler, but many others did a lot of interesting work soon afterwards. I even made a Catalan Numbers Page with many historical and other documents. But I always assumed that the dubious honor of naming them after Eugène Catalan belonged to Netto. However, recently I saw this site, which suggested that it was E.T. Bell who named the sequence. This didn’t seem right, as Bell was both a notable combinatorialist and a mathematical historian. So I decided to investigate who did the deed.
First, I looked at Netto’s Lehrbuch der Combinatorik (1901). Although my German is minuscule and based on my knowledge of English and Yiddish (very little of the latter, to be sure), it was clear that Netto simply preferred counting Catalan’s brackets to triangulations and other equivalent combinatorial interpretations. He did single out Catalan’s work, but mentioned Rodrigues’s work as well. In general, Netto wasn’t particularly careful with the references, but in fairness neither were most of his contemporaries. In any event, he never specifically mentioned “Catalan Zahlen”.
Second, I checked the above-mentioned 1938 paper of Bell in the Annals. As I suspected, Bell mentioned “Catalan’s numbers” only in passing, and not in a way that suggests Catalan invented them. In fact, he used the term “Euler–Segner sequence” and provided careful historical and more recent references.
Next on my list was John Riordan‘s Math Review MR0024411, of this 1948 paper of Motzkin. The review starts with “The Catalan numbers…”, and indeed might be the first time this name was introduced. However, it is naive to believe that this MR alone moved many people to use this expression over the arguably more cumbersome “Euler–Segner sequence”. In fact, Motzkin himself is very careful to cite Euler, Cayley, Kirkman, Liouville, and others. My guess is that this review was immediately forgotten, but was a harbinger of things to come.
Curiously, Riordan did this again in 1964, in a Math Review of an English translation of a popular mathematics book by A.M. Yaglom and I.M. Yaglom (published in Russian in 1954). The book mentions the sequence in the context of counting triangulations of an n-gon, without calling it by any name, but Riordan recognizes them and uses the term “Catalan numbers” in the review.
The answer. To understand what really happened, see this Ngram chart. It clearly shows that the term “Catalan numbers” took off after 1968. What happened? Google Books immediately answers – Riordan’s Combinatorial Identities was published in 1968 and it used “the Catalan numbers”. The term took off and became standard within a few years.
What gives? It seems, people really like to read books. Intentionally or unintentionally, monographs tend to standardize the definitions, notations, and names of mathematical objects. In his notes on Mathematical writing, Knuth mentions that the term “NP-complete problem” became standard after it was used by Aho, Hopcroft and Ullman in their famous Data Structures and Algorithms textbook. Similarly, Macdonald’s Symmetric Functions and Hall Polynomials became a standard source of names of everything in the area, just as Stanley predicted in his prescient review.
The same thing happened with Riordan’s book. Although it may now be viewed as tedious, somewhat disorganized and unnecessarily simplistic (Riordan admitted to disliking differential equations, complex analysis, etc.), back in the day there was nothing better. It was lauded as “excellent and stimulating” in P.R. Stein’s review, which continued: “Combinatorial identities is, in fact, a book that must be read, from cover to cover, and several times.” We are guessing it had a tremendous influence on the field and cemented the terminology and some of the notation.
In conclusion. We don’t know why Riordan chose the term “Catalan numbers”. As Motzkin’s paper shows, he clearly knew of Euler’s pioneering work. Maybe he wanted to honor Catalan for his early important work on the sequence. Or maybe he just liked the way it sounds. But Riordan clearly made a conscious decision to popularize the term, starting back in 1948, and he eventually succeeded.
UPDATE (Feb. 8, 2014) Looks like Henry Gould agrees with me (ht. Peter Luschny). He is, of course, the author of a definitive bibliography of Catalan numbers. Also, see this curious argument against naming mathematical terms after people (ht. Reinhard Zumkeller).
UPDATE (Aug 25, 2014): I did more historical research on the subject which is now reworked into an article History of Catalan Numbers.
Holiday season offers endless opportunities to celebrate, relax, rest, reflect and meditate. Whether you are enjoying a white Christmas or a palm tree Chanukkah, the mathematician in you might wonder if there is more to the story, some rigorous food for thought, if you will. So here is a brief guide to the holidays for the mathematically inclined.
1) Christmas tree lectures
I have my own Christmas tree tradition. Instead of getting one, I watch Don Knuth‘s new “Christmas tree lecture“. Here is the most recent one. But if you have time and enjoy binge-watching, here is the archive of past lectures (click on “Computer musings” and select December dates). If you are one of my Math 206 students, compare how Knuth computed the number of spanning trees in a hypercube (in a 2009 lecture) with the way Bernardi did it in his elegant paper.
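If you want to check such a count for yourself, the matrix-tree theorem reduces it to a determinant: the number of spanning trees of a graph equals any cofactor of its Laplacian. Here is a small sketch of mine (not Knuth’s or Bernardi’s computation) for the 3-dimensional cube, using exact rational arithmetic:

```python
from fractions import Fraction

def hypercube_laplacian(n):
    """Laplacian of Q_n: vertices are n-bit strings, edges flip one bit."""
    N = 1 << n
    L = [[0] * N for _ in range(N)]
    for v in range(N):
        L[v][v] = n                      # every vertex has degree n
        for i in range(n):
            L[v][v ^ (1 << i)] = -1      # neighbor differing in bit i
    return L

def det(M):
    """Determinant via Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    n, d = len(A), Fraction(1)
    for i in range(n):
        piv = next((r for r in range(i, n) if A[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            A[i], A[piv] = A[piv], A[i]
            d = -d
        d *= A[i][i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
    return d

# Matrix-tree theorem: delete one row and the matching column.
L = hypercube_laplacian(3)
minor = [row[1:] for row in L[1:]]
print(det(minor))  # 384 spanning trees in Q_3
```

The answer matches the closed form one gets from the Laplacian eigenvalues of Q_n (eigenvalue 2k with multiplicity C(n,k)): for n = 3 this is (1/8) · 2³ · 4³ · 6 = 384.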
2) Algorithmic version of Fermat’s Christmas theorem
Apparently, Fermat’s theorem on sums of two squares first appeared in Fermat’s long letter to Mersenne, written on Christmas Day (December 25, 1640). For background, see the Catalan and French language Wikipedia articles. Zagier’s “one-sentence proof” is well known and available here. Long assumed to be mysterious, it was nicely explained by Elsholtz. More mysteriously, a related proof also appears in a much earlier paper (in French) by the Russian-American mathematician J. Uspensky (ht. Ustinov). Can somebody explain to me what’s in that paper?
Interestingly, there is a nice polynomial time algorithm to write a prime p ≡ 1 mod 4 as a sum of two squares, but I could not find a clean version on the web. If you are curious, start with Cornacchia’s algorithm for more general quadratic Diophantine equations, and read its various proofs (advanced, elementary, short, textbook, in French). Then figure out why Fermat’s special case can be done in (probabilistic) polynomial time.
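To give a flavor, here is one standard variant, essentially Cornacchia’s algorithm specialized to x² + y² = p (the sketch is mine, not taken from any of the sources above): find a square root of −1 mod p via a random quadratic non-residue, then run the Euclidean algorithm until the remainder drops below √p.

```python
import random
from math import isqrt

def two_squares(p):
    """Write a prime p ≡ 1 (mod 4) as p = a² + b²."""
    assert p % 4 == 1
    # A square root of -1 mod p: if c is a quadratic non-residue,
    # then x = c^((p-1)/4) satisfies x² ≡ c^((p-1)/2) ≡ -1 (mod p).
    # A random c works with probability 1/2, hence "probabilistic".
    while True:
        c = random.randrange(2, p)
        x = pow(c, (p - 1) // 4, p)
        if x * x % p == p - 1:
            break
    # Hermite-Serret step: run the Euclidean algorithm on (p, x);
    # the first remainder below sqrt(p) is one leg of the representation.
    a, b = p, x
    while b * b > p:
        a, b = b, a % b
    return b, isqrt(p - b * b)

print(two_squares(13))  # (3, 2): 9 + 4 = 13
```

Everything here is polynomial in log p except the random search for a non-residue, which succeeds in two trials on average; this is exactly the probabilistic caveat mentioned above.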
3) Dreidel game analysis
The dreidel is a well known Chanukkah game with simple rules. Less known is the mathematics behind it. Start with this paper explaining that the game is unfair, and continue with this paper explaining how to fix it (on average). Then proceed to this “squared nuts” conjecture by Zeilberger on the expected length of the game (I have a really good joke here which I will suppress). This conjecture was eventually resolved in this interesting paper, definitely worth the $25 promised by Zeilberger.
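If you would rather see the unfairness than read about it, below is a crude Monte Carlo sketch of mine. The rules encoded are one common variant (nun: nothing; gimel: take the pot; hei: take half, rounded up; shin: pay one in), and all names and parameters are my illustration, not anything from the papers above.

```python
import random

FACES = ["nun", "gimel", "hei", "shin"]

def play(num_players=4, start=10, spins=40, rng=None):
    """One game: ante, spin in turn, re-ante whenever the pot empties.

    Returns (final wealths, final pot). Wealth may go negative;
    player elimination is ignored to keep the sketch short.
    """
    rng = rng or random.Random()
    wealth = [start - 1] * num_players   # everyone antes one coin
    pot = num_players
    for t in range(spins):
        p = t % num_players              # fixed spinning order
        face = rng.choice(FACES)
        if face == "gimel":              # take the whole pot
            wealth[p] += pot
            pot = 0
        elif face == "hei":              # take half, rounded up
            half = (pot + 1) // 2
            wealth[p] += half
            pot -= half
        elif face == "shin":             # pay one coin in
            wealth[p] -= 1
            pot += 1
        if pot == 0:                     # everyone re-antes
            for q in range(num_players):
                wealth[q] -= 1
            pot = num_players
    return wealth, pot

# Seat-by-seat average wealth over many games.
rng = random.Random(2014)
games, totals = 20000, [0] * 4
for _ in range(games):
    w, _ = play(rng=rng)
    for i in range(4):
        totals[i] += w[i]
print([round(t / games, 3) for t in totals])
```

With enough games the seat averages should separate, which is a crude way to see the first-player effect; whether this variant’s advantage matches the exact analysis in the papers, I leave to the reader.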
4) Santa Claus vs beautiful mathematics
Most readers of this blog are aware of the existence of beautiful mathematics. I can only speculate that a clear majority of them would probably deny the existence of Santa Claus. However, there are millions of (mostly very young) people who believe the exact opposite on both counts. Having grown up in the land of Ded Moroz, we have little to say on the great Santa debate, but we believe it’s worth carefully examining the Santa proponents’ views. Could it be that their arguments can be helpful in our constant struggle to spread the gospel of beautiful mathematics?
We recommend reading the “Yes, Virginia, there is a Santa Claus“ column (fully available here), which was originally published by the New York Sun in 1897. In fact, read it twice, three times, even four times. I am reluctant to quote from it because it’s short and deserves to be read in full. But note this passage: “The most real things in the world are those that neither children nor men can see.” The new Jewish editor of the Sun reports that the rabbis he consulted think this is “a joyous articulation of faith”. Maybe. But to me this evokes some beautiful advanced mathematics.
You see, when mathematicians try to explain that mathematics is beautiful, they tend to give simple visually appealing examples (like here). But I suggest closing your eyes and imagining beautiful mathematical objects, such as the 600-cell, the Poincaré homology sphere, the Lie group E8, the Monster group, or many other less known higher dimensional constructions such as the associahedron, the Birkhoff polytope, Walz’s flexible cross-polyhedron, etc. Certainly all of these can be seen by “neither children nor men”. Yet we can prove that they “are real”. We can then spend years studying and generalizing them. This knowledge alone can bring joy to every holiday season…
HAPPY HOLIDAYS EVERYONE! С НОВЫМ ГОДОМ!
I was wondering what you think about a claim that I sometimes hear in this context – that one of the problems is that universities train too many Ph.D. students. That with a smaller number of math Ph.D. students the above will be less of a problem, and also that this way there will be a smaller number of people dealing with less “serious/important” topics (whatever this means exactly).
This question is certainly relevant to the “adjunct issue”. I have heard it before, but always found it somewhat confusing. Specifically for the US, with its market based system, who exactly is supposed to decrease the number of Ph.D.’s? The students themselves, who should realize how useless the doctoral degree is and stop applying? The individual professors, who should refuse to accept graduate students? The universities, acting together in some kind of union? The government? All these questions are a bit different and need untangling.
I was going to write a brief reply, but after Adam asked this question I found yet another example of lazy journalism by Slate’s “education columnist” Rebecca Schuman, who argues:
It is, simply put, irresponsible to accept so many Ph.D. students when you know graduate teaching may well be the only college teaching they ever do.
Of course, Dr. Schuman already has a Ph.D. (from our neighbor UC Irvine) — she just wants others not to get one, perhaps to avoid her own fate of being an adjunct at the University of Missouri. Needless to say, I cannot disagree more. Let me explain.
Universities are not allowed to form a cartel
Let’s deal with the easy part. If the American universities somehow conspired to limit or decrease the number of graduate students they accept, this would be a classical example of anti-competitive behavior. Simply put, the academia would form a cartel. A textbook example of a cartel is OPEC, which openly conspires to increase or decrease oil production in order to control world energy prices. In the US, such activity is against the law due to the Sherman Act of 1890, and the government and courts have been ruthless in its application (cf. European law to that effect).
One can argue that universities are non-profit institutions and by definition would derive no profit should they conspire, but the law makes no such distinction, and this paper (co-authored by the celebrity jurist and economist Richard Posner) supports this approach. And for those who think that only giants such as Standard Oil, AT&T or Microsoft have to worry about anti-trust, the government offers plenty of examples of going after small time cartels. A notable recent case is Obama’s FTC going after the Music Teachers National Association, which had a non-poaching of music students recommendation in its “code of ethics”. Regardless of what you think of that case, it is clear that the universities would never try to limit the number of graduate students in a similar manner.
Labor supply and demand
As legions before her, Schuman laments that prospective grad students do not listen to “reason”:
Expecting wide-eyed, mind-loving intellectuals to embrace the eventual realities of their situations has not worked—yes, they should know better, but if they listened to reason, they wouldn’t be graduate students in the first place. Institutions do know better, so current Ph.D. recruitment is dripping with disingenuousness.
But can you really be “wide-eyed” in the internet era? There is certainly no shortage of articles by both journalists and academics on the “plight” of academic life – she herself links to sites which seem pretty helpful in informing prospective graduate students (yes, even the link to the Simpsons is helpful). I have my own favorites: this, that, that and even that. But all of these are misleading at best and ridiculous at worst. When I mentioned them on MO, José Figueroa-O’Farrill called them a “parallel universe”, for a good reason.
You see, in this universe people make (mostly) rational decisions, wide-eyed or not. The internet has simply closed the information gap. Faced with poor future income prospects, graduate students either choose to go elsewhere or demand better conditions at the universities. Faced with a shrinking pool of candidates, the universities make an effort to make their programs more attractive, and strive to expand the applicant pool by reaching out to underrepresented groups, foreign students, etc. Eventually an equilibrium is reached and labor supply meets demand, as it always does. Asking the universities (who “do know better”) to let the equilibrium settle at a lower point is equivalent to asking that Ph.D. programs become less attractive. And I thought Schuman cared…
Impact of government actions
Now, when it comes to distorting the labor market, the government is omnipotent and with a single bill can decrease the number of graduate students. Say Congress tomorrow enacts a law mandating a minimum wage of $60,000 a year for all graduate students. Of course, large universities have small armies of lawyers and accountants who would probably figure out how to artificially hike up graduate tuition and count it as income, but let’s assume that the law is written to prevent any loopholes. What would happen next?
Obviously, the universities wouldn’t be able to afford that many graduate students. Their number would plunge. The universities would have to cut back on TA-led recitation/discussion sessions and probably hire more adjuncts to compensate for the loss. In time, this would lower the quality of education or lead to huge tuition increases, or most likely a little bit of both. The top private universities that want to maintain small classes would become completely unaffordable for the middle class. Meanwhile, the poorer state universities would commodify their education by creating huge classes with multiple-choice machine testing, SAT-style, further diminishing student-faculty interaction. In fact, to compensate for their increased cost to universities, graduate students would be asked to do more teaching, thus extending their time-to-degree and decreasing graduation rates.
Most importantly, this would probably do nothing to decrease competition for tenure-track jobs, since the academic market is international. In other words, a decreasing American supply would be immediately compensated by an increasing European supply, aided by an inflow from emerging markets (the ever-increasing quantity and quality of Ph.D. production in Asia). In fact, there is plenty of evidence that this would have a sharply negative effect on the prospects of American students, as decreased competition results in weaker research work (see below).
In summary, who exactly would be the winners of this government action? I can think of only one group: lazy journalists who would have many new reasons to write columns complaining about the new status quo.
The out-of-control academics
Let’s go back to Schuman’s “it is [..] irresponsible to accept so many Ph.D. students” quote I mentioned above, and judge it on its moral merits. Irresponsible? Really? Are you serious? Is it also irresponsible to give so many football scholarships to college students if only a few of them can make it to the NFL? Is it also irresponsible to have so many acting schools given that so few of the students become movie stars? (see this list in my own little town). In the previous post I already explained how graduate schools are apprenticeship programs. Graduate schools give students a chance and an opportunity to succeed. Some students do indeed succeed, while others move on to something else, sometimes succeeding beyond expectations (see e.g. this humorous list).
What’s worse, Schuman implicitly assumes that Ph.D. study can only be useful if it leads directly to a professorship. This is plainly false. I notice from her CV that she teaches “The World of Kafka” and “Introduction to German Prose”. Excellent classes, I am sure, but how exactly are the students supposed to use this knowledge in real life? Start writing in German, or become a literary agent? Please excuse me for being facetious – I hope my point is clear.
Do fewer students mean better math? (on average)
In effect, this is Adam’s speculation at the end of his question, where he suggested that perhaps fewer mathematics graduate students would decrease the number of “less ‘serious/important’ topics”. Unfortunately, the evidence suggests the opposite. Less competition is a result of fewer rewards at stake, and consequently less effort is required to succeed. As a result, a decrease in the number of math graduate students would lead to less research progress and an increase in “less important” work, to use the above language.
To see this clearly, think of sports. Compare this list of Russian Major League Baseball players with that of Japanese players. What explains the disparity? Are more Japanese men born with a gift for baseball? The answer is obvious. Baseball is not very popular in Russia. Even the best Russian players cannot compete in the American minor leagues. Things are very different in Japan, where baseball is widely popular, so talented players make every effort to succeed rather than opt for a more popular sport (soccer and hockey, in the Russian case).
So, what can be done, if anything?
To help graduate students, that is. I feel there is a clear need for more resources on the non-academic options available to graduate students (like this talk or this article). Institutionally, we should make it easier to cross-register at other schools within the university and at nearby universities. For example, USC graduate students can take UCLA classes, but I have never seen anyone actually do that. While at Harvard, I took half a dozen classes at MIT – it was easy to cross-register and I got full credit.
I can’t think of anything major the universities can do. Government can do miracles, of course…
P.S. I realize that the wage increase argument has a “fighting straw men” feel. However, other government actions interfering with the market are likely to bring similarly large distortions of the academic market, with easily predictable negative consequences. I can think of a few more such unfortunate efforts, but the burden is not on me but on the “reformers” to propose what exactly they want the government to do.
P.P.S. We sincerely wish Rebecca Schuman every success in her search for a tenure track appointment. Perhaps, when she gets such a position, she can write another article with a slightly sunnier outlook.
It’s been a while since I wanted to rant. Since the last post, really. Well, I was busy. But the time has come to write several posts.
This post is about a number of recent articles lamenting the prevalence of low-paid adjuncts at many universities. To sensationalize the matter, comparisons were made with drug cartels and Ponzi schemes. Allegedly, this inequality is causing poverty and even homelessness and death. I imagine reading these articles can be depressing, but it’s all a sham. Knowingly or not, the journalists are perpetuating false stereotypes about what professors really do. They seem to be doing their usual lazy work, preying on readers’ compassion and profound misunderstanding of the matter.
Now, if you are reading this blog, I am assuming you know exactly what professors do (Hint: not just teaching). But if you don’t, start with this outline by my old friend Daniel Liberzon, and proceed to review any or all of these links: one, two, three, four. When you are done, we can begin to answer our main semi-serious question:
What is academia, really, if it’s not a drug cartel or a Ponzi scheme?
I can’t believe this trivial question is difficult for some people and needs a lengthy answer, but here it is anyway.
Academia rewards industriousness and creativity
This might seem obvious – of course it does! These are the main qualities needed to achieve success in research. But reading the above news reports, one might think that a Ph.D. is like a lottery ticket – the winnings are rare and random. What I am trying to say is that academia can be compared with other professions which involve both qualities. To make the point, take sculpture.
There are thousands of professional sculptors in the United States. The figures vary greatly, but the same holds for the number of mathematicians, so let’s leave that aside. The average salary of sculptors seems to be within reach of the average salary in the US, and definitely below that of an average person with a bachelor’s degree. On the other hand, top sculptors are all multimillionaires. For example, a sculpture by Jeff Koons recently sold for $58.4 million. But at least it looked nice. I will never understand the success of Richard Serra, whose work is just dreadful. You can see some of his work at UCLA (picture), or at LACMA (picture). Or take the celebrated and much-despised ten-million-dollar man Dale Chihuly, who shows what he calls “art” just about everywhere… But reasonable people can disagree on this, and who am I to judge anyway? My opinion does not matter, nor does that of almost anyone else. What’s important is that some people with expertise value these creative works enough to pay a lot of money for them. These sculptors’ talent is clearly recognized.
Now, should we believe, on the basis of this salary disparity, that sculpture is a Ponzi scheme, with top earners basically robbing all the other sculptors of a good living? That would be preposterous, of course. Same with most professors. Just because the general public cannot understand and evaluate their research work and creativity does not mean it’s not there and should not be valued accordingly.
Academia is a large apprenticeship program
Think of graduate students, who are traditionally overworked and underpaid. Some make it to graduation with a Ph.D. and eventually become tenured professors. Many, perhaps most, do not. Sounds like a drug cartel to you? Nonsense! This is exactly how apprenticeship works, and how it has worked for centuries in every guild. In fact, some modern-day guilds don’t pay anything at all.
Students enter the apprenticeship/graduate program in hopes of learning from the teacher/professor and succeeding in their studies. The very best do succeed. For example, this list of Rembrandt‘s pupils/assistants reads somewhat like this list of Hilbert‘s students. Unsurprisingly, some are world famous, while others are completely forgotten. So it’s not about cheap labor as in drug cartels – this is how apprenticeships normally work.
Academia is a big business
Think of any large corporation. There are many levels of management: low, mid, and top-level. Arguably, all tenured and tenure-track faculty are low-level managers; chairs and other department officers (DGS, DUS, etc.) are mid-level; while deans, provosts and presidents/chancellors are top-level managers. In the US, there is also legal precedent supporting the classification of professors as management (e.g. professors are not allowed to unionize, in contrast with adjunct faculty). And deservedly so. Professors hire TA’s, graders, adjuncts and support staff, choose curricula, are responsible for all grades, run research labs, serve as PI’s on federal grants, and elect the mid-level management.
So, why so many levels? Take UCLA. According to the 2012 annual report, we operate on 419 acres, have about 40 thousand students, 30 thousand full-time employees (this includes the UCLA hospitals), and $4.6 billion in operating revenue (of which tuition is only $580 million), but only about 2 thousand ladder (tenure and tenure-track) faculty. For comparison, the beloved but highly secretive Trader Joe’s has about $8 billion in revenue, over 20 thousand employees, and about 370 stores, each with 50+ employees and its own mid- and low-level management.
Now that you are conditioned to think of universities as businesses and professors as managers, is it really all that surprising that regular employees like adjuncts get paid much less? This works the same way as for McDonald’s store managers, who get paid about three times as much as regular employees.
The higher echelons of academia are a research factory with a side teaching business
Note that there is a reason students want to study at research universities rather than at community colleges. It’s because these universities offer many other, more advanced classes, research-oriented labs, seminars, field work, etc. In fact, research and research-oriented teaching, rather than service teaching, is really the main business.
Think revenue. For example, UCLA derives 50% more revenue from research grants than from tuition. Places like MIT give out so many scholarships that they are losing money on teaching (see this breakdown). American universities cannot quit undergraduate education, of course, but they are making a rational decision to outsource low-level service teaching to outsiders, who can do the same work more cheaply. For example, I took English in Moscow, ESL at a community college in Brooklyn, French at Harvard, and Hebrew at the University of Minnesota. While some instructors were better than others, there was no clear winner, as the experience was about the same.
So not only are adjunct salaries low for a reason, keeping them low is critical to avoid hiring more regular faculty and to prevent further tuition inflation. The next time you read about adjuncts barely making a living wage, compare this to the notorious Bangalore call centers and how much people make over there (between $100 and $250 a month).
Academia is a paradise of equality
College professors differ from drug gangsters not only in the level of violence, but also in the remarkable degree of equality between universities (though not between fields!). Consider for example this table of average full professor salaries at the top universities. The near $200,000 a year may seem high, but note that this is only twice the salary of average faculty at an average college. Given that most of these top universities are located in uber-expensive metropolitan areas (NYC, Boston, San Francisco, Los Angeles, etc.), the effect is diminished even further.
Compare this with other professions. Forget the sculptors mentioned above, whose pay ratios can go into the thousands; let’s take the relatively obscure profession of opera singer (check how many you know from this list). Like academia and unlike sculpture, operas are heavily subsidized by governments and large corporations. Still, perhaps unsurprisingly, there is much greater inequality than in academia. While some popular singers like Dmitri Hvorostovsky make over $3 million a year, the average salary is about $100,000 a year, giving a ratio of 30+.
In other words, given that some professors are much better than others when it comes to research (not me!), one can argue that they are being underpaid to subsidize the lackluster efforts of others. No wonder the top academics suffer from status-income disequilibrium. This is the opposite of the “winner takes all” behavior invoked by the journalists in an effort to explain the adjuncts’ plight.
Academia is an experience
People come to universities to spend years studying, and they want to enjoy those years. They want to hear famous authors and thinkers, learn basic skills, hear life-changing stories, make lasting friendships, play sports and simply have fun. Sometimes this does not work out, but we are good at what we do (colleges have been perfecting their craft for hundreds of years). Indeed, many students take away with them a unique, deeply personal experience. Take my story. While at Moscow University, I heard lectures by Vladimir Arnold, attended Gelfand’s Seminar, and even went to a public lecture by President Roh Tae-woo. It was fun. While at Harvard, I took courses from Raoul Bott and Gian-Carlo Rota (at MIT), audited courses by such non-math luminaries as Stephan Thernstrom and William Mills Todd, III, and went to public lectures by people like Tim Berners-Lee, all unforgettable.
So this is my big NO to those who want to replace tenured faculty with adjuncts, level the academic salaries, and commodify the education. This just would not work; it is akin to calling for the abolition of haute cuisine in favor of more fast food. In fact, nobody really wants to do either of these. Inexpensive education is already readily available in the form of community colleges. Meanwhile, students apply in large numbers trying to get into a place like UCLA, which offers a wide range of programs and courses. And it’s definitely not because of our celebrity adjuncts.
Academia is many things to many people. There are many important reasons, both substantive and economic, why the ladder faculty are paid substantially better than TA’s and adjuncts. But at no point does academia resemble Ponzi schemes and drug cartels, which are famous for creating economic devastation and inequality (and are, um, illegal). If anything, academia is the opposite, as it creates economic opportunities and evens the playing field. And to those educational reformers who think they know better: remember, we have heard it all before…
I tend to write longish posts, in part for the sake of clarity, and in part because I can – it is easier to express yourself in long form. However, brevity has its own benefits, as it forces the author to give succinct summaries of often complex and nuanced views. Conversely, the lack of such summaries can give a critic plausible deniability about understanding the basic points you are making.
This is the second time I have been “inspired” by the Owl blogger, who has a TL;DR-style response to my blog post and to the rather lengthy list of remarkable quotations that I compiled. So I decided to make the following Reader’s Digest-style summaries of this list and several blog posts.
1) Combinatorics has been sneered at for decades and struggled to get established
In the absence of a History of Modern Combinatorics monograph, this is hard to prove. So here are selected quotes from the above-mentioned quotation page. Of course, one should read them in full to appreciate and understand the context, but for our purposes these will do.
Combinatorics is the slums of topology – Henry Whitehead
Scoffers regard combinatorics as a chaotic realm of binomial coefficients, graphs, and lattices, with a mixed bag of ad hoc tricks and techniques for investigating them. [..] Another criticism of combinatorics is that it “lacks abstraction.” The implication is that combinatorics is lacking in depth and all its results follow from trivial, though possibly elaborate, manipulations. This argument is extremely misleading and unfair. – Richard Stanley (1971)
The opinion of many first-class mathematicians about combinatorics is still in the pejorative. While accepting its interest and difficulty, they deny its depth. It is often forcefully stated that combinatorics is a collection of problems which may be interesting in themselves but are not linked and do not constitute a theory. – László Lovász (1979)
Combinatorics [is] a sort of glorified dicethrowing. – Robert Kanigel (1991)
This prejudice, the view that combinatorics is quite different from ‘real mathematics’, was not uncommon in the twentieth century, among popular expositors as well as professionals. – Peter Cameron (2001)
Now that the readers can see where the “traditional sensitivities” come from, the following quote must come as a surprise. Even more remarkable is that it has become conventional wisdom:
Like number theory before the 19th century, combinatorics before the 20th century was thought to be an elementary topic without much unity or depth. We now realize that, like number theory, combinatorics is infinitely deep and linked to all parts of mathematics. – John Stillwell (2010)
Of course, the prejudice has never been limited to Combinatorics. Imagine how an expert in Partition Theory and q-series must feel after reading this quote:
[In the context of Partition Theory] Professor Littlewood, when he makes use of an algebraic identity, always saves himself the trouble of proving it; he maintains that an identity, if true, can be verified in a few lines by anybody obtuse enough to feel the need of verification. – Freeman Dyson (1944), see here.
2) Combinatorics papers have been often ostracized and ignored by many top math journals
This is a theme in this post about the Annals, this MO answer, and a smaller theme in this post (see the Duke paragraph). This bias against Combinatorics is still ongoing and hardly a secret. I argue that on the one hand, the situation is (slowly) changing for the better. On the other hand, if some journals keep up their proud tradition of rejecting the field, that’s OK, really. If only they were honest and clear about it! To those harboring strong feelings on this, listening to some breakup music could be helpful.
3) Despite inherent diversity, Combinatorics is one field
In this post, I discussed how I rewrote the Combinatorics Wikipedia article, largely as a collection of links to its subfields. In a more recent post mentioned earlier, I argue why it is hard to define the field as a whole. In many ways, Combinatorics resembles a modern nation, united by a language, a culture and a common history. Although its borders are not easy to define, suggesting that it’s not a separate field of mathematics is an affront to both its history and its reality (see the two sections above). As any political scientist will tell you, national borders can be unhelpful, but they are there for a reason. Wishing borders away is a bit like the French “race ban” – an imaginary approach to resolving real problems.
Gowers’s “two cultures” essay is an effort to describe and explain the cultural differences between Combinatorics and other fields. The author should be praised both for the remarkable essay, and for the bravery of raising the subject. Finally, consider the Owl’s attempt to divide Combinatorics into a “conceptual” part, which “has no internal reasons to die in any foreseeable future”, and the rest, which “will remain a collection of elementary tricks, [..] will die out and forgotten [sic].” I am assuming the Owl means here most of “Hungarian combinatorics”, although to be fair, the blogger leaves some wiggle room. Either way, “First they came for Hungarian Combinatorics” is all that came to mind.
Recently, there has been plenty of discussion of math journals: their prices, behavior, technology and future. I have been rather reluctant to join the discussion, in part due to my own connection to Elsevier, in part because things in Combinatorics are more complicated than in other areas of mathematics (see below), but also because I couldn’t reconcile several somewhat conflicting thoughts. Should all existing editorial boards revolt and all journals become electronic? Or perhaps should we move to a “pay-for-publishing” model? Or even “crowd-sourced refereeing”? Well, now that the issue has cooled down a bit, I think I have figured out exactly what should happen to math journals. Be patient – a long explanation is coming below.
Quick test questions
I would like to argue that the debate over the second question stems from a general misunderstanding of the first question in the title. In fact, I am pretty sure most mathematicians are quite a bit confused on this, for a good reason. If you think this is easy, then quickly answer the following three questions:
1) A published paper has a technical mistake invalidating the main result. Is this the fault of the author, the referee(s), the handling editor, the managing editor(s), the publisher, or all of the above? If a reader finds such a mistake, whom should he or she contact?
2) A published paper proves a special case of a known result published 20 years earlier in an obscure paper. Same question. Would the answer change if the author lists the earlier paper in the references?
3) A published paper is written in really poor English. Sections are disorganized and the introduction is misleading. Same question.
Now that you have given your answers, ask a colleague. Don’t be surprised to hear a different point of view. Or at least don’t be surprised when you hear mine.
What do referees do?
In theory, a lot. In practice, that depends. There are few official journal guides for referees, but there are several well-meaning unofficial ones (see also here, here, here, here §4.10, and a nice discussion by Don Knuth §15). However, as any editor can tell you, you never know what exactly the referee did. Some reply within 5 minutes, some after 2 years. Some write one negative sentence, some 20 detailed pages; some give advice in the style of “yeah, not a bad paper, cites me twice, why not publish it”, while others give a brushoff: “not sure who this person is, and this problem is indeed strongly related to what I and my collaborators do, but of course our problems are much more interesting/important – rejection would be best”. The anonymity is so relaxing that doing a poor job is just too tempting. The whole system hinges on shame, a sense of responsibility, and a personal relationship with the editor.
A slightly better question is “What do good referees do?” The answer: they don’t just help the editor make the acceptance/rejection decision. They help the authors. They add background the authors don’t know, look for missing references, improve the proofs, and critique the exposition and even the notation. They do their best, rather like what ideal advisors do for graduate students who have just written an early draft of their first ever math paper.
In summary, you can’t blame the referees for anything. They do what they can and as much work as they want. To make a lame comparison, the referees are like the wind and the editors are a bit like sailors. The wind is free, but it often changes direction, sometimes disappears completely, and is in general quite unreliable. But sometimes it can really take you very far. Of course, crowd-sourced refereeing is like democracy in the army – bad even in theory, and never tried in practice.
First interlude: refereeing war stories
I recall a curious story by Herb Wilf about how Don Knuth once submitted a paper under an assumed name with an obscure college address, in order to get the full refereeing treatment (the paper was accepted and eventually published under Knuth’s real name). I tried this once, to an unexpected outcome (let me not name the journal, nor describe the stupendous effort I made to create a fake identity). The referee wrote that the paper was correct and rather interesting, but “not quite good enough” for their allegedly excellent journal. The editor was very sympathetic, if a bit condescending, asking me not to lose hope, to work harder on my papers, and to submit again. So I tried submitting to a competing journal of equal stature, this time under my own name. The paper was accepted in a matter of weeks. You can judge the moral of this story for yourself.
A combinatorialist I know (who shall remain anonymous) had the following story with Duke J. Math. A year and a half after submission, the paper was rejected with three (!) reports mostly describing typos. The authors were dismayed and consulted a CS colleague, who noticed that the three reports were .pdf files produced by cropping longer files. It turns out that if the cropping is done naively, the cropped portions remain hidden in the files. Using some hacking software, the top portions of the reports were uncovered, and the authors discovered that they were extremely positive, giving great praise to the paper. The authors now believe that the editor despised combinatorics (or their branch of combinatorics) and was fishing for a bad report. After three tries, he gave up and sent them the cropped reports, lest they think somebody else considered their paper worthy of publishing in the grand old Duke (cf. what Zeilberger wrote about Duke).
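For the curious, the reason this trick works is that naive PDF “cropping” typically only changes the page’s /CropBox entry, which tells the viewer what to display; the full content stream, including everything outside the crop, stays in the file. Here is a minimal illustrative sketch (a toy page dictionary, not a real PDF, and certainly not the software the authors used) of “uncropping” by resetting the CropBox to the MediaBox:

```python
import re

# Toy PDF page dictionary: the MediaBox is the full page, while the CropBox
# hides everything below y=500. The content stream (not shown) is untouched
# by cropping, which is why hidden text can be recovered.
page_dict = b"<< /Type /Page /MediaBox [0 0 612 792] /CropBox [0 500 612 792] >>"

def uncrop(data: bytes) -> bytes:
    """Reset /CropBox to /MediaBox, revealing everything the viewer was hiding."""
    m = re.search(rb"/MediaBox\s*\[([^\]]*)\]", data)
    if m is None:
        return data  # no MediaBox found; nothing to do
    return re.sub(rb"/CropBox\s*\[[^\]]*\]",
                  b"/CropBox [" + m.group(1) + b"]", data)

restored = uncrop(page_dict)
print(restored.decode("ascii"))
```

Real tools do the same edit on actual files; the moral is that cropping hides content from the viewer without removing it from the document.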
Another one of my stories is with the Journal of AMS. A year after submission, one of my papers was rejected with the following remarkable referee report which I quote here in full:
The results are probably well known. The authors should consult with experts.
Needless to say, the results were new, and the paper was quickly published elsewhere. As they say, “resistance is futile“.
What do associate/handling editors do?
Three little things, really. They choose the referees, read their reports, and make the decisions. But they are responsible for everything. And I mean everything: 1), 2) and 3). If the referee wrote a poorly researched report, they should recognize this, ignore it, and request another one. They should ensure they have more than one opinion on the paper, all of them highly informed and from good people. If it seems the authors are not aware of the literature and the referee(s) are not helping, they should ensure this is fixed. If the paper is not well written, the editors should ask the authors to rewrite it (or else). At Discrete Mathematics, we use this page by Doug West as a default style guide for math grammar. And if a reader finds a mistake, he/she should first contact the editor. Contacting the author(s) is also a good idea, but sometimes anonymity is helpful – the editor can be trusted to bring the bad news and, if possible, request a correction.
B.H. Neumann described here how he thinks a journal should operate. I wish his views were widely held today. The book by Krantz, §5.5, is a good outline of the ideal editorial experience, and this paper outlines how to select referees. However, this discussion (esp. Rick Durrett’s “rambling”) is more revealing. Now, the reason most people are confused as to who is responsible for 1), 2) and 3) is that while some journals have serious proactive editors, others do not, or their work is largely invisible.
What do managing editors and publishers do?
In theory, managing editors hire associate editors, provide logistical support, distribute the paper load, etc. In practice, they also serve as handling editors for a large number of papers. The publishers… You know what the publishers do. Most importantly, they either pay the editors or they don’t. They either charge libraries a lot, or they don’t. Publishing is a business, after all…
Who wants free universal electronic publishing?
Good mathematicians. Great mathematicians. Mathematicians who write well and see no benefit in their papers being refereed. Mathematicians who have many students and wish the publishing process was speedier and less cumbersome, so their students can get good jobs. Mathematicians who do not value the editorial work and are annoyed when the paper they want to read is “by subscription only” and thus unavailable. In general, these are people who see having to publish as an obstacle, not as a benefit.
Who does not want free universal electronic publishing?
Publishers (of course), libraries, university administrators. These are the people and institutions who see value in the existing order and don’t want it destroyed. Also: mediocre mathematicians, bad mathematicians, mathematicians from poor countries, mathematicians who don’t have access to good libraries (paradoxically, perhaps). In general, people who need help with their papers. People who don’t want a quick brush-off of “not good enough” or “probably well known”, but who need advice on references, on their English, on how papers are structured and presented, and on what to do next.
So, who is right?
Everyone. For some mathematicians, having all journals be electronic and virtually free is an overall benefit. But at the very least, the “pro status quo” crowd has a case, in my view. I don’t mean that Elsevier’s pricing policy is reasonable; I am talking about the big picture here. In the long run, I think of journals as non-profit NGOs, nerdy versions of the Nobel Peace Prize-winning Médecins Sans Frontières. While I imagine that in the future many excellent top-level journals will be electronic and free, I also think many mid-level journals in specific areas will be run by non-profit publishers, will not be free at all, and will employ a number of editorial and technical staff to help the authors, both of the papers they accept and of those they reject. This is a public service we should strive to perform, both for the sake of those math papers and for the development of mathematics in other countries.
Right now, the number of mathematicians in the world is already rather large and growing. Free journals can do only so much. Without high-quality editors paid by the publishers, and with a large influx of papers from the developing world, there is a chance we might lose the traditional high standards for published second-tier papers. And I really don’t want to think of the mathematics world once the peer review system is broken. That’s why I am not in the “free publishing” camp – in an effort to save money, we might lose something much more valuable: the system which gives foundation and justification to our work.
Second interlude: journals vis-à-vis combinatorics
I already wrote about the fate of combinatorics papers in the Annals, especially in comparison with Number Theory. My view was gloomy but mildly optimistic. In fact, since that post was written, a couple more combinatorics papers have been accepted. Good. But let me give you a quiz. Here are two comparable highly selective journals – Duke J. Math. and Compositio Math. In the past 10 years Compositio published exactly one (!) paper in Combinatorics (defined as primary MSC=05) out of 631 total. In the same period, Duke published 8 combinatorics papers out of 681 total.
Q: Which of the two (Compositio or Duke) treats combinatorics papers better?
A: Compositio, of course.
The reasoning is simple. Forget the anecdotal evidence in the previous interlude. Just look at the “aim and scope” of the journals vs. these numbers. Here is what the Compositio website says with refreshing honesty:
By tradition, the journal published by the foundation focuses on papers in the main stream of pure mathematics. This includes the fields of algebra, number theory, topology, algebraic and analytic geometry and (geometric) analysis. Papers on other topics are welcome if they are of interest not only to specialists.
Translation: combinatorics papers are not welcome (nor are papers in many other fields). I think this is totally fair. Nothing wrong with that. Clearly, there are also journals which publish mostly in combinatorics, and where papers in none of these fields would be welcome. In fact, there is a good historical reason for that. Compare this with what Duke says on its website:
Published by Duke University Press since its inception in 1935, the Duke Mathematical Journal is one of the world’s leading mathematical journals. Without specializing in a small number of subject areas, it emphasizes the most active and influential areas of current mathematics.
See the difference? They don’t name their favorite areas! How are the authors supposed to guess which these are? Clearly, Combinatorics with its puny 1% share of Duke papers is not a subject area that Duke “emphasizes”. Compare it with 104 papers in Number Theory (16%) and 124 papers in Algebraic Geometry (20%) over the same period. Should we conclude that in the past 10 years Combinatorics was not among “the most active and influential” areas, or perhaps not “mathematics” at all? (Yes, some people think so.) I have my own answer to this question, and I bet so do you…
Note also that things used to be different at Duke. For example, exactly 40 years earlier, in the period 1963–1973, Duke published 47 papers in combinatorics out of 972 total, even though the area was only in its first stages of development. How come? The reason is simple: Leonard Carlitz was Managing Editor at the time, and he welcomed papers from a number of prominent combinatorialists active during that period, such as Andrews, Gould, Moon, Riordan, Stanley, Subbarao, etc., as well as many of his own papers.
So, ideally, what will happen to math journals?
That’s actually easy. Here are a few of my recommendations and predictions.
1) We should stop with all these geography-based journals. That’s enough. I understand the temptation for each country, or university, or geographical entity to have its own math journal, but nowadays this is counterproductive and a cause for humor. This parochial patriotism is perhaps useful in sports (or not), but it is nonsense in mathematics. New journals should emphasize new or rapidly growing areas of mathematics underserved by current journals, not new locales where printing presses are available.
2) Existing for-profit publishers should realize that with the growth of the arXiv and free online competitors, their business model is unsustainable. Eventually all these journals will reorganize into non-profit institutions or foundations. This does not mean that the journals will become electronic or free. While some probably will, others will remain expensive, have many paid employees (including editors), and will continue to provide services to the authors, all supported by library subscriptions. These extra services are their raison d’être, and will need to be broadly advertised. The authors would learn not to be surprised by a quick one-line report from free journals, and to expect a serious effort from the “expensive journals”.
3) The journals will need to rethink their structure and scope, and try to develop their own unique culture and identity. If you have two similar-looking free electronic journals which add nothing to the papers other than their .sty file, the only difference is the editorial board and the history of published papers. This is not enough. All journals, except for the very top few, will have to start limiting their scope to emphasize the areas of their strength, and be honest and clear in advertising these areas. Alternatively, other journals will need to reorganize and split their editorial boards into clearly defined fields. Think Proc. LMS, Trans. AMS, or the brand new Sigma, which basically operate as dozens of independent journals, with one to three handling editors each. While highly efficient, in the long run this strategy is also unsustainable, as it leads to general confusion and divergence in the quality of these sub-journals.
4) Even among top mathematicians, there is plenty of confusion about the quality of existing mathematics journals, some of which go back many decades. See e.g. a section of Tim Gowers’s post about his views on the quality of various Combinatorics journals, since then helpfully updated and corrected. But at least those of us who have been in the area for a while remember the fate of previously submitted papers, whether our own, or our students’, or our colleagues’. Circumstantial evidence is better than nothing. For newcomers or outsiders, such distinctions between journals are a mystery. The occasional rankings (impact factor or this, whatever that is) are more confusing than helpful.
What needs to happen is a new system of awards recognizing the achievements of individual journals and/or editors in their efforts to improve the quality of their journals, attract top papers in the field, arrange fast refereeing, etc. Think of a mixture of the Pulitzer Prize and the J.D. Power and Associates awards – these would be a great help in understanding the quality of the journals. For example, the editors of the Annals clearly hustled to referee within a month in this case (even if motivated by PR purposes). That is amazing speed for a technical 50+ page paper, and the effort deserves recognition.
Full disclosure: Of the journals I singled out, I have published once each in JAMS and Duke. Neither paper is in Combinatorics, but both are in Discrete Mathematics, broadly understood.