Women in math, and the overhaul of the publishing system

If you have not yet heard of the Elsevier boycott, you have a lot of reading to catch up on. I’ll wait. I’m not likely to miss traditional commercial publishers when they’re gone, which could well happen within the next decade or so, especially if they and their agents keep asking for it. Think whatever you want about the Cost of Knowledge website, but open access journals have already gained a lot of ground, we have taken charge of the dissemination and advertising of our own research on the internet, and good luck to any journal that tries to stop authors from placing their articles on publicly available webpages and preprint servers.

The better question is: do we still need journals, be they commercial or any other kind, and if not, then what will replace them? Among other possibilities, open web-based evaluation systems have been proposed.

This post suggests that a web-based evaluation system would be good for women, the idea being that “women don’t ask” and therefore they are less likely to, say, submit a paper to Annals. I see it exactly the other way around. I’ve talked about some aspects of it already, but not all, and in any case it never hurts to say something more than once, especially when you’re female.

This is not to say that I’m against discussion boards for mathematicians on the internet. I’ll be very happy to have them, as long as they’re not mandatory for everyone and don’t drive out those parts of the current system that function reasonably well. We need more options, not fewer. For instance, I rather like the idea of “evaluation boards” to which authors could submit arXiv papers for validation, without the boards ever pretending to “publish” or “disseminate” the papers. That, if done right, would preserve the advantages of the current system while losing most of its disadvantages. (And it should work just fine for women, I think.)

Now, the details. (This is another one of those long posts. Sorry.)

Proxies. It would be really, really nice if we just evaluated everyone based on the actual merit of their work:

To fix the academic publishing mess, researchers need to stop sending their work to barrier-based journals. And for that to happen, we need funding bodies and job-search committees to judge candidates on the quality of their work, not on which brand name it’s associated with.

Happily, there are signs of movement in this direction: for example, The Wellcome Trust says “it is the intrinsic merit of the work, and not the title of the journal in which an author’s work is published, that should be considered in making funding decisions.” We need more funding and hiring bodies to make such declarations.

If we all did that, there would never be any need to worry about either publishing or gender bias. We’d love to be judged purely on merit. Also, everyone should get a pony.

In actual reality, the closest that we come to that ideal is when we solicit expert opinions on the work of candidates for new appointments, promotion, tenure or professional awards. The very existence of such letters testifies to our inability to “just evaluate the intrinsic quality” of the work of someone not in our area of specialization. Still, this is an opportunity for the referees to provide an evaluation that goes beyond the journal titles: what is especially important about this paper, why this other one was a breakthrough, the overall arc of someone’s career and their impact on the development of the field.

Gender bias has been observed in the language used in letters of recommendation, suggesting that a reasonably good and objective proxy (such as a publication list with journal titles) comes closer to measuring the intrinsic merit of our work than an unstructured general evaluation:

In the scholars’ analysis of the words that appeared in the letters of recommendation, they found clear patterns of word use for women’s and men’s letters. Women were more likely to be described with words such as those cited above, as well as “nurturing,” “kind,” “agreeable” and “warm.” Men, in contrast, were much more likely to be described in words classified as “agentive” — words such as “assertive,” “confident,” “aggressive,” “ambitious,” “independent” and “daring.”

What the analysis showed is that letter writers didn’t need to use words like “feminine” to create female stereotypes — and that they did so, time and again, with women who had the same intellectual achievements as their male counterparts.

The study in question analyzed letters of reference of applicants for junior positions. I have not noticed this as much in letters for candidates for tenure, promotion and senior appointments. The candidate will normally have built a substantial body of work by then, and there is a strong expectation for the letters to elaborate on that in specific detail rather than just communicate the writer’s general impression of the candidate.

Let’s assume, then, that reference letters are fair and look at the practical aspects of using them for evaluation purposes.

In my experience, it takes a minimum of 3-4 hours to write a tenure or promotion letter. This can easily expand to a full day if I need to look at a few papers in some depth or check the references. A junior appointment typically requires 3-4 letters of reference. A promotion or tenure case requires 5 or 6, with additional regulations on how many of them must be at arm’s length (no collaborators, no current or former colleagues). Fortunately, most of us don’t get promoted or go on the job market every year. If every faculty member needed to have several expert opinions written about them annually, for instance for the departmental merit pay rankings, we’d never stop writing those letters.

It’s not just the oft-cited “deans and administrators” who need proxies. It’s us. We can pledge all we want to judge everyone on merit. As long as we have to evaluate and rank researchers beyond our own areas of expertise, we’ll use proxies anyway. And there, I would have us use reasonably legitimate proxies (journal rankings, awards, grants, conference invitations) rather than institutional affiliation, race, gender, or my youngish looks. We could in fact use more proxies, not fewer, to capture those aspects of our careers (such as expository work) that currently tend to get less recognition.

Keeping it professional. The general principle is that women in mathematics benefit whenever there are clear rules and formal procedures for career advancement, and suffer setbacks in the absence of such regulations. The same goes, probably, for every underrepresented minority in every human endeavour where unconscious bias is present.

Equity is of course hard to legislate. Regulations can be ineffective, their interpretation may well depend on who’s doing the interpreting, and I’ve seen policies on diversity that did more harm than good. That said, having a formal procedure with well defined requirements is pretty much always better for women than trusting our colleagues’ gut feelings.

Unconscious bias is unconscious. That’s why we call it that. I have it, too, according to the Harvard implicit association test. No discriminatory intent need ever be involved; it’s just a little bit easier to imagine a leading mathematician as a man, and this often suffices to skew our impulsive, intuitive decisions. A formal procedure prompts us instead to think about specific issues, requirements and criteria. One might not think of a woman as the “natural” candidate for a position or award, but a point-by-point comparison (when you’re forced to make one) could say otherwise.

The cornerstone of the journal publishing system is the refereeing process. We all know that it’s slow, inefficient and often inaccurate. Suggestions have been made to divorce the evaluation and ranking of papers in terms of their novelty and significance from the painful dance steps of pointing out the typos, little errors, and missing or inaccurate references. The former is mostly independent of the latter and can be done much faster, or so the argument would have us believe.

This is where I think the reformers are missing a point. It’s not necessarily the debugging that’s valuable. I’ve found and fixed more bugs in my own papers than the referees ever did. The main thing is, I like that the person who’s being asked to evaluate my paper is also being asked to actually read it. That’s the expectation that the editors have of the referees. Even if the referees only point out a few typos and misspellings, it still prompts them to engage with the content of the paper, or at least to give it a try. That’s much better than, say, summary judgement based on the author’s reputation, institutional affiliation or gender.

What if (for example) the arXiv had a discussion page for each paper? Well, I guess any arXiv user would be able to comment on any paper, regardless of whether they’ve read it or have the expertise to judge it. The boundaries between social and professional interactions might get a little bit blurred, for instance an author might get comments and reviews from colleagues who are part of his social network. (His, because a male mathematician is far more likely to have a substantial social network within the profession than a female one.) Let’s say that a male author posts a paper:

COMMENTER A: Very nice! I really like your Theorem 2 – this is something I would have expected, but when I tried to prove it a few years ago, it was not at all clear how to proceed.

COMMENTER B: I like Theorem 2, too. Do you think that Theorem 3 would extend to the setting of Reference 15? That would be really interesting, because [detailed explanation follows].

MALE AUTHOR: I’ve thought about that, but there’s an obstacle when you try to […].

COMMENTER A: But if that happens, then perhaps you could use Reference 8, which says that […]

and so on.

And if a female author posts a similar paper?

COMMENTER C (2 days later): I think your Theorem 2 is already known, for example it should be in Standard Reference 1.

FEMALE AUTHOR: No, Theorem 2 is new. The difference between Theorem 2 and the results in the existing literature, including Standard Reference 1 and References 9, 10 and 15, is that [details follow]. I explained this in the introduction.

COMMENTER D: Are you aware of Standard Reference 2?

FEMALE AUTHOR: Yes, and I’m citing it in the paper. Do you have any specific questions about it?

COMMENTER E (5 days later): But isn’t your Theorem 2 included in Standard Reference 1 already?

FEMALE AUTHOR: No, and I have already answered that question.

COMMENTER E: Jeebus, why are you getting so emotional about it? I’m only trying to help.

[goes to a different forum and complains about women overreacting to a friendly and polite suggestion]

Yes, really. It doesn’t necessarily happen every time, but it happens often enough. It can get worse. Take a look at the comments on this post about women in math. The author is a professional mathematician with experience in academia and finance who now works at a start-up. Her job as a data scientist involves analyzing statistical data, which makes her especially well qualified to critique typical studies on women in math that rely on statistical arguments. You’d think that her insights on the subject would be taken seriously. Instead, the conversation quickly morphs into an open thread veering from Wall Street to history to the Super Bowl. Of those commenters who try to keep it on track, only some actually engage with the specific points made in the article. Others never get past the usual stories of bad math teachers, mathematicians with greasy hair, girls liking dolls, and women “just” being different. “Of course girls are different.” “No, they’re not.” “They are, too.” Can we just move past that, please?

In case you were wondering: my blog has a fraction of a fraction of a percent of the readership of Naked Capitalism, but I’ve had comments like that, too, as well as personal attacks. (The worst ones didn’t make it past moderation. There have been some that I approved but perhaps shouldn’t have.) I cited the discussion on Naked Capitalism because it dispels another myth, namely that mass participation will somehow fix all the wrongs and that the best arguments will always float to the top thanks to internet magic.

Women “don’t ask”. More like, we have to pick our battles. This article reports on a study that gets at least one part of it:

Although it may well be true that women often hurt themselves by not trying to negotiate, this study found that women’s reluctance was based on an entirely reasonable and accurate view of how they were likely to be treated if they did. Both men and women were more likely to subtly penalize women who asked for more — the perception was that women who asked for more were “less nice”.

“What we found across all the studies is men were always less willing to work with a woman who had attempted to negotiate than with a woman who did not,” Bowles said. “They always preferred to work with a woman who stayed mum. But it made no difference to the men whether a guy had chosen to negotiate or not.”

Megan McArdle has more, and for once I actually agree with everything she says. I’d add that women are more likely to have to ask, we have to be more persistent in it, and we’re more likely to be told “no”.

It’s generally expected that men will want positions of responsibility, high profile work assignments, or (in academia) that they will want to lead large research groups. It’s also generally expected that women will not want that. We’re less committed, less ambitious, more predisposed to “nurturing” (whatever that means), more family-oriented, or some such. When a man asks for a prestigious work assignment, it’s normal. When a woman does the same, her colleagues might be a little bit disoriented at first before they can muster any other response, because the world they know does not work that way. I’ve seen this, many times over.

Before we say “no” to a man, we’re likely to stop for a moment and consider how it might affect our working relationship. Have I said “no” too many times already? What if I have to ask something of him sometime soon? But a woman should be… easier to work with. Expected to be “kind” and “agreeable”, remember? This, again, is what I have seen in my own experience, along with the surprised looks that a less agreeable woman often gets.

All this said, submitting a paper for publication to a journal is one situation where “asking” is perfectly normal and acceptable, regardless of gender. Men might be somewhat more likely to overshoot wildly, based on what I’ve seen in my experience with refereeing such papers, but such submissions get rejected anyway. This is one point in the process where I do not see much of a problem.

Now, when it comes to getting fair treatment once the paper is submitted… but that takes us back again to the refereeing process.

Author: Izabella Laba

Mathematics professor at UBC. My opinions are, obviously, my own.

22 thoughts on “Women in math, and the overhaul of the publishing system”

  1. Why not make the refereeing process blind (and perhaps even the editorial process)?

    If we aspire to a system where work is judged solely on its merit, then it is hard to imagine the argument for not having a blind system. Of course this wouldn’t address every issue and bias (and surely in some cases the referee might be able to guess the author), but it seems that it would be a step in the right direction. There are minor logistical issues with this (such as allowing communication with authors), but they could easily be addressed with software. One can also imagine an arXiv feature that would allow withholding author names on preprints, and provide a mechanism to communicate with the author(s) without publicizing their names, to further support a blind refereeing system; a sketch of such a relay appears at the end of this comment.

    Other disciplines (theoretical CS is one I’m familiar with) use blind refereeing, and I don’t understand what the argument against doing so in math is.
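
    To make the software side concrete, here is a minimal sketch of such an anonymous relay. Everything in it (the BlindRelay class and its methods) is hypothetical, an illustration of the mechanism rather than an actual arXiv feature:

        # Hypothetical sketch: route referee-author messages through an opaque
        # submission ID, so that neither side ever sees the other's name.
        import secrets

        class BlindRelay:
            def __init__(self):
                self._authors = {}  # submission_id -> author contact (never shown to referees)
                self._threads = {}  # submission_id -> list of (sender_role, message)

            def register_submission(self, author_email):
                # File the author's contact under a random opaque ID.
                submission_id = secrets.token_hex(8)
                self._authors[submission_id] = author_email
                self._threads[submission_id] = []
                return submission_id

            def send(self, submission_id, sender_role, message):
                # Record only the role ("referee" or "author"), never a name.
                self._threads[submission_id].append((sender_role, message))

            def thread(self, submission_id):
                # The anonymized conversation, visible to both sides.
                return list(self._threads[submission_id])

        # Usage: the referee addresses questions to the opaque ID only.
        relay = BlindRelay()
        sid = relay.register_submission("author@example.edu")
        relay.send(sid, "referee", "Is Lemma 3 using the bound from Reference 8?")
        relay.send(sid, "author", "Yes; see the remark after equation (12).")
        print(relay.thread(sid))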

  2. You make a number of interesting points here — more than I’m going to be able to respond to, but I do feel like responding to at least some.

    1. Proxies. I agree very strongly that it would be great to evaluate everybody on the quality of their work. I remember being on a committee once that was evaluating scientists of all kinds. One of the other committee members was a physicist, and he and I had exactly opposite attitudes about how to interpret references and “metrics” such as H-factors. His view was that references were merely subjective opinions, whereas H-factors and the like gave you objective evidence. My view was that H-factors and the like were extremely unreliable indicators of how good someone’s work was, so you needed references if you really wanted to find out.

    Having said that, I do think one has to take into account that we do not have a good way of measuring the objective quality of a piece of mathematics. I think the best one can hope to get from a set of references is some feeling for the size of the splash that a result has made. Some results are worthy but in the end have very little impact on any other mathematicians. Some are of great significance to a small group of mathematicians working on some larger project. Some are exciting to everybody in an area of the size of, say, an ICM section. Some are so exciting that they transcend subject boundaries and interest a large proportion of mathematicians. That’s a pretty crude classification, but I don’t think one can get much finer, and it’s roughly what I’m trying to find out when I read references.

    Some people have made the point that under some circumstances — especially reducing a very large list of applicants for a job to something manageable — it is essential to have crude proxies. I think the problem we have at the moment is that the proxies are not very good. I know this from personal experience: if I were to put my own papers in order according to their quality, it would not be the same as the ordering I’d get if instead I used the quality of the journals they appeared in. So what I’d like to see is (i) a better system of proxies and (ii) the use of proxies only when dealing with a large number of candidates.

    I should add that one often has to judge mathematicians who are too far from one’s own area for it to be possible to assess their quality directly. Unfortunately, proxies are necessary there. [Sorry — I’ve just reread your post and spotted that you’ve made exactly this point.] I regard the process of decoding references as partly a sort of proxy measurement, though a good reference will also contain information about why the referee likes the candidate, which will sometimes allow me to guess whether I would like the candidate if I understood the work better.

    2. I agree with the general point that certain words appear more often when women are being described, but the actual words on the list you quote aren’t the ones that have struck me. For example, no reference for a female mathematician would describe her as “nurturing”. But I’d be willing to bet that the word “careful” comes up more often in references for women. Maybe not in maths, but definitely in the humanities the word “interweaves” occurs frequently in descriptions of the work of women and infrequently for men. I’ve noticed others but can’t remember them right now.

    3.

    That’s much better than, say, summary judgement based on the author’s reputation, institutional affiliation or gender.

    Those who, like me, are keen on the idea of splitting up the process of judging a paper and the process of helping the author to improve the paper are certainly not advocating judgments on those grounds. Maybe you were just being rhetorical, but even so let me say what kind of process I imagine being necessary for a quick judgment. The judger should aim for one of three conclusions: definitely should be published, definitely should be rejected, or not sure. To make one of the first two judgments, one should have read the paper enough to understand what its main results are, and to feel certain about whether obtaining those results was an achievement worthy of the journal. Sometimes that can be done quite quickly, and sometimes it can’t. In the second case, one simply says “not sure” and somebody else is asked to read the paper in more detail. (Or one could volunteer to do that task if one felt capable of it.)

    The advantage of splitting the two functions is that it would sometimes be much easier for someone who is trying to improve a paper to do so in consultation with the author.

    4. Call me naive, but I really don’t think the hypothetical sequence of comments you write about a hypothetical paper written by a woman would be representative of the kinds of comments that would typically be made. And if there were tiresome comments like that, I think men would get them a lot as well. (I’m not saying they’d get them with equal probability, but I don’t think the probabilities would be of a different order of magnitude.) I also think that commenting on a mathematical paper would be sufficiently formal — unlike commenting on a blog — for such problems to occur less frequently. Finally, I think there could be at least some effect in the reverse direction, since many people are very keen not to be perceived as sexist.

  3. Tim —

    As for 1, we indeed do not have a good measure of the “value” of a piece of mathematics. I don’t know what we can do about it. Maybe a descriptive evaluation such as you suggest is indeed the best we can do.

    2. The study I cited analyzed (if I remember correctly) letters of reference for applicants to a medical school. The language in letters for mathematicians would indeed be different; for example, a candidate might be “collegial”, or “caring” in a teaching letter. I’d like to see it analyzed systematically.

    3. This kind of process is already employed by many journals, especially the quick rejection if the paper does not meet the expected standards (significance, wide interest) regardless of its correctness. The only difference between this and what you suggest seems to be that some papers should be “quickly accepted” while the detailed checks are still taking place. Possibly so – I would not have significant objections to that, although this might raise questions as to which version of the paper should be treated as “official” and “final” for the purpose of future reference.

    4. I really think that, as someone who would not normally be on the receiving end of such comments, you do underestimate their frequency. I have seen enough of them to be concerned. The hypothetical comments might be typical of only 20% (maybe) of what we actually get, but this is enough to spoil the fun for us. Women certainly get more of that than men do. I have had a good deal of it on this blog.

  4. On re-reading, I should also follow up on this:

    That’s much better than, say, summary judgement based on the author’s reputation, institutional affiliation or gender.

    Those who, like me, are keen on the idea of splitting up the process of judging a paper and the process of helping the author to improve the paper are certainly not advocating judgments on those grounds. Maybe you were just being rhetorical,

    No, I have certainly not seen you or anyone else advocating that. But once a process is in place, it will be used by everyone at large, and not always consistently with the intentions of those who designed it. A system does not have to encourage or explicitly permit summary judgements. If it does not discourage them strongly enough, then such judgements will inevitably be made.

  5. @Mark, actually theoretical computer science does NOT do blind reviewing for conferences or journals. Other disciplines within CS are moving in that direction: machine learning (mostly) and databases (partly) come to mind, as well as elements of data mining (these are areas I’m familiar with – not to say that others aren’t as well). But TCS has been a little more resistant to blind reviewing.

  6. @Suresh, you’re right that it isn’t all of theoretical computer science: STOC and FOCS do not have blind refereeing. However, the major cryptography conferences do, such as Crypto and Eurocrypt. Certainly, cryptography and machine learning account for a sizable slice of TCS.

  7. @Mark : Blind refereeing would be a complete farce in mathematics. There aren’t that many people who are capable of refereeing any given math paper. In the vast majority of cases, any reasonable referee would already know about the paper long before it hit their desk to review. For example, they might have seen it posted to the arXiv. Or the author could have sent them a copy. Or they might have heard the author give a talk about the paper at a conference or seminar. Etc.

    I think this is really a cultural difference between math and cs. For me, there is usually a long gap (greater than a year) between the time I prove a theorem and the time I get around to writing a paper. During that time, I give talks about the theorem, chat about it with anyone I think might be interested, etc.

  8. In principle, I like the idea of blind refereeing. However, in support of Andy’s comment, I have to say that if I ever publish anything that I’ve been working on, virtually everyone who is capable of refereeing it with authority is already familiar with what I’m doing.

  9. @Andy, I agree that the referee would be able to guess the author some of the time. However, I doubt it would be nearly all of the time. Even in cases where the referee can narrow down the list of possible authors to a small few, it would still be useful in combating biases. @Richard, if the phenomenon that “virtually everyone who is capable of refereeing [my work] with authority is already familiar with what I’m doing” were universal, then the sentiment “never do I find interesting papers in my subfield on the arXiv that I didn’t know were in the works” would also be universal. I certainly can’t say the latter (and I hope I never can; it would take the fun out of reading the arXiv every night!).

    The most compelling argument in favor of blind refereeing, however, is that there isn’t a good reason not to do it. Say that a referee can figure out the author, with absolute certainty, 90% of the time (which I think is wildly generous). In those cases, we are just back to the current system. In the other 10% of cases we have made some progress.

  10. Programming languages, in CS, has been moving towards double-blind reviewing; not all submissions are double-blind at this point, but many are. It depends on the conference. (Recall that CS is conference-based, rather than journal-based).

    Note that, often, names are made available after the first round of reviewing. The goal is to prevent bias on a first look at the paper.

    I don’t have a strong feeling about whether this is good or not. I’m pretty sure it does remove bias, which seems like it should be good. Many of the same arguments that people have used about math results not really being anonymous also apply here, as does the limited number of people who are qualified to review a paper.

    There are a couple of logistical issues with double-blind reviewing. See this page: http://pldi12.cs.purdue.edu/others/dbr-faq.html for a discussion of mitigating factors. Also see http://www.cs.utexas.edu/users/mckinley/notes/blind.html for an argument in favour of double-blind reviewing.

  11. Well, here’s my 2¢: a two-pronged solution.

    1) First, a clear taxonomic ordering of the “fields”; this would automatically reveal (evolutionarily) close peers. Referees would then be broken into:

    Technical (subgroup neighbours, or closely related specialists) – Judges of the correctness of the findings in the paper

    Merit (superfamily members, aka up one branch) – Judges of the novelty value and relative merit to the less specialized within the field

    Outgroup (Unrelated, but sufficiently competent to understand the paper) – Judges of the value and appeal to the wider potential audience, aka is this WOW material or just something weird that THOSE people came up with

    2) Second, a method of coercing Judges to be critical. My preferred approach would be to tie historical competence as a Judge to grant considerations.

    Competence would be based on how closely a Judge in one category (technical or merit) tracks the judgements of the peers on the panel.

    Ideally this would be based on a quantitative grading system, e.g. a 50% grade on communication skills (the point of publishing something is to communicate with the reader) and 50% on technical merit.

    Judges could then be compared to the median scores of the judging panel, with far outliers being likely less competent (or outright corrupt: everyone else gives 7s across the board, you put out a 10, and the athletes were your countrymen). A toy version of this comparison is sketched at the end of this comment.

    Of course, this only works in an involuntary system within a single (preferably public) publishing body.
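
    To make the outlier comparison concrete, here is a minimal sketch. The scores are invented, and the cutoff (deviation from the panel median exceeding twice the panel’s median absolute deviation) is only one plausible choice:

        # Toy version: flag Judges whose scores sit far from the panel median.
        from statistics import median

        def flag_outlier_judges(scores, factor=2.0):
            panel_median = median(scores.values())
            deviations = {judge: abs(s - panel_median) for judge, s in scores.items()}
            mad = median(deviations.values())  # median absolute deviation
            if mad == 0:
                return []  # unanimous (or tiny) panel: nothing to flag
            return [judge for judge, d in deviations.items() if d > factor * mad]

        # Six Judges score one paper; Judge F gives a 10 where the panel clusters near 7.
        panel = {"A": 7.0, "B": 6.5, "C": 7.5, "D": 7.0, "E": 6.0, "F": 10.0}
        print(flag_outlier_judges(panel))  # ['F']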

  12. @ Aaron

    And Annals would go to publishing an issue every 20 years: there’s the issue for Wiles’s result, then Perelman’s result, then the Riemann hypothesis would be next, etc. (I jest. :))

    On a more serious note, each paper needs three referees in this scheme, which is quite a lot.

  13. @Tim and Izabella, about Tim’s number 4: it should be possible to run an experiment. Think of a decent question to ask on MathOverflow, write down the question, and afterwards flip a coin to decide whether it should be posted by a man or a woman. Then create a new account, with a name that is clearly male or clearly female, and see the responses. Repeat 20 times. (One way to analyze the resulting counts is sketched at the end of this comment.)

    David Speyer once ran a similar experiment, to test whether he got more upvotes on answers when he posted under his own name than when he posted anonymously: http://meta.mathoverflow.net/discussion/385/experimental-results/#Item_0
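
    For what it’s worth, here is a minimal sketch of how the resulting counts could be analyzed. The answer counts below are made up, and the permutation test is only one reasonable choice of analysis:

        # Hypothetical analysis: compare answers-per-question between the two
        # name conditions with a simple permutation test (standard library only).
        import random

        def permutation_test(group_a, group_b, trials=10_000):
            # Approximate p-value for the observed difference in mean counts.
            observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
            pooled = list(group_a) + list(group_b)
            hits = 0
            for _ in range(trials):
                random.shuffle(pooled)
                a, b = pooled[:len(group_a)], pooled[len(group_a):]
                if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
                    hits += 1
            return hits / trials

        # Invented counts for 10 questions posted under each kind of name.
        male_name = [3, 2, 4, 3, 5, 2, 3, 4, 2, 3]
        female_name = [2, 1, 3, 2, 2, 1, 2, 3, 1, 2]
        print(permutation_test(male_name, female_name))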

  14. I’ve done some experimenting of my own. Some of the regulars here might remember a rude comment that an anonymous commenter calling himself “Norman” (no email address provided) left on this post:

    https://ilaba.wordpress.com/2012/01/05/the-state-of-the-profession/

    I approved it anyway and responded to it, in part because it confirmed the point about hostility to women on the internet that I’d made in other posts. I wanted to see if people would pick on it. Instead, everyone else just stopped commenting, even though the discussion had been nice and lively up to that point. I waited a couple of weeks, then eventually deleted “Norman’s” posts along with my responses, and sure enough I got a few more comments after that.

    I’m not reposting “Norman’s” comments because (a) that would defeat the purpose of deleting them, and (b) there would likely be voices along the lines of “but this was an isolated incident”. To which I could say, I’ve linked already to many blog posts elsewhere documenting the phenomenon. I’m not going to “repeat 20 times” because then I’d have 20 rude comments splashed on my blog like bird poop and I don’t especially like the idea.

    Instead, I’ll suggest a variant of Pascal’s wager. Even if the phenomenon is rare, and even if it is not always aimed at women, what do we have to lose by moderating comments aggressively on all sites of importance to the community, and by restricting posting privileges on any websites that might play a part in formal professional evaluations? The best blog comment sections (Coates, Ebert, etc.) are heavy on moderation. (Ebert screens all comments before posting. Coates deletes comments or simply closes down the comment section as needed.) Others (Sullivan, Fallows) do not even have comment sections as such, instead posting excerpts from selected reader email. My understanding is that moderation on MathOverflow is fairly aggressive, but even that did not prevent this discussion of women in math from getting a little bit out of hand towards the end.

    http://meta.mathoverflow.net/discussion/985/woman-in-mathoverflow/

    As for formal evaluations, it has been the professional norm since forever to have strict constraints on who can evaluate us, when, and in what capacity. While I would love dearly to see some of these procedures revised, I don’t see how any good could come from performance assessments via internet forum.

  15. I am sorry to be critical, Professor Łaba, but I find your hypothetical arXiv exchange with a female author utterly fantastic. I believe that in real life Commenter C would either apologize for misunderstanding Theorem 2 and making the author repeat herself, or else would perhaps explain that Theorem 2 may not explicitly have appeared but is a straightforward consequence of known results, and perhaps persuade the author of this.

    Do you have any real-world examples of boorish, patronizing public criticisms of the professional work of female mathematicians by male mathematicians? In MathSciNet, for instance? Would you regard MR0407865 as an example?

  16. Anna Marie Mickelson – Yes, I have plenty of real-world examples. Just very recently, a well known male mathematician wrote condescendingly and unprofessionally about a female mathematician in a blog post. I will not link, but I can email it to the address you provided.

    The repetitions and the pointless comments are not my invention, either. That’s what I get here on this blog. The worst offenders do not get posted. I approve borderline comments from time to time, for example someone commenting that I “seem to know a good deal about math” – this came from a non-mathematician, but still, I’m not convinced that a male blogger would’ve gotten the same. I have had comments offering me career advice for beginners, or directing me to books or other references that I had actually mentioned in the post (one commenter helpfully suggested that my library should have it, as if I didn’t even know how to find a standard math reference).

    Now, as to whether this qualifies as public criticism of my professional work – probably not. I treat this blog a little bit informally and do not list it on my activity reports. If I had to find similar examples of formal evaluations (including Math Reviews), I’d have to look harder. Which is why I was saying that formal, structured reviews are better for us than unstructured ones.

    As to MR0407865, that would depend on the actual merit of the case, wouldn’t it? This is not my area of research, and there might be a back story there that I don’t know, so I’m not going to comment on it. If there is gender bias on MathSciNet (and I would love to see an actual systematic study of this), I would not look for explicit offensive comments, but rather for disinterested, perfunctory reviews (“damning with faint praise”). Then again, it’s possible that a very large part of MathSciNet looks like that anyway.

  17. I agree that open comments on the arXiv would lead to obnoxious comments exactly as described. In fact, I have given talks where I repeatedly had to explain why my theorem wasn’t in a standard reference and whether I had read that reference. I’ve also seen this done to male mathematicians on occasion, but far more often to women. It is common enough that I speak to young women mathematicians about how to deflect such comments and continue with their presentations. This happens far more often in a general-audience setting and quite rarely, in my experience, at a presentation before specialists. Sadly, I have seen it happen at job interviews.

  18. Even on Facebook, during a casual conversation about science fiction, I posted a report, in layman’s terms, on a talk at a mathematical general relativity conference I was attending. I thought the SF fans would find it interesting, and I said explicitly that this was a report on a talk I had seen that day. A physicist replied that I didn’t understand, that this was impossible because of xyz. So I explained that xyz does not apply (now getting a bit more technical). He then became condescending, explaining xyz at an undergraduate level. So I reminded him that I am a specialist in mathematical general relativity and was reporting on a talk at a conference. Then he wrote that he had never been more insulted in his life. So it was somehow insulting that I knew what I was talking about and had defended myself? I have heard this before: that it is “insulting” when I use my credentials or cite an advanced article. A friendly general-audience post about a general relativity talk became an unpleasant ordeal. This has happened often enough that I now prefer to post where no comments are allowed.
