Political science exploded into the news when a grad student and a senior professor wrote a piece that made headlines and was then revealed (allegedly, apparently, insert legal modifier here) to be fraudulent.*
This is a guest post by Sara McLaughlin Mitchell, Professor and Department Chair of Political Science at the University of Iowa.
In my previous post, I discussed some problems women face when networking in political science. Here I focus on the progress we have made.
As a quantitative conflict scholar, I spend a great deal of time networking in several male-dominated research communities, including the Peace Science Society, the ISA SSIP section, the APSA Conflict Processes section, and the Society for Political Methodology. I first presented at a Peace Science meeting in 1996, where I was one of 9 women among 66 participants. I attended my first Political Methodology summer conference in 1994 and was one of 9 women out of 50 participants. A healthy ego combined with enjoyment of traditionally male pastimes such as drinking, gambling, and sports eased my own integration into these communities. Yet I attended many presentations by smart women in both organizations who soon afterwards decided to exit the groups or leave the profession. This included the female co-chair of my dissertation committee, two female students at Michigan State who graduated ahead of me and got jobs in top-25-ranked programs, and several women from other top institutions.
But neither I nor most economists are going to make the effort of puzzling through difficult writings unless we’re given some sort of proof of concept — a motivating example, a simple and effective summary, something to indicate that the effort will be worthwhile. Sorry, but I won’t commit to sitting through your two-hour movie if you can’t show me an interesting three-minute trailer.
Krugman concludes with the admonition that “nobody has to read what you write.” I wish this were more generally understood. I’ve read articles in Political Analysis about things I don’t care about using methods I’ll never master that were nevertheless riveting, and I’ve slogged through articles on topics I care passionately about in allegedly substantive journals that I never understood. There’s one article, which my co-author on a long-term project and I have read a half-dozen times, that completely escapes our ability to summarize. Adopting a useful frame and engaging with readers is always good.
I get the sense that some folks believe that engaging with readers means dumbing down their argument. Far from it! Engaging with readers means presenting a complex argument smartly. That’s much more challenging than making a complex argument obscure. Anyone can be recondite; only geniuses can be understood.
Dan’s post on his self-experiment in raising citations to female scholars has drawn a critical comment from someone who wonders about whether similar patterns exist with reference to minority scholars and scholars from outside North America. The issues of gender, race, and national (regional) origin are distinct, but if we’re going to have a wide-ranging discussion about inclusion and exclusion in the field then we ought to address these issues squarely.
Both because of the unexpected direction yesterday took, and because I haven’t worked through my thoughts about any number of pressing current events, I thought I’d write about an experiment that I’ve been engaging in with my recent academic papers. You might recall the Maliniak, Powers, and Walter paper (soon to be out with International Organization) on citations and the gender gap. As Walter reported at Political Violence @ a Glance:
…. articles written by women in international relations are cited significantly less than articles written by men. This is true even if you control for institutional affiliation, productivity, publication venue, tenure, topic, methodology and anything else you can think of. Our hunch was that this gender citation gap was due to two things: (1) women citing themselves less than men, and (2) men tending to cite other men more than women in a field dominated by men.
After the wide-ranging discussion prompted by the piece, I decided to try to increase the number of women that I cited.
The last two years saw some major stories in my corner of the blogosphere concerning sexual harassment. Colin McGinn’s resignation from the University of Miami prompted widespread discussion across the academic interwebs, even if we didn’t say much about it. McGinn’s case seems far from unique in philosophy, as the What’s it Like to be a Woman in Philosophy blog has been chronicling for years. Sexual harassment at science-fiction conventions is also an ongoing problem. Genevieve Valentine’s treatment at Readercon produced an online firestorm last year.
Some of the discomfort with Brian’s recent post [which Brian has now pulled] derives from a generic rejection of its sexualization of conference dynamics. But some of it comes from the realities of sexual discrimination and harassment–not just in our field, but at conferences in particular.
I don’t need to rely on hearsay to conclude that sexual discrimination is a major problem in international studies and political science. My partner now refers to “practicing political science with ovaries” as a shorthand for acknowledged and unacknowledged sexism in the field.
On the other hand, most of what I know about sexual harassment at conferences comes from oblique, semi-whispered, or ‘you didn’t hear this from me, but’ style conversations. The problem seems both widespread and largely unacknowledged in the general community. Indeed, searching for “sexual harassment at APSA“, “sexual harassment at the ‘American Political Science Association’“, and cognate searches for the ISA turns up little more than documents describing official policies, a long list of conference papers, and reports on the meetings of caucuses within the organizations. Either the problem is not widespread–which I doubt–or we haven’t even reached the point where we have a safe environment for an open discussion of it.
Another day, another piece chronicling problems with the metrics scholars use to assess quality. Colin Wight sends George Lozano’s “The Demise of the Impact Factor“:
Using a huge dataset of over 29 million papers and 800 million citations, we showed that from 1902 to 1990 the relationship between IF and paper citations had been getting stronger, but as predicted, since 1991 the opposite is true: the variance of papers’ citation rates around their respective journals’ IF [impact factor] has been steadily increasing. Currently, the strength of the relationship between IF and paper citation rate is down to the levels last seen around 1970.
Furthermore, we found that until 1990, of all papers, the proportion of top (i.e., most cited) papers published in the top (i.e., highest IF) journals had been increasing. So, the top journals were becoming the exclusive depositories of the most cited research. However, since 1991 the pattern has been the exact opposite. Among top papers, the proportion NOT published in top journals was decreasing, but now it is increasing. Hence, the best (i.e., most cited) work now comes from increasingly diverse sources, irrespective of the journals’ IFs.
If the pattern continues, the usefulness of the IF will continue to decline, which will have profound implications for science and science publishing. For instance, in their effort to attract high-quality papers, journals might have to shift their attention away from their IFs and instead focus on other issues, such as increasing online availability, decreasing publication costs while improving post-acceptance production assistance, and ensuring a fast, fair and professional review process.
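The statistic Lozano and colleagues track can be illustrated with a toy calculation. The journal names and citation counts below are invented for the sketch, not drawn from their dataset:

```python
# Toy illustration of the Lozano et al. statistic: how much individual
# papers' citation counts vary around their journal's impact factor (IF).
# All numbers here are invented for illustration.

def impact_factor(citations):
    """A journal's IF is (roughly) its mean citations per paper."""
    return sum(citations) / len(citations)

def variance_around_if(citations):
    """Variance of individual papers' citations around the journal's IF."""
    jif = impact_factor(citations)
    return sum((c - jif) ** 2 for c in citations) / len(citations)

# Two hypothetical journals with the same IF but very different spreads:
tight_journal = [9, 10, 10, 11, 10]   # papers cluster near the IF
loose_journal = [0, 1, 2, 5, 42]      # a few hits carry the average

print(impact_factor(tight_journal), variance_around_if(tight_journal))
# -> 10.0 0.4
print(impact_factor(loose_journal), variance_around_if(loose_journal))
# -> 10.0 258.8
```

A rising variance means a journal’s IF tells you less and less about how any given paper in it will fare, which is exactly the decoupling the quoted passage describes.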
I understand that there’s been some recent blog-chatter on one of my favorite hobbyhorses, peer review in Political Science and International Relations. John Sides gets all ‘ruh roh’ because of a decades-old, but scary, experiment that shows pretty much what every other study of peer review shows:
This article discusses the importance of doing counter-intuitive work in the social sciences:
Erik Voeten has a nice piece up about recent research on the benefits of nuclear superiority. Does nuclear superiority provide an advantage to states engaged in crisis bargaining?
In the most recent issue of International Organization (ungated version) my colleague Matthew Kroenig argues that in a crisis between two nuclear powers, the state that enjoys a nuclear advantage is willing to run more risk than its opponent. This gives the nuclear superior state greater “effective resolve,” meaning that the other state is less likely to think that the state with nuclear superiority will back down.
The same issue of International Organization contains an article (ungated version) by Todd Sechser and Matthew Fuhrmann, who claim that nuclear weapons are of no use in increasing the credibility of threats to seize territory or another asset. Moreover, using nuclear weapons is costly. Thus, they find that while nuclear weapons are extremely useful for deterrence, they do little for “compellence” (making a threat to force an opponent to take some desirable action). They show with a different data set of crisis bargaining that threats from nuclear states are not more likely to succeed than threats from non-nuclear states.
My “Death to Job Talks!” provocation has produced some longer-form responses at other Political Science blogs. Jeremy Wallace defends the institution. Tom Pepinsky goes further and argues that “there is no alternative to the academic job talk.” Nate Jensen gets to the heart of the matter by asking if the “academic hiring process [is] broken.”
I was part of a short conversation last night about the standard job-search process in political science. For those of you who aren’t political scientists, but nonetheless feel compelled to read this, the process for junior candidates looks something like this:
At liberal arts colleges, of course, (1) the meeting with graduate students is replaced by (often multiple) meetings with undergraduates and (2) the research presentation is either supplemented or substituted with a teaching presentation to undergraduates. Phone or video interviews may occur at any stage of the winnowing process. Some schools also conduct interviews at APSA. I don’t know much about the two-year college process. Otherwise, YMMV.
1. Is there an objective standard for “so what?” No, there is not. Yet this doesn’t make it fully subjective. Any good paper will explain why what it reports matters, and few papers under-sell their findings. A reviewer’s job is to evaluate the degree to which those assertions are warranted. A good rule of thumb here might be: if you are struggling to decide whether a paper you are reviewing is important, it isn’t.

2. When is a methodological flaw a disqualifier? Always. Whether such flaws warrant a reject or an R&R depends upon the severity of the flaw and the potential significance of the results once the flaw is corrected.
Update: so the very first commentator revealed how much this was the product of a bad cold. Indeed, I’ve completely misnamed the post. It shouldn’t be “standard stories” but “contextual assumptions.” The most important rhetorical commonplace, in my experience, is exactly what the commentator said: “quality” of research and presentation. What I’m interested in is the broader issue of how we know what quality is and why we care about it–what are the appeals that adjudicate those issues?
Why do political science departments in research universities make offers to particular candidates? I don’t have a good answer for that, but I think listing common justifications is a good place to start thinking about the question. So here are some standard arguments, distilled down to their essence, for hiring decisions:
Some time ago Thomas Rid had an amazing post arguing for an open-access revolution in our field. I won’t repeat the arguments here; you can read them for yourself. The open-access movement is showing signs of momentum. Indeed, at BISA/ISA in Edinburgh, a number of people agitated for open access for the Review of International Studies (RIS) at its relaunch event.
It seems that there are very few significant IR journals in a position to go open access. The obvious candidates would be journals associated with professional associations — in addition to RIS, that would include the International Studies Association journals, the European Journal of International Relations, and some others. But at least BISA and SGIR (soon to be EISA) use the revenue from the journals to support their activities. That leaves the independent foundation journals, such as International Organization, as the most likely candidates for moving to open access.
Open-access journals sustain themselves through some combination of subsidy and pay-for-publication. In essence, authors provide a fee upon acceptance if they want their articles to appear “in print.” It took PLoS — probably the most famous member of the open-access family — a number of years for revenues to exceed costs. I can imagine a lot of IR scholars recoiling at paying such a fee. The math suggests that their institutions (if they are associated with one) should be happy to fork over the money, as doing so is cheaper than subscribing to journals. But right now, at least, institutions already pay for standard IR journals, so the open-access journals represent an additional fee. This isn’t an issue if the institution is Harvard University, but it might be for smaller places — particularly if the fee comes out of cash-strapped departmental coffers rather than scientific grants.
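The back-of-the-envelope version of that math, with every figure invented purely for illustration:

```python
# Hypothetical numbers for a mid-sized department; none of these figures
# come from actual journal or library budgets.
articles_per_year = 15        # articles the department publishes annually
fee_per_article = 1500        # assumed open-access publication fee ($)
subscription_bundle = 40000   # assumed annual cost of gated journal access ($)

# If fees fully replaced subscriptions, the institution would come out ahead:
open_access_cost = articles_per_year * fee_per_article
print(open_access_cost)       # cheaper than the subscription bundle

# But while the library keeps paying for gated journals anyway, the fees
# are a pure addition, not a substitution:
total_during_transition = open_access_cost + subscription_bundle
print(total_during_transition)
```

With these made-up numbers the fees alone are cheaper than the subscriptions, but paying both during a transition costs more than either, which is the squeeze on smaller institutions described above.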
The graphic comes from the Chronicle of Higher Education, which, in 2011, reported on a study highlighting the two biggest hurdles to open access:
A new survey of nearly 40,000 scholars across the natural sciences, humanities, and social sciences shows that almost 90 percent of them believe open-access journals are good for the research community and the individual researcher. But charges for publishing and the perception that open-access journals are of lower quality than traditional publications deter scholars from the open-access route, according to the Study of Open Access Publishing report, by an international team of researchers.
These concerns are likely to be a particular problem in IR. The aforementioned factors suggest that most open-access journals will be both digital-only and new. Given the field’s elitism concerning “journal hierarchy,” and its general conservatism when it comes to all things smacking of “web 2.0”, those are both significant barriers to success. I think it would be very difficult to ask IR scholars to pay-for-publication in an unranked, digital-only journal. While everyone knows this is the future, it isn’t clear how we will get there.
This reticence comes despite the fact that, if mid-tier blogs such as the Duck of Minerva are any indication, more people will read a given piece in an open-access digital journal than a typical one in a top-tier — let alone a second-tier — traditional journal.* Thomas Rid got access to the raw Taylor and Francis “most read” numbers and this is what he found:
These are, as Thomas notes, crude indicators. And blog posts are, in general, shorter and more accessible than academic articles. Still, they point to the advantages of ungated academic work, particularly if presented in the right way. It would be interesting to know the readership of the papers at e-ir, which might provide a better comparison.
Indeed, a few months ago PTJ and I had some discussions about starting a journal using a “non-traditional” model. We estimated our bare-bones costs at roughly $25K to pay for a graduate-student assistant, plus some unspecified amount to handle incidentals. Startup costs would probably run between $5K and $7K, and it would be best to have some money to subsidize undergraduate interns to help keep the technical side running. All of this assumes a journal that is, in essence, a labor of love. No money for course releases, travel and promotion, and all that other stuff.
One idea was to publish volumes as e-books for $0.99, but the economics don’t work and you wind up with a cheap, but still gated, product. The pay-for-play model would impose prohibitively expensive costs on authors, particularly in the context of a startup. And, of course, we both think that there are too many journals in the field already.
So the question remains: how to finance this kind of endeavor?
Still, there’s a certain attraction to the model.
An online open-access journal could firmly break with the tyranny of the quarterly volume. No more “online first” as an orphan, uncertain category. The editors simply need to keep the standards of the journal high — as reflected in quality and acceptance rate — and they can publish pieces whenever they are accepted and processed. Volume numbers would persist, but as temporal markers for the purpose of citation rather than as bundled artifacts.
Because the content would be ungated, it would be even easier to integrate the journal into a blogging and social-media environment than it would be for a traditional publication. One could build an intellectual community and ensure repeat visitors — and with them, greater likelihood that articles would be read and cited.
But, even if we could somehow come up with the funds, the experiment strikes me as pretty high risk. We would need to convince some high-profile scholars to provide quality pieces — ones good enough to survive rigorous peer review — to legitimize the endeavor. We’d need to convince reviewers to take it seriously. And there are a lot of other institutional barriers.
I guess what I’m talking about is, in essence, a Duck of Minerva journal, but (probably) with a less whimsical name. I wonder what our readers think of that?
*As I discovered while putting together a proposal for wrapping a journal in a webzine (see here for an example of poor implementation of a good idea), an undercount of the most-viewed pages at the Duck outdistances the download figures for the most-read piece at the American Political Science Review. And, as I alluded to earlier, neither KoW nor the Duck is in the same league as Crooked Timber, The Monkey Cage, Steve Walt, Dan Drezner, or any number of higher-profile blogs. By the way, if any journal editors out there are interested in bringing me on to spearhead a web strategy likely to (among other things) increase your impact factor, contact me.
I have been asked to revise and resubmit an article submitted to an IR journal. But it’s a big R&R; the editor even said it would be “a great deal of work” (groan). While I must make the changes to the ms, I must also submit a letter to the editors and reviewers explaining my changes. That’s normal of course, but I wonder how the community would appraise the proper length of a letter to the editor for a major R&R? In my last R&R, thankfully a minor one, I wrote 2-3 pages. But for a major R&R that “needs a great deal of work,” I was thinking around 10 pages. Is that too much? Would that bore you to tears? (Actually, don’t answer that.)
More generally, I think this is an interesting, undiscussed question for the field, because I have no idea if there are any norms at all on this. I can’t recall discussing this issue ever in graduate school (probably because I couldn’t have gotten an r&r anyway and didn’t even know what r&r meant). Nor can I recall seeing anything on this in all those journals we get from APSA (so many…). So whadda ya think?
Cross-posted on Asian Security Blog.