Publication truths

Publication is the bread and butter of scholarship. Funders, university administrators, department chairs, and fellow scholars expect—and even demand—that academics publish their findings in respectable, acceptable scholarly forms. What I've learned as a historian in an ecology department is that what counts as respectable and acceptable varies greatly by discipline.

I’m writing here with a fair amount of publishing experience under my belt. I have published articles in both history journals like Technology & Culture and Journal of Urban History and natural science journals including Frontiers in Ecology & Environment and Restoration Ecology. I have published an edited collection of history articles (New Natures) and have another forthcoming this fall. I have not published a monograph (note: I do not count a PhD dissertation as a monograph since it is not independently peer reviewed prior to publication), although I worked with my husband on his book.

Based on all that, I want to share with you some “truths” that I’ve discovered through my interdisciplinary journal publishing experiences. Perhaps they are not true everywhere or for everyone, but I know they apply to me and the academic scene around me. I think they are important to put out for scholars in both the humanities and natural sciences to see, because both groups have things they could learn from each other.

1. Every discipline values only articles published in ‘their’ journals.

I was recently talking to my department chair about putting together my application for docent, a qualification which has no equivalent in the US academic system but marks higher scholarly competence than a PhD and is supposed to mean you are qualified to teach. He asked me about my publication history and was happy to learn that I had forthcoming articles in Biodiversity & Conservation and Bioscience, because those are “good science journals”. The message I took away was that my publications in history journals probably wouldn’t count for much when the Faculty of Natural Sciences reviewed my application.

The same bias holds true in the other direction, of course. When I have been a candidate for a medieval history job, the reviewers couldn't have cared less that I had published in science journals; they discussed only my work specifically in medieval history. Most historians know about Science and Nature (although they've probably never read them), so publishing in those might earn some points, but other than that, science publications count for little.

2. Scientists have faith in scholar citation counts, even though they shouldn’t.

Soon after I started in my department, I learned that we have 'citation cakes'. When someone has published an article that reaches 100 citations according to Web of Science, they bring in a cake for everyone to celebrate. My jaw almost hit the floor the first time I heard this. 100 citations?!? I'll be lucky if the total number of citations to all of my history articles put together reaches 100 before I die.

What this reveals is that science journal articles get—and scientists expect—a huge number of citations to a given article. This expectation is reflected in the development of metrics like journal citation indexes, which measure quality by quantity.

As I see it, there is a fatal flaw in counting citations in Web of Science as a measure of quality for a historian. The majority of historical scholarship is not captured in Web of Science because it appears in edited volumes, monographs, and unindexed journals. There are currently only 69 journals listed in the History category in Web of Science, meaning that many, many history journals are missing. For example, I've published in Water History and History & Technology, neither of which is listed in Web of Science. According to Web of Science I have 7 publications and 10 citations—compare that to Google Scholar, which shows 17 publications and 23 citations because it captures a wider variety of publication types. Moreover, I don't think citations really tell you whether or not someone is having a qualitative impact on the field.

3. An article doesn’t have to be 6000+ words to be real scholarship.

History journals are really hung up on length. You’ll be hard pressed to find an outlet that will publish an article less than 5,000 words, and there are some that won’t even consider something less than 10,000. In the ecology journals, most often the trick is getting a text down to the maximum length, which for some formats is 2,000 or 3,000 words, although there are outlets like Ecology Monographs for ridiculously long texts.

I've written some of these short texts and I can attest that it takes a sharp knife to pare an argument down to its essence and still make it solid, but it is doable. And in some cases, the story being told really doesn't need more words, yet it is an important story to make known. I think history journals need to start offering "short communications" sections to make history more readable for the public and faster to write for scholars. Shorter history is not watered-down history; it can be more focused, better history.

4. Science journals reject articles based on negative reviews much more often than history ones.

I've seen reviews from a history journal that basically said the reviewers disliked the whole framing of the article, but the editors still allowed a resubmission. History editors often have faith that the scholar can address the reviewers' criticisms, either by making changes (even very significant ones) or by arguing why a change should not be made. To me, this is good practice because it gives authors a chance to respond to criticism.

Science journals I've interacted with don't work this way. If one of the reviewers is particularly negative about the work, it'll be rejected. It doesn't matter that the author might have perfectly good arguments against the statements. Science journals have too many submissions to deal with, so if the reviewers don't all agree that a piece is pretty close to publishable, it's out. The reviews I've seen for rejected manuscripts in science journals would almost always have resulted in a "revise and resubmit" decision at a history journal.

Now, the most problematic part of the science journal rejection rate is that when you are publishing an interdisciplinary piece, you often get one reviewer out of the three who doesn't like it because…well…it's outside of the norm. The editors could read the reviewer comments and override the shortsightedness, but, as I said, with so many submissions, rejection is easier. This makes publishing a history-based article in a science journal a challenge, because which reviewers you get largely determines your chances of publication.

5. Science journals can be too rigid in their article structure requirements.

I have a policy history piece published in Ocean and Coastal Management. In my version, it had nice descriptive headings to tell a narrative about the historical development of a particular policy. But the journal insisted that it follow their standard layout: Introduction, Method, Results, Discussion, Conclusion. Historians out there know that “results” and “discussion” don’t make any sense for historical analysis. Each documentary source gets integrated into the narrative—it is both included as a data point and is interpreted at the same time. Needless to say, it was a struggle for me to get my policy history to fit the scientific experimental model. I did it, but it wasn’t pretty.

6. What counts as ‘data’ is different for each discipline.

Scientists collect data points and then process them, synthesize them, and present the end results in graphs, tables, and summary statistics. Nobody sees the raw data (unless the journal has an online data repository, and I'm not sure how many people actually go look at those). Scientists present processed data and then draw conclusions.

In history, that's not how it works at all. Instead, bits of data from a myriad of historical sources (be they quotations or 'facts' or suppositions) are interwoven with analysis. Historians then provide citations for each of those bits of data—something scientists never have to do—because a future historian should be able to re-create that exact piece of information. While scientists describe the methods of an experiment so it could be re-created, they would never expect the raw data to come out exactly the same. Yet that is precisely what a historian expects from a well-written, fully documented history. Moreover, historians often expose the raw data to the reader so that the reader can reach the same conclusion the historian has.

7. Science journals don’t know how to handle historical primary sources.

My articles in science journals are policy histories. I have examined how policymaking has happened using archival documents, published governmental documents, and news media coverage. Science journals never list accepted citation formats for those kinds of sources. As a historian, I want the citations to make the sources 'findable' by future historians, a requirement that is entirely different from science data (see #6). Although I write the citations in acceptable formats according to Chicago style or MLA or the specific archive's cataloging standards, inevitably the copy editor changes them into some mangled citation that completely loses its value, and I have to fight to change it back to something a historian would understand.

8. History scholarship is invisible for extended periods because of publishing practices.

Most science journals have websites that show articles that have been accepted but not yet printed in an issue. These Early View or Online First articles are fully citable and have DOIs. What this means is that cutting-edge scholarship is available quickly to others.

Very few history journals have such systems. Because of the publishing backlog at history journals (you have to realize that most of them publish 4 times a year with 4-5 articles per issue, instead of the science model of every month with 10 articles), I have had articles that were accepted over a year before they ever appeared. The same delay, of course, happens with edited volumes, which often take about a year from final acceptance to print. What this means is that no one knows about my article (and I can't tell them) until long after it was finished.

This invisibility was particularly pertinent after I published a peer-reviewed letter in Frontiers in Ecology & Environment commenting on an article published a few months before. The article's authors were given the chance to reply, and in their reply they used an example that I had in fact written an article about. According to my article, their analysis of the example was wrong, but my article had been accepted for an edited collection that was not yet published, so I couldn't officially correct them while the issue was still timely.

9. Peer review doesn’t work the same in different disciplines.

I always thought peer review worked the same everywhere: an author submits a piece, it is sent out to reviewers who don't know who wrote it, and when the reviews come back, the author isn't told who the reviewers were. And that is basically the history journal system.

What I've found out is that science journals often don't work that way at all. Instead, the author puts all of their information on the first page of the article, so the reviewer knows exactly who wrote it. In fact, I've been a reviewer for Ocean & Coastal Management, and their online review system is so slick that it provides a link to the first author's Web of Science profile so you don't even have to search for him or her (based on #2, you can guess what I think of this). So science publishing is rarely double-blind.

10. Publishing an article can cost money.

Historians always seem to get outraged when the idea of paying a publishing fee arises, but it is standard practice at science journals. I've had to pay "page fees" on several occasions. These are assessed only after the article is accepted, so I don't think they have affected the review process. But they certainly might affect the scholarship published there. Many natural scientists work under grants from funding agencies (certainly that is the case at Umeå University), and these page fees are built into the budgets of the projects. Historians, however, are much more rarely funded by outside agencies, so the question arises: who pays? If a publication costs €300 and you don't have a grant to cover it, it would come out of your own pocket, and I don't know many historians who would find that acceptable. So if you are a historian who wants to publish in a science journal, remember the page fees.

As I see it, these important differences in the way publishing works need to be recognized if interdisciplinarity is ever to be truly embraced. Crossing disciplinary lines means knowing the rules of the game on the other side, while at the same time trying to stretch them.
