Setting editorial policy for the new journal
If there is to be a new low-cost, open-access vision science journal, what should be its editorial policy and reviewing process?
I think we all agree that reviewers should judge papers to be of "high technical quality". Other questions I'd like to discuss are:
- Should reviewers also consider novelty? If yes, could the journal have a section specifically for replications?
- Can we innovate in the reviewing process itself?
Tom Wallis Thu 16 Jul 2015 11:54AM
Regarding innovation in reviewing
I've been hearing very nice things about the eLife reviewing procedure. Specifically, the reviewers and the editor get together to consolidate their feedback into a single, coherent document:
> Reviewers get together online to discuss their recommendations, refining their feedback, and striving to provide clear and concise guidance. If the work needs essential revisions before it can be published, the Reviewing editor incorporates those requirements into a single set of instructions.
We all know that reviewers sometimes request the same thing in different language or, worse, request conflicting changes. The consolidation process seems to improve the quality of peer review by ensuring a single set of changes required for the article to be acceptable. It could also help to weed out uninformed or ill-considered reviewer comments, because the other reviewers can flag these as unimportant (all before the author receives feedback).
I have never submitted or reviewed for eLife, but I have heard second-hand reports that the process works very well and that both reviewers and authors are enthusiastic about it.
I think we could consider something like this for the new journal. It seems to achieve a number of the things that the Frontiers review system tried and failed to do.
Nick Scott-Samuel Thu 16 Jul 2015 5:04PM
I'd be cautious about a system that seems to mean more work than normal for reviewers. Many of us are already turning down review requests because of a lack of time...
Is the replication crisis something that hits vision as hard as other parts of psychology? My impression is that it is tied up with p<0.05 and all it implies. For much of psychophysics, at least, that doesn't feel like such a burning issue. Or have I misunderstood the finer details?
Tom Wallis Thu 16 Jul 2015 7:53PM
I've heard two or three second-hand accounts that it's not much more work than normal and that it can make for shorter overall reviews because there are fewer back-and-forths. Consolidating the reviews into a clear set of requests makes it easier to assess whether or not the authors have fulfilled those requests upon resubmission.
It would be good to get a more complete overview of how people find it though. I will email someone at eLife to ask whether they could give us a sense of how reviewers like the system.
Tom Wallis Fri 17 Jul 2015 2:17PM
I sent the following email to Andy Collings, eLife Executive Editor:
Dear Dr Collings,
I am part of a group of vision researchers discussing the idea of starting a new open-access journal for research on visual perception (see our online discussion here: https://www.loomio.org/g/gblruSIx/open-science-in-vision-science). We are interested in ideas to make the new journal innovative and useful, and I have heard good things about the review process at eLife (specifically, the consolidation of reviews into a coherent whole based on discussion between the reviewers and editor). I can see how this would be of great benefit to authors, but one concern is that it would require even more time investment on the part of the reviewers.
Can you give us a sense of the feedback from your community on your review system, from the perspective of both authors and reviewers? How have you convinced reviewers that the larger time investment is worthwhile?
Your advice or opinions would be greatly appreciated.
Tom Wallis Fri 17 Jul 2015 2:26PM
He responded as follows (I checked that he was fine with me posting this):
Hi Tom,
Thanks very much for your message. This looks/sounds really interesting. I saw there was some discussion of PeerJ: I wonder if there's any scope to see whether you'd be able to curate a collection of visual perception articles within PeerJ? If you go ahead and launch a journal, though, it would be interesting to hear how things progress! Either way, good luck, and please do not hesitate to contact me if you think I can help along the way.
In terms of our review process, you've touched upon the ways in which it's different (though others, like eNeuro, have certainly started to adopt parts of it).
Most papers that go out for review end up with three reviewers: our strong preference is for one of them to be the Reviewing Editor him or herself, so that the editor knows the paper inside out (i.e., can't just rely on the reviews from others).
Once the reviews have been submitted independently, the reviewers are revealed to one another and are encouraged to discuss their comments, and agree upon a decision (i.e., reach consensus). The Reviewing Editor leads that process. If the paper is rejected, we'd explain why and include the separate reviews. If we ask for revisions, the Reviewing Editor is expected to summarise the revisions the reviewers have agreed upon (ideally, rather than sending the separate reviews): note that one outcome of the discussion between the reviewers is to try to decide whether additional work is needed and can be completed in a reasonable time frame. The idea is to ensure that the authors know what they need to do to have their work accepted.
Because there's been a discussion between the reviewers as to what the authors need to do, and because the Reviewing Editor would usually have served as one of the peer reviewers, most accepted papers are accepted with one round of revision, and without going back to the peer reviewers. Authors really like this, for obvious reasons, for example because the process is designed to be constructive and efficient. Clearly there's more time invested by the editors and reviewers upfront, but certainly for the papers being accepted the reviewers aren't usually having to spend lots of time later on with re-reviews, and so on.
On the whole, reviewers enjoy the interactions with one another, and are enthusiastic about the approach, because they can see that it's much harder for one very negative person to derail the process (that person goes in knowing they'll need to justify their position to some peers), and so the process is usually quite constructive, and they see the value in that.
I'd close by saying that the feedback has been very positive, and that we haven't really needed to convince reviewers that the larger time investment is worthwhile, because they can see for themselves, they tend to enjoy the greater responsibility, and they usually appreciate that they won't be called upon later for re-review (though please note that in some cases, we do still go back to the reviewers, so we certainly don't guarantee this).
We publish the end result in almost all cases, so you can hopefully get a feel for the review process from the published letters and author responses. Here are a couple from relatively recent articles: http://elifesciences.org/content/4/e07436#decision-letter; http://elifesciences.org/content/4/e08033#decision-letter
Cheers,
Andy
Nick Scott-Samuel Fri 17 Jul 2015 2:54PM
Thanks Tom: that's really useful. That review process sounds interesting. One thing I noticed is that the editor also acts as a reviewer; I guess this might have implications for the number of editors our proposed journal would need...
Lee de-Wit Mon 3 Aug 2015 2:40PM
Offering an improved peer review process could be one of the best ways of ensuring that any new journal develops a good reputation, and I think we should be aiming to hold high standards (to break the 'open access = poor-quality research' mindset). The model used by eLife sounds great; it certainly requires a bit more communication and coordination at first, but I suspect it leads to a much more efficient process in the long run (avoiding three rounds of revision based on misunderstandings and a lack of consensus among reviewers...).
The novelty question is an important one. Personally, I would prefer to ask: is this a useful contribution to the field? If it is a replication, but with a good design, robust data, more controls, etc., then I would say it is useful, whether or not it is novel.
Tom Wallis Thu 16 Jul 2015 11:46AM
Regarding the bar to acceptance
In the other discussion thread, I suggested that we want to set a relatively high bar for "high technical quality". I think one way this new journal could easily fail is if the community comes to see it as a dumping ground for "lesser" papers. I suggest we maintain a high standard so that the journal is considered to have the same standing as Vision Research or Journal of Vision.
One thing I'm not sure about then, is whether it is best to also include a novelty component. Do we want the journal to publish a straight replication of Campbell and Robson (even if it is executed with the utmost methodological care)? If yes, could the journal have a section specifically for replications (to delineate these articles from primary research articles)?