Peer Review Week 2017 celebrates the importance of peer review in maintaining the quality and accuracy of science. Today we shed light on the peer review process in conference proceedings.
Conference proceedings can be a great format for publishing important and valuable research and for communicating new results much faster than journals. Did you know that conference proceedings are not just a simple compilation of conference papers, but also go through a rigorous, oftentimes even stricter, peer review process?
Let's look at an example. The proceedings of the 18th International Conference on Agile Software Development, XP 2017, were recently published. As the preface explains, there were 46 submissions, out of which 14 full and 6 short papers were selected for presentation at the conference. This translates to a 30% acceptance rate for full papers, meaning that only about one in three papers made it to the conference; on top of that, each paper received at least three reviews!
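The arithmetic behind such figures is simple enough to script. Here is a minimal Python sketch using the XP 2017 numbers above (the function is ours, for illustration only):

```python
def acceptance_rate(accepted: int, submissions: int) -> float:
    """Share of submissions accepted, as a percentage."""
    return 100.0 * accepted / submissions

# XP 2017, as stated in the preface: 46 submissions, 14 full papers accepted.
print(f"{acceptance_rate(14, 46):.0f}%")  # -> 30%
```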
So, where's the transparency?
Dr. Mario Malički of the University of Split examined conference proceedings to extract any information that might pertain to peer review; in particular, he searched for the terms authors use to describe conference peer review processes. Building upon his work, in late 2015 the Springer Computer Science editorial staff started gathering such information from conference chairs in a systematic manner. This was done for all conferences publishing in Springer's computer science proceedings series, including the Lecture Notes in Computer Science (LNCS), which recently celebrated its 10,000th volume.
This information about the peer review process takes into account parameters such as the number of submissions, the acceptance rate, the type of review (single-blind or double-blind), and the number of reviews per paper.
Such indicators show how the process of one conference differs from another and how strict and competitive its peer review is. For instance, the 13th edition of one conference had 150 submissions and accepted 31 full papers (an acceptance rate of roughly 20%), while another conference accepted 29 out of 55 submissions (about 53%).
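To make such comparisons concrete, each conference's indicators can be captured in a small, uniform record. Below is an illustrative Python sketch using the submission and acceptance figures from the two examples above; the conference labels, review types, and review counts are placeholders, not data from our analysis:

```python
from dataclasses import dataclass

@dataclass
class ReviewIndicators:
    conference: str
    submissions: int
    accepted_full: int
    review_type: str        # "single-blind" or "double-blind" (placeholders below)
    reviews_per_paper: int  # minimum reviews each submission received

    @property
    def acceptance_rate(self) -> float:
        """Acceptance rate for full papers, as a percentage."""
        return 100.0 * self.accepted_full / self.submissions

# Submission/acceptance figures from the two examples above; the names,
# review types, and review counts are made-up placeholders.
examples = [
    ReviewIndicators("Conference A", 150, 31, "double-blind", 3),
    ReviewIndicators("Conference B", 55, 29, "single-blind", 2),
]
for c in examples:
    print(f"{c.conference}: {c.acceptance_rate:.1f}% acceptance, "
          f"{c.review_type}, at least {c.reviews_per_paper} reviews per paper")
```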
These differences reflect the different cultures prevalent within research communities: some conferences are more closed, with shared quality values, and do not take in many submissions from outside, while others are more international and well known, and therefore sometimes also attract more dubious papers.
Making review indicators explicit enables better comparison of peer review processes across conferences and sub-disciplines. One can now answer research questions like “Is it true that pattern recognition uses single-blind review, while the AI community goes for double-blind?” or “Is the acceptance rate in HCI higher than in machine learning?” Interestingly enough, such differences are often not explicitly known within a given community: our analysis has shown that many conferences refer to their peer review process as “THE peer review process,” assuming it is known to everyone.
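Once such indicators are collected in one place, questions like these become simple queries. A minimal sketch, assuming the collected indicators were exported to a CSV; the file name and the columns are our assumptions, not the actual dataset:

```python
import pandas as pd

# Hypothetical export of the collected indicators; the file name and the
# columns "field", "review_type" and "acceptance_rate" are assumptions.
df = pd.read_csv("conference_review_indicators.csv")

# Does pattern recognition lean single-blind while AI goes double-blind?
print(df.groupby("field")["review_type"].value_counts(normalize=True))

# Is the acceptance rate in HCI higher than in machine learning?
print(df.groupby("field")["acceptance_rate"].median())
```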
Describing peer review processes explicitly contributes further to transparency. Staff members from the Springer Computer Science editorial team are discussing the parameters for describing conference peer review processes within a cross-publisher working group.
Since the group includes major conference publishers and other relevant stakeholders, the goal is to develop a new industry standard for peer review transparency in conference proceedings. Such a standard would then most likely be implemented within CrossMark, allowing everyone to see which peer review process a paper was subject to simply by clicking the CrossMark icon.
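What that per-paper metadata could look like is still up for discussion, but conceptually it would bundle exactly the indicators described above. A purely illustrative sketch follows; the field names are our invention, not an actual CrossMark or Crossref schema:

```python
# Purely illustrative: peer review metadata that could accompany a single
# proceedings paper. Field names are our invention, not a real schema.
peer_review_record = {
    "doi": "10.1000/example-doi",   # placeholder DOI
    "review_type": "single-blind",
    "reviews_per_paper": 3,
    "submissions": 46,
    "accepted_full_papers": 14,
    "acceptance_rate_percent": 30,
}
```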
More information about the study can be found in the abstract “Peer Review in Computer Science Conferences Published by Springer” by Mario Malički, Martin Mihajlov, Aliaksandr Birukou, and Volha Bryl, to be presented as a poster at the International Congress on Peer Review and Scientific Publication in Chicago. This work has also been presented elsewhere.
We welcome your thoughts about peer review in conference proceedings. Find out more about conference proceedings at Springer.