Nikolaus Kriegeskorte
Total Pages: 194
A scientific publication system needs to provide two basic services: access and evaluation. The traditional publication system restricts access to papers by requiring payment, and it restricts the evaluation of papers by relying on just 2–4 pre-publication peer reviews and by keeping the reviews secret. As a result, the current system suffers from a lack of quality and transparency in the peer-review process, and the only immediately available indication of a new paper's quality is the prestige of the journal it appeared in. Open access is now widely accepted as desirable and is slowly becoming a reality. However, the second essential element, evaluation, has received less attention. Open evaluation, an ongoing post-publication process of transparent peer review and rating of papers, promises to address the problems of the current system. However, it is unclear how exactly such a system should be designed.

The evaluation system steers the attention of the scientific community and, thus, the very course of science. For better or worse, the most visible papers determine the direction of each field and guide funding and public policy decisions. Evaluation, therefore, is at the heart of the entire endeavor of science. As the number of scientific publications explodes, evaluation and selection will only gain importance. A grand challenge of our time, therefore, is to design the future system by which we evaluate papers and decide which ones deserve broad attention. So far, scientists have left the design of the evaluation process to journals and publishing companies. However, the steering mechanism of science should be designed by scientists. The cognitive, computational, and brain sciences are best prepared to take on this task, which will involve social and psychological considerations, software design, and modeling of the network of scientific papers and their interrelationships.

This Research Topic in Frontiers in Computational Neuroscience collects visions for a future system of open evaluation. Because critical arguments about the current system abound, these papers focus on constructive ideas and comprehensive designs for open evaluation systems. Design decisions include:

- Should the reviews and ratings be entirely transparent, or should some aspects be kept secret?
- Should other information, such as paper downloads, be included in the evaluation?
- How can scientific objectivity be strengthened and political motivations weakened in the future system?
- Should the system include signed and authenticated reviews and ratings?
- Should the evaluation be an ongoing process, such that promising papers are more deeply evaluated?
- How can we bring science and statistics to the evaluation process (e.g., should rating averages come with error bars)? A minimal sketch of this idea appears below.
- How should the evaluative information about each paper (e.g., peer ratings) be combined to prioritize the literature?
- Should different individuals and organizations be able to define their own evaluation formulae (e.g., weighting ratings according to different criteria)? A second sketch below illustrates this.
- How can we efficiently transition toward the future system?

Ideally, the future system will derive its authority from a scientific literature on community-based open evaluation. We hope that these papers will provide a starting point.
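To make the statistics question concrete, here is a minimal sketch of a rating average reported with error bars, in Python. Everything in it is illustrative: the function name, the example ratings, and the normal-approximation 95% interval are assumptions for this sketch, not part of any proposed system.

```python
import math

def rating_summary(ratings):
    """Mean rating with a standard-error-based ~95% interval.

    A minimal sketch: 'ratings' is assumed to be a list of numeric
    peer ratings of one paper. With few ratings, the normal
    approximation underlying the interval is rough.
    """
    n = len(ratings)
    mean = sum(ratings) / n
    # Sample variance (n - 1 in the denominator); undefined for n < 2.
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    sem = math.sqrt(var / n)    # standard error of the mean
    half_width = 1.96 * sem     # ~95% interval under normality
    return mean, (mean - half_width, mean + half_width)

# Example: five hypothetical ratings of one paper on a 1-10 scale.
mean, (lo, hi) = rating_summary([7, 8, 6, 9, 7])
print(f"mean = {mean:.2f}, 95% interval = [{lo:.2f}, {hi:.2f}]")
```

Reporting the interval alongside the mean would let readers distinguish a paper rated 8.0 by fifty reviewers from one rated 8.0 by two.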
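The question of user-defined evaluation formulae can be made similarly concrete. The sketch below assumes, purely for illustration, that each paper carries mean ratings on a few named criteria and that a reader supplies their own weights; the criterion names, weights, and function name are hypothetical.

```python
def paper_priority(criterion_ratings, weights):
    """Combine per-criterion mean ratings into one priority score.

    A sketch of a user-defined evaluation formula; nothing here is
    a proposed standard. Weights are normalized so that scores stay
    on the original rating scale and remain comparable.
    """
    total_weight = sum(weights[c] for c in criterion_ratings)
    return sum(weights[c] * r
               for c, r in criterion_ratings.items()) / total_weight

# One reader may prioritize reliability, another novelty:
ratings = {"novelty": 6.0, "reliability": 9.0, "importance": 7.0}
conservative = {"novelty": 1, "reliability": 3, "importance": 1}
trendspotter = {"novelty": 3, "reliability": 1, "importance": 1}
print(paper_priority(ratings, conservative))  # 8.0, favors reliability
print(paper_priority(ratings, trendspotter))  # 6.8, favors novelty
```

Letting individuals and organizations plug in their own weights would turn the evaluation system into an open steering mechanism rather than a single fixed ranking.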