Scientific Publishing: Quality Control, Review, and Impact Measurement

On December 17, 2024, I delivered a workshop on “Scientific Publishing: Quality Control, Review, and Impact Measurement” at the Hamburg University of Applied Sciences. The event gave participants a comprehensive overview of key aspects shaping scientific publishing today. Below, I summarize the central points discussed during the workshop.


Why Publish? Individual and Institutional Perspectives

The workshop began by examining the underlying motivations for publishing:

  • For Researchers: Academic careers are deeply tied to the reputation built through impactful publications. Metrics such as publication count, collaboration networks, and publication venues are often critical for advancement.
  • For Institutions: Universities rely on publication metrics for evaluations, resource allocation, and defining research priorities. Metrics like the number of PhDs, third-party funding volume, and publication output are central to institutional success.

This “publish or perish” culture highlights the need for understanding not only how to publish but also how to ensure quality and impact.


The Role of Peer Review in Quality Assurance

A significant part of the workshop focused on Peer Review—the backbone of academic quality control. Key points included:

  • What is Peer Review? Submissions are evaluated by experts (peers) chosen by editors. The process ensures scholarly rigor and determines whether a manuscript is rejected, revised, or accepted.
  • Review Timelines: Review durations vary across disciplines, from an average of 12–14 weeks in the life sciences to 22–25 weeks in the humanities and economics.
  • Rejection Rates: Differences in rejection rates reflect both discipline norms and publication competitiveness. For instance:
    • Life Sciences: 52%
    • Medicine: 54%
    • Humanities and Social Sciences: 61%

We also discussed different peer review types, including:

  • Single-blind, double-blind, and triple-blind reviews.
  • Emerging alternatives like open review (see below), where reviews are accessible, transparent, and often collaborative.

Alternatives and Innovations: Open Peer Review

An engaging part of the discussion centered on the innovative approach adopted by journals like Atmospheric Chemistry and Physics. Their open peer review model enhances transparency, efficiency, and accountability:

  • Submissions, reviews, and final publications are openly accessible.
  • The process deters plagiarism, speeds up community feedback, and integrates reviews into the scientific dialogue.

Impact Measurement: How Do We Assess Research Quality?

A core aspect of the workshop addressed how research quality and impact are evaluated using bibliometric tools and alternative metrics.

  1. Journal Impact Factor (JIF):
    • The JIF remains widely used but is criticized for its limitations, including a bias toward English-language journals, its susceptibility to manipulation, and its focus on journals rather than individual articles.
  2. Hirsch Index (h-Index):
    • The h-index measures an individual researcher’s cumulative impact but has its own shortcomings, such as disciplinary bias and an inability to capture innovative, low-citation work. A short sketch after this list illustrates how both the JIF and the h-index are computed.
  3. Altmetrics:
    • Altmetrics provide a newer perspective, measuring research attention on platforms such as X/Twitter, blogs, and news outlets. Although they often correlate with citation counts, they accumulate much earlier and can therefore serve as early indicators of impact, particularly in the digital age.
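
To make the two classic metrics concrete, here is a minimal Python sketch (not part of the workshop material; all citation counts and journal figures are invented purely for illustration):

```python
# Minimal sketch of how the two classic metrics are defined.
# All numbers below are made-up illustration data.

def h_index(citations):
    """A researcher has index h if h of their papers have
    received at least h citations each (Hirsch, 2005)."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def journal_impact_factor(citations_in_year, citable_items_prev_two_years):
    """JIF for year Y: citations received in Y to items published in
    Y-1 and Y-2, divided by the number of citable items from Y-1 and Y-2."""
    return citations_in_year / citable_items_prev_two_years

# Hypothetical researcher with seven papers:
print(h_index([25, 8, 5, 4, 3, 1, 0]))  # -> 4 (four papers with >= 4 citations)

# Hypothetical journal: 600 citations in 2024 to the 150 citable items
# it published in 2022 and 2023:
print(journal_impact_factor(600, 150))   # -> 4.0
```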

The discussion also highlighted initiatives like CoARA and DORA, which advocate for fairer research assessment practices beyond traditional metrics. By the way, you can find more information about innovations in research assessment here.


Persistent Identifiers (PIDs): Ensuring Visibility and Attribution

In the final part of the workshop, we highlighted the importance of Persistent Identifiers (PIDs) for researchers and their publications. PIDs, such as DOIs (Digital Object Identifiers) for publications and ORCID IDs for researchers, play a crucial role in ensuring:

  • Clear Attribution: PIDs link research outputs to authors reliably, avoiding ambiguities caused by similar or changing names.
  • Improved Discoverability: Publications with DOIs are easier to track, cite, and measure in both traditional citation metrics and altmetrics; the lookup sketch after this list shows how a DOI resolves to structured metadata.
  • Integration Across Platforms: Tools like ORCID ensure that researchers’ outputs are seamlessly recognized across databases such as Scopus or Web of Science.
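
To show what a PID buys you in practice, here is a small, hedged Python sketch that resolves a DOI to its metadata record via the public Crossref REST API; the DOI used below is a placeholder, so substitute any real one before running:

```python
# Minimal sketch: resolving a DOI through the public Crossref REST API.
# Crossref returns structured metadata, and author entries carry an
# ORCID iD whenever one is on record, linking the work to a person.
import json
import urllib.request

def fetch_doi_metadata(doi):
    """Return the Crossref metadata record ('message') for a DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["message"]

record = fetch_doi_metadata("10.1000/xyz123")  # placeholder DOI, replace it
print(record.get("title"))                     # Crossref stores titles as a list
for author in record.get("author", []):
    # "ORCID" is present only for authors who have registered one
    print(author.get("family"), author.get("ORCID"))
```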

Takeaways

The workshop emphasized that while traditional models of publishing and peer review remain central, evolving metrics and open approaches offer promising pathways to improve transparency, efficiency, and fairness in scholarly communication.

By understanding these tools and trends, researchers can better navigate the complex landscape of academic publishing, enhance the quality of their work, and maximize its impact.

Further information about the workshops I offer (in English or German) can be found here.

Thank you to all participants for their insightful questions and lively discussions.

I also created a podcast episode about this workshop with Google’s NotebookLM. If you listen to it, please be aware that it was produced by AI without any editorial corrections.

By Ulrich Herb

Graduate sociologist and information scientist (PhD), associate of scidecode science consulting – De Castro, Herb, Rothfritz GbR, working at Saarland University and State Library (Germany)
