The evaluation of scientific research is undergoing a seismic shift, driven by the integration of Current Research Information Systems (CRIS), Open Science principles, and Artificial Intelligence (AI). In our recent paper, Otmane Azeroual, Joachim Schöpfel, and I explore how CRIS can revolutionise research assessment, highlighting the transformative changes under way in the research landscapes of France and Germany and presenting compelling arguments for innovation in research assessment.
The Paradigm Shift in Research Evaluation
Traditional metrics such as the Journal Impact Factor (JIF) and the h-index have been cornerstones of research evaluation for decades. However, these tools face mounting criticism for their limitations, including disciplinary biases, a narrow citation window, and inadequate representation of diverse scholarly contributions. To address these issues, new frameworks and methodologies are emerging that emphasize qualitative aspects of research alongside quantitative ones.
The Role of Current Research Information Systems (CRIS)
CRIS, also known as Research Information Systems (or Forschungsinformationssysteme, FIS) in Germany, are at the forefront of this transformation. These systems consolidate data on publications, funding, collaborations, patents, and more to provide a comprehensive view of research performance. Universities and research institutions across France and Germany are leveraging CRIS to facilitate performance-based budgeting, streamline project management, and support Open Science initiatives.
In France, the development of platforms like HAL and CapLab exemplifies the growing emphasis on centralized and interoperable research management systems. In Germany, widely adopted CRIS platforms such as PURE and HISinOne-RES are fostering data integration and transparency, aligning with international standards.
Open Science: A New Era of Research Accessibility
Open Science principles, championed by initiatives such as the San Francisco Declaration on Research Assessment (DORA) and the Coalition for Advancing Research Assessment (CoARA), are reshaping how research is evaluated. These movements advocate for reducing reliance on traditional metrics and embracing indicators that reflect openness, accessibility, and ethical practices.
Key Open Science indicators include:
- The number of Open Access publications and datasets.
- The number of items shared under open licenses in line with the FAIR principles (Findable, Accessible, Interoperable, Reusable).
- Contributions to citizen science and media engagement.
These indicators aim to democratize research and ensure fair recognition of diverse scientific contributions by prioritizing transparency and accessibility.
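As a rough illustration of how such indicators might be aggregated, a CRIS could count them directly from its publication metadata. The record schema and field names below are hypothetical and purely illustrative, not taken from any actual CRIS platform:

```python
from dataclasses import dataclass

@dataclass
class Record:
    """Hypothetical CRIS metadata record; field names are illustrative."""
    title: str
    open_access: bool      # item is available Open Access
    fair_compliant: bool   # item follows the FAIR principles

def open_science_indicators(records):
    """Count simple Open Science indicators over a set of records."""
    return {
        "open_access": sum(r.open_access for r in records),
        "fair_compliant": sum(r.fair_compliant for r in records),
        "total": len(records),
    }

records = [
    Record("Paper A", open_access=True, fair_compliant=True),
    Record("Dataset B", open_access=True, fair_compliant=False),
    Record("Paper C", open_access=False, fair_compliant=False),
]
print(open_science_indicators(records))
# {'open_access': 2, 'fair_compliant': 1, 'total': 3}
```

In practice these flags would be derived from richer metadata (license fields, repository deposits, dataset identifiers), but the aggregation principle is the same.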
Artificial Intelligence: The Game-Changer
The integration of AI into research assessment offers unprecedented opportunities for data analysis and prediction. Machine learning algorithms can identify patterns, track emerging trends, and evaluate research impact with remarkable precision. However, this technological advancement is not without its challenges. Ethical concerns such as data privacy, algorithmic bias, and the need for explainable AI are critical considerations for the responsible implementation of AI in research evaluation.
The Pivotal Role of Libraries
Libraries are uniquely positioned to play a central role in this evolving landscape. As custodians of research data and advocates for Open Science, libraries can:
- Ensure data quality and standardization through the use of Persistent Identifiers (PIDs).
- Facilitate the implementation of ethical AI tools.
- Serve as intermediaries between researchers, IT departments, and funding agencies.
To effectively fulfill these roles, libraries must invest in training and resources to keep pace with technological advancements.
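One concrete data-quality task in this context is validating Persistent Identifiers as they enter the system. As a minimal sketch, an ORCID iD can be checked with the published ISO 7064 MOD 11-2 check-digit algorithm; the helper function names here are my own:

```python
def orcid_check_digit(base_digits: str) -> str:
    """Compute the ORCID check digit (ISO 7064 MOD 11-2)
    from the first 15 digits of the identifier."""
    total = 0
    for ch in base_digits:
        total = (total + int(ch)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Validate an ORCID iD given in the form '0000-0002-1825-0097'."""
    digits = orcid.replace("-", "")
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    return orcid_check_digit(digits[:15]) == digits[-1]

print(is_valid_orcid("0000-0002-1825-0097"))  # True (a well-known test iD)
print(is_valid_orcid("0000-0002-1825-0098"))  # False (corrupted check digit)
```

Checks like this catch transcription errors at ingest time, before bad identifiers propagate into reports and evaluations.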
Conclusion
The convergence of CRIS, Open Science, and AI heralds a new era of research assessment that promises greater fairness, transparency, and inclusivity. While challenges such as ethical considerations and data standardization remain, the potential benefits are transformative. This revolution in research assessment is an opportunity to redefine how we measure and value scientific excellence.
The paper (in German) is available here:
Azeroual, O., Herb, U., & Schöpfel, J. (2024). Forschungsbewertung neu definiert: FIS, Open Science und KI-gestützte Innovationen. b.i.t.online, 27(6), 521–528.
https://dx.doi.org/10.22028/D291-43771
https://www.b-i-t-online.de/heft/2024-06-fachbeitrag-azeroual.pdf
A podcast episode (in English) about this publication is also available. I created it with Google's NotebookLM; if you listen to it, please be aware that it was produced by AI with no intellectual corrections.