Background of QRiH


The report Duurzame Geesteswetenschappen (Sustainable Humanities), also known as the Committee Cohen Report (2009), observed that, in terms of research assessment, the humanities are too much at the mercy of models derived from the exact sciences and medicine. The report recommended that the humanities develop their own set of assessment standards. In the years that followed, various Royal Academy committees tackled this problem in both the humanities and the social sciences (see, e.g., the report Quality Indicators for Research in the Humanities, May 2011). These developments coincided with the drafting of the Standard Evaluation Protocol 2015-2021 (SEP) for research assessment, which created greater scope for assessing both the quality and the societal relevance of research.


Basic principles

While the Academy's report provided several useful starting points, it did not offer a ready-made working model for researchers, faculties and assessment committees. In late 2012, the national organisation of deans of humanities faculties (DLG) – with the consent of the Coordinating Board for Sustainable Humanities – endorsed a proposal to develop an operational model for Quality Indicators for Research in the Humanities. The national organisations of deans of Philosophy (DWB), Religious Studies (DGO) and the Royal Netherlands Academy of Arts and Sciences (KNAW) agreed to join forces with the DLG. 


The lead partner in the project was the University of Amsterdam and coordination was entrusted to an Academic Committee and a Steering Committee or work group (for a list of members, see ‘About QRiH’).

The Academic Committee identified the underlying principles of the project in 2014 as follows:

  • The model should mainly reflect ‘internalised values’, and avoid any suggestion of a sort of absolutist objectivity.
  • The model should mainly serve as guidance and not as a template for bureaucratic exercises.
  • The model should allow for considerable differentiation so as to (1) achieve coherence and (2) be recognisable within the separate research domains.
  • Open access publications should also have a place in the model.
  • The model should be multifunctional: it should not only serve to assess individual achievements, but also to support policymaking, for example with regard to the composition and objectives of research groups.
  • The model should also serve as a ‘career reflection’ for researchers (especially younger ones), and should promote certain practices associated with societal impact and exposure.
  • The model should allow for valorisation and impact.


Setup and implementation of the KIG project

After a hesitant start in which energies were focused on surveying the task at hand, the project objectives were identified and a strategy designed to create a model. The idea was to do this from the bottom up, based on empirical findings and values familiar to and acknowledged by researchers. The project thus struck out in new directions, even when viewed in an international context.

The first step was to survey the various publication cultures within the humanities. A pilot project was set up for this purpose, based on research data at the universities in Leiden and Amsterdam. The following step along the way to a geography of publication cultures was to call on the national research schools, based on the premise that this would cover the vast majority of humanities research, and with existing communities there serving as liaisons.


In this setup, the research schools were allocated a crucial role – a role that would later take the shape of the newly formed Domain Panels, which have been accorded a specific status in QRiH. Their first task was to classify and qualify journals and publishers based on material supplied by the KIG project group.


At the time that the research schools joined the project, in mid-2015, the Dutch academic world was in turmoil – a situation that had an impact on the project. Some research school boards were troubled. Who would guarantee that this model wouldn’t be used to create a culture of blame? Publication of the SEP 2015-2021 provided an answer. While data collection was shifted to the back burner, the focus of the project became the development of a manual for assessing humanities research based on the SEP. The underlying rationale was that the SEP offered ample scope to create a framework specifically for the humanities, along with the relevant indicators.

In 2016, this shift in approach led to the first version of the manual, whose core message was that self-assessment reports in the humanities should take the form of a narrative, with arguments underpinned by robust data categorised using the indicators. The manual was given a very positive reception. The steering group could then go about developing the indicators and collecting the data needed to identify those deemed ‘authorised’.


These efforts led to a presentation on 15 December 2016 featuring the manual and a demo of the QRiH website. The presentation was held at the Royal Netherlands Academy of Arts and Sciences in the presence of its president, José van Dijck. In the six months that followed, the focus was on rewriting the manual, reorganising the website (partly in light of various discussions and trial runs), elaborating and refining the indicators, and having the Domain Panels review all lists and profiles. In the meantime, the three national organisations, DLG, DWB and DGO, joined with the Royal Netherlands Academy of Arts and Sciences in founding the QRiH Alliance. This also involved appointing an overarching National Authorisation Panel, responsible for authorising the substance of the instrument. A decision was further taken to use the instrument in the approaching round of site inspections and assessments and to make resources available to maintain and continue to refine the instrument in the years ahead.