Benchmarks in QRiH

In a benchmark, the research unit under evaluation is compared with a similar unit, national or international. A benchmark is useful on the one hand for the research unit itself: how do we compare with another relevant unit? On the other hand, it helps evaluators to form a judgement.

A good, transparent benchmark compares units on characteristics for which there is a level playing field. This is difficult to achieve in practice, firstly because research units can vary considerably in scope, mission and research program, and secondly because evaluation procedures and criteria often differ substantially.


There are two ways to work with benchmarks. In the limited number of domains in which much is published in international journals, production and citation indicators can be used (see the Domain Profiles for their usability per domain). A second, more qualitative form of benchmarking can be applied to all domains: a comparison based on research programs and objectives. Both forms of benchmarking can be part of the SWOT analysis, which also aims to establish the position of the research unit in its research environment.


For the benchmark, indicate which relevant research unit (or units) is used for the comparison and how the comparison can be made:

  • The relevance of the chosen research unit (or units) should become clear from the unit's own narrative, in particular its mission and ambition.
  • Besides indicating similarities, it is also possible to indicate differences.
  • If possible, compare the research units on the basis of the relevant indicators (section 2 of the format). These indicators will usually be qualitative in nature. Consider, for example, a comparison of the research methods used, the types of partners outside academia, the types of knowledge-sharing activities, or the focus on personal or social grants.
  • If available, results from recent research evaluations of the other units can be mentioned.

Comparisons can also be made on other grounds, for example on the basis of the QRiH instrument:

  • The profile of the research unit can be compared with the (national) domain profiles (see Domain profiles).
  • The interdisciplinary character can be underlined by highlighting the diversity of publications in authorized channels of different domains.
  • The publication profile of the unit can be translated into a specialist, disciplinary or multidisciplinary orientation (see the webpage on production profiles of institutes).

In general, the LAP advises caution with quantitative benchmarks. In most humanities domains it is not possible to draw up quantitative benchmarks based on generic, domain-specific characteristics, such as citation analyses based on bibliometric data from Web of Science, Scopus or Google Scholar (see the web pages Bibliometrics and Domain Profiles).


If desired, the QRiH lists of journals and publishers, or other international quality systems such as VABB-SHW, CRISTiN or ERIH Plus, can be used (see the webpage Other Initiatives). The LAP notes, however, that these other lists are much more extensive than the QRiH lists and therefore less selective.

The lists of journals and publishers can also be used to describe the institute's or unit's production profile, in support of the narrative. This can be done by breaking down the lists of journals into the various target groups: Subdomain, Domain, Multidisciplinary and Hybrid. Articles that form part of the research unit's actual output can then be classified by target group. Obviously, this is only possible for articles published in journals selected by the panels.
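The breakdown described above can be sketched in a few lines of code. This is a minimal illustration: the journal names and their target-group labels are invented, and in practice the mapping would come from the QRiH lists as selected by the panels.

```python
from collections import Counter

# Hypothetical mapping from journal name to the target-group label
# assigned by the panels (all entries invented for illustration).
JOURNAL_TARGET_GROUP = {
    "Journal A": "Subdomain",
    "Journal B": "Domain",
    "Journal C": "Multidisciplinary",
    "Journal D": "Hybrid",
}

def production_profile(articles):
    """Count a unit's journal articles per target group.

    Articles in journals not selected by the panels cannot be
    classified and are reported under a separate label.
    """
    counts = Counter()
    for journal in articles:
        counts[JOURNAL_TARGET_GROUP.get(journal, "Not on QRiH list")] += 1
    return dict(counts)

# A unit's (invented) output: five articles in five journals.
unit_output = ["Journal A", "Journal A", "Journal C", "Journal D", "Journal X"]
print(production_profile(unit_output))
```

The resulting counts per target group are the unit's production profile, which can then be set alongside that of a comparison unit.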


By way of example, we look here at the profiles of researchers affiliated with the Humanities faculties at Leiden University and the University of Amsterdam (UvA) and employed by the Huizinga, Art History, NOSTER, LOT and NICA research schools, and of researchers affiliated with five selected research units within ICOG at the University of Groningen. The differences between these profiles should not be regarded as normative but rather as indicative of each research group's specific research culture. (Text continues after two figures.)

Profiles UvA



Profiles Leiden

Comparing the production profiles of researchers affiliated with the School for Linguistics in Leiden and at the UvA shows that in both cases the profiles are more specialist than those of the other schools, and that the two groups differ in that UvA researchers produce more specialist publications than their peers in Leiden. There are also differences in the domain of Cultural Studies: researchers in Leiden are more multidisciplinary or interdisciplinary in orientation than those in Amsterdam.



Profiles of five ICOG units

This page provides more information on the usefulness and limitations of bibliometrics (e.g. citation analysis) in the humanities:

  • Possibilities and limitations of such databases as Google Scholar, Scopus and Web of Science
  • Examples of Google Scholar analyses for publications in various domains
  • Using the h-index


Possibilities and limitations of bibliometrics in the humanities

Traditional bibliometrics has considerable limitations when applied in the humanities. That is mainly owing to the data sources used, such as Web of Science (WoS) and Scopus. These consist of data that can be traced to a pre-defined set of scientific/scholarly journals, mainly English-language journals with an international scope. However, much of the research conducted in the humanities is published in non-English-language journals. While these journals may well be internationally oriented and have an international reputation, they are not included in the aforementioned data sources (Van Leeuwen, 2013). In addition, these data sources do not include books and book chapters. As the profiles of the research cultures demonstrate, these forms of scientific/scholarly communication are crucial for most domains in the humanities. Traditional bibliometrics may prove useful in humanities domains in which internationally oriented (English-language) journals are an important channel of communication, such as linguistics.


Google Scholar

Google Scholar is another way to collect bibliometric data. It is a search engine that searches through the files of most of the world's largest university libraries, large-scale repositories, the complete electronic versions of journals published by major publishing houses, and books indexed and made accessible by Google Books. This set of sources is much larger than that covered by WoS or Scopus, although its actual size is unclear. Google Scholar provides much more information than WoS or Scopus, including citations in books and edited volumes referencing other books and edited volumes. Google Scholar can therefore be used for bibliometric analysis in the social sciences and humanities (Prins et al., Research Evaluation, 2016).

Google Scholar does have some obvious limitations: it does not index all journals, for example because some are held only by libraries that are not accessible to this search engine. It is also unable to capture some forms of citation, depending on the editorial rules maintained by certain journals and, in some instances, on citation practices and cultures in the various humanities domains.

The diagram below shows how certain domains differ in terms of the percentage of journals indexed in Google Scholar, and in the degree of fit between each one's citation culture and the methods used by this search engine. It reveals, for example, that less than half of the journals selected by the Art History panels are properly indexed in Google Scholar and that most of the journals that are indexed have a citation culture that differs from the rest of the domain. That means that Google Scholar cannot be used for bibliometric analysis in the Art History domain. The situation is different for Islam Studies and Cultural Studies, although journals in languages other than English are less well indexed, making Google Scholar less useful in those domains.


Differences in reference culture across five domains



Why the h-index does not work for research assessment, including in the humanities

Definition: a researcher has index h if h of his or her N publications have each been cited at least h times, while the remaining (N − h) publications have each been cited no more than h times.
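The definition can be made concrete with a short sketch; the citation counts below are invented for illustration.

```python
def h_index(citations):
    """Compute the h-index: the largest h such that at least h
    publications have each been cited at least h times."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        # After sorting, the rank-th publication has the rank-th
        # highest citation count; h grows while count >= rank.
        if count >= rank:
            h = rank
        else:
            break
    return h

# Five publications cited 10, 8, 5, 4 and 3 times:
# four of them have at least four citations, so h = 4.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Note that, by construction, the score depends entirely on which citations the underlying database happens to capture, which is exactly where the problems discussed below begin.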


The h-index is a fairly straightforward bibliometric index for quantifying a researcher's publication impact. It was devised by physicist Jorge E. Hirsch. In the original h-index calculation, journal publications indexed by Web of Science and Scopus play a major role. It is possible to correct this bias to some extent by using Google Scholar, but doing so leaves you with three different scores, which is in turn problematic. In addition, the Google Scholar quantification is not without its issues, because it is not clear how this search engine collects citations. More importantly, the h-index does not provide any calibration specific to the subject area (but that is also true of WoS and Scopus). The h-index also suffers from a few technical problems, for example when delineating a researcher's oeuvre (the problem of homonyms and synonyms). Moreover, it rewards those who publish prolifically, seems to encourage 'salami slicing' (dividing up interrelated research outputs across multiple publications), and rewards 'one-indicator thinking'.

Hybrid publications are consumed in both the research domain and the societal domain. Quantitative evidence for this can be found in the form of Google Scholar citations and by conducting online searches using search engines such as Google and Bing, following the method of contextual response analysis. This analysis makes it possible to filter search results and to examine the extent to which a publication is used. The numbers of civil society stakeholders specified below pertain to websites run by individual organisations and persons (e.g. bloggers), excluding webstores, libraries, repositories (university and otherwise) and self-citations.

The number of citations depends in part on disciplinary differences, citation cultures and publication date. The number of civil society stakeholders depends in part on the immediate relevance of the publication, the extent to which their various occupational fields are institutionalised or organised, and publication date.
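The filtering step of contextual response analysis can be sketched as follows. This is a minimal illustration under invented data: each search hit is represented as a website plus a hand-assigned kind, and in practice classifying hits into kinds is the labour-intensive part of the analysis.

```python
# Kinds of website excluded when counting civil society stakeholders,
# following the exclusions described in the text.
EXCLUDED_KINDS = {"webstore", "library", "repository", "self-citation"}

def count_societal_stakeholders(hits):
    """Count distinct websites of civil society organisations and
    persons, after filtering out excluded kinds of site."""
    stakeholders = {site for site, kind in hits if kind not in EXCLUDED_KINDS}
    return len(stakeholders)

# Invented search hits for one publication: (website, kind) pairs.
hits = [
    ("historyblog.example", "blogger"),
    ("museum.example", "organisation"),
    ("bigstore.example", "webstore"),        # excluded
    ("repo.university.example", "repository"),  # excluded
    ("historyblog.example", "blogger"),      # duplicate site, counted once
]
print(count_societal_stakeholders(hits))  # prints 2
```

The resulting count is the "societal stakeholders" figure reported per publication in the table below.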

The table gives examples of hybrid publications nominated by various panels. These publications are clearly used both in academia and in society. (Text continues after table.)




Publication | Domain | Societal stakeholders | Scholar cites (8 Sept 2016)
Annemarie Mol (2003) The Body Multiple, Duke UP | Science Studies | – | –
José van Dijck (2013) The Culture of Connectivity, Oxford UP | – | – | –
James C. Kennedy (1995) Nieuw Babylon in aanbouw, Boom | Political History | – | –
Piet de Rooy (2002) Republiek van rivaliteiten, Metz & Schilt | Political History | – | –
Ernst van de Wetering (1996) Rembrandt. The Painter at Work, AUP | History of Art | – | –
Trudy Dehue (2008) De depressie epidemie, Augustus | Science Studies | – | –
Frits van Oostrom (2013) Het woord van eer, Ooievaar | – | – | –
Leo Lucassen & Jan Lucassen (2011) Winnaars en verliezers, Prometheus | Economic History | – | –
Marieke de Winkel (2006) Fashion and fancy, AUP | History of Art | – | –
Henk te Velde (2002) Stijlen van Leiderschap, Wereldbibliotheek | Political History | – | –
Marita Mathijsen (2002) De gemaskerde eeuw, Querido | – | – | –
Floris Cohen (2008) Herschepping van de wereld, Bert Bakker | – | – | –
Cor Wagenaar (2011) Town planning in the Netherlands since 1800, NAI010 | History of Art | – | –


Contextual response analysis also makes it possible to develop user profiles for hybrid publications. The diagram below gives a number of examples showing that each publication has its own user profile. User analysis can thus serve to demonstrate and examine in detail the productive interactions central to the SIAMPI method. For more about the SIAMPI method, see Spaapen and Van Drooge, 2011; for more about contextual response analysis, see Prins and Spaapen, 2016.




Hybrid publications