Impact Practitioners

A guide to research evaluation frameworks

04/07/2024

This 40-page paper by the RAND Corporation is a guide to research evaluation frameworks (REFs) and tools. Through these frameworks, the study explores the challenges and trade-offs involved in evaluating research and arrives at key findings for designing a suitable REF. The report will be of particular interest to policymakers, institutional leaders and managers of research departments. It also contains 137 pages of appendices with useful additional information, including a summary table of the frameworks and tools mentioned, a more detailed description of the study's methodology, and examples of research evaluation frameworks used in different country contexts.

The paper argues that research evaluation has become increasingly important: there is a need to show that policymaking is evidence-based and that there is accountability for the investment of public funds in research. The key findings are as follows:

The first finding is that designing a REF requires trade-offs. For example, quantitative approaches are more transparent but require significant upfront work.

The second finding is that, to be effective, the design of the framework must reflect the purpose of the evaluation. As a general rule, a research evaluation will aim to do one or more of the following, which should be established at the outset:

  • Advocate. Demonstrating the benefits of supporting research to policymakers and the public, and making the case for policy or practice change.
  • Show accountability. Demonstrating that resources have been used efficiently and effectively.
  • Analyse. Understanding how and why the research worked, and how it can be better supported through stronger evidence.
  • Allocate. Determining where best to allocate funds in the future. 

Thirdly, the report recommends categorising evaluation tools into one of two groups:

  • Formative tools. These are flexible and capable of handling inter- and multidisciplinary assessment.
  • Summative tools. These are more suitable for high-frequency, longitudinal use and do not require interpretation.

The fourth finding is that the appropriate unit of aggregation for data will depend on the needs of the target audience, as well as on privacy requirements.

Research evaluation faces ongoing challenges, whose importance depends on the framework's goals. For instance, attribution and contribution matter most for accountability and allocation, while recognising multiple inputs matters less for advocacy.

The fifth finding states that evaluation approaches should suit their wider context, which means considering history, politics, and social and economic factors. To enhance credibility and acceptability, potential discrimination risks and unintended consequences must also be addressed, particularly those affecting specific groups.

Finally, implementation needs ownership, incentives, and support. Whether participation in the framework is compulsory or voluntary, participants should be equipped with the necessary skills for the process.

Overall, this guide elaborates on the key challenges and considerations when developing a suitable framework for research evaluation.

This article is part of our initiative, R2A Impact Practitioners. To find out more, please click here.