Nutbeam: more paths to evidence-informed policy making

In this post for The Mandarin, Sax Institute Senior Adviser Professor Don Nutbeam argues that better collaboration and impact assessment should help bridge the researcher-policy maker divide. 

Nicholas Gruen’s recent two-part series on the problems with the use of evidence in policy making and with policy evaluation offered a thought-provoking interpretation of the challenges involved in effective policy making today. However, rather than creating another arm of bureaucracy devoted solely to evaluation, as he suggests, I would argue that improving the dialogue between policymakers and researchers is a more effective solution with longer-term benefits.

It’s true, as he argues, that there are cultural differences between the worlds of research and policy that create barriers to policymakers making better use of research.

Too often, researchers claim their work is ignored without making an effort to understand that policy making is complex, often messy, and driven by many considerations, of which available research findings are just one. And it’s true that some researchers are so focussed on methodological purity that they run the risk of ending up with a perfect answer to the wrong question, often years after the answer was actually needed.

But it’s too simplistic to suggest that researchers are a vain lot who care solely about displaying their academic prowess and who are indifferent to whether their work has impact. In general, they care deeply about their ability to make a difference, but too often they encounter barriers to communication with policymakers, and respond to career incentives that take them away from the practical application of their research.

Policy develops and changes on the basis of underlying beliefs about the cause of a problem and the potential effects of an intervention, as well as consideration of the social and political context in which action is taken.

At its simplest this means that the use of evidence is set alongside consideration of what is popular or at least acceptable to the population that is affected by a policy. Policy development and change is also significantly directed by the power and influence of the competing interests of “policy influencers”. Who wins, who loses, who will fight and who will compromise are significant determinants of this process. Consider for example current debates about reducing sugar consumption as a response to epidemics of overweight and obesity in many countries.

It follows that policy making is rarely an “event”, or even an explicit set of decisions derived from an appraisal of evidence and following a pre-planned course. Policy tends to evolve through an iterative process, subject to continuous review and incremental change. Policy making is an inherently “political” process, and the timing of decisions is usually dictated as much by political considerations as the state of the evidence.

In this context, policy change depends upon a point-in-time appraisal of what is scientifically plausible (evidence-based); what is politically acceptable or achievable; and what is practical for implementation. It follows that policy is more likely to be influenced by research evidence if it is available and accessible at the time it is needed; it is presented in a way that is sensitive to the political and social context; and the evidence points to actions for which powers and resources are (or could be) available, and the systems, structures and capacity for action exist.

Can researchers meet this brief? Gruen suggests not — but they can and they do, though not as much as they might. Those who do it well work effectively with policymakers who bring political experience and practical insights to the table and who, once engaged, are more likely to be invested in the outcomes.

Many more benefits would flow if more researchers worked with policymakers to understand more clearly the type of questions that need answering, and to deploy the research methods that deliver the best possible answers to the questions of greatest public importance. This is more likely to happen if the incentives are right.

At present, our research grant funding system, especially in health and medicine, favours narrowly defined, methodologically pure applications. One unintended consequence is that many research questions of great public policy significance remain unanswered – we have become masters of learning more and more about less and less. In such a system it is especially difficult to win funding for “real world”, complex and technically challenging intervention evaluation.

Impact assessment — for programs and funded research

In the UK, the introduction of “impact assessment” as a part of the regular national assessment of university research performance has positively influenced the dialogue between researchers and policymakers. With substantial resources and public reputation at stake, universities set about systematically understanding and describing the impact of their research on society and the economy, and working with partners outside the university to provide the evidence required to validate that impact.

The outcomes were impressive. Universities across the UK provided well written case studies of social, health and economic impact. These are all now accessible and searchable online. With such a powerful advocacy tool it is unsurprising that the most recent UK government Public Spending Review saw continuing protection of national research spending at a time when almost all other government budgets were subject to significant reductions.

In Australia we also need to embrace the importance of research impact assessment. It will help optimise the usefulness of research to the policy community and support our advocacy for continued substantial taxpayer investment in publicly funded research. It may also help to reset the incentives for undertaking more complex and messy research that addresses complex and messy questions of great public importance.

The challenge of finding ways to ensure that evidence forms part of an inherently fluid political decision-making process falls on both those who generate evidence and those who use it. For both, the key is to provide timely access to information, and to employ better techniques for communicating and managing the inevitable uncertainties that arise through scientific research.

Knowledge brokers working for bridge-building organisations such as the Sax Institute, where I work, play an important and nuanced role in identifying policymakers’ needs, helping to define research questions and design evaluations. They can also create productive linkages between policymakers and researchers. We should be aiming for more of these links and better dialogue between policy and research — not creating new barriers. Both policy making and relevant, impactful research depend on it.

This article was first published on The Mandarin: More paths to evidence-informed policy making
