Systematic Reviews: why do they lack policy relevance?

Systematic reviews can play an important role in supporting the introduction of evidence-based policy, but recent research by James Kite from the University of Sydney shows that these studies often fail to realise their potential to influence policymakers. His review of 153 systematic reviews of obesity prevention initiatives found that many did not contain information relevant to policymakers and/or did not appear to consider the policy implications of their findings.

Kite argues that one major reason for this is academia's focus on peer-reviewed publications, to the detriment of other markers of achievement, such as informing public policy. He argues that unless systematic reviews are designed to maximise their usefulness to policymakers, researchers and policymakers will continue to be ‘travellers in parallel universes’.

James Kite writes below in this piece first published on Croakey.

 

We hope that policymakers take into account the best available evidence when making decisions about how to design and implement public policy and health interventions. However, policymakers are busy people and don’t always have the necessary time, skills, or access to search all of the scientific literature and make sense of it. As a result, policymakers often look for, or commission their own, systematic reviews (SRs).

SRs are one increasingly popular method of synthesising evidence (see Figure 1) and are commonly regarded as the gold standard. They involve systematically searching all available literature on a given topic, selecting relevant research based on predefined inclusion and exclusion criteria, and drawing summary conclusions from the included studies. Policymakers have long been recognised as a key target audience of SRs, with discussions on how best to make them policy-relevant stretching back at least two decades.

Figure 1: Average number of systematic reviews on obesity prevention interventions published per 3-year period

Source: Kite et al, 2015

Both policymakers and researchers also recognise the need to build and maintain strong linkages between research and public policy. Indeed, the ultimate aim of all research should be to influence policy and practice. However, our recent study of 153 SRs of obesity prevention interventions found that many lack important information for policymakers and that very few considered the policy implications of the review’s findings. It is little wonder, then, that policymakers often report feeling that researchers do not sufficiently consider the policy implications of their results or present results in a useful way. There is clearly a mismatch between policymaker needs and researcher objectives: in the words of Ross Brownson and colleagues, researchers and policymakers are ‘travellers in parallel universes’. The question is why?

Publish or Perish

The ‘publish-or-perish’ culture is probably the predominant reason. This term refers to a culture of constant pressure to produce publications, particularly in high-impact-factor journals, in order to maintain or further a career in academia. For instance, a Canadian study found that academic promotions committees and funding bodies give considerably more weight to peer-reviewed articles than to work with policymakers. This culture has been linked to an increasing tendency to only publish positive or confirmatory results (known as ‘publication bias’), the inefficient or wasteful use of taxpayers’ money, and fraud and other unethical behaviour. Some have even argued that the content of publications is no longer important; only how often, where, and with whom you write. Consequently, while researchers agree that making research accessible to policymakers is important, doing so is often not seen as a high personal priority.

SRs are comparatively easy papers to write, and they are likely to become easier still with continuing improvements to electronic indexing of articles. Moreover, a review of the evidence is usually the first step in any project, so turning this regular activity into a publication is an attractive proposition: it means you get both what you need for the project (i.e. an understanding of what has been done and what is known about your topic) and a publication with which you can fatten your CV. It is especially attractive to early-career researchers and research students, who are acutely aware of the need to publish constantly in order to stand any chance of a career in academia.

Questionable usefulness

Ultimately, this means that when researchers design and report on SRs, they may not be considering the needs of end-users, especially policymakers. And if researchers are not considering the needs of end-users, then the usefulness of SRs is questionable. It’s this kind of environment that leads to the publication of no fewer than 10 different SRs on school-based physical activity interventions in a 4-year period, as we noted in our recent study.

Addressing the ‘publish-or-perish’ culture will be a considerable challenge. Recent suggestions from the Turnbull Government that it will revise major government grant guidelines to lessen the focus on publications and instead focus on community and commercial impacts are likely to help, if well designed. Certainly, lessening the focus on the publication track record of grant applicants will reduce the pressure to be constantly publishing and may encourage more meaningful and considered reporting of research. It may also assist in reversing the bias inherent in grants programs that has led to older academics (i.e. those with established track records) receiving a disproportionate share of available funding at the expense of younger researchers. Adopting other metrics for measuring research impact (known as ‘altmetrics’), beyond just the number of publications and citations, is another option.

Including costs

In the meantime, SRs could be improved by encouraging the inclusion of policy-relevant information and consideration of the policy implications of their findings. This push could come from journal editors or through revising commonly used SR guidelines, like PRISMA. It will also necessitate improvements in reporting in primary studies, particularly around intervention costs. In our study, over half of the few SRs that did attempt to include information on costs were unable to do so because the information was not available.

Policymakers can also play a role in improving the policy relevance of SRs when they commission them. Building partnerships between researchers and policymakers is regularly identified as one way of increasing the relevance of SRs, but our study suggests that this would not be sufficient: just under half of the SRs in our study had received funding or other support from a policy-based organisation, yet this made no difference to whether the SR included policy-relevant information. It’s understandable that policymakers would want to avoid the appearance of unduly influencing the findings of an SR, but this should not prevent them from demanding the inclusion of particular types of information, like costs.

SRs are an important step in getting research into policy and practice, but they must be presented in a way that maximises their usefulness to policymakers. Otherwise, they risk being nothing more than a publication for the sake of a publication.

Find out more

  • James Kite is an Associate Lecturer and PhD candidate within the Prevention Research Collaboration at the University of Sydney.
  • Follow James Kite on Twitter @jkite13
  • This article was originally published on Croakey