This opinion piece by Professor Don Nutbeam, senior adviser to the Sax Institute and professor of public health at the University of Sydney, was first published in The Mandarin as ‘What’s in a word? Finding the value in evaluation’.
Different groups — frontline workers, auditors, the public — value different aspects of evaluation for different reasons. It’s important to keep these competing values in mind.
“When I use a word,” Humpty Dumpty said in rather a scornful tone, “it means just what I choose it to mean — neither more nor less.” “The question is,” said Alice, “whether you can make words mean so many different things.” “The question is,” said Humpty Dumpty, “which is to be master — that’s all.” (Lewis Carroll, Through the Looking Glass)
Governments, departments and agencies across Australia appear to be giving greater attention than ever before to program and policy evaluation. The reasons for this are self-evident. Properly conducted evaluations can provide information on whether or not a policy delivered the planned outcomes and reached the intended population. Such evaluations can also help with an assessment of the return on investment. Disappointingly, a narrow interpretation of the purpose of evaluation may result in current investment failing to deliver a broader range of potential benefits.
The word ‘evaluation’ has at its core, both literally and metaphorically, the concept of ‘value’. The value we place on a particular action and its outcome defines its importance, how we interpret information and, in many cases, how we assess success or failure. These values are contestable. We are witnessing an extraordinary battle of values in the US at the moment — epitomised by the debate about facts and ‘alternative’ facts. As Humpty Dumpty reminds us, the question is: which is to be master?
What represents value?
This battle of values is also being played out, often unconsciously, in evaluations of policy or programs. Policy makers, academic researchers, frontline staff and the wider community may all have different views on what represents ‘value’ from public investment. The values held by one set of stakeholders may be significantly different to those of another. As a consequence, the prism through which they view a policy or program, and through which they assess its success, can be radically different.
Take the recent evaluation of the Indigenous Advancement Strategy from the Australian National Audit Office as an example. The ANAO pulled no punches in criticising the program, providing a detailed report that identified problems with the strategy’s implementation and outcomes. While there can be no debate about the importance of closely examining how public money was used in implementing the strategy, the ANAO’s value judgements relating to the program ultimately stem from its role as auditor. A very high value is placed on cost-effectiveness and budgetary rigour. However, there are other ‘values’ that could have been incorporated in the evaluation, and these might have produced a more rounded assessment of what we could learn — good and bad — from the implementation of the strategy.
A policy maker, a scientist, a frontline employee and a community member walk into a bar…
Policy makers and budget holders generally place ‘value’ on the relationship between financial investment and the achievement of clearly defined outcomes within a set time-frame. These time-frames are often set by political rather than practical considerations, to enable decisions about the future allocation of resources within tight electoral cycles. Achieving ‘impact’ may involve the use of intervention methods that tend to focus on short-term change, and as a consequence may not be sustainable. Sometimes quick fixes and headline-grabbing ‘announceables’ may be favoured by those looking for high-impact, high-profile activity. There is ample evidence to suggest that any observable short-term changes from this type of intervention are very hard to sustain.
By contrast, frontline workers (health professionals, teachers and so on) value the likely and actual success of an intervention in achieving its defined outcomes in real-life situations. They place value on the practicality of implementation, program sustainability and the maintenance of improvements in the longer term. Such evidence will often come from studying the process of implementation over the long term, and from case reports, rather than from typical cost- and outcome-focused evaluation.
The community, which is intended to benefit from government investment, may place great value on how a program or policy is developed, particularly whether or not it provides opportunities for engagement and enables shared decision-making around priorities the community itself has identified. These values may be at odds with what practitioners and funders want to achieve, and with what research scientists consider optimal for methodological rigour.
For research scientists, an intervention’s success or failure is judged according to prevailing scientific standards. Value is placed on the rigour and integrity of the study design, the quality of the measures used to assess outcomes, and the internal validity of reported results. This often leads to the study and evaluation of health and social interventions that have very narrowly defined and carefully controlled processes and objectives. We learn more and more about less and less.
Integrating different values: the key to success
These perspectives are distinct but not mutually exclusive. Ultimately, all place value on achieving pre-defined objectives. Where each differs is in the emphasis given to the cost, complexity, engagement processes, types of outcome and timescale involved in achieving these outcomes. These values can be based on ideology, on perceived scientific objectivity or on political views. Successfully integrating these differences in emphasis will have a major influence on all aspects of a program’s development and evaluation, determining the likelihood of achieving project funding, community and practitioner buy-in, and the very feasibility of evaluation.
When it comes to evaluation, we need to be aware of these competing values and, whenever feasible, make them transparent. Different values are important to different stakeholders. The best program evaluations will explicitly consider and reconcile these different values to provide more comprehensive evidence on effectiveness, practicality and sustainability.