The importance of public sector policy and program evaluation
Professor Don Nutbeam

Policy and program evaluation is messy and complex, and the NSW Government should be given credit for its highly visible attempt to embed evaluation across its government and agencies, argues Professor Don Nutbeam.

Earlier this month, the New South Wales auditor-general published a report on the NSW government's program evaluation initiative, which was designed to improve and expand the evaluation of public service delivery.

The auditor-general, Margaret Crawford, certainly didn’t pull any punches, with The Mandarin branding it a “stinging report”.

While it is the role of the auditor-general to hold the government accountable for its use of public resources, the report gives the NSW government little credit for the actions it has taken in what is a fiendishly difficult area.

It would be a great shame if the complex but absolutely critical process of embedding policy evaluation across the government and its agencies was quietly dumped into the too-hard basket off the back of reports such as this.

While federal, state and territory governments across Australia have begun to move towards more systematic evaluation of programs and policies, NSW is doing so in a highly visible way.

In response to a Commission of Audit inquiry in 2012, the NSW government committed to a robust evaluation system that would investigate programs to ensure they met their stated objectives and that they were providing value for money.

As the NSW public service has discovered, this is much easier said than done.

Embedding evaluation within a single program, let alone across an entire government operation, is complex, technically challenging and culturally alien in some parts of government.

The challenge of evaluation

Last year, the Centre for Informing Policy in Health with Evidence from Research (a centre of research excellence hosted by the Sax Institute) conducted a trial called SPIRIT, which was established to act as a ‘field guide’ to help steer intervention studies designed to increase the use of research in policy.

One of its key findings was a widespread recognition across the public service that evaluation was important, but that it was challenging to implement in practice. For example, one participant noted:

“Policies themselves are reviewed, but as to the evaluation of the specific things that the policy’s implemented … it’s far less clear as to what actually happens there … You know, there’s lots of data that’s collected, key performance indicators on a whole range of things for example … but actually evaluating policy implementation, that’s a far more nebulous thing.”

Too often, evaluation has been treated as an afterthought, something tacked on as an added extra, or worse still, conducted at the conclusion of a program or intervention to provide “evidence” of success.

Attempting evaluation after a program has been established or at its conclusion almost invariably produces technically unsatisfactory and contestable findings.

The NSW government seems to want to do this better, but is finding that the best of intentions can come back to bite you.

Policy and program evaluation is messy, and by its nature, contestable.

These are not characteristics that sit well with the concept of audit and the demand for clear evidence of value for money.

Suggestions for moving forward

Choosing the right evaluation methods to fit a policy implementation requires fine judgement, some technical knowledge and nuanced political expectation management.

We know that evaluation is more feasible and produces more useful findings when it is embedded from the beginning of a new program or intervention.

In her report, the auditor-general made some important suggestions in her “good practice model”, particularly that evaluation be aligned with NSW government priorities. This means that the most important programs are at the top of the queue when it comes to evaluation.

I would add to this some degree of risk management. For example, policies that are genuinely unique (never before implemented) and those directed at the most vulnerable require a much higher level of scrutiny than those based on previously evaluated programs with standardised implementation.

Another suggestion, an evaluation “centre of excellence”, could certainly help drive the necessary cultural shift across the entire government. But, as I have argued previously in The Mandarin, I would caution against the creation of a single arm of bureaucracy devoted exclusively to evaluation. It will almost inevitably become a silo within government and allow departments and ministries to excuse themselves from their responsibilities to embed evaluation in policy development and implementation.

A more effective solution with longer term benefits involves building evaluation capacity across all sectors of government, embedding evaluation into all new programs and improving dialogue between policymakers and researchers.

Connecting researchers and government decision makers

Increasingly sophisticated knowledge brokering services exist to act as a link between government decision makers and researchers, and to help ensure that evaluations answer the right questions at the right time, in ways that are most useful to the people who need them.

Governments can make better use of these services to connect more efficiently with the knowledge and expertise that exists on the outside while progressing the important tasks of capacity building and cultural change from within.

This article was first published on The Mandarin. Professor Nutbeam is a senior adviser at the Sax Institute and professor of public health at the University of Sydney.