The challenge of evaluating integrated care initiatives

Healthcare systems globally are forging ahead with integrated care initiatives, but the complex programs pose a host of challenges for the researchers who evaluate them, visiting expert Professor Nicholas Mays told a Hospital Alliance for Research Collaboration (HARC) Forum this month.

Professor Mays, Director of the Policy Innovation Research Unit (PIRU) at the London School of Hygiene and Tropical Medicine, outlined the many challenges facing evaluators, starting with the fact that there are at least 170 definitions of “integrated care” in the literature.

“It is an intrinsically difficult area,” he told the packed Forum, which was chaired by Mr Chris Shipway, Acting Chief Executive at the NSW Agency for Clinical Innovation.

Multi-level, multi-faceted and multi-site

Most integrated care programs covered a wide range of service changes and were specific to their local environment, meaning research on such programs wasn’t easily transferable. The programs were also typically multi-level, multi-faceted and multi-site, so evaluating the impact of specific elements wasn’t easy.

“It is difficult to accumulate knowledge,” Professor Mays said.

Professor Mays said there were often high expectations on evaluators to assess the cost-effectiveness of programs, despite the fact that many programs were set up in a way that made such assessments problematic. For example, it was often difficult to find comparison groups, or to establish exactly when the integrated care initiative “switched on”.

“There is a mismatch between the evaluation imperative and the reality of the programs.”

He said evaluation of integrated care initiatives so far suggested they were unlikely to reduce costs, and may in fact increase the demand for health and social services by uncovering unmet patient needs.

He added that it took three to five years for the impact of many integrated care programs to become evident, over which time priorities could change, meaning evaluations needed to be flexible.

Does it work?

Rather than tackling the question of “does it work”, Professor Mays suggested it was better for evaluators to ask questions such as:

  • What works for whom in what circumstances, in what respects, how and why?
  • How can the program be adapted to help it work better? and
  • How should the context be modified to help?

Evaluation in practice

While the evaluation of complex systems wasn’t easy, it could be done successfully, as demonstrated by a 2014 multi-method evaluation of changes in stroke care in the UK, he said. The study highlighted the importance of using mixed methods of research (quantitative and qualitative) and several different strands of data collection and analysis.

Professor Mays also outlined how his team at PIRU was tackling the challenge of evaluating NHS England’s Integrated Care Pioneers Programme, which involves 25 different regions introducing new, patient-centred approaches to integrating health and social care.

The evaluation had three strands, he said: whole-system analysis; assessing the impact at the initiative level; and working with the Pioneers, partners, patients and national policy makers to disseminate learnings.

The HARC Forum also heard from Dr George Argyrous, senior lecturer with the Australian and New Zealand School of Government, who was an advisor for the development of the NSW Whole of Government Evaluation Framework; Dr Jean-Frederic Levesque, Chief Executive of the Bureau of Health Information; and Dr Anne-Marie Feyer, advisor to the NSW Ministry of Health’s Integrated Care Monitoring and Evaluation Work Stream.

Dr Levesque said there were three elements to the new paradigm for evaluating integrated care programs: agility, mixed methods of research, and depth of research.