‘Known unknowns’, or how to plug the gaps in public research

In 1979, Archie Cochrane published an essay chastising (not for the first time) his fellow doctors. “It is surely a great criticism of our profession,” he wrote, “that we have not organised a critical summary, by speciality or subspeciality, adapted periodically, of all relevant randomised controlled trials.”

The idea of “organising a critical summary” reeks of manila folders and unimaginative paper-shuffling — unworthy of a man like Cochrane, who was a heroic figure in the field of medicine. And yet, as so often, Cochrane had struck at the heart of the matter.

The basic building block of evidence in medicine is the randomised trial, as Cochrane understood as well as anybody. But some randomised trials may be flawed. Others may have disappeared from the academic record, perhaps unpublished because they did not find the positive results their funders were hoping for. Even if all the trials of a particular treatment are rigorous and reported, the most robust evidence comes from combining them. When properly synthesised, several inconclusive trials may collectively produce a conclusive result. Yet to turn those basic building blocks into more than a pile of epistemological rubble, producing a robustly structured edifice of knowledge, takes work.

Is that work taken seriously enough? I wonder. In 1993, Sir Iain Chalmers, a health services researcher, founded Cochrane, an international non-profit best known for the Cochrane Library of systematic reviews in medicine. Named in honour of Archie, Cochrane has magnificently responded to his challenge: the Cochrane Library now lists more than 9,000 systematic reviews. 

But in other fields, such as education, policing or economic development, the picture is less rosy. Education is arguably of comparable importance to health for any government, and the UK government is typical in spending about half as much on education as on health. One might expect, then, that governments would spend about twice as much on health research as on education research. Instead, the disparity is glaring. As David Halpern and Deelan Maru point out in their recent Global Evidence Report, the UK government spends 18 times as much on research into health as it does on research into education. Set against the expected two-to-one ratio, that means education research is underfunded by roughly a factor of nine.

If anything, that paints too optimistic a picture of research into social policy, because other countries spend even less. And, says Will Moy, CEO of the Campbell Collaboration, education research is probably the best of the rest when it comes to research funding. The Campbell Collaboration, which aims to do for social policy what Cochrane does for medicine, boasts just 231 systematic reviews — a fair reflection of the fact that social policy research enjoys a fraction of the money and attention lavished on medicine.

There is more going on here than a lack of spending on primary research into criminal justice, education and other areas of social policy. While money is sometimes available for project-by-project evaluations, there seems to be a reluctance to support the basic infrastructure of a database of systematic reviews, or to fund the frequent updates that turn a systematic review into the appealingly named “living evidence review”. 

As an example, consider the International Initiative for Impact Evaluation (3ie), widely admired for its Development Evidence Portal. The Portal is very much in the spirit of Archie Cochrane’s organised critical summary of all relevant trials, but it struggles for steady funding. Marie Gaarder, the executive director of 3ie, ruefully notes that running the entire portal for a year costs less than a typical impact evaluation, yet “public goods tend by their nature to be underfunded”.

On the bright side, the UK’s Economic and Social Research Council recently teamed up with the Wellcome Trust to announce more than £50mn of funding for evidence synthesis. That makes sense, as a modest amount of funding could go a long way towards building an “evidence bank” on which policymakers could draw.

Systematic reviews have one obvious appeal: it makes sense to assemble and organise all the relevant evidence in one place. But there are two other advantages that may be less apparent.

The first is that a good systematic review can bridge the gap between the academic and the policymaker. The natural unit of analysis for a researcher is a particular intervention: “Does neighbourhood policing reduce crime?”. For a policymaker, the natural unit of analysis is the problem: “How do I reduce crime?”. By bringing together relevant research in the right way, systematic reviews can help to answer policymakers’ questions.

And the second advantage? Evidence synthesis highlights what Donald Rumsfeld infamously called “known unknowns”. There is no surer way to identify gaps in research than to put together a systematic review — at which point funders can commission research to plug those gaps, rather than yet another study of a familiar topic. Since the 1990s, medical research councils have been demanding systematic reviews as a precondition for funding new studies. The lesson should be more widely learnt. 

This advantage was eloquently expressed by one of the 20th century’s great policy evaluators, Eleanor Chelimsky. In 1994, she explained, “I hoped that synthesis could dramatise, for our legislative users, not only what was, in fact, known, but also what was not known.”

Dramatising our ignorance is one of the most valuable things an evidence review can do. There is more to this than manila folders. 
