11 December 2014

Metrics: combating low-quality science

“WHY most published research findings are false” is not, as the title of an academic paper, likely to win friends in the ivory tower. But it has certainly influenced people (including journalists at The Economist). The paper it introduced was published in 2005 by John Ioannidis, an epidemiologist who was then at the University of Ioannina, in Greece, and is now at Stanford. It exposed the ways, most notably the over-interpretation of statistical significance in studies with small sample sizes, that scientific findings can end up being irreproducible—or, as a layman might put it, wrong.
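A back-of-the-envelope simulation makes that argument concrete. The Python sketch below is a toy model, not anything from the paper itself: every parameter (the share of true hypotheses, the effect size, the sample size) is an assumption chosen for illustration. It runs thousands of hypothetical small studies and counts how often a “significant” result is in fact false.

```python
# Toy illustration of Ioannidis's argument: with small samples (low power)
# and few true hypotheses, most "significant" findings are false.
# All parameters below are assumptions for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies = 20_000   # number of hypothetical studies
prior_true = 0.10    # assume only 10% of tested hypotheses are true
effect = 0.3         # assumed true effect size (Cohen's d) when real
n = 20               # small sample size per group

true_hits = false_hits = 0
for _ in range(n_studies):
    is_true = rng.random() < prior_true
    mu = effect if is_true else 0.0
    treatment = rng.normal(mu, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:                      # a "significant finding"
        if is_true:
            true_hits += 1
        else:
            false_hits += 1

total = true_hits + false_hits
print(f"significant results: {total}")
print(f"share that are false: {false_hits / total:.2f}")
```

With these numbers each study has only about 15% power, so among the results that clear p < 0.05 the false ones outnumber the true ones roughly three to one, which is the paper's point in miniature.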
Dr Ioannidis has been waging war on sloppy science ever since, helping to develop a discipline called meta-research (ie, research about research). Later this month that battle will be institutionalised, with the launch of the Meta-Research Innovation Centre at Stanford.
METRICS, as the new laboratory is to be known for short, will connect enthusiasts of the nascent field in such corners of academia as medicine, statistics and epidemiology, with the aim of solidifying the young discipline. Dr Ioannidis and the lab’s co-founder, Steven Goodman, will (for this is, after all, science) organise conferences at which acolytes can meet in the world of atoms, rather than just online. They will create a “journal watch” to monitor scientific publishers’ work and to shame laggards into better behaviour. And they will spread the message to policymakers, governments and other interested parties, in an effort to stop them making decisions on the basis of flaky studies. All this in the name of the centre’s nerdishly valiant mission statement: “Identifying and minimising persistent threats to medical-research quality.”

The METRICS system

Irreproducibility is one such threat—so much so that there is an (admittedly tongue-in-cheek) publication called the Journal of Irreproducible Results. Some fields are making progress, though. In psychology, the Many Labs Replication Project, supported by the Centre for Open Science, an institute of the University of Virginia, has re-run 13 experiments on widely accepted theories. Only ten were validated. The centre has also launched what it calls the Cancer Biology Reproducibility Project, to look at 50 recent oncology studies.
Until now, however, according to Dr Ioannidis, no one has tried to find out whether such attempts at revalidation have actually had any impact on the credibility of research. METRICS will try to do this, and will make recommendations about how future work might be improved and better co-ordinated—for the study of reproducibility should, like any branch of science, be based on evidence of what works and what does not.

Wasted effort is another scourge of science that the lab will look into. A recent series of articles in the Lancet noted that, in 2010, about $200 billion (an astonishing 85% of the world’s spending on medical research) was squandered on studies that were flawed in their design, redundant, never published or poorly reported. METRICS will support efforts to tackle this extraordinary inefficiency, and will itself update research on the extent to which randomised controlled trials acknowledge the existence of previous investigations of the same subject. If the situation has not improved, METRICS and its collaborators will try to design new publishing practices that discourage bad behaviour among scientists.
There is also Dr Ioannidis’s pet offender: publication bias. Not all studies that are conducted get published, and those that are tend disproportionately to be the ones with statistically significant results. That leaves a skewed impression of the evidence.
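The mechanism is easy to demonstrate. In the toy Python sketch below (all numbers are assumptions for illustration), many small studies estimate the same modest effect, but only those reaching p < 0.05 in the expected direction count as “published”; the published average then overstates the truth.

```python
# Toy illustration of publication bias: if only significant results are
# published, the published literature overstates the true effect.
# All parameters are assumptions for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_effect = 0.2    # assumed modest true effect (Cohen's d)
n = 30               # sample size per group
n_studies = 5_000

all_estimates, published = [], []
for _ in range(n_studies):
    treatment = rng.normal(true_effect, 1.0, n)
    control = rng.normal(0.0, 1.0, n)
    d_hat = treatment.mean() - control.mean()
    _, p = stats.ttest_ind(treatment, control)
    all_estimates.append(d_hat)
    if p < 0.05 and d_hat > 0:       # only "positive" findings get published
        published.append(d_hat)

print(f"true effect:               {true_effect:.2f}")
print(f"mean of all studies:       {np.mean(all_estimates):.2f}")  # ~0.20
print(f"mean of 'published' only:  {np.mean(published):.2f}")      # inflated
```

Because power is low at this sample size, only the studies that overestimate the effect by chance reach significance, so filtering on significance systematically selects for exaggeration.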

Researchers have been studying publication bias for years, using various statistical tests. Again, though, there has been little reflection on these methods and their comparative effectiveness. They may, according to Dr Ioannidis, be giving both false negatives and false positives about whether or not publication bias exists in a particular body of studies.
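The article does not say which tests, but one widely used example is Egger’s regression test for funnel-plot asymmetry: regress each study’s standardised effect (the effect divided by its standard error) on its precision (one over the standard error); an intercept far from zero suggests small-study effects consistent with publication bias. Below is a minimal Python sketch of the standard formulation, run on made-up meta-analysis data.

```python
# Egger's regression test for funnel-plot asymmetry (standard formulation).
# The toy data at the bottom are made up for illustration.
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Return (intercept, two-sided p-value) of Egger's regression."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    z = effects / ses                 # standardised effects
    precision = 1.0 / ses             # predictor
    res = stats.linregress(precision, z)
    # linregress reports a p-value for the slope; Egger's test is on the
    # intercept, so compute that from its standard error instead
    t_stat = res.intercept / res.intercept_stderr
    p = 2 * stats.t.sf(abs(t_stat), df=len(z) - 2)
    return res.intercept, p

# hypothetical meta-analysis: small studies (large SEs) show bigger effects
effects = [0.50, 0.42, 0.30, 0.28, 0.25, 0.22, 0.20, 0.18]
ses     = [0.30, 0.25, 0.20, 0.18, 0.15, 0.12, 0.10, 0.08]
intercept, p = egger_test(effects, ses)
print(f"Egger intercept: {intercept:.2f}, p = {p:.3f}")
```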

Dr Ioannidis plans to run tests on the methods of meta-research itself, to make sure he and his colleagues do not fall foul of the very criticisms they make of others. “I don’t want”, he says, “to take for granted any type of meta-research is ideal and efficient and nice. I don’t want to promise that we can change the world, although this is probably what everybody has to promise to get funded nowadays.”
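In that spirit, the sketch below (assumed settings throughout) applies the same medicine to the egger_test defined above: feed it a couple of thousand simulated meta-analyses that contain no publication bias at all and measure how often it cries wolf. A well-calibrated test should flag roughly 5% of them at the 0.05 level; a materially higher rate would be exactly the kind of false positive Dr Ioannidis worries about.

```python
# Meta-research on the meta-research tool: estimate the false-positive rate
# of egger_test() (defined in the sketch above) on simulated meta-analyses
# with NO publication bias. All settings are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(2)

def simulate_unbiased_meta(n_trials=10, true_effect=0.3):
    ses = rng.uniform(0.05, 0.40, n_trials)   # studies of varied precision
    effects = rng.normal(true_effect, ses)    # unbiased effect estimates
    return effects, ses

n_sims = 2_000
alarms = sum(egger_test(*simulate_unbiased_meta())[1] < 0.05
             for _ in range(n_sims))
print(f"false-positive rate: {alarms / n_sims:.3f}")  # near 0.05 if calibrated
```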

Source: here
