As a new parent, I am familiar with the volume of ‘how to parent’ advice that becomes ‘available’ from all angles. Good advice or not, it is obvious that when it comes to babies, people want to help out, sometimes without considering the impact their advice may have. I see health and medical research in a similar vein – there are many opinions offered with little consideration of the repercussions. Evidently, there are many competing agendas in medical research. Since I’m somewhat of a romantic, I do believe the foundation of academic research is the intention to do ‘good’. We cannot be blinded by such intentions, though, as there are real consequences to our work that we should all consider before we conduct it.
Leo posted recently on the state of research quality (here). While there is some overlap between the issues, in this post (and the next) I want to explore the problems I see with producing high volumes of research. Since I wanted to avoid posting something with a higher word count than my thesis, I should mention that these are probably only a few of the problems worth considering.
The first relates to access to good information. Recently I conducted a systematic review that required screening more than 10,000 titles (stupid, I know), only 60 of which were remotely relevant. While it might be unfair to use this as an example compared with synthesizing evidence in real practice (a search for a systematic review is usually more sensitive), it seems that keeping up to date with such high volumes of literature would be beyond the capacity of any normal clinician (or researcher). The issue becomes more apparent if we consider that departure from evidence-based care remains enormous in most areas of health care.(1) In my view, the dissemination of large quantities of (arbitrary) literature is a major contributor to this. Certainly, well-conducted and well-implemented research does occur, and it forms the basis of quality health care. The reality, though, is that this is a small proportion of the ‘evidence bank’. The sheer weight of ‘dubious evidence’ surely has some impact on shaping practice, even though we only ever really guess at how our (good) research informs practice.
An added consideration to the issue of sourcing reliable information is the pressure patients bring to the health care table. Pressure (from patients) to perform a test or treatment is a reality. While I believe management decisions should be a two-way interaction between clinician and patient, unskilled opinions based on inaccurate health information mean that this pressure is often misguided. Antibiotic prescription is an example that springs to mind. Many time-poor clinicians may not be well equipped to deal with demanding patient requests or misinformed beliefs. Moreover, it is common to encounter the advocacy (and use) of screening procedures and treatments despite a higher risk of side effects, because the ‘research’ supporting them may form the majority of the evidence but not the best available evidence. The issue is that while 20 poor studies do not provide better quality evidence than one or two good ones, they may be perceived to do so. Simon Chapman has commented on a similar issue in prostate cancer.(2) Agreed, part of the problem is how this information is represented. However, as producers of the information, it seems irresponsible to dismiss our own liability and simply blame the misrepresentation of our research.
Without question, a leading cause of high volumes of publication is the pressure to publish… or perish. It seems odd, though, that the irresponsible publication of research does not face ramifications similar to those a treating clinician would face for providing a flawed treatment, delivering the right treatment poorly, or over-treating a patient.
Always the afterthought, another consequence of producing large quantities of research concerns dollars (or € or £).
Conducting a clinical trial is expensive, and high volumes of poor research are an obvious waste of resources. Big piles of poor research also place unnecessary pressure on an overextended peer review system and require additional resources to synthesize evidence for systematic reviews and guideline development. However, a trial does not have to be of poor quality for the funding and resources required to conduct it to have been wasted.
Should a trial of an intervention that is not expected to be more effective (or cost-effective) than current usual care or recommended practice go ahead? Unfortunately, this is often the case, and it is something we should perhaps consider before we conduct our research. In my view, an impressive mission statement (e.g. to reduce the burden of disease) and the usual statistics about the significant burden of a disease do not of themselves justify the conduct of a trial (of a certain intervention). Surely, with competing interests for the public purse, when research is publicly funded we require…. (deep breath)…. greater regard for the expected benefit relative to the costs of the proposed solution (including the cost of conducting a trial) and compared with existing options.(3) I wonder how many current and previous studies would have passed this test before they were sent off for funding.
I have considered only a few of the consequences of conducting excessive (and not just poor quality) research. An obvious flaw in the scientific process is to disregard the adverse impacts of our work and focus only on the positive implications. We must be careful that this does not lead us astray, if it hasn’t already.
1. Williams C, Maher C, Hancock M, et al. Low back pain and best practice care: a survey of general practice physicians. Arch Intern Med. 2010;170:1088.
2. Chapman S, Barratt A, Stockler M. Let Sleeping Dogs Lie? Sydney University Press; 2010. http://sydney.edu.au/news/84.html?newscategoryid=1&newsstoryid=6003
3. Torgerson DJ, Byford S. Economic modelling before clinical trials. BMJ. 2003;325:98.