Evidence-based practice helps ground professional decisions in the literature of the discipline and in the context of the patient(s), subjects (human or otherwise), or situation. Such studies are done and used in architecture, ecology, and public policy as well as in health, social work, and education. The biggest single question is some form of: "What has been shown to work in ____ situation?"
Sometimes there is enough evidence to suggest relevant practices but not enough to count them as proven, what Adams and Sandbrook refer to as "evidence-informed practice" (Conservation, evidence and policy. Oryx, 2013, 47(3): 329-335).
While the majority of evidence-based practice studies relate to human health, the practice is spreading to other fields. Appropriate sources may vary by discipline; if in doubt, consult your advisor/professor.
| Sources of Evidence | Classification |
|---|---|
| Meta-analysis of multiple well-designed controlled studies | 1A |
| Well-designed randomized controlled trials | 1 |
| Well-designed non-randomized controlled trials (quasi-experiments) | 2 |
| Observational studies with controls (retrospective studies, interrupted time-series studies, case-control studies, cohort studies with controls) | 3 |
| Observational studies without controls (cohort studies without controls and case series) | 4 |

Robey, R. R. (2004, April 13). Levels of evidence. The ASHA Leader. http://www.asha.org/Publications/leader/2004/040413/f040413a2.htm
This is one commonly used hierarchy; there are others, such as the rating system below.
Rating System for the Hierarchy of Evidence: Quantitative Questions
Level I: Evidence from a systematic review of all relevant randomized controlled trials (RCTs), or evidence-based clinical practice guidelines based on systematic reviews of RCTs
Level II: Evidence obtained from at least one well-designed Randomized Controlled Trial (RCT)
Level III: Evidence obtained from well-designed controlled trials without randomization (quasi-experimental studies)
Level IV: Evidence from well-designed case-control and cohort studies
Level V: Evidence from systematic reviews of descriptive and qualitative studies
Level VI: Evidence from a single descriptive or qualitative study
Level VII: Evidence from the opinion of authorities and/or reports of expert committees
Practice that is based on empirical research is more likely to be sound. Looking for systematic reviews of the literature allows you to have some confidence that the practices recommended are based on more than just a few patients. Systematic reviews that include meta-analyses of the data in the articles are more likely to be reliable.
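To make the meta-analysis idea a bit more concrete, here is a minimal sketch of fixed-effect, inverse-variance pooling in Python. The effect sizes and variances are invented for illustration and do not come from any real review; actual meta-analyses also check heterogeneity and often use random-effects models.

```python
import math

# Hypothetical effect sizes (e.g., mean differences) and their variances
# from five invented studies -- illustrative values only.
effects = [0.42, 0.31, 0.55, 0.18, 0.47]
variances = [0.04, 0.09, 0.06, 0.12, 0.05]

# Fixed-effect (inverse-variance) pooling: each study is weighted by the
# inverse of its variance, so more precise studies count for more.
weights = [1.0 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled estimate.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled effect: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```

The point is simply that a meta-analysis combines the individual studies' results, giving more weight to the more precise studies, rather than just counting votes for or against a practice.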
What rules did the authors follow when doing the meta-analysis or systematic review? Ask:

- Did they cover the relevant, important literature in the field? Is the number of studies sufficient?
- How did they define useful studies? Does their definition make sense?
- Did they include randomized controlled studies as at least part of their selection?
- Did they suggest implications of the results of the study for practice and research?
- Are recommendations or protocols included, or are you directed to them in another document?

Other things to consider: Did they ask a good question, the 'right' question? Did they make their methods explicit? Was their search detailed and comprehensive? What did they miss, and why?
A different model for evaluation that I also like is the rhetorical triangle: the three points of the triangle are author, audience, and purpose (Laura Wukovitz, http://researchguides.hacc.edu/milex). It brings into consideration the fact that there are often social dimensions to even the most data-driven science ("Whose ox is being gored?"), which is another way of asking who benefits or loses from a particular publication or study, and in what way.