Evidence-based Practice: Some Tools to Find It

Agriculture Librarian (CSES, HORT, AECT, PLPA, HESC, POSC, AMPD, FDSC)

Necia Parker Gibson (she/her/hers/Ms.)
Contact:
Mullins Library 422 (Level 4, Librarians' suite)

Email is the best way to contact me: neciap@uark.edu.

My office phone forwards to my cell phone; if that doesn't reach me, send email.

Email me or use the text number below. If you want me in particular, ask for me.

Text a librarian (not me specifically): 479-385-0803

Email me for an appointment, or meet me on Teams. If I'm available, I'll answer.
Phone: 479-575-8421


What is Evidence-based Practice?

Evidence-based practice grounds professional decisions in the literature of the discipline and in the context of the patient(s), subjects, or other situations. The practice started in medicine and related disciplines but has spread into other areas, including veterinary medicine, conservation, education, and dietetics. Appropriate sources may vary by discipline; if in doubt, consult your advisor or professor.

A key feature of reviews used for evidence-based practice is their structure and protocols. What rules did the authors follow when conducting the review, meta-analysis, or systematic review? There should be a summary of findings, along with details on where they searched, how, and when; how the studies were selected; how many were included and of what quality; and whether the search was limited to English-language sources or included other languages, among other possibilities. Did they cover the relevant, important literature in the field? Is the number of studies sufficient? How did they define useful studies, and does that definition make sense? Did they exclude studies, and if so, do they detail why? Did they include randomized controlled studies as at least part of their selection, if those studies exist in the discipline? Did they suggest implications of the results for practice and research? Are recommendations or protocols included, or are you directed to them in another document?

Some Databases with Systematic Reviews

Levels of Evidence

Example Levels of Evidence

 

Sources of Evidence, by Classification:

Classification 1A: Meta-analysis of multiple well-designed controlled studies

Classification 1: Well-designed randomized controlled trials

Classification 2: Well-designed non-randomized controlled trials (quasi-experiments)

Classification 3: Observational studies with controls (retrospective studies, interrupted time-series studies, case-control studies, cohort studies with controls)

Classification 4: Observational studies without controls (cohort studies without controls and case series)

 

Robey, R. R. (2004, April 13). Levels of Evidence. The ASHA Leader. http://www.asha.org/Publications/leader/2004/040413/f040413a2.htm

There are other examples; this is one commonly used hierarchy.

Another Hierarchy for EBM

Rating System for the Hierarchy of Evidence: Quantitative Questions

Level I: Evidence from a systematic review of all relevant randomized controlled trials (RCTs), or evidence-based clinical practice guidelines based on systematic reviews of RCTs

Level II: Evidence obtained from at least one well-designed Randomized Controlled Trial (RCT)

Level III: Evidence obtained from well-designed controlled trials without randomization (quasi-experimental studies)

Level IV: Evidence from well-designed case-control and cohort studies

Level V: Evidence from systematic reviews of descriptive and qualitative studies

Level VI: Evidence from a single descriptive or qualitative study

Level VII: Evidence from the opinion of authorities and/or reports of expert committees

The information above is from "Evidence-Based Practice in Nursing & Healthcare: A Guide to Best Practice" by Bernadette M. Melnyk and Ellen Fineout-Overholt (2005), page 10, via http://researchguides.ebling.library.wisc.edu/content.php?pid=325126&sid=2940230

In addition:

Practice based on empirical research is more likely to be sound. Looking for systematic reviews of the literature allows you to have some confidence that the practices recommended are based on more than just a few patients. Systematic reviews that include meta-analyses of the data in the articles are more likely to be reliable.

Things to consider: Did they ask a good question? The 'right' question? Did they make their methods explicit? Was their search detailed? Was it comprehensive? What did they miss, and why?

A different model for evaluation that I also like is the rhetorical triangle: the three points of the triangle are author, audience, and purpose (Laura Wukovitz, http://researchguides.hacc.edu/milex). This brings into consideration that there are often social dimensions to even the most data-driven science (sometimes colloquially described as asking whose ox is being gored; in other words, who benefits from a particular publication, and in what ways?).