
Low back pain: These academics got it precisely right, and practically wrong.

  • jeffreybegg
  • Sep 5
  • 6 min read

Updated: Sep 8

Recently published systematic review on low back pain diagnostic accuracy ... How do we know what we know about diagnosis? ... Why you should review your inactive files regularly to enhance your cognitive awareness of conditions... How not to be fooled by unsupported research conclusions.

Click here to listen in podcast format

Sometimes, when we're in a hurry, we might just grab an interesting article and skip to the conclusions section to see if we can glean some quick information. We know we're not supposed to, but we do. Here's a basic conclusion from a leading medical journal in its time:

The evidence suggests a diagnosis may be possible for some patients with low back pain, potentially guiding targeted and specific treatment approaches. (1)

What's your best guess as to when this article was published? 1860? 1930? You might be surprised. This is a well-designed research study from 2023, published in eClinicalMedicine, a journal under the umbrella of the Lancet. These academics put together a rigorous study design and used reasonable methods. They completed a strong statistical analysis. But they got it wrong with this conclusion.


It's not all bad. Let's talk about what we can take from this study and then where exactly they got it wrong.


Study design & Methods

Remember from your classes on 'putting evidence into practice' that we don't bother reading study conclusions if we aren't comfortable with the study design and methods in the first place. So, let's take a look. This team of 9 academics set out to complete a systematic review. They reviewed the literature for a recent 17-year span, looking at studies on how to diagnose three sources of low back pain: disc, facet joint, and SI joint. They admitted papers comparing a strong reference standard (like discography, or anaesthetic blocks) against at least one index test available to front-line clinicians (e.g., special tests, features of the history, and MRI). They screened the studies, pooled the data where possible, and assessed the risk of bias. This is a solid start, so let's look at their statistical analysis next.


Here are the gold standard tests they used for each diagnosis. For disc pain they used pain provocation on discography. For facet joint and SI joint pain, they used pain reduction after two repeated joint injections. In comparison, they looked at a number of imaging findings, features of the history, and physical exam findings. Comparing clinical tests against a gold standard is a classic study design, so on the website we've posted a table outlining the resulting positive and negative likelihood ratios, for those who are interested. But here's the summary:

Han CS, Hancock MJ, et al. Low back pain of disc, sacroiliac joint, or facet joint origin: a diagnostic accuracy systematic review. eClinicalMedicine 2023;59:101960

When it comes to ruling in a diagnosis, only one test meets the criteria for strong evidence, and another two tests provide moderate evidence. Only a handful of tests provide moderate evidence for ruling out a diagnosis. Of the 10 different examination tests considered, none really help to rule diagnoses out, and only one really helps rule things in: the presence of Modic Type 1 changes in the lumbar spine on MRI, which is strongly correlated with discogenic pain. Again, you can see the full details posted in easy-to-read table format on the Physiotherapy Guild website.
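If you'd like to see what a likelihood ratio actually does to your diagnostic confidence, here's a minimal sketch. The arithmetic (probability to odds, multiply by the LR, back to probability) is standard; the pre-test probability and the LR value below are purely illustrative, not figures taken from the review.

```python
# Hedged sketch: how a positive likelihood ratio shifts a pre-test
# probability into a post-test probability. The numbers used in the
# example are hypothetical, not values reported in the systematic review.

def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Convert a pre-test probability to a post-test probability via odds."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)   # probability -> odds
    post_odds = pre_odds * likelihood_ratio          # apply the test's LR
    return post_odds / (1 + post_odds)               # odds -> probability

# Example: suppose a 40% pre-test probability of discogenic pain, and a
# hypothetical positive LR of 5 for a strong ruling-in finding:
print(round(post_test_probability(0.40, 5.0), 2))  # → 0.77
```

A strong LR turns a coin-flip suspicion into something you can act on; a weak LR (near 1.0) barely moves the needle, which is exactly why so many of the tests in the review fail to rule anything in or out on their own.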


A heads-up: the Guild is all about balancing the science with the art of what we do. Not all researchers are adept at creating a visual summary of their data in a quick, useable format for clinicians. You'll find lots of visual aids and eye-catching charts throughout the Guild's blog site, and in its coursework designed for practicing physiotherapists.


Now a narrow view of this research paper's findings might lead you to assume that we should discard the tests that don't provide good evidence. That would be a big mistake, because each test is a data point that helps complete a picture, even if the test by itself lacks strong diagnostic accuracy.
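That "each test is a data point" idea can be sketched numerically: several individually weak tests, stacked together, can still shift the probability meaningfully. This assumes the tests are conditionally independent, which is rarely strictly true in practice, and the LR values are hypothetical, not drawn from the review.

```python
# Hedged sketch: sequentially applying the likelihood ratios of several
# weak tests. Assumes conditional independence between tests (an
# idealization), and uses hypothetical LR values for illustration.

def update_with_tests(pre_test_prob: float, likelihood_ratios: list[float]) -> float:
    """Chain several test results onto one pre-test probability."""
    odds = pre_test_prob / (1 - pre_test_prob)  # probability -> odds
    for lr in likelihood_ratios:
        odds *= lr                              # each test nudges the odds
    return odds / (1 + odds)                    # odds -> probability

# Three individually weak tests (hypothetical LRs of 1.5, 2.0, and 1.8),
# none convincing alone, together move a 30% suspicion to about 70%:
print(round(update_with_tests(0.30, [1.5, 2.0, 1.8]), 2))  # → 0.7
```

This is the formal version of what clinicians do informally: no single finding clinches it, but the accumulation of consistent findings completes the picture.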


How we know we know

We know that some clinicians can make an accurate diagnosis on a regular basis. Some clinicians can accurately predict which patients are going to respond to an SI joint block, for example. How do they know? They're not using the tests from this systematic review, because they don't work so well. So, there must be more going on. To find out exactly what, let's turn to the study of “how we know we know”, as we again put into context this misleading study conclusion that "evidence suggests a diagnosis may be possible for some patients with low back pain."


Long before researchers prove things, clinicians know things. Medicine has a long history of experimentation and refinement based on anecdotal evidence. In fact, one physician has written a helpful article on how we know what we know in health care, and you can find a link here. Our ways of knowing include a host of things aside from randomized controlled research: observation, experience, intuition, learning from others, even collective consciousness.


Here's an illustration. Some years ago, researchers tried to find out how accurately a diagnosis could be made by medical school graduates versus physicians with 10 years of experience. They provided a list of symptoms to each group and asked them to come up with a differential diagnosis.


The graduates did a pretty good job, typically coming up with a list of 3 possibilities, and the correct diagnosis was usually included. The experienced doctors rarely included more than one diagnosis, and it was always the correct one. The most interesting part of the study was when they asked each group how they knew. The graduates generally gave a reasoned explanation as to what each test finding meant, and how that led them to their conclusions. But the veteran doctors, as a group, responded with some variation of this answer: "What do you mean, how did I know? That's just what it is."


The researchers concluded that ways of knowing change as one becomes more experienced. Novice clinicians use hypothesis generation and deductive reasoning. This allows them to sort through the data to see which diagnosis fits. This is a foundational method that helps reduce the risk of error, though it is cumbersome and slow.


Expert clinicians use a different cognitive process that some academics refer to as pattern recognition. As the story unfolds, and the patient explains their symptoms, the expert clinician sees a pattern they've seen many times before, and the physical exam is no longer standardized but rather tailored to confirm or refute the assumption. Pattern recognition arrives at the correct diagnosis quickly and efficiently. This is the preferred process for clinicians with enough experience to have seen many conditions many times over many years.


Follow up with your tricky patients

If you are a novice clinician with less than 5 years of experience, there is power in knowing this. In order to make the leap to expert thinking, you need to know how your cases turn out. You can let that happen organically, slowly, over a number of years. Or you can pursue it aggressively by following up with each and every patient who falls off the program. You'll learn more quickly if you follow up with them to find out what happened. Regularly reviewing your inactive files provides an opportunity to reach out to patients and enhance your cognitive awareness of conditions.


You can start by making sure that every time your patient is referred for a spinal injection, you follow up with them 2 weeks later, even if they are no longer under your care. Facet blocks and SI joint blocks are very helpful diagnostically. Determining how your patient responded to them gives you the gold standard diagnostic information you need to complete your picture. You'll find yourself saying, "Oh, so it was a facet joint after all!"


A better conclusion

Let's go back to our study design again to finish up. Perhaps its scope wasn't as broad as we first assumed: they were looking for evidence that one test, by itself, could point to a low back pain diagnosis with accuracy. And yet, they make a broad, sweeping conclusion that is definitely not supported by their research:

a diagnosis may be possible for some patients...

They might have done better to conclude that "using a single test to diagnose low back pain is not yet a valid approach." Now that's a true statement.


Let's help them out here a bit. Based on what we know about how we know, can you imagine a more accurate conclusion to make? How about this one:


The research has not yet been able to determine how experienced clinicians are accurately making the right diagnosis in cases of low back pain.

Summary

You're a keen physiotherapist, and you want to provide the best-evidence care for your low back pain patients. Sometimes the researchers get in the way and mess with your confidence. Don't let them. Think things through and know what you know.


Jeff Begg, PT


  1. Han CS, Hancock MJ, et al. Low back pain of disc, sacroiliac joint, or facet joint origin: a diagnostic accuracy systematic review. eClinicalMedicine 2023;59:101960. Published online 6 April 2023. https://doi.org/10.1016/j.eclinm.2023.101960


