The CAM-Lite: Why this Unofficial Delirium Screening Tool Falls Dangerously Short

How a short-cut, unofficial version of the CAM crept into everyday care - and undermined efforts to secure effective delirium detection in routine practice

Alasdair MacLullich

Original Confusion Assessment Method (CAM)

First described by Inouye and colleagues in 1990, the Confusion Assessment Method was an important innovation that moved the field forward. It offered non-specialist clinicians a five-minute bedside tool: a brief cognitive interview (orientation plus a simple attention test such as months-backwards) followed by a yes/no rating of four features - acute/fluctuating change, inattention, disorganised thinking, and altered level of consciousness. Multiple validation studies over the next few years demonstrated that, when scored by trained research teams, the CAM had very good validity against reference standard diagnosis. A systematic review of seven formal diagnostic test accuracy studies reported pooled sensitivity of 94% and specificity of 89% when the tool was applied as instructed in the manual (Wei et al., 2008). On the strength of these data, the CAM became a mainstay in research and in guidelines.

From research success of the original CAM to “CAM-Lite” in clinical practice

The CAM’s robust accuracy and prominent use in research studies triggered its widespread adoption in routine clinical practice. This is where the story becomes more complicated. Using the CAM in clinical practice implies that it is the same tool, applied through the same process, as validated in diagnostic test accuracy studies. This is critical: the claim that the CAM has sensitivity of 94% and specificity of 89% holds true only if the CAM is used in the same way as in the studies that generated these numbers.

However, without any formal documentation or validation, a different form of the CAM emerged and came to be widely used. This new form departed radically from the original in that the preceding cognitive interview was removed. Additionally, rather than being used as a one-off assessment (as described in the validation studies), or at certain points in the patient journey, the CAM started to be used as a tool to monitor for inpatient delirium. The reasons why this altered version emerged are unclear, but one can speculate that the main driver was to reduce the burden on staff.

For clarity, in this blog I shall call this unofficial and non-validated variant “CAM-Lite.” This term is mine and does not originate with the original CAM team. Essentially, CAM-Lite keeps the four algorithm items and structure of the CAM, but in practice the mandatory pre-scoring cognitive test and interview are apparently omitted. Nurses score the four CAM items yes or no once or more per shift, guided mainly by their impression of the patient during the previous hours.

Is the CAM-Lite validated? No!

To my knowledge, despite this fundamentally altered CAM completion process spreading across many hospitals via Epic and other EHRs, no documentation or validation has supported or justified this change.

This table documents the differences between the original CAM and CAM-Lite.

Feature | Original CAM | CAM-Lite
Cognitive step | Orientation + attention test | Omitted
Time per screen | 5 min | 15–60 s
Operator | Trained assessor / physician | Ward nurse, end-of-shift
Frequency | Single assessment | 1–3 × per shift
Validation evidence | Multiple studies; sensitivity ≈ 90% | None
Delirium +ve rate | ≥ 20% with research teams | 2–8% in routine care


CAM-Lite in the literature

Though there are no formal validation studies of this ‘CAM-Lite’, some studies have documented its presence in clinical practice, as summarised below. Alarmingly, these studies show that whilst the CAM-Lite may achieve good completion rates, its effective sensitivity is very low. This issue is documented in more detail in a recent systematic review published in the Journal of the American Geriatrics Society (Penfold et al., 2024; a top 10 JAGS paper for 2024).

  • Rohatgi 2019 (Stanford, n=105,455) — Nurses documented a CAM once per shift (CAM-Lite); they reported the task took “< 1 min” and involved no cognitive test. Completion was 98.8%; 2.4% of patients were CAM positive, against a measured delirium prevalence of 17% in a subset.

  • Corradi 2016 (Hartford, n=88,206) — Three CAM entries per day were mandated; half of “unable-to-assess” codes were assigned to alert patients, indicating skipped testing. Completion was not stated; the CAM positive rate was 7.9%, with the true rate not measured but likely 15–20%. Notably, 12% of patients were “Unable to Assess” with the CAM, and these patients had higher mortality than the CAM positive patients; most of the UTA patients likely had delirium, but the CAM-Lite failed to pick this up.

  • Wong 2018 (Ontario, 43 nurses) — CAM every 12 h; only 35% recalled formal training, attention tests “rarely used”, forms often completed retrospectively.

  • Froedtert Health policy 2023 — Nursing manual mandates a CAM entry every 8 h; the Epic template contains only Yes/No boxes.

Across more than 220 000 admissions the pattern is consistent: documentation compliance approaches 100%, yet detection falls far below the expected rate of delirium.
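The scale of this under-detection can be illustrated with a back-of-envelope calculation. As a rough sketch of my own (not a method from the cited studies): if we generously assume that every positive CAM-Lite screen is a true positive, then the ratio of the observed positive rate to the measured prevalence gives an upper bound on the tool’s effective sensitivity.

```python
def sensitivity_upper_bound(observed_positive_rate: float, true_prevalence: float) -> float:
    """Upper bound on effective sensitivity, assuming every positive
    screen is a true positive (i.e. no false positives). If any
    positives are false positives, real sensitivity is lower still."""
    return observed_positive_rate / true_prevalence

# Rohatgi 2019: 2.4% screened positive vs 17% measured prevalence in a subset
print(round(sensitivity_upper_bound(0.024, 0.17), 2))  # at most ~0.14
```

In other words, even on the most charitable reading, the CAM-Lite in the Rohatgi cohort could not have detected more than roughly one in seven delirium cases - far below the 94% pooled sensitivity reported for the original CAM.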

My observations from visiting multiple hospitals in the USA

I have had the opportunity to visit many hospitals in the USA using Epic and have witnessed that the CAM is often employed in this “CAM-Lite” form. Training, when provided, is uneven; sometimes it is missing altogether. At one centre a geriatrician told me they tried to mine Epic CAM-Lite data to measure delirium prevalence in their geriatrics ward, but the rates - low single digits - were so misaligned with the real rates of delirium that they abandoned the dataset.

What if the CAM-Lite was classed as a medical device?

Both EU MDR 2017/745 and US FDA §201(h) classify any instrument “intended for diagnosis” as a medical device. A bedside scale embedded in an EHR to diagnose delirium meets that definition. If a blood-pressure cuff detected only one hypertensive patient in three, it would be withdrawn. Although bedside tests for delirium are generally not classed as medical devices (perhaps wrongly), it is sobering to consider that if they were, the CAM-Lite’s unacceptably low estimated real-world sensitivity would mean it would certainly be abandoned and better alternatives sought.

Conclusions

The emergence and widespread adoption of the unvalidated CAM-Lite in EHR systems has created a patient-safety disaster by missing most delirium cases. We are now saddled with a widely-embedded, yet unofficial, unvalidated, and markedly inferior version of the original CAM that, in short, does not work. The evidence summarised in this blog indicates clearly that the CAM-Lite greatly under-detects delirium. Under-detection of delirium is associated with multiple negative consequences and, given the scale of delirium, this is one of the most impactful patient safety issues in current healthcare across the world.

We need to move beyond this blind alley and find better ways of serving our patients with delirium.

Notes

  • CAM-Lite arose without the approval or involvement of Dr Inouye’s original team.

  • The label “CAM-Lite” is mine and is purely descriptive; it simply names an unvalidated implementation drift.

References

American Geriatrics Society Expert Panel on Post-operative Delirium in Older Adults. (2015) Guideline for postoperative delirium. J Am Geriatr Soc, 63, 142-150. https://doi.org/10.1111/jgs.13281

Australian Commission on Safety and Quality in Health Care. (2021) Delirium Clinical Care Standard. https://www.safetyandquality.gov.au/standards/clinical-care-standards/delirium-clinical-care-standard

Corradi JP, et al. (2016) Electronic health-record surveillance for delirium. Comput Biol Med, 75, 267–274. https://doi.org/10.1016/j.compbiomed.2016.06.016

European Parliament & Council. (2017) Regulation (EU) 2017/745 on medical devices. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32017R0745

Food and Drug Administration. (2023) Is my product a medical device? https://www.fda.gov/medical-devices/classify-your-medical-device/how-determine-if-your-product-medical-device

Froedtert Health. (2023) Nursing Instructor Policies: CAM documentation requirements. https://www.froedtert.com/sites/default/files/upload/docs/professionals/education/nursing/nursing-instructor-policies.pdf

Gibb K, et al. (2020) Occurrence of delirium in acute medical in-patients. Age Ageing, 49, 74–84. https://doi.org/10.1093/ageing/afz201

Inouye SK. (2003) Confusion Assessment Method (CAM): Training Manual and Coding Guide. Boston: HELPS. https://americandeliriumsociety.org/wp-content/uploads/2021/08/CAM-Long_Training-Manual.pdf

Inouye SK, et al. (1990) Clarifying confusion: the CAM. Ann Intern Med, 113, 941-948. https://doi.org/10.7326/0003-4819-113-12-941

National Institute for Health and Care Excellence. (2019) Delirium: prevention, diagnosis and management (NG103). https://www.nice.org.uk/guidance/ng103

Penfold RS, et al. (2024) Delirium detection tools show varying completion rates and positive score rates when used at scale in routine practice in general hospital settings: A systematic review. J Am Geriatr Soc, 72, 1508-1524. https://agsjournals.onlinelibrary.wiley.com/doi/10.1111/jgs.18751

Rohatgi N, et al. (2019) Hospital-wide initiative to reduce delirium. Am J Med, 132, 1421-1429.e2. https://doi.org/10.1016/j.amjmed.2019.05.039

Scottish Intercollegiate Guidelines Network. (2019) SIGN 157: Delirium. https://www.sign.ac.uk/our-guidelines/delirium/

Wei LA, et al. (2008) Accuracy of the Confusion Assessment Method. J Am Geriatr Soc, 56, 823-830. https://doi.org/10.1111/j.1532-5415.2008.01607.x

Wilson JE, et al. (2020) Delirium. Nat Rev Dis Primers, 6, 90. https://doi.org/10.1038/s41572-020-00223-4

Wong EKC, et al. (2018) Nursing perspectives on the CAM. Age Ageing, 47, 880-886. https://doi.org/10.1093/ageing/afy083 
