A big gap in the delirium literature: data on clinical implementation

The waves of delirium research and influence on practice

We have come far in our understanding of delirium over the last 3-4 decades. In an editorial, Andy Teodorczuk and I framed progress in delirium research and practice as having occurred in three waves:

  1. Early 20th century to 1970s: minimal delirium-specific research, with rare coverage in textbooks, professional education, and policy.

  2. 1980s to late 1990s: new measurement methods, fundamental epidemiology, and evidence that, with good systematic care, delirium could be partly prevented. This work stimulated new interest among clinical educators and policymakers.

  3. 2000 onwards: rapidly growing activity, with the foundation of new international associations dedicated to delirium, new evidence-based guidelines, more consistent use of the term delirium, and policy initiatives to improve clinical care at scale. For example, post-operative delirium screening in hip fracture patients was made mandatory in the UK National Health Service (NHS) and is audited by the government. You can see the 2019 National Hip Fracture Database report here.

blog5d.png

Implementation studies remain unusual in the delirium literature

The present situation is that delirium has a much higher profile than before, and there are pockets of good practice. For example, it is encouraging that data from the UK National Hip Fracture Database show that implementation of delirium detection at scale is possible.

But we also know that most delirium around the world remains undetected, and this means that the treatment of delirium is also mostly unsatisfactory.

How can we improve this situation? One important route will be increased scientific work on implementation.


Electronic medical records: use in studying the quality of implementation of delirium detection

A striking feature of the delirium detection literature is that we have many validated tools, but very little evidence on how they perform in the real world. Guidance to practitioners based on validation data alone is incomplete: a tool that performs well under research conditions may not perform well under routine clinical conditions.

We are now in a good position to start studying implementation, for example by examining how tools that are supposed to be used are actually used in practice. Electronic medical records such as EPIC and Trakcare are now the norm in many countries, and many allow for large-scale analyses. Such analyses could include (a) rates of completion of delirium detection tools, and (b) rates of positive scores on the tool being used. Huge potential, but as yet not unleashed.
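To make this concrete, here is a minimal sketch (in Python, using pandas) of how these two rates might be calculated from a record-level extract. The column names and the toy data are entirely hypothetical, and a real analysis would of course need to handle eligibility criteria, repeated assessments, and missing data properly.

```python
import pandas as pd

# Hypothetical record-level extract from an electronic medical record system:
# one row per eligible admission, with flags for whether a delirium detection
# tool was completed and, if completed, whether the score was positive.
records = pd.DataFrame({
    "admission_id":   [1, 2, 3, 4, 5],
    "tool_completed": [True, True, False, True, True],
    "tool_positive":  [True, False, None, False, True],
})

# (a) Completion rate: proportion of eligible admissions with a completed tool
completion_rate = records["tool_completed"].mean()

# (b) Positive rate: proportion of completed tools with a positive score
completed = records[records["tool_completed"]]
positive_rate = completed["tool_positive"].astype(bool).mean()

print(f"Completion rate: {completion_rate:.0%}")  # 80% in this toy example
print(f"Positive rate:   {positive_rate:.0%}")    # 50% in this toy example
```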

In Edinburgh we have some data on this, with >75% tool completion and 17% positive scores. To be published soon.

So perhaps a fourth wave of progress in the field will include an upsurge in reports of how we are doing in the real world in delirium detection, implementation of prevention protocols, outcomes, and so on. At the moment we are mostly operating in the dark, not really knowing whether policy advice based on research validation data alone is sound.

One great example of a high-quality implementation study on delirium detection has just been published. More on that to come, but here is the main figure from the paper, showing what is possible:

blog5f.png