Why Aren’t Prompts Followed?
One feature of EHRs is the ability to present prompts to clinicians about the care being provided to their patients. Such prompts, generally called Clinical Decision Support (CDS), are usually driven by practice guidelines or derived from AI efforts. They may be as simple as a yes/no answer (flu shot?) or involve the consideration of multiple variables. At their best, these prompts are driven at least in part directly by the medical record, so that they are based on recorded facts about the patient’s personal attributes, health, and prior treatments. In this way repetitive inputs and needless prompts can be avoided. Prompts may occur in advance of clinician decision making, or they may be responsive to decisions already made or pending. In principle this seems helpful, yet various studies have noted that physician compliance with prompts is often very low. If prompts are actually to be useful in promoting better patient care, it is worth determining why compliance is so low. Three possible reasons for low compliance are: (1) the prompt was not visible, (2) the prompt was visible but not seen, and (3) the prompt was seen but rejected.
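The record-driven behavior described above can be sketched in a few lines. This is a minimal illustration, not a real CDS rule: the field names (`last_flu_shot`, `flags`) and the flat dictionary record are hypothetical stand-ins for the much richer data models actual EHRs use.

```python
from datetime import date

def needs_flu_shot_prompt(record: dict, today: date) -> bool:
    """Decide whether a flu-shot prompt should fire, based only on
    facts already recorded for the patient.

    `record` is a hypothetical patient-record dict used for illustration.
    """
    # Skip the prompt when the record already documents this season's
    # shot, avoiding a needless interruption.
    last_shot = record.get("last_flu_shot")  # a date, or None
    if last_shot is not None and last_shot.year == today.year:
        return False
    # A recorded contraindication also suppresses the prompt rather
    # than asking the clinician to re-enter what is already known.
    if "flu_vaccine_contraindicated" in record.get("flags", []):
        return False
    return True

# A patient vaccinated this season generates no prompt:
print(needs_flu_shot_prompt(
    {"last_flu_shot": date(2024, 10, 1), "flags": []},
    date(2024, 12, 15)))  # prints False
```

The point of the sketch is the suppression logic: prompts keyed to recorded facts fire only when the record leaves the question open, which is how repetitive inputs and needless prompts are avoided.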
Not visible means the prompt wasn’t presented on the right screen at the right time. It was somewhere, and perhaps could have been found, but under the normal workflow this would not occur, or on a case-by-case basis it didn’t occur. This is simply bad design, and it is also a trap for the user, in that it could later be demonstrated that the prompt was there if only the user had looked for it.
Visible but not seen can also be a design problem: a lack of obviousness because of appearance, size, color, clutter, or workflow issues that actually involve looking away from the screen. These features can separate systems that could possibly be useful from those that actually are useful, and how to present information in a way that attracts the needed amount of attention at the right time is a core human-factors issue in computer interface design. As above, this is also a trap for the user, because it can be pointed out after the fact that the prompt “was right there”. Beyond the visibility of the prompt, a system could require some kind of active response before the user can move on. This resembles the common form-filling experience in which the next step cannot occur until the current step is completed to the system’s satisfaction. In the worst such designs, what you have already filled in is erased when the system rejects your input and resets. Any such design is likely to be hated by users, especially if, as discussed below, the user intends not to do what the prompt says anyway. Moreover, such behavior can reinforce the feeling that the prompts are more annoyance than help.
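A less punishing version of the active-response design can be sketched as follows. This is a hypothetical illustration, not any vendor's actual behavior: the `respond` callback stands in for the user interface, and the key design choice shown is that the clinician's already-entered data is preserved no matter how the prompt is answered.

```python
def require_acknowledgment(prompt_text: str, entered_data: dict, respond) -> dict:
    """Block progression until the clinician explicitly responds to the
    prompt, but never discard what they have already entered.

    `respond` is a hypothetical UI callback; it returns a pair
    (action, reason), where action is "accept" or "override".
    """
    while True:
        action, reason = respond(prompt_text)
        if action == "accept":
            return {"data": entered_data, "prompt": "accepted"}
        if action == "override" and reason:
            # An override is allowed, but only with a documented reason.
            # The entered data is preserved either way.
            return {"data": entered_data, "prompt": f"overridden: {reason}"}
        # Neither accepted nor justified: ask again, keeping the data intact.

# Simulated session: the first override lacks a reason and is re-asked.
responses = iter([("override", ""), ("override", "patient refused")])
result = require_acknowledgment("Flu shot due", {"dose": "10 mg"},
                                lambda _: next(responses))
print(result["prompt"])  # prints: overridden: patient refused
```

The contrast with the worst-case design described above is deliberate: the system still forces an explicit response, but a rejected input re-prompts rather than erasing the user's work.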
There are a number of reasons why a prompt may be seen and rejected. One is that a particular user routinely ignores all prompts because they find them annoying, unhelpful, and intrusive. This may go with a general aversion to advice and second-guessing, from either people or computers, and with a high level of self-assurance that their own decisions are always correct. Another reason for ignoring a prompt is that, in general or in the specific case, the user believes the prompt is incorrect and that they know better than the CDS. This is of course sometimes true, since all but the most simplistic CDS have less than perfect performance. When the user is correct in disregarding the prompt, it is good that they have used their own judgement and not been slavish to questionable advice. When the user is wrong, however, it will be easy to show that the correct answer was right in front of them and that they actively chose to do something else. This quandary is deepened by the fact that the vendors of such systems routinely say that the user shouldn’t rely on the advice given but should instead exercise their own judgement.
The idea of prompts being helpful seems sound, but it has often been found that they have little positive effect on users. This lack of effect occurs at the interface between system design and human behavior. To be effective, prompts must be timely, visible, non-annoying, not excessive, and correct. This is a challenge for the quality of the advice, for presentation design, and for compliance. Audits and associated feedback, if not compulsion, outside the EHR can perhaps overcome bad design and non-compliance, if the environment will tolerate such additional intervention. On the other hand, compliance with weak or bad advice is not a worthy goal.