Are You Getting Good Advice from Your Clinical Decision Support Systems?
Electronic Bad Advice
Meaningful Use requires the adoption of patient-specific Clinical Decision Support (CDS) systems. These can range from the relatively simple (e.g., age-driven flags) to the more sophisticated and complex, which use multiple patient parameters. Stage 2 requires five CDS interventions in addition to drug-drug and drug-allergy interaction checks. The FDA regulates at least some CDSs under its medical device authority. These include those directly linked to specific devices (e.g., an infusion pump drug library, or image-reading software), although stand-alone software is not excluded from regulation. The still-forthcoming FDA Mobile Apps guidance (released as a draft in July 2011) will further address the regulatory domain of personalized health, diagnostic, and advice-giving apps.
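To illustrate the simpler end of that range, an age-driven flag is little more than a threshold rule over patient data. The sketch below is hypothetical: the patient fields, drug names, and age cutoff are invented for illustration and do not come from any real CDS rule set or guideline.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    age: int
    medications: list

# Hypothetical rule set: drugs to flag for review in patients 65 and
# older. These entries are illustrative only, not clinical guidance.
HIGH_RISK_FOR_ELDERLY = {"diphenhydramine", "diazepam"}

def age_driven_flags(patient: Patient) -> list:
    """Return warning strings for this patient, or an empty list."""
    flags = []
    if patient.age >= 65:
        for drug in patient.medications:
            if drug.lower() in HIGH_RISK_FOR_ELDERLY:
                flags.append(f"Review {drug}: potentially inappropriate for age >= 65")
    return flags
```

Even a rule this simple embodies the concerns discussed below: the threshold and drug list define a universe of patients for which the advice is (or is not) appropriate, and an error in either silently changes who gets flagged.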
A global concern with software-derived advice (or guidance, suggestions, recommendations, lists, etc.), from whatever platform, is whether the advice is correct and therefore suitable to be relied upon. Even if generally correct, there may be reason to be concerned about the universe of patients and situations for which it is correct, as opposed to some other group of patients for whom it is not. If so, knowing who is in and who is out is clearly important. This is true despite common disclaimers that it is always up to the provider to second-guess the system, i.e., that the system is actually not to be relied upon. Although not related to a CDS, the issue of patient population attributes was recently illustrated by a report that patients with diabetes did better with coronary artery bypass surgery than with stenting.
Within this framework comes a recall of an “error reduction system” for a drug infuser because of the potential for the recommended dose to be incorrect. In this case the problem was in part the user interface, in that the error was triggered only by a specific sequence of keystrokes. Rather than assert that the user would catch such an event, the recall instead called it a “malfunction”, noting further that there would be an increased risk of over-infusion or under-infusion that could cause serious adverse health consequences, including death. The recommended interim fix involved removing the library card, which disabled the library/advice feature. The manufacturer correctly pointed out that, having done this, the clinical staff would need to be notified that they would no longer be getting the recommendations they were used to. The problem was to be corrected with a software “upgrade” to version 3.5.1. It is noteworthy that the manufacturer acted after only one report of the error occurring. Does your EHR vendor have a standard for how many reported events will trigger corrective action and notification?
The ubiquitous version numbering of software, here to three figures, suggests to some that software is too often designed and released, and then “upgraded”, with the ongoing expectation that errors will be found that need to be corrected. This in turn suggests that software is expected to have errors, possibly including in the advice that a CDS provides. In this regard, tracking version numbers in EHRs, if visible, is something that should become routine.
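One practical wrinkle in tracking versions routinely is that dotted version strings such as “3.5.1” compare incorrectly as plain text (“3.10” sorts before “3.5” lexically). A minimal sketch, assuming nothing beyond the dotted numeric format, compares versions numerically; the function names are illustrative, not from any EHR product.

```python
def parse_version(v: str) -> tuple:
    """Parse a dotted version string like '3.5.1' into a comparable tuple of ints."""
    return tuple(int(part) for part in v.split("."))

def version_changed(last_seen: str, current: str) -> bool:
    """True if the software version differs from the one last recorded."""
    return parse_version(current) != parse_version(last_seen)

# Numeric comparison orders versions correctly where string comparison fails:
assert parse_version("3.5.1") < parse_version("3.10.0")
```

A routine check like this at each use, logged alongside the clinical record, would at least make visible when the advice-giving software underfoot has changed.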
While this is a case of a specific medical device under a specific set of circumstances, and not an EHR, there is no reason to believe that EHR-associated CDS systems will be immune from internal errors that generate wrong advice. It is noteworthy in this regard that an EHR recall was recently voluntarily reported to the FDA by the vendor and posted on the FDA’s website. The problem in that case was lost doctors’ notes. This recall posting was curious because the EHR was not an FDA-regulated product. If and how other anomalies will be reported to and by EHR vendors is not clear. Further, for a CDS that is not regulated by the FDA (whether or not it should be), there is no mandatory recall system in place to notify users and the public about such errors when they occur. More generally, the FDASIA Health IT Workgroup report recently accepted by FDA/ONC/FCC noted that even when serious safety-related issues with software occur, there is currently no central place to report them, and they do not generally get aggregated at a national level.