Don’t Ignore Inaccurate Data From Your EHR Extraction Process
Automating extraction of data from electronic health records (EHRs) is an excellent idea. Done right, it can provide a complete data set, something not possible with the sampling methods of the labor-intensive hand extraction currently used to report many hospital quality measures. A complete data set, if accurate, is far more useful than a sample in population analytics and in efforts to bring greater precision to medical treatment protocols.
Unfortunately, the data set is seldom accurate. A disconnect between EHR vendor tools and the way the EHRs are implemented has been a big barrier to pulling accurate data out of patient medical records. The programmers who create the vendor tools work under the assumption that data are entered in a particular field and at a particular point in the treatment process. But end users can be amazingly creative in their use of EHRs, especially when they are resistant to changing their workflows. Without consistent, strong governance over EHR adaptations, healthcare organizations can end up with numerous, often undocumented, variations and customizations of their software. And many of those variations are adaptations that the application coders never imagined. The result is extraction tools that can’t find relevant data.
Adding to this problem is the consistent inconsistency of how data are entered, especially in departments where traffic is high and the work is fast-paced. Demographic details, which are essential for identifying all relevant patients for a particular measure, are often overlooked. Sometimes data that should be entered in a particular field are recorded in unstructured notes instead, making them invisible to the automated tool.
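To see why a value buried in a note defeats a field-based extractor, consider this minimal sketch. The record, field names and the "EF" pattern are all hypothetical, chosen only for illustration; real vendor tools and EHR schemas vary widely.

```python
import re

# Hypothetical patient record: the ejection fraction was typed into a
# free-text note instead of the structured field the extractor expects.
record = {
    "ejection_fraction": None,  # structured field left blank
    "notes": "Echo today. EF 35%, recommend follow-up in 2 weeks.",
}

def field_extractor(rec):
    """Mimics an automated tool that reads only the structured field."""
    return rec["ejection_fraction"]

def note_fallback(rec):
    """Crude fallback: scan the unstructured note for an 'EF nn%' pattern,
    the kind of inference an experienced chart reviewer makes by reading."""
    match = re.search(r"\bEF\s*(\d{1,3})\s*%", rec["notes"])
    return int(match.group(1)) if match else None

print(field_extractor(record))  # None -- the tool sees nothing
print(note_fallback(record))    # 35 -- the value was in the note all along
```

The field-only extractor reports the patient as missing data even though the information exists, which is exactly the discrepancy that shows up when automated counts are compared with manual chart review.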
Unlike experienced human chart reviewers, who can search out missing data or infer essential qualifying data points from unstructured notes, automated tools are seldom sophisticated enough to find the missing data. That’s why there are often big discrepancies between the automatically extracted data reported to attest to Meaningful Use (MU) and what those data would show if manually extracted by an expert reviewer. The disparity is even more conspicuous when comparing automated reporting of whole-population measures, as with Meaningful Use, against manually sampled population measures, as with the Inpatient Quality Measures. As the Centers for Medicare & Medicaid Services (CMS) attempts to synchronize the various quality measure reporting programs within Health and Human Services, this issue will need to be tackled.
It’s time to pay attention to inaccurate data extraction
Up to now, this hasn’t been an urgent issue for many hospitals. MU criteria have only required them to certify that the data are calculated within and reported from certified technology, not to reconcile disagreements between system-generated numbers and what they believe truly represents a particular measure population. For example, the certified EHR may return a zero in a measure numerator when the clinical team and quality staff “know” patients should qualify based on the measure definition. To date, the quality of their care has not been judged on these data, and the data have had no effect on reimbursement rates or MU incentive rewards. Many have seen it as a bit of CMS wheel-spinning, and with plenty of more-urgent issues on their plates, quality officers, IT staff and C-suite leaders have been content to note the problem and move on to things that matter more.
But the time has come to pay more attention to this issue. CMS is quickly moving toward requiring electronic submission of the extracted data, which is the first step toward public reporting of the data and using them to calculate performance-based payments. In the near future, your reputation and your bottom line will depend on the accuracy of these data. At a minimum, you want system-reported numbers to reasonably reflect what you think is your true performance on these measures.
You don’t want to wait until that future arrives on your doorstep. Troubleshooting extraction tool errors is not a job that can be rushed. It requires patience, diligence and teamwork, all of which are hard to muster in a frantic, last-minute dash to fix the problem. If you haven’t started working on this problem, now is the time to spin up a team and get started.
This isn’t an IT project!
While your IT staff will be an important part of the team, troubleshooting an extraction tool is not an IT project. Ideally, you need to involve your quality staff, your informatics staff, your IT staff and your EHR vendor. Because the problems often lie in a mismatch between the idealized implementation and the real-world adaptations that are inevitable, quality staff and informatics staff must be involved. You need people who are intimately familiar with the clinical workflows, especially where essential data points should be captured, to help you identify where the problems lie.
You also have to collaborate with your EHR vendor. If the problem lies in the coding or the algorithms of the tool, you may not be able to see that on your own. You’ll need to work with the people who created the tool – the application coders – to identify those problems. If your vendor isn’t responsive, be insistent. While the vendors with the deepest pockets have put significant time and effort into working with hospitals on this issue, others with fewer resources may be reluctant to dedicate the time and effort needed. With these vendors, be adamant that they do so. They sold you the EHR with the extraction tool, and it is their responsibility to help you get it right.
Connect with other users to gain insights
Chances are that if you are experiencing a problem in your data extraction, other organizations that use the same EHR are having similar difficulties. Consult the user forums or community sites offered by the vendor to see if anyone has found a solution or can help identify the root of the problem. Contact your peers at other hospitals in your system to see what their experience has been. In my work with hospital systems, I am amazed at how often important information like this just isn’t shared. People solve a problem and move on, forgetting that their sister hospitals may also be struggling with the issue. So don’t be shy about sharing either your data woes or your solutions. It really does take the whole healthcare community to troubleshoot extraction issues! At the end of the day, the effort is worth it. Our goal is to build a robust analytics capability—with trustworthy data—that clinical teams can use to better understand patient outcomes. Their ability to continuously improve care and deliver the positive patient outcomes we all desire is directly tied to the accuracy of the data extracted from your EHR.
About the Author: Paul J. Rosenbluth, MBA, CPHQ is a Principal Consultant with Dell Global Healthcare Services and the leader of Dell’s Meaningful Use Community of Practice. Paul works with healthcare organizations to optimize the use of EHR technology to achieve positive patient outcomes and meet regulatory requirements.