October 24, 2012
To:
Sens. Coburn, Burr, Roberts and Thune
Reps. Ellmers, Camp, Herger, Upton and Pitts
United States Congress
Washington, DC
Re: HITECH and healthcare information technology
Dear Senators and Representatives,
I applaud your recent inquiries to HHS regarding critical issues related to healthcare information technology (EHRs, physician order entry, decision support systems, etc.). Issues such as the possible role of these systems in upcoding and Medicare overbilling, test overutilization, abuse of incentives, etc. must be addressed.
However, your inquiries did not address an issue arguably even more important to the public, indeed to us all as patients: the safety of health information technology.
Congress must be made aware that health IT exists in two forms: good health IT and bad health IT. Bad health IT reduces safety, creates close calls, injures, kills, raises costs, and sacrifices information privacy and confidentiality, among other ill effects.
Congress must also be made aware that, unfortunately, due to systemic impediments to the free flow of information about health IT systems and the lack of FDA or other independent regulation of the industry, bad health IT is rarely fixed or removed from the marketplace.
Jeffrey Shuren, MD, JD, director of FDA's Center for Devices and Radiological Health (CDRH), testified to HHS in February 2010 that "under the Federal Food, Drug, and Cosmetic Act, health information technology software is a medical device", but that FDA has "largely refrained from enforcing our regulatory requirements with respect to HIT devices."
To clarify the two types of health IT:
Good Health IT provides a good user experience, enhances cognitive function, puts essential information as effortlessly as possible into the physician’s hands, keeps eHealth information secure, protects patient privacy and facilitates better practice of medicine and better outcomes.
Bad Health IT is ill-suited to purpose, hard to use, unreliable, loses data or provides incorrect data, causes cognitive overload, slows rather than facilitates users, lacks appropriate alerts, creates the need for hypervigilance (i.e., towards avoiding IT-related mishaps) that increases stress, is lacking in security, compromises patient privacy or otherwise demonstrates suboptimal design and/or implementation.
The Agency for Healthcare Research and Quality (AHRQ) recently reported that the highest prevalence of medical technology safety issues was related to EHR systems. Even worse, there is a lack of reporting transparency. Harms are known to occur, but their magnitude is admittedly unknown due to systematic impediments to reporting, collection, and analysis, as noted by FDA in a 2010 internal memo and by the IOM itself in its 2012 report on health IT safety. This is unprecedented in modern medicine, violates patients' rights, and under no circumstances should be considered acceptable.
I personally know of adverse patient outcomes including death related to bad health IT that are unreported (even in a state that mandates reporting of medical incidents and serious events), as do numerous colleagues.
The recommendations of a recent Institute of Medicine (IOM) Discussion Paper on health IT safety will not happen without the oversight of Congress. As stated in the paper itself, “Some medical and IT leaders have invested their reputations, and their organization’s time and money, in the software [implementation] program; complaints that expose large problems may not be appreciated or carried forward.”
Some claim safeguards are already in place in the form of HHS certification of health IT.
Unfortunately, the HHS health IT certification guidelines have neither sufficient depth nor the correct focus to distinguish bad health IT from good health IT. Certification for Meaningful Use (MU), for instance, does not include real-world testing for safety, reliability, and usability under real loads in actual clinical settings, and is not very thorough.
On the other hand, NASA, the pharmaceutical industry (via FDA's regulation of pharmaceutical research and manufacturing IT), and others dependent on mission-critical software have rigorous validation procedures to check for such factors, e.g., NASA’s "Certification Processes for Safety-Critical and Mission-Critical Aerospace Software," which includes rigorous testing to distinguish bad IT from good IT and to remediate or abandon the former.
From pp. 6-7 of that document: In order to meet most regulatory guidelines, developers must build a safety case as a means of documenting the safety justification of a system. The safety case is a record of all safety activities associated with a system throughout its life. Items contained in a safety case include the following:
• Description of the system/software
• Evidence of competence of personnel involved in development of safety-critical software and any safety activity
• Specification of safety requirements
• Results of hazard and risk analysis
• Details of risk reduction techniques employed
• Results of design analysis showing that the system design meets all required safety targets
• Verification and validation strategy
• Results of all verification and validation activities
• Records of safety reviews
• Records of any incidents which occur throughout the life of the system
• Records of all changes to the system and justification of its continued safety
Similar processes need to be put in place for healthcare IT as well, but that will require much time and regulatory pressure on the industry. In the absence of truly rigorous testing, though, transparency is essential.
The aforementioned IOM Discussion Paper outlines the creation of a nationwide post-marketing surveillance process; such transparency about health IT usability problems, safety issues, facilitation of billing fraud, and the like is essential. The paper recommends:
• “Flight simulator”-like, thorough laboratory evaluation of test scenarios;
• Point-of-use reporting by doctors and nurses on their experiences;
• Third party–administered doctor and nurse surveys about their experiences with EHR systems;
• Direct clinician-to-public reporting; and
• A formalized system of hazards reporting from EHR systems.
These measures are essential if the technology is to achieve the benefits of which it is theoretically capable but which it is not presently achieving, despite the hundreds of billions of dollars being spent.
In conclusion, I ask you to add to your inquiries the subject of health information technology safety. That includes the need for HHS to develop a robust, transparent national reporting system for safety problems created by the technology, and a system to ensure that bad health IT is either fixed in a timely manner or removed from the marketplace.
Sincerely,
Scot Silverstein, MD
-----------------------------------------------------------------
Scot M. Silverstein, MD
Adjunct faculty in Healthcare Informatics and IT (Sept. 2007-present)
Assistant Professor of Healthcare Informatics and IT, and Director, Institute for Healthcare Informatics (2005-7)
Drexel University
College of Information Science and Technology
3141 Chestnut St., Philadelphia, PA 19104-2875
Email: sms88 AT drexel DOT edu