We have reached the third and final column on the breach risks to personal health information (PHI), specifically as they pertain to the growing threat of attacks on medical devices and systems. (Click here for Part 1 and Part 2 of this series.) Healthcare has entered a perfect storm of security risk. As technology evolves and more devices are connected, the points at which PHI is collected and stored are multiplying rapidly. But healthcare as an industry is ill-prepared: many providers consider software security a low priority, and the smaller units in the information chain--doctors, diagnostic centers, and clinics--lack the knowledge and resources to address security properly.
When the full range of issues facing healthcare is considered, an already complex and confusing picture becomes more complicated still. Drawing on my work assessing various types of medical devices, I have catalogued these issues as follows:
* Cryptographic problems
* Operational issues with device lifecycle
* Communications security
* Authentication and authorization issues
* Software update issues
* Lack of obfuscation controls
* Physical and platform security issues
In my previous two columns, we discussed the subject writ large and then addressed the issues individually, working through the list above as far as communications security. In this column we will address the remaining issues.
Authentication and authorization issues are a classic mix of the human element and poor implementation. If you have been to a hospital lately, your eyes are quickly drawn to the number of identification and security cards attached to each employee. Establishing identity is clearly a complex problem for healthcare providers, and it extends to the security of medical devices. The need to authenticate a user to a device is often viewed as an inhibitor to efficiency or, by some users such as doctors, as a nuisance. In cases in which a device sits in an emergency treatment room where multiple people need access, requiring each one to authenticate is impractical.
Of course, the previous points assume there is any physical security on the device at all; in my experience, a number of older devices with no physical controls are still in circulation. Devices in areas perceived to be more secure, such as surgical suites, are often assumed to be protected by their location alone. In other instances, authentication is compromised by poor implementation. For example, I have encountered cases in which device serial numbers--stamped on the device itself--were used as the authenticators.
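The serial-number case illustrates the implementation gap. The contrast below is a minimal sketch, not any vendor's actual code: the weak check mirrors the pattern described above, where the "secret" is printed on the chassis, while the stronger check uses a per-device random secret verified against a salted hash in constant time. All function names are illustrative.

```python
import hashlib
import hmac
import secrets

# Weak pattern seen in the field: the authenticator is the device's own
# serial number, which is stamped on the chassis and therefore known to
# anyone with physical access.
def weak_login(serial_on_label: str, supplied_code: str) -> bool:
    return supplied_code == serial_on_label  # trivially defeated

# Stronger pattern: a per-device random secret provisioned at manufacture.
# The device stores only salt||digest, never the secret itself, and the
# comparison is constant-time to resist timing probes.
def provision_device() -> tuple[bytes, bytes]:
    secret = secrets.token_bytes(32)                  # issued to the operator
    salt = secrets.token_bytes(16)                    # stored on the device
    digest = hashlib.pbkdf2_hmac("sha256", secret, salt, 100_000)
    return secret, salt + digest                      # device keeps salt||digest

def strong_login(stored: bytes, supplied_secret: bytes) -> bool:
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", supplied_secret, salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

The point of the sketch is not the specific primitives but the property: the authenticator must be secret, per-device, and never recoverable from the device itself.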
Software update issues are a microcosm of the larger problem. Many medical devices were designed and built when security was not a central concern, or when the advanced controls we know today were not technically feasible. Because many pre-date network connectivity, they were not designed to be updated in a manner that is practical and cost-effective. Some devices have been in service for decades and run system software that is no longer supported by its vendors. Finally, healthcare providers do not make software updates a priority for their already over-utilized IT groups, and smaller providers simply lack the knowledge to execute such updates even when they are available.
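A practical update path also has to be a safe one: a device that accepts any image offered to it trades one risk for another. The sketch below, under my own assumptions rather than any manufacturer's design, shows the minimal shape of update verification. Production devices would use asymmetric signatures (e.g. ECDSA) so no signing secret lives on the device; an HMAC stands in here only to keep the example dependency-free.

```python
import hashlib
import hmac

# Illustrative key: in a real design the device would hold only a public
# verification key, never signing material.
MANUFACTURER_KEY = b"demo-key-not-for-production"

def sign_update(image: bytes) -> bytes:
    """Manufacturer side: tag the firmware image."""
    return hmac.new(MANUFACTURER_KEY, image, hashlib.sha256).digest()

def apply_update(image: bytes, tag: bytes) -> bool:
    """Device side: verify before flashing; reject anything tampered."""
    expected = hmac.new(MANUFACTURER_KEY, image, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False  # unsigned or modified image: refuse to install
    # ...flash the verified image here...
    return True
```

Even legacy devices retrofitted with connectivity need at least this gate: verify first, install second.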
My experience has shown that medical devices rarely apply obfuscation controls, which instrument or otherwise transform binary code to create barriers against reverse-engineering attacks. Common obfuscation controls include anti-debugging, tamper-proofing, and white-box cryptography. To date, device manufacturers have been slow to adopt such controls into their security regimes, leaving devices open to these attacks. Obfuscation adds a further barrier to compromise and, as devices are increasingly connected to the network, another layer of protection against a common attack vector.
In the majority of the assessments I have performed, physical and platform security issues are abundant and readily exploitable. These vulnerabilities go back to the common theme of this series: the devices were not designed with security as a priority. Physical access to a medical device is rarely difficult to obtain and requires minimal skill. Once accessed, the device usually yields easy paths to exploitation, such as Joint Test Action Group (JTAG) ports, a common access point for hardware attacks. It is also common to encounter unhardened operating systems and insecure boot processes. Quality control is another factor and can inadvertently create unlikely attack vectors: in one instance, analysis discovered a keyboard device driver installed on a medical device that had no need for it. The driver--an unnecessary component--could be used as an exploit point to access PHI.
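The unnecessary-driver case points to a simple discipline: audit the device image against an allowlist of what the device actually needs, and strip everything else before shipping. A minimal sketch, with all component names hypothetical:

```python
# Build-time attack-surface audit: anything installed but not required
# is surface an attacker can probe, like the stray keyboard driver
# described above.
REQUIRED = {"sensor_daemon", "display_driver", "network_stack"}

def audit(installed: set[str]) -> set[str]:
    """Return components present in the image but absent from the allowlist."""
    return installed - REQUIRED

image = {"sensor_daemon", "display_driver", "network_stack", "keyboard_driver"}
extras = audit(image)  # flags 'keyboard_driver' for removal before release
```

The same inventory-and-allowlist habit applies to open ports, debug interfaces such as JTAG, and boot-time services: each item either justifies its presence or comes out of the build.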
Ultimately, the issues and challenges discussed in this series extend beyond patient confidentiality into the far more serious realm of patient safety. The same techniques used to access confidential data could be used to alter the performance of devices, with potentially harmful consequences; that risk is precisely why I am confident progress is being made to tackle the problem.
While healthcare faces some unique challenges, the majority of the issues we have discussed have been effectively addressed in other industries, and there is no reason to believe they can’t be addressed in healthcare. As the sophistication and connectivity of medical devices continue to evolve, device manufacturers are stepping up to the challenge. The human element that is so prominent in this conversation may prove to be much more of a limiting factor, but increasing enforcement of HIPAA regulations will undoubtedly force change.