Applying everyday principles to product development can reduce hazards and increase patient safety.
Engineers developing medical equipment may not have much control over the hazards inherent in the devices they design. X-ray imagers and anesthesiology equipment will always pose some risk to patient safety, and if a design fails, the consequences can be catastrophic. So one of every medical designer's tasks is to reduce both the probability of failure and the severity of its consequences. Almost every design decision can affect these two factors.
Here are some strategies that may help reduce risk and failures in medical equipment. (The same concepts can be applied to all forms of engineering.)
According to some estimates, nearly half of all project costs stem from rework to correct inadequate features or add ones that were left out of the design process. Missing and inadequate requirements also account for an estimated 75% or more of the software bugs in medical equipment. So it is important that the design team start with a list of valid, must-have requirements.
Interviews are the most common way design teams gather requirements. Interviews let all stakeholders quickly provide the various bits of information designers use to construct a list of requirements. For example, doctors can explain traditional methods and the range of measurements or outputs expected from the equipment. Patients can provide feedback on the comfort and convenience of using the device. And healthcare managers should be able to address costs and market scope.
Although interviews are an important first step, they can be useless if pertinent questions aren't put to the right people. Another problem with interviews is that those being interviewed may not know what they want until after seeing and understanding a set of options. This is where prototypes can play a helpful role.
Focus groups are useful because they let people discuss opinions with their peers, and the group is usually reacting to a relatively fleshed-out concept or set of options. However, as with interviews, focus groups may not capture all the requirements needed for a successful device. For example, focus group members typically won't give negative feedback if they believe the moderator is involved with the design. They don't want to hurt anyone's feelings.
Software modeling lets design teams simulate devices so end users can evaluate the controls and outputs. This lets users more fully understand what the designers have in mind so they can give more informed feedback as to what is wrong, what works well, and what might need to be changed before taking on the expense of building a physical prototype. Teams often then take the next step and develop a physical “mock-up” of the controls and display panels so users have a more realistic experience with the latest design.
Functional prototyping, the next logical step after software modeling, involves a working model built using off-the-shelf development tools. It lets users operate the device in its normal environment. Prototypes should use as few custom parts as possible to reduce development costs and time. After all, because the design team is still looking for user feedback and gathering requirements, the design will likely change. There's no need to spend much time refining features that may need significant rework.
Defining the design process
Design inputs and outputs should not only be clearly defined in requirements documents, but these documents should be mapped to source code. This ensures that all requirements are covered by code, and that all code is mapped to requirements. Often, requirements that have not been implemented are simple to detect. It can be more difficult to find implementations not covered by requirements documentation. Such gaps in requirements may result in incorrect assumptions and miscommunications between engineering groups. Well-mapped requirements also ensure traceability so that when a requirement changes, there is a clear mapping to affected source code.
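One lightweight way to keep the mapping between requirements and code visible is to tag each function with the ID of the requirement it implements, so a simple text search can verify coverage in both directions. The sketch below assumes such a convention; the REQ-xxx identifiers and the heart-rate requirements themselves are hypothetical examples, not from the article.

```c
/* Minimal sketch of requirement tagging: each function names the requirement
 * it implements, so a script (even grep) can confirm that every requirement
 * maps to code and every function maps back to a requirement.
 * All REQ-xxx IDs and limits below are illustrative assumptions. */
#include <stdint.h>

/* Implements REQ-101: reject heart-rate readings outside 20..250 bpm. */
int heart_rate_valid(int bpm)
{
    return bpm >= 20 && bpm <= 250;
}

/* Implements REQ-102: report readings as whole beats per minute. */
int heart_rate_round(double raw_bpm)
{
    return (int)(raw_bpm + 0.5);  /* round to nearest for positive inputs */
}
```

A search for "REQ-" then lists every implemented requirement, and any function lacking a tag flags an implementation not covered by the requirements documentation.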
Design reviews are extremely important at all phases of development. Requirements, architecture, and specifications should go through a formal review process, but source code reviews by peers are also essential to produce high quality code. Best practices include having developers walk through the code for an audience of their peers. For lower risk items, tools for static code analysis can also be used for automated code review.
The importance of a smooth design transfer is sometimes overlooked, and miscommunications in this process can be one of the sources of software bugs. Most design teams consist of domain experts responsible for algorithm and concept development as well as implementation engineers responsible for converting the design into a form that can be commercialized. The transition from one team to another is typically done with specification documents, but source code may also be transferred if a prototype has been developed. By using high-level design tools like state charts and other graphical representations of code, design teams can deliver executable specifications used to derive final implementations.
Design changes should be tracked, justified, and validated against the entire system. To ensure small code changes do not have large and unintended effects, design teams should have an automated test suite in place that runs as an acceptance test against any code changes. In addition, regardless of the size of your design team, you should set up source code control systems to track history and changes.
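An acceptance test of the kind described above can be as simple as a set of assertions, one per requirement, run automatically against every code change. The sketch below assumes a hypothetical dose_rate() function and an invented dosing rule purely for illustration.

```c
/* Minimal sketch of an automated acceptance test run after each code change.
 * dose_rate() and its 0.5 mL/hr-per-kg rule are assumptions for this example. */
#include <assert.h>

/* Hypothetical unit under test: infusion rate in mL/hr for a weight in kg. */
double dose_rate(double weight_kg)
{
    return weight_kg * 0.5;
}

/* One check per requirement; cases grow as requirements change. */
void test_dose_rate(void)
{
    assert(dose_rate(80.0) == 40.0);  /* nominal adult weight */
    assert(dose_rate(0.0) == 0.0);    /* boundary: zero weight */
}
```

Wiring a suite like this into the source code control system, so it runs on every commit, is what catches the small change with large unintended effects.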
Selecting system architecture
Another way to reduce risk in medical devices is by choosing system architectures with various layers of redundant protection for the most hazardous elements. For example, designers can choose whether control elements will be carried out by software or hardware. Dedicated hardware is considered more reliable but also more difficult to design for complex tasks. Software can be easier to put in place and update. And software is well suited for features such as networking and data storage. But software bugs can be difficult to identify and correct.
When designing complex digital or mixed-signal hardware, application-specific integrated circuits (ASICs) are commonly chosen for mass-produced devices. They provide the reliability of hardware circuits without the complexities of manufacturing and assembly. However, fabricating ASICs can be prohibitively expensive, so unless mass production is a certainty, use field-programmable gate arrays (FPGAs) instead.
FPGAs have the reliability of ASICs and are almost as easily changed as software. And although unit costs are higher compared to ASICs, overall production costs are lower for most designs. In addition, FPGAs can be repeatedly reprogrammed, making them a good choice for designs with requirements likely to change.
When it comes to executing software, complex code is more likely to contain bugs than simple code. This often makes 8-bit microcontrollers the more reliable choice. These controllers are usually programmed in “C” or “assembly” and almost never run operating systems. Instead, they carry out simple tasks such as updating a display or monitoring buttons. Though they're useful and relatively easy to program, the scope of what 8-bit chips can do is limited by their relatively small memory.
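The "simple tasks" an 8-bit micro handles are typically structured as a polling loop with no operating system. The debounce routine below is a sketch of that style; the sample count and button encoding are assumptions, not from any particular device.

```c
/* Sketch of the polled-I/O style typical of 8-bit firmware: no OS, just
 * small routines called from a main loop. This one debounces a button by
 * requiring several identical samples before accepting a new state.
 * DEBOUNCE_COUNT is an illustrative assumption. */
#include <stdint.h>

#define DEBOUNCE_COUNT 4  /* consecutive identical samples required */

/* Feed one raw sample per loop iteration; returns the debounced state. */
uint8_t debounce(uint8_t raw)
{
    static uint8_t last_raw = 0, stable = 0, count = 0;
    if (raw == last_raw) {
        if (++count >= DEBOUNCE_COUNT) {
            stable = raw;
            count = DEBOUNCE_COUNT;  /* cap to avoid overflow */
        }
    } else {
        count = 0;        /* input changed: restart the count */
        last_raw = raw;
    }
    return stable;
}
```

In a real superloop, the main() would simply call routines like this and a display update forever; the simplicity is exactly what makes such code easy to review and test exhaustively.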
More complex systems often call for cooperative multitasking, communications drivers, and other high-end features. This means they need more powerful processors with more memory. Most often, these systems use 32-bit processors with real-time operating systems (RTOS) containing drivers and middleware like TCP/IP stacks and file systems. But with these features comes more complexity and additional risk of failure. Most designers add watchdog timers and other failure mitigation techniques to detect system failures and then recover gracefully.
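The watchdog pattern mentioned above can be modeled in plain C: the main loop must "kick" the watchdog before a timeout elapses, or a reset is forced. Real designs use a dedicated hardware timer; this software model, with an assumed tick budget, just shows the logic.

```c
/* Software model of a watchdog timer. In hardware, wdt_tick() would be a
 * timer peripheral and the reset request would pull the reset line.
 * WDT_TIMEOUT_TICKS is an illustrative assumption. */
#include <stdint.h>

#define WDT_TIMEOUT_TICKS 100

static uint32_t wdt_counter = 0;
static int reset_requested = 0;

/* The main loop calls this regularly to prove it is still running. */
void wdt_kick(void) { wdt_counter = 0; }

/* Called from a periodic timer tick; requests a reset if the loop hung. */
void wdt_tick(void)
{
    if (++wdt_counter > WDT_TIMEOUT_TICKS)
        reset_requested = 1;
}

int wdt_expired(void) { return reset_requested; }
```

The design choice that matters is where the kick lives: it must sit in the main loop itself, after all critical work, so a hung driver or deadlocked task genuinely stops the kicks and triggers recovery.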
The most complex systems, those that need computationally intense algorithms or extremely rich user interfaces, require desktop-computing capabilities. Although desktop-computing failures like e-mail or browser crashes can be common and inconvenient, users need only reboot the computer and continue to work in most cases. But this is not nearly the level of reliability designers need for critical medical devices. Therefore, if a desktop PC is needed, designers should add hardware that will monitor and correct for failures to minimize patient risk.
For example, a touch-panel display running a desktop operating system can connect via Ethernet to a 32-bit processor running an RTOS. The RTOS checks for failure and adds reliability. Including an FPGA in this signal path would further improve reliability. The FPGA could monitor signals to ensure nothing goes outside the safe and acceptable operating range. With this third layer of protection, simply powering the device ensures outputs remain within ranges specified in hardware.
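The FPGA's job in that last layer amounts to a hard clamp on every commanded output. The C function below is a software model of that logic, not FPGA code; the 12-bit DAC range is an assumed example.

```c
/* Software model of the hardware range monitor: whatever the upper layers
 * command, the output never leaves the range fixed in hardware.
 * The 12-bit limits are illustrative assumptions. */
#define OUTPUT_MIN 0
#define OUTPUT_MAX 4095   /* e.g., a 12-bit DAC code */

int clamp_output(int commanded)
{
    if (commanded < OUTPUT_MIN) return OUTPUT_MIN;
    if (commanded > OUTPUT_MAX) return OUTPUT_MAX;
    return commanded;
}
```

Because this check is synthesized into logic rather than executed as instructions, no software fault in the PC or RTOS layers can bypass it.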
Focusing on verification and validation
Although designers need to test all aspects of their code, they should focus their most rigorous software testing on high-risk areas. High-risk code can be identified several ways. Code complexity analysis, for example, can help determine which code is statistically most likely to fail. When coupled with code coverage tools, it ensures that all paths of the most complex code are tested. In addition, coding situations identified as high risk for failure should undergo the most rigorous testing. Some high-risk areas, for example, concern user interfaces (keys pressed too quickly), kernel-driver data transfers (buffer over and under flows), data conversions (pointer casts and loss of precision), and multithreaded portions of code.
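Two of those high-risk patterns, lossy data conversions and buffer overflows, have simple defensive idioms. The sketches below are generic examples, not from the article's devices.

```c
/* Defensive idioms for two high-risk patterns: a narrowing conversion that
 * detects loss of precision, and a copy that cannot overflow its buffer. */
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Stores the value and returns 0 if it fits in 16 bits; returns -1 on loss.
 * A plain (int16_t) cast would truncate silently. */
int narrow_to_i16(int32_t in, int16_t *out)
{
    if (in < INT16_MIN || in > INT16_MAX)
        return -1;
    *out = (int16_t)in;
    return 0;
}

/* Copies at most dst_size bytes; returns bytes copied, never overflows dst. */
size_t bounded_copy(uint8_t *dst, size_t dst_size,
                    const uint8_t *src, size_t n)
{
    size_t count = n < dst_size ? n : dst_size;
    memcpy(dst, src, count);
    return count;
}
```

Tests for such routines should hammer exactly the boundaries a coverage tool flags: values one past each limit, zero-length copies, and sources larger than the destination.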
Good designers reuse parts of the design process in validating and verifying code. The simplest way to do this is to construct the test based on the requirements documents rather than the code. In fact, it is best if someone outside the design team puts together the tests. Models or prototypes developed during the design process can be used as comparison for acceptance tests. Furthermore, any models used to design algorithms can be used in a hardware-in-the-loop (HIL) setup to serve as a verification tool for device acceptance.
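Using a design-phase model as the comparison in an acceptance test can look like the sketch below: the production code is run in lockstep with the reference model and must track it within a tolerance. Both filters here are illustrative single-pole smoothers invented for the example; in practice the "device" side would be the real implementation, possibly fixed point or running on hardware in a HIL rig.

```c
/* Sketch of model-based acceptance: drive the reference model and the
 * production implementation with the same input and require agreement
 * within a tolerance. Both filters are illustrative assumptions. */
#include <math.h>

/* Reference model from the design phase. */
double model_filter(double prev, double input)
{
    return prev + 0.25 * (input - prev);
}

/* Production implementation (identical here; real code may differ). */
double device_filter(double prev, double input)
{
    return prev + 0.25 * (input - prev);
}

/* Returns 1 if the device tracks the model within tol over n steps. */
int acceptance_pass(int n, double tol)
{
    double m = 0.0, d = 0.0;
    for (int i = 0; i < n; i++) {
        m = model_filter(m, 1.0);   /* step input */
        d = device_filter(d, 1.0);
        if (fabs(m - d) > tol)
            return 0;
    }
    return 1;
}
```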
Editor's note: This article first appeared in the Oct. 8 issue of Machine Design, a sister publication of Medical Design.
WHY FPGAS ARE EASIER TO VALIDATE THAN MICROPROCESSORS
When developing an embedded medical device, validating and verifying it can take longer than the time it took to develop the firmware. And even after testing each component, a completed microprocessor-based device needs to be put through extensive testing to demonstrate the safety of the device as a whole. This is necessary because seemingly independent subsystems in software can conflict and cause catastrophic failures through common bugs like resource contention and race conditions. That's because a processor can execute only one instruction at a time, so resources such as memory, peripherals, and registers must be shared by several processes to handle any type of multi-tasking between parallel processes.
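The "lost update" at the heart of many race conditions can be shown deterministically by spelling out one bad interleaving of two tasks sharing a counter. This is a hand-written illustration of the hazard, not code from any device.

```c
/* Deterministic illustration of a race condition: two tasks each perform a
 * read-modify-write on a shared counter, and one bad interleaving loses an
 * update -- the hazard that arises when a single processor time-slices
 * between tasks sharing a resource. */
int lost_update_demo(void)
{
    int shared = 0;
    int a = shared;   /* task A reads 0 */
    int b = shared;   /* task B preempts and also reads 0 */
    shared = b + 1;   /* task B writes 1 */
    shared = a + 1;   /* task A resumes and writes 1: B's update is lost */
    return shared;    /* 1, though two increments ran */
}
```

On a processor this interleaving depends on timing, which is why the bug appears only intermittently; in FPGA logic the two updates would occupy separate registers and the conflict cannot occur.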
On FPGAs, however, independent subsystems are truly independent because true parallelism is possible. Each tick of the clock can latch many parallel registers and execute many paths of combinatorial logic. Therefore, tested FPGA code is traditionally deemed more reliable than tested processor code. OptiMedica Corp. discovered this when it developed an FPGA-based photocoagulator. Management found that FPGA chips provide the reliability of hardware and do not require the same level of code review as processor-based devices when obtaining FDA approval.
FDA's 21CFR Part 820 outlines design controls that must be followed by companies designing medical devices. For more information, visit http://tiny.cc/EFkg6.