I recently returned from a workshop on Computer Methods for Cardiovascular Devices sponsored by the Food and Drug Administration, the National Heart, Lung, and Blood Institute, and the National Science Foundation. The workshop gave an audience of regulatory, academic, and industrial professionals a chance to catch up on state-of-the-art trends and exchange ideas about the use of computational methods to support regulatory filings for medical devices. Surprisingly, a general theme emerged for me during the workshop: We are not providing the FDA with adequate validation of our computational models!
For years I’ve helped companies demonstrate the safety and effectiveness of their products. I’ve written several FEA reports that have been reviewed and accepted by the agency—including cases where we’ve argued to forgo expensive and time-consuming durability testing in favor of computational results to support safety claims. It has been my experience that the FDA is very open to such an approach, provided there is an adequate demonstration of the validity of the FEA models.
However, from what I heard at the workshop, the typical submission of FEA results does not include adequate validation. It is not clear to me whether this problem stems from companies not knowing how to perform validation, or not knowing what data to provide for validation of their FEA models. Maybe companies are reluctant to share data that the FDA has not specifically asked for, or maybe they have unreasonable expectations about how much physical testing computational models can replace. Whatever the reason, it is clear that if we want to leverage FEA to streamline the development and regulatory approval process, we need to take a proactive role in demonstrating how well our models describe our products.
In more than 10 years of experience in the field, I have yet to run across a device or a specified test or loading scenario that I could not analyze using Abaqus FEA software and achieve excellent agreement between experiment and computer simulation. Many times the endeavor to match experiment and analysis reveals critical insight into the mechanics of the product involved or nuances associated with the loading conditions that lead to important improvements. With advanced contact, strong nonlinear capabilities, and the extensibility of user subroutines, Abaqus provides a platform to model almost any physical scenario—giving the engineer and product designer a highly capable toolkit for validating any device.
As open and receptive as the FDA may be, we engineers need to establish the validity of our computational models, and we need to do so BEFORE we submit results to the FDA. This effort needs to begin early in the development process—before we make decisions based on computational data. Otherwise, how can we expect the FDA to accept that our results have emerged from a rigorous engineering methodology?
How much and what type of validation is necessary in any given case depends on how a model is going to be used. Conversely, the confidence we have in a computational model depends on how extensively it has been applied and shown to agree with reality. There are numerous opportunities during the development process to establish the validity of our computational models. Radial force testing of different stent designs, for example, provides an excellent opportunity to confirm our model’s ability to predict reality.
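A comparison like the radial force example above ultimately comes down to quantifying how closely simulated curves track bench data. The sketch below shows one simple way to do that—computing an RMS error and a maximum percent deviation between measured and simulated radial forces sampled at the same crimp diameters. The function name and the data values are hypothetical, purely for illustration; an actual validation protocol would define its own acceptance metrics.

```python
import math

def validation_metrics(measured, simulated):
    """Compare experimental and simulated radial-force curves
    sampled at the same crimp diameters (hypothetical data).

    Returns (rms_error, max_percent_deviation)."""
    errors = [s - m for m, s in zip(measured, simulated)]
    rms = math.sqrt(sum(e * e for e in errors) / len(errors))
    max_pct = max(abs(e) / abs(m) * 100 for m, e in zip(measured, errors))
    return rms, max_pct

# Hypothetical radial-force readings (N) at five matching diameters
bench = [1.2, 2.5, 4.1, 6.0, 8.4]   # bench-top test
fea   = [1.3, 2.4, 4.3, 5.8, 8.9]   # FEA prediction

rms, max_pct = validation_metrics(bench, fea)
print(f"RMS error: {rms:.3f} N, max deviation: {max_pct:.1f}%")
```

Whatever metric is chosen, the key is that the acceptance criterion is stated before the comparison is run, so the validation exercise is a genuine test of the model rather than a retrospective justification.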
The time is right for advancing the use of computational models for demonstrating the safety of our products. We need to be proactive and utilize models that are well grounded in experimental data. How far we are able to leverage these results with the FDA will depend on how good a job we do of convincing them that the results represent actual experience.