Custom CAE code numerically defines the human
tongue to optimize a surgical implant.
Numerical CAE methods can help optimize complex medical devices such as implants intended to reduce the effects of sleep apnea. The tongue is the most complex muscle system in the human body; its behavior comprises a multitude of dynamic, intricately interconnected physiological phenomena. Simulations can help device developers see how the tongue generates loads and how an implant surgically placed in the tongue will affect a patient's speech and swallowing.
In a recent project, our team used several software packages to generate a numerical model of the human tongue. Our goal was to simulate the dynamics of sleep apnea and optimize the implant design. We preprocessed the model with the NURBS surface modeler Rhinoceros 3D (Rhino) and the CAE program MSC.Patran, solved with MSC.Marc, and postprocessed with CEI EnSight as well as MSC.Patran.
First, some background is needed. Sleep apnea is caused by the soft tissue of the tongue collapsing during sleep and blocking the airway. Left untreated, it can cause serious health problems including high blood pressure, weight gain, impotence, and headaches. An estimated 12 million Americans suffer from sleep apnea, with another six to eight million remaining undiagnosed. It primarily strikes overweight males over 40, but can affect anyone at any age.

Devising a numerical model of the human tongue was difficult at best. The tongue contains thousands of individual muscle fibers, each of which can weave through the others (referred to as interdigitation) and activate in vastly different manners. This physiology is what allows the tongue to move in such complex ways compared with the standard muscle pair combinations found in the body, such as the biceps and triceps. In addition, the tongue muscles connect to multiple other anatomical structures such as the hyoid bone, jaw, and skull. Adding to this complexity are tongue contacts, which can occur with the hard and soft palate and other components of the upper airway.
Because of this complexity, as well as the fundamental limitations of computational analysis, we approached the problem from a macroscopic viewpoint. Modeling each muscle fiber, and the microscopic details of how tongue muscles slide along one another, would have been computationally infeasible.
Mechanically speaking, muscles contract due to a stress generated across them. So, mathematically, it became necessary for us to induce a stress in an activated muscle element that would subsequently result in forces, and then displacements, of that element. (Recall that stress is defined as a force over a given area. Larger muscles, therefore, have greater area and are stronger, or able to exert greater force.) This general idea is fundamentally simple enough. The tricky part with the tongue, however, was developing a tractable method to induce stresses in the mesh of a multi-interdigitating muscle model so that the virtual tongue would move in a way that corresponds with biophysical reality.
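The stress-to-force relationship described above can be sketched in a few lines. This is illustrative only; the numbers and function names are hypothetical, not values from the actual tongue model.

```python
# Illustrative only: relation between activation stress, muscle
# cross-sectional area, and the resulting contractile force.
# All numbers are hypothetical.

def contractile_force(activation_stress_pa: float, area_m2: float) -> float:
    """Force (N) = stress (Pa) x cross-sectional area (m^2)."""
    return activation_stress_pa * area_m2

# A larger muscle (greater cross-sectional area) exerts a greater
# force at the same activation stress.
small = contractile_force(100e3, 1e-4)   # 100 kPa over 1 cm^2
large = contractile_force(100e3, 4e-4)   # same stress, 4x the area
```

In the simulation this arithmetic runs in reverse: a stress is induced in each activated element, and the solver resolves the resulting forces and displacements.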
Our first step was to build the basic tongue geometry. We imported scan data into Rhino and then tweaked it to the final shape. Because the model was symmetric about the sagittal plane, we could simplify matters by modeling one-half of the tongue. We also built the main individual tongue muscles in Rhino using data from sources such as the Visible Human Project, scans, slices, and even best guesses based on years of expert experience. Additionally, we created skeletal geometry to act as constraints. The appropriate surfaces were stitched together to get the prerequisite solid model for meshing.
Next, we imported the Rhino 3D CAD model into MSC.Patran and created glued contacts as boundary conditions. (The jaw is a fixed, rigid body; the tongue nodes are "glued" to that surface.) Another boundary condition was a sliding surface. The tongue is not entirely fixed, so we created a so-called "chin" for the meshed nodes to slide on; it comprised soft tissue running from the jaw to the throat. We also connected the back of the tongue to the skull.
We then used the Patran automated hex mesher to generate the finite element representation of the entire tongue volume. It comprised 79,491 elements and 88,107 nodes. We then grouped muscles together as chunks of already existing elements via custom PCL code (the Patran command language), which visited each element in the full mesh and determined whether its centroid lay within the muscle's solid geometry.
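The centroid-containment test is straightforward to express. The sketch below re-states the logic in Python rather than PCL; the `inside_muscle` box test stands in for Patran's point-in-solid query, and the element data are hypothetical.

```python
# Sketch of the muscle-grouping step, re-expressed in Python rather
# than PCL. The axis-aligned box test is a stand-in for Patran's
# point-in-solid query against the actual muscle geometry.

def centroid(node_coords):
    """Average of an element's node coordinates (x, y, z tuples)."""
    n = len(node_coords)
    return tuple(sum(c[i] for c in node_coords) / n for i in range(3))

def inside_muscle(point, box_min, box_max):
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))

def group_elements(elements, box_min, box_max):
    """Return IDs of elements whose centroid lies within the muscle solid."""
    return [eid for eid, nodes in elements.items()
            if inside_muscle(centroid(nodes), box_min, box_max)]

# Two elements described by a few corner nodes (illustrative data)
elements = {
    1: [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)],
    2: [(5, 5, 5), (6, 5, 5), (5, 6, 5), (5, 5, 6)],
}
muscle_group = group_elements(elements, (0, 0, 0), (2, 2, 2))  # -> [1]
```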
Given the way our model was constructed, it was possible for an individual finite element to reside within multiple muscles. In fact, some elements were contained within as many as six different muscle groups. Conceivably, then, these elements could be activated simultaneously in six different directions, depending on the fiber direction of each muscle as well as how strongly each particular muscle is firing.
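One way to picture an element shared by several muscles is as a superposition of per-muscle contributions, each scaled by that muscle's firing level. The simple vector-sum rule below is an assumption for illustration, not the combination rule actually used in the solver.

```python
# Illustrative assumption: an element inside several muscles sees
# a net activation equal to the sum of per-muscle contributions,
# each along that muscle's fiber vector, scaled by firing level.

def combined_activation(contributions):
    """contributions: list of (firing_level, (vx, vy, vz)) pairs.
    Returns the net activation vector for the element."""
    net = [0.0, 0.0, 0.0]
    for level, vec in contributions:
        for i in range(3):
            net[i] += level * vec[i]
    return tuple(net)

# Element shared by two muscles pulling in different directions
net = combined_activation([
    (0.8, (1.0, 0.0, 0.0)),
    (0.3, (0.0, -1.0, 0.0)),
])
```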
To account for muscle activation on a macroscopic level, we created the concept of a "fiber block," a six-sided, tri-parametric solid that enclosed each muscle group. The fiber block has its own coordinate space. (Imagine a Cartesian coordinate system that has been warped and stretched to encompass the more complex piece of geometry representing the actual muscle.) Custom PCL scripts were again used to partition the block into continuously smooth "fiber curves," which were subsequently used to generate fiber vectors that closely matched how tongue fibers orient biologically. The vectors are evaluated at the centroid of each element in the muscle and used to define the direction in which that muscle is activated.
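The step from a fiber curve to a fiber vector is essentially taking a unit tangent at the point nearest the element centroid. The sketch below uses a hypothetical circular arc in place of a real fiber curve from the warped tri-parametric block.

```python
import math

# Sketch of deriving a fiber vector from a smooth fiber curve.
# The quarter-arc below is a hypothetical stand-in for one of the
# "fiber curves" partitioned out of a fiber block.

def fiber_curve(s):
    """Point on a fiber curve at parameter s in [0, 1]."""
    theta = s * math.pi / 2
    return (math.cos(theta), math.sin(theta), 0.0)

def fiber_vector(s, h=1e-5):
    """Unit tangent to the fiber curve: the local activation direction."""
    p0, p1 = fiber_curve(s - h), fiber_curve(s + h)
    d = [(b - a) / (2 * h) for a, b in zip(p0, p1)]
    norm = math.sqrt(sum(x * x for x in d))
    return tuple(x / norm for x in d)

v = fiber_vector(0.5)  # activation direction at the curve midpoint
```

In the real model, each element's centroid is mapped into the fiber block's coordinate space and the tangent is evaluated there.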
A closer look at Genioglossus
Genioglossus is a large, fan-shaped muscle located in the center of the tongue. When activated, its fibers pull the tongue down toward the chin. It is a major contributor to speech and swallowing, and is one of the muscles most affected by a surgical implant. For these reasons, we single out this muscle here, but it is important to note that the same concepts were used to define muscle activation for each of the 13 muscles present in our FEA model.
So, with the fiber vectors in place for each muscle element telling it the direction in which it is to contract, it was then necessary to define for the solver the spatial and temporal distribution of activation. In other words, where, when, and how much each muscle is being activated. After considering more complex approaches, we found that a basic Gaussian, or bell curve, equation was the most intuitive to the user and provided all the capability and flexibility needed to define an activation profile.
The equation was structured in such a way that the peak and width of the bell were easily spatially controllable in the tri-parametric muscle coordinate system. Marc TABLE inputs could then be used to modify the bell curve parameters through time, thus creating the temporal effect needed to mimic biological activation profiles.
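The bell-curve activation profile can be sketched as follows. The parameter names (`center`, `width`, `peak`) and the linear ramp are illustrative assumptions; in the actual workflow, Marc TABLE inputs scaled the bell-curve parameters through time.

```python
import math

# Sketch of a Gaussian activation profile. `s` is position along
# the muscle in its fiber-block coordinate; `center`, `width`, and
# `peak` play the role of the bell-curve parameters that Marc TABLE
# inputs would scale through time. All names are illustrative.

def activation(s, center, width, peak):
    """Activation level at fiber coordinate s (Gaussian bell curve)."""
    return peak * math.exp(-((s - center) ** 2) / (2 * width ** 2))

def activation_at_time(s, t, ramp_time=0.1):
    """Temporal modulation: ramp the peak up linearly, then hold."""
    peak = min(t / ramp_time, 1.0)
    return activation(s, center=0.5, width=0.15, peak=peak)
```

Moving `center` along the muscle sweeps the activation spatially, while shrinking `width` localizes it; this gives the spatial and temporal control described above with only a handful of parameters.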
To communicate the complex activation profiles between the preprocessor (Patran) and the solver (Marc), a custom-built fiber definition file was generated that identified such items as element ID, muscle group containment, and fiber vector.
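A handoff file of this kind could be as simple as one record per element-muscle pairing. The field layout below (element ID, muscle name, fiber vector components) is a hypothetical sketch; the project's actual file format is custom and not documented here.

```python
# Hypothetical sketch of the fiber definition file handoff between
# Patran and Marc. The field layout is assumed for illustration.

def write_fiber_file(path, records):
    """records: list of (element_id, muscle_name, (vx, vy, vz))."""
    with open(path, "w") as f:
        for eid, muscle, (vx, vy, vz) in records:
            f.write(f"{eid} {muscle} {vx:.6f} {vy:.6f} {vz:.6f}\n")

write_fiber_file("fibers.dat", [
    (1, "genioglossus", (0.0, -0.8, 0.6)),
    (2, "genioglossus", (0.1, -0.7, 0.7)),
])
```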
The constitutive material used for the tongue was implemented via a standard Ogden model. It is a quasi-incompressible, elastomeric model for rubber-type materials that is often used to represent tissue in biomedical simulations. The model defines the stress/strain curve of a particular tissue. We implemented the muscle activations through the HYPELA2 user subroutine. User subroutines are a vehicle Marc provides that allows advanced solver manipulation through direct Fortran coding.
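For readers unfamiliar with the Ogden model, one common incompressible form of its strain-energy function is W = Σp (μp/αp)(λ1^αp + λ2^αp + λ3^αp − 3), where the λi are principal stretches. The sketch below evaluates it for a uniaxial stretch; the coefficients are illustrative, not the tissue parameters used in the tongue model.

```python
# One common (incompressible) form of the Ogden strain-energy
# function, evaluated for a uniaxial stretch. The mu/alpha
# coefficients below are illustrative, not tissue parameters
# from the actual tongue model.

def ogden_energy(stretches, mu, alpha):
    """Strain energy density for principal stretches (l1, l2, l3)."""
    return sum(
        (m / a) * (sum(l ** a for l in stretches) - 3.0)
        for m, a in zip(mu, alpha)
    )

def uniaxial_stretches(lam):
    """Incompressible uniaxial stretch: l1 = lam, l2 = l3 = lam^-0.5."""
    return (lam, lam ** -0.5, lam ** -0.5)

w_rest = ogden_energy(uniaxial_stretches(1.0), mu=[0.02], alpha=[8.0])
# Energy is zero in the undeformed state and grows with stretch.
```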
The final solved models revealed how Genioglossus can be activated along its fan-shaped pattern to create motions critical to speech and swallowing. Under the hood, the custom code induces internal stresses, which in turn produce forces that ultimately result in macroscopic displacements of the full tongue body. Speech therapists find this model particularly useful because, in reality, it is impossible to have a subject activate only a single muscle in their tongue. Simulated study of individual muscles therefore provides researchers with insight they would not normally have available in a standard physical laboratory environment.
The final finite element model contained 13 different muscles whose activation could be individually simulated through space and time. Such simulations aided the development of a sleep apnea implant by helping developers understand what happens when someone says a common vowel pattern, such as "i-u", or makes a swallowing motion, with and without the device. In turn, this helped optimize the device. Results were validated against tagged MRI and ultrasound data.
The author acknowledges these key individuals in the modeling project:
Maureen Stone, speech scientist at the University of Maryland Dental School, for important data including MRIs from physical studies for test correlation.
Reiner Wilhelms-Tricarico of Haskins Laboratories, for serving as the project mathematician and sharing his knowledge of the different mathematical approaches to muscle modeling.
Paul Buscemi, senior director of study development at WuXi AppTec, Inc., for adding geometrical tongue definitions.