Interview: Hamid Motallebzadeh of Mass General Brigham on Multi-Physics and Multi-Scale Biomechanics of Auditory Systems and Artificial Intelligence in Medicine

Posted on 16 June 2022 by Jessica James


We recently had the chance to catch up with Dr. Hamid Motallebzadeh of Harvard Medical School and Mass Eye and Ear at Mass General Brigham, to learn about his fascinating work in computational biomechanics and the ear. Hamid is a long-term user of Simpleware software who has found success in creating models from anatomical data for a range of different applications. From talking to Hamid about his research, it’s clear that his work has great potential, from improving the effectiveness of implants to applying AI to the study of the ear.

Hamid Motallebzadeh, PhD
Harvard Medical School and Mass Eye and Ear

Can we begin by you telling us a bit about yourself and your career to date?

I am Hamid Motallebzadeh, PhD, an instructor in Otolaryngology–Head and Neck Surgery (OHNS) at Harvard Medical School, and an investigator at Massachusetts Eye and Ear. My background is in computational biomechanics, and my work focuses on acoustics, hearing mechanics, and newborn hearing screening. Specifically, my research approach includes multi-physics and multi-scale biomechanics of auditory systems and artificial intelligence in medicine.

What are the main questions your research is tackling?

We develop computational models to explain experimental data, as well as to interpret and predict system behavior under circumstances that cannot be studied experimentally due to measurement limitations. My current project, supported by the NIH (National Institutes of Health), involves generating synthetic datasets from numerical simulations, particularly finite-element models, that can be used to train machine-learning algorithms to infer middle-ear status.

How did you first start using Simpleware? What does the software help with?

To develop finite-element models, the first step is to build a geometry of the system of interest. The geometry of auditory systems is usually reconstructed by segmenting CT or micro-CT images. When I started my postdoctoral program at Harvard Medical School in 2016, I evaluated several software packages for performing segmentation on clinical images. Simpleware was one of these packages, and because of its advanced functionality, user-friendly environment, and great customer service, my group decided to purchase a license. Since then, we have been using it to reconstruct models of biological systems, not only for computational simulations but also for geometrical analysis and illustrations.

Human middle-ear model developed using Simpleware software.

Can you tell us more about the head model you created and why it is important?

A full head model is one of our ongoing projects, and we plan to conduct several studies with it, including work on the bone-conduction hearing pathway and on hearing implants and their interaction with the surrounding bone. Due to the complex wave propagation modes through the skull and inter-subject variability, systematic experimental measurements of this nature are challenging, if not impossible. In addition, the active bone modelling and remodeling process cannot be investigated on cadaveric specimens, so finite-element modeling is a promising approach for studying the long-term, active osseointegration process of bone-anchored hearing implantation.

What opportunities does a model like this open up? What could it be used for in your research?

I have designed several studies that involve using the finite-element model of the full head. One is to map the 3D vibrational transmission patterns through the skull. The long-term goal is to design more effective implants and surgical implantation procedures. Another project is to monitor and quantify the osseointegration process non-invasively after surgery, identifying when bone-implant integration is complete so that the external processing unit can be attached. This study employs computational models of mechanostat theories of bone modelling and remodeling at high-frequency vibrations, adapted specifically for hearing implants.
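Mechanostat-type theories, in broad strokes, drive bone remodeling by comparing a local mechanical stimulus against a physiological setpoint, with a "lazy zone" in which no adaptation occurs. The sketch below is a generic, hypothetical illustration of such an update rule (a Huiskes-style density evolution on per-element strain energy density), not the specific formulation Dr. Motallebzadeh's group uses; all names, values, and units here are illustrative assumptions.

```python
import numpy as np

def remodel_step(rho, sed, setpoint=0.004, lazy=0.5, rate=1.0, dt=1.0,
                 rho_min=0.01, rho_max=1.8):
    """One generic mechanostat-style remodeling update (illustrative only).

    rho : bone density per element (g/cm^3)
    sed : strain energy density per element (MPa), e.g. from an FE solve
    """
    stimulus = sed / rho                  # density-normalised stimulus
    lo = (1.0 - lazy) * setpoint          # lower lazy-zone bound
    hi = (1.0 + lazy) * setpoint          # upper lazy-zone bound
    drho = np.zeros_like(rho)
    over = stimulus > hi                  # over-stimulated -> bone apposition
    under = stimulus < lo                 # under-stimulated -> bone resorption
    drho[over] = rate * (stimulus[over] - hi)
    drho[under] = rate * (stimulus[under] - lo)
    return np.clip(rho + dt * drho, rho_min, rho_max)

# Three elements: over-, within-, and under-stimulated relative to the setpoint
rho = np.array([1.0, 1.0, 1.0])
sed = np.array([0.010, 0.004, 0.001])
rho_new = remodel_step(rho, sed)   # density rises, holds, and falls respectively
```

In a real study this update would be iterated, with the FE model re-solved between steps as the density (and hence stiffness) field changes around the implant.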

Vibrational pattern of normal human middle-ear model at 200 Hz.

Where do you see your research heading in the next few years? Are there any particular challenges you’re anticipating?

My current focus is on developing finite-element models that include inherent anatomical variations. The idea is to mimic normal and pathological conditions of the middle ear and generate a large dataset of synthetic data. This dataset is then used to train machine-learning algorithms to infer middle-ear status from clinical data. There are a number of challenges in developing a population of 3D middle-ear geometries. Fine structural characteristics such as eardrum thickness are on the order of the voxel dimensions of the μCT images, so segmentation is usually done manually, which is time-consuming and depends on the experience of the modeler.
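The simulate-then-train pipeline described here can be sketched in miniature. In the real workflow, each training row would be a feature vector (e.g., simulated acoustic responses) produced by one finite-element solve of a middle-ear geometry; the toy version below simply draws two synthetic clusters for "normal" and "pathological" ears and fits a nearest-centroid classifier. Every name and number here is an assumption for illustration, not the group's actual features or model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for FE-derived features: 8 numbers per simulated ear.
# In practice these might be, e.g., wideband absorbance values at several
# frequencies; here they are just two well-separated Gaussian clusters.
n, d = 200, 8
normal = rng.normal(loc=0.6, scale=0.05, size=(n, d))     # "normal" class
pathological = rng.normal(loc=0.3, scale=0.05, size=(n, d))
X = np.vstack([normal, pathological])
y = np.array([0] * n + [1] * n)

# Train: one centroid per class, computed from the synthetic dataset.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    # Assign a (possibly clinical) measurement to the nearest class centroid.
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

train_acc = np.mean([predict(x) == label for x, label in zip(X, y)])
```

The point of the synthetic-data approach is exactly this substitution: labeled training rows come cheaply from simulation, while the trained model is applied to measured clinical data where labels are scarce.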

The interfaces between different components are often unclear, and automatic segmentation algorithms do not work properly on them. Another challenge is importing the geometry into the finite-element software. Although Simpleware has powerful controlled meshing capabilities for generating compatible meshes, generating CAD geometries from meshes inside the finite-element environment is always challenging. I look forward to seeing machine-learning methods implemented in Simpleware to perform segmentation, as well as capabilities for developing parameterized geometries, in the next versions of Simpleware.
