Explainable AI for reliable computer assisted surgery
Mentored by Ronald Tetzlaff, Stefanie Speidel
at Chair of Fundamentals of Electrical Engineering (Institute of Circuits and Systems), TU Dresden and Chair of Translational Surgical Oncology (National Center for Tumor Diseases Dresden), UKD/TU Dresden
The societal acceptance and the potential usefulness of modern AI methods in critical applications such as computer-assisted surgery depend on understandable and reliable model outputs. Traditional neural network-based AI methods for image classification or segmentation act largely as black boxes. This is a major shortcoming when AI methods and human medical specialists are supposed to cooperate.
To keep the human in the loop, we aim to develop novel methods that foster human trust and understanding by making network results interpretable, a crucial component for translating such systems into the clinic.
The Ph.D. topic investigates new methods for visualizing neural network decisions in the context of computer- and robot-assisted surgery, in particular intraoperative decision support based on laparoscopic video streams. We aim to contribute active research on evaluating the faithfulness and robustness of the developed methods in this context, as well as their impact on the performance of the collaborating medical specialist.
Crucial research challenges in this context include:
- How can we enable interpretability of intraoperative decision support tasks such as semantic segmentation, monocular depth estimation, tissue classification, and event prediction using XAI?
- Which metrics can be applied to compare the interpretability of various XAI methods when evaluating machine learning models in a surgical context?
- Does the application of XAI methods lead to overtrust scenarios in this human-machine interaction context, and if so, how could this be prevented?
- How can we enable XAI for intraoperative decision support taking patient data as well as surgical factors into account?
You will be working in the research group of Prof. Dr. phil. nat. habil. Ronald Tetzlaff at TU Dresden, which provides access to a high-performance computing lab and in-depth expertise in developing and benchmarking XAI methods. You will be working with passionate colleagues who actively research the application of explainability methods in similar human-machine interaction settings.
Additionally, you will be supervised by Prof. Stefanie Speidel at the Chair of Translational Surgical Oncology within the National Center for Tumor Diseases Dresden. The group's research focus is machine learning for computer- and robot-assisted surgery to improve patient outcomes.
To conduct this research, you should hold an excellent university degree (MSc or equivalent) in electrical engineering or a related technical discipline (such as computer science, physics, or mathematics). Excellent analytical skills and practical experience in one or more of the research areas of neural networks, explainable AI, and medical image processing are beneficial. You should be able to collaborate well in an interdisciplinary and international environment.