Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Future Blog Post

less than 1 minute read

Published:

This post will show up by default. To disable scheduling of future posts, edit _config.yml and set future: false.
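For reference, a minimal sketch of the relevant setting in a standard Jekyll _config.yml (the rest of the file is omitted; key names beyond "future" are not taken from this site's actual configuration):

```yaml
# _config.yml — Jekyll site configuration
# When future is true, posts dated in the future are published immediately;
# set it to false to hide them until their publication date arrives.
future: false
```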

Blog Post number 4

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 3

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 2

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Blog Post number 1

less than 1 minute read

Published:

This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.

Portfolio

Publications

Mixed Reality Interfaces for Achieving Desired Views with Robotic X-ray Systems

Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2022

Robotic X-ray C-arm imaging systems can precisely achieve any position and orientation relative to the patient. Informing the system, however, of exactly which pose corresponds to a desired view is challenging. Currently, these systems are operated by the surgeon using joysticks, but this interaction is not necessarily effective because users may be unable to efficiently actuate more than a single axis of the system simultaneously. Moreover, novel robotic imaging systems, such as the Brainlab Loop-X, allow for independent source and detector movements, adding complexity. To address this challenge, we consider complementary interfaces for the surgeon to command robotic X-ray systems effectively. Specifically, we consider three interaction paradigms: (1) the use of a pointer to specify the principal ray of the desired view, (2) the same pointer, but combined with a mixed reality environment to synchronously render digitally reconstructed radiographs from the tool’s pose, and (3) the same mixed reality environment but with a virtual X-ray source instead of the pointer.

Recommended citation: Killeen, Benjamin D., Jonas Winter, Wenhao Gu, Alejandro Martin-Gomez, Russell H. Taylor, Greg Osgood, and Mathias Unberath. (2022). "Mixed Reality Interfaces for Achieving Desired Views with Robotic X-ray Systems." In Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization. 11(4) pp. 1130-1135.
Download Paper

Towards Reducing Visual Workload in Surgical Navigation: Proof-of-concept of an Augmented Reality Haptic Guidance System

Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2022

The integration of planning and navigation capabilities into the operating room has enabled surgeons to take on more precise procedures. Traditionally, planning and navigation information is presented on monitors in the surgical theatre, but these monitors force the surgeon to frequently look away from the surgical area. Augmented reality technologies have enabled surgeons to visualise navigation information in situ. However, burdening the visual field with additional information can be distracting. We propose integrating haptic feedback into a surgical tool handle to enable surgical guidance capabilities. This property reduces the amount of visual information, freeing surgeons to maintain visual attention over the patient and the surgical site.

Recommended citation: Zhang, Gesiren, Jan Bartels, Alejandro Martin-Gomez, and Mehran Armand. (2022). "Towards Reducing Visual Workload in Surgical Navigation: Proof-of-concept of an Augmented Reality Haptic Guidance System." In Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization. 11(4) pp. 1073-1080.
Download Paper

Towards 2D/3D Registration of the Preoperative MRI to Intraoperative Fluoroscopic Images for Visualisation of Bone Defects

Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2022

In this work, we propose a 2D/3D registration pipeline that aims to register preoperative MRI with intraoperative 2D fluoroscopic images. To showcase the feasibility of our approach, we use the core decompression procedure as a surgical example to perform 2D/3D femur registration. The proposed registration pipeline is evaluated using digitally reconstructed radiographs (DRRs) to simulate the intraoperative fluoroscopic images. The resulting transformation from the registration is later used to create overlays of preoperative MRI annotations and planning data to provide intraoperative visual guidance to surgeons.

Recommended citation: Ku, Ping-Cheng, Alejandro Martin-Gomez, Cong Gao, Robert Grupp, Simon C. Mears, and Mehran Armand. (2022). "Towards 2D/3D Registration of the Preoperative MRI to Intraoperative Fluoroscopic Images for Visualisation of Bone Defects." In Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization. 11(4) pp. 1096-1105.
Download Paper

Towards Visualising Early-stage Osteonecrosis using Intraoperative Imaging Modalities

Published in Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 2022

This work introduces a novel framework that enables the localisation of necrotic lesions in Computed Tomography (CT) as a step toward localising and visualising necrotic lesions in intra-operative images. The proposed framework enables the automatic segmentation of femur, pelvis, and necrotic lesions in MRI. An additional step performs semi-automatic segmentation of these anatomies, excluding the necrotic lesions, in CT. A final step performs pairwise registration of the corresponding anatomies, allowing for the localisation and visualisation of the necrosis in CT. This framework was tested on MRIs and CTs containing early-stage osteonecrosis of the femoral head (ONFH).

Recommended citation: Liu, Mingxu, Alejandro Martin-Gomez, Julius K. Oni, Simon C. Mears, and Mehran Armand. (2022). "Towards Visualising Early-stage Osteonecrosis using Intraoperative Imaging Modalities." In Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization. 11(4) pp. 1234-1242.
Download Paper

Medical Augmented Reality: Definition, Principle Components, Domain Modeling, and Design-Development-Validation Process

Published in Journal of Imaging, 2022

This paper defines the basic components of any Augmented Reality (AR) solution and extends them to exemplary Medical Augmented Reality Systems (MARS). We use some of the original MARS applications developed at the Chair for Computer-Aided Medical Procedures and deployed into medical schools for teaching anatomy and into operating rooms for telemedicine and surgical guidance throughout the last decades to identify the corresponding basic components. In this regard, the paper does not discuss all past or existing solutions; it aims only to define the principal components, discuss the particular domain modeling for MAR and its design-development-validation process, and provide exemplary cases through the past in-house developments of such solutions.

Recommended citation: Navab, Nassir, Alejandro Martin-Gomez, Matthias Seibold, Michael Sommersperger, Tianyu Song, Alexander Winkler, Kevin Yu, and Ulrich Eck. (2022). "Medical Augmented Reality: Definition, Principle Components, Domain Modeling, and Design-Development-Validation Process." In Journal of Imaging. 9(1),4.
Download Paper

Medical Visualizations with Dynamic Shape and Depth Cues

Published in 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 2023

Navigated surgery enables physicians to perform complex tasks assisted by virtual representations of surgical tools and anatomical structures visualized using intraoperative medical images. Integrating Augmented Reality (AR) in these scenarios enriches the virtual information presented to the surgeon by utilizing a wide range of visualization techniques. In this work, we introduce a novel approach to conveying additional depth and shape information of the augmented content using dynamic visualization techniques. Compared to existing works, these techniques allow users to gather information not only from pictorial but also from kinetic depth cues.

Recommended citation: Martin-Gomez, Alejandro, Felix Merkl, Alexander Winkler, Christian Heiliger, Sebastian Andress, Tianyu Song, Ulrich Eck, Konrad Karcz, and Nassir Navab. (2023). "Medical Visualizations with Dynamic Shape and Depth Cues." In 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). pp. 813-814.
Download Paper

Magnifying Augmented Mirrors for Accurate Alignment Tasks

Published in 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 2023

Limited mobility in augmented reality applications restricts spatial understanding along with augmentation placement and visibility. Systems can counteract this by providing additional perspectives through tracked and augmented mirrors, without requiring user movement. However, the decreased visual size of mirrored objects reduces accuracy for precision tasks. We propose Magnifying Augmented Mirrors: digitally zoomed mirror images mapped back onto their surface, producing magnified reflections. In a user study (N=14) conducted in virtual reality, we evaluated our method on a precision alignment task. Although participants needed time for acclimatization, they achieved the most accurate results using a magnified mirror.

Recommended citation: Kern, Vanessa, Constantin Kleinbeck, Kevin Yu, Alejandro Martin-Gomez, Alexander Winkler, Nassir Navab, and Daniel Roth. (2023). "Magnifying Augmented Mirrors for Accurate Alignment Tasks." In 2023 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). pp. 685-686.
Download Paper

Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-time Virtual iOCT Volume Slicing

Published in 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023

In this work, we propose a framework for autonomous robotic navigation for subretinal injection, based on intelligent real-time processing of intraoperative Optical Coherence Tomography (iOCT) volumes. Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target. We also introduce intelligent virtual B-scans, a volume slicing approach for rapid instrument pose estimation, which is enabled by Convolutional Neural Networks (CNNs).

Recommended citation: Dehghani, Shervin, Michael Sommersperger, Peiyao Zhang, Alejandro Martin-Gomez, Benjamin Busam, Peter Gehlbach, Nassir Navab, M. Ali Nasseri, and Iulian Iordachita. (2023). "Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-time Virtual iOCT Volume Slicing." In 2023 IEEE International Conference on Robotics and Automation (ICRA). pp. 4724-4731.
Download Paper

Injured Avatars: The Impact of Embodied Anatomies and Virtual Injuries on Well-being and Performance

Published in IEEE Transactions on Visualization and Computer Graphics, 2023

Human cognition relies on embodiment as a fundamental mechanism. Virtual avatars allow users to experience the adaptation, control, and perceptual illusion of alternative bodies. Although virtual bodies have medical applications in motor rehabilitation and therapeutic interventions, their potential for learning anatomy and medical communication remains underexplored. For learners and patients, anatomy, procedures, and medical imaging can be abstract and difficult to grasp. Experiencing anatomies, injuries, and treatments virtually through one's own body could be a valuable tool for fostering understanding. This work investigates the impact of avatars displaying anatomy and injuries suitable for such medical simulations. We ran a user study utilizing a skeleton avatar and virtual injuries, comparing them to a healthy human avatar as a baseline. We evaluated the influence on embodiment, well-being, and presence with self-report questionnaires, as well as motor performance via an arm movement task. Our results show that while both anatomical representation and injuries increase feelings of eeriness, there are no negative effects on embodiment, well-being, presence, or motor performance. These findings suggest that virtual representations of anatomy and injuries are suitable for medical visualizations targeting learning or communication without significantly affecting users' mental state or physical control within the simulation.

Recommended citation: Kleinbeck, Constantin, Hannah Schieber, Julian Kreimeier, Alejandro Martin-Gomez, Mathias Unberath, and Daniel Roth. (2023). "Injured Avatars: The Impact of Embodied Anatomies and Virtual Injuries on Well-being and Performance." In IEEE Transactions on Visualization and Computer Graphics. 29(11) pp. 4503-4513.
Download Paper

3D Slicer-AR-Bridge: 3D Slicer AR Connection for Medical Image Visualization and Interaction with AR-HMD

Published in 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2023

In this work, we introduce 3D Slicer-AR-Bridge, a novel framework that takes advantage of the open-source medical image processing software 3D Slicer and advanced AR development toolkits to enable fast generation of surgical AR applications with high performance, high extensibility, and transparent data flow. The bridge is first built upon a seamless medical data sharing interface between 3D Slicer and Unity, supporting volumes, segmentations, models, and annotations, together with corresponding visualization and interaction methods.

Recommended citation: Li, Haowei, Yuxing Yang, Yuqi Ji, Wuke Peng, Alejandro Martin-Gomez, Wenqing Yan, Long Qian, Hui Ding, Zhe Zhao, and Guangzhi Wang. (2023). "3D Slicer-AR-Bridge: 3D Slicer AR Connection for Medical Image Visualization and Interaction with AR-HMD." In 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). pp. 399-404.
Download Paper

A Closer Look at Dynamic Medical Visualization Techniques

Published in 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2023

In navigated surgery, physicians perform complex tasks assisted by virtual representations of anatomical structures and surgical tools. Integrating Augmented Reality (AR) in these scenarios enriches the information presented to the surgeon through a range of visualization techniques. Their selection is a crucial task as they represent the primary interface between the system and the surgeon. In this work, we present a novel approach to conveying augmented content using dynamic visualization techniques, allowing users to gather depth and shape information from both pictorial and kinetic cues.

Recommended citation: Martin-Gomez, Alejandro, Felix Merkl, Alexander Winkler, Christian Heiliger, Ulrich Eck, Konrad Karcz, and Nassir Navab. (2023). "A Closer Look at Dynamic Medical Visualization Techniques." In 2023 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). pp. 99-108.
Download Paper

Feasibility Study of Using Augmented Mirrors for Alignment Task during Orthopaedic Procedures in Mixed Reality

Published in 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2023

Accurate depth estimation poses a significant challenge in egocentric Augmented Reality (AR), particularly for precision-dependent tasks in the medical field, such as needle or tool insertions during percutaneous procedures. Augmented Mirrors (AMs) provide a unique solution to this problem by offering additional non-egocentric viewpoints that enhance spatial understanding of an AR scene. Despite the perceptual advantages of using AMs, their practical utility has yet to be thoroughly tested. In this work, we present results from a pilot study involving five participants tasked with simulating epidural injection procedures in an AR environment, both with and without the aid of an AM.

Recommended citation: Zhang, Xiaorui, Andreas Keller, Mehran Armand, and Alejandro Martin Gomez. (2023). "Feasibility Study of Using Augmented Mirrors for Alignment Task during Orthopaedic Procedures in Mixed Reality." In 2023 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct). pp. 650-651.
Download Paper

ClimbAR: Collaborative Augmented Reality for Climbing Applications

Published in 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 2024

In this work, we present ClimbAR, an open-source, collaborative, real-time augmented reality application for the HoloLens 2 that allows boulderers to set climbing holds virtually to better understand and plan routes.

Recommended citation: Cast, John Dallas, Alejandro Martin-Gomez, and Mathias Unberath. (2024). "ClimbAR: Collaborative Augmented Reality for Climbing Applications." In 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). pp. 795-796.
Download Paper

Evaluating the Feasibility of Using Augmented Reality for Tooth Preparation

Published in 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), 2024

In this work, we explore the feasibility of using Augmented Reality (AR) Head-Mounted Displays (HMDs) to assist dentists during tooth preparation using two different visualization techniques.

Recommended citation: Kihara, Takuya, Andreas Keller, Takumi Ogawa, Mehran Armand, and Alejandro Martin-Gomez. (2024). "Evaluating the Feasibility of Using Augmented Reality for Tooth Preparation." In 2024 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW). pp. 689-690.
Download Paper

Segment Any Medical Model Extended

Published in SPIE Medical Imaging, 2024

In this work, we introduce SAMM Extended (SAMME), a platform that integrates new Segment Anything Model (SAM) variant models, adopts faster communication protocols, accommodates new interactive modes, and allows for fine-tuning of subcomponents of the models. These features can expand the potential of foundation models like SAM, and the results can be translated to applications such as image-guided therapy, mixed reality interaction, robotic navigation, and data augmentation.

Recommended citation: Liu, Yihao, Jiaming Zhang, Andrés Diaz-Pinto, Haowei Li, Alejandro Martin-Gomez, Amir Kheradmand, and Mehran Armand. (2024). "Segment Any Medical Model Extended." In Medical Imaging 2024: Image Processing. vol. 12926, pp. 411-422.
Download Paper

Autokinesis Reveals a Threshold for Perception of Visual Motion

Published in Neuroscience, 2024

In natural viewing conditions, the brain can optimally integrate retinal and extraretinal signals to maintain stable visual perception. These mechanisms, however, may fail in circumstances where extraction of a motion signal is less viable, such as in impoverished visual scenes. This can result in a phenomenon known as autokinesis, in which one may experience apparent motion of a small visual stimulus in an otherwise completely dark environment. In this study, we examined the effect of autokinesis on visual perception of motion in human observers. We used a novel method with optical tracking in which the visual motion was reported manually by the observer. Experimental results show that at lower speeds of motion, the perceived direction of motion was more aligned with the effect of autokinesis, whereas in the light or at higher speeds in the dark, it was more aligned with the actual direction of motion. These findings have important implications for understanding how the stability of visual representation in the brain can affect accurate perception of motion signals.

Recommended citation: Liu, Yihao, Jing Tian, Alejandro Martin-Gomez, Qadeer Arshad, Mehran Armand, and Amir Kheradmand. (2024). "Autokinesis Reveals a Threshold for Perception of Visual Motion." In Neuroscience. 543 pp. 101-107.
Download Paper

On the Fly Robotic-Assisted Medical Instrument Planning and Execution Using Mixed Reality

Published in ArXiv, 2024

In this work, we introduce a novel framework using mixed reality technologies to ease the use of Robotic-Assisted Medical Systems (RAMS). The proposed framework achieves real-time planning and execution of medical instruments by providing 3D anatomical image overlay, human-robot collision detection, and a robot programming interface. These features, integrated with an easy-to-use calibration method for the head-mounted display, improve the effectiveness of human-robot interactions.

Recommended citation: Ai, Letian, Yihao Liu, Mehran Armand, Amir Kheradmand, and Alejandro Martin-Gomez. (2024). "On the Fly Robotic-Assisted Medical Instrument Planning and Execution Using Mixed Reality." arXiv preprint. arXiv:2404.05887.
Download Paper

Uncertainty-Aware Shape Estimation of a Surgical Continuum Manipulator in Constrained Environments using Fiber Bragg Grating Sensors

Published in 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024

In this work, we introduce a novel method capable of directly estimating a Continuum Dexterous Manipulator shape from Fiber Bragg Grating sensor wavelengths using a deep neural network. In addition, we propose the integration of uncertainty estimation to address the critical issue of uncertainty in neural network predictions. Neural network predictions are unreliable when the input sample is outside the training distribution or corrupted by noise. Recognizing such deviations is crucial when integrating neural networks within surgical robotics, as inaccurate estimations can pose serious risks to the patient.

Recommended citation: Schwarz, Alexander, Arian Mehrfard, Golchehr Amirkhani, Henry Phalen, Justin H. Ma, Robert B. Grupp, Alejandro Martin Gomez, and Mehran Armand. (2024). "Uncertainty-Aware Shape Estimation of a Surgical Continuum Manipulator in Constrained Environments using Fiber Bragg Grating Sensors." In 2024 IEEE International Conference on Robotics and Automation (ICRA). pp. 5913-5919.
Download Paper

STTAR: Surgical Tool Tracking using off-the-shelf Augmented Reality Head-Mounted Displays

Published in IEEE Transactions on Visualization and Computer Graphics, 2024

This work presents a framework that uses the built-in cameras of Augmented Reality (AR) Head-Mounted Displays (HMDs) to enable accurate tracking of retro-reflective markers without the need to integrate any additional electronics into the HMD. The proposed framework can simultaneously track multiple tools without having previous knowledge of their geometry and only requires establishing a local network between the headset and a workstation. Our results show that the tracking and detection of the markers can be achieved with an accuracy of 0.09 ± 0.06 mm on lateral translation, 0.42 ± 0.32 mm on longitudinal translation, and 0.80 ± 0.39° for rotations around the vertical axis.

Recommended citation: Martin-Gomez, Alejandro, Haowei Li, Tianyu Song, Sheng Yang, Guangzhi Wang, Hui Ding, Nassir Navab, Zhe Zhao, and Mehran Armand. (2024). "STTAR: Surgical Tool Tracking using off-the-shelf Augmented Reality Head-Mounted Displays." In IEEE Transactions on Visualization and Computer Graphics. 30(7) pp. 3578-3593.
Download Paper

Talks

Teaching

EN.601.454/654 Augmented Reality

Graduate and Undergraduate course, Johns Hopkins University, Department of Computer Science, 2022

Course Description:

This course introduces students to the field of Augmented Reality. It reviews its basic definitions, principles, and applications. It then focuses on Medical Augmented Reality and its particular requirements. The course also discusses the main issues of calibration, tracking, multi-modal registration, advanced visualization, and display technologies. Homework in this course will relate to the mathematical methods used for calibration, tracking, and visualization in medical augmented reality.

EN.601.453/653 Applications of Augmented Reality

Graduate and Undergraduate course, Johns Hopkins University, Department of Computer Science, 2023

Course Description:

This course is designed to expand students' augmented reality knowledge and introduce relevant topics necessary for developing more meaningful applications and conducting research in this field. The course addresses the fundamental concepts of visual perception and introduces non-visual augmented reality modalities, including auditory, tactile, gustatory, and olfactory applications. Subsequent sessions discuss the importance of integrating user-centered design concepts to design meaningful augmented reality applications. A later module introduces the basic requirements for designing and conducting user studies, along with guidelines for interpreting and evaluating the results. During the course, students conceptualize, design, implement, and evaluate the performance of augmented reality solutions for use in industrial applications, teaching and training, or healthcare settings. Homework in this course will relate to applying the theoretical methods used for designing, implementing, and evaluating augmented reality applications.

EN.601.454/654 Introduction to Augmented Reality

Graduate and Undergraduate course, Johns Hopkins University, Department of Computer Science, 2023

Course Description:

This course introduces students to the field of Augmented Reality. It reviews its basic definitions, principles, and applications. The course explains how fundamental concepts of computer vision are applied to facilitate the development of Augmented Reality applications. It then focuses on describing the principal components and particular requirements to implement a solution using this technology. The course also discusses the main issues of calibration, tracking, multi-modal registration, advanced visualization, and display technologies. Homework in this course will relate to the mathematical methods used for calibration, tracking, and visualization in augmented reality.

EN.601.453/653 Applications of Augmented Reality

Graduate and Undergraduate course, Johns Hopkins University, Department of Computer Science, 2024

Course Description:

This course is designed to expand students' augmented reality knowledge and introduce relevant topics necessary for developing more meaningful applications and conducting research in this field. The course addresses the fundamental concepts of visual perception and introduces non-visual augmented reality modalities, including auditory, tactile, gustatory, and olfactory applications. Subsequent sessions discuss the importance of integrating user-centered design concepts to design meaningful augmented reality applications. A later module introduces the basic requirements for designing and conducting user studies, along with guidelines for interpreting and evaluating the results. During the course, students conceptualize, design, implement, and evaluate the performance of augmented reality solutions for use in industrial applications, teaching and training, or healthcare settings. Homework in this course will relate to applying the theoretical methods used for designing, implementing, and evaluating augmented reality applications.