Enhancing Surgical Guidance with Deep Learning towards depth-resolved, artifact-free fluorescence imaging
Abstract Body:

Fluorescence-guided surgery (FGS) has recently been shown to provide improved delineation of malignant tissues by selectively highlighting cancer cells in real time. By enhancing tissue classification capabilities, FGS promises more complete resection and/or better preservation of healthy tissue. However, fluorescence imaging relies on capturing light signals that are affected by the highly heterogeneous optical properties of the treated site, which degrades both localization accuracy and, in the case of nanosecond time-resolved FGS, the apparent fluorescence lifetime. Detected fluorescence signals are shaped by a complex interplay of factors governing light transport, which complicates image interpretation by the surgeon and diminishes the value and promise of FGS. By capturing light transport at picosecond time scales using novel SPAD sensors and applying deep learning methods trained on validated Monte Carlo (MC) simulations [2], [4], we aim to improve the accuracy of recovered fluorophore concentration and lifetime. The recovered time-resolved light transport also allows us to augment the surgical view with depth information at sub-millimeter accuracy.
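To make the measurement model concrete, the following is a minimal NumPy sketch (not the authors' code) of why light transport distorts the apparent lifetime: the detected time-resolved signal is the convolution of a transport/instrument response with the exponential fluorescence decay, and a naive fit over the broadened early portion overestimates the lifetime. The Gaussian response, time constants, and fit window are illustrative assumptions.

```python
import numpy as np

# Time axis: 0-5 ns at 10 ps steps, matching picosecond-scale SPAD gating.
t = np.arange(0.0, 5e-9, 10e-12)

def transport_irf(t, t0=0.3e-9, sigma=0.15e-9):
    """Toy excitation + light-transport response: a delayed, broadened pulse.
    In tissue, mu_a and mu_s set this broadening (a Gaussian stand-in here)."""
    return np.exp(-0.5 * ((t - t0) / sigma) ** 2)

def decay(t, tau=1.0e-9):
    """Mono-exponential fluorescence decay with true lifetime tau."""
    return np.exp(-t / tau)

# Detected signal = transport response convolved with the fluorescence decay.
signal = np.convolve(transport_irf(t), decay(t))[: t.size]
signal /= signal.max()

# A naive log-linear fit just after the peak, where transport broadening
# still dominates, overestimates the lifetime -- the artifact the deep
# learning reconstruction is meant to remove.
win = slice(np.argmax(signal), np.argmax(signal) + 100)
slope = np.polyfit(t[win], np.log(signal[win]), 1)[0]
print(f"apparent lifetime ~ {-1e9 / slope:.2f} ns (true: 1.00 ns)")
```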

First, we simulated the effect of optical properties on fluorescence signals in the MC environment MCX [7], across scenarios of varying photon absorption (μa), scattering (μs), fluorophore topology, and lifetime. The absorption coefficient of tissue was modeled from the absorption characteristics of hemoglobin, water, fat, and other biomolecules [1], [3]. The scattering coefficient was attributed to either Rayleigh or Mie scattering, depending on the dimensions of the scattering centers. By varying the concentrations of the different biomolecules we modulated μa, and by adjusting the ratio of the two scattering types we altered μs. Reviewing the training dataset demonstrates how light transport distorts the detected fluorescence signals.
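A short sketch of this parameterization is given below. The functional forms follow the standard tissue-optics literature (chromophore-weighted absorption and a Rayleigh/Mie mixture for reduced scattering); the specific coefficients and sampling ranges are illustrative placeholders, not the values used in the abstract.

```python
import numpy as np

LN10 = np.log(10)

def mu_a(c_hbo2, c_hb, f_water, f_fat, eps_hbo2, eps_hb, mua_water, mua_fat):
    """Absorption coefficient [1/cm] at one wavelength as a weighted sum of
    chromophores: hemoglobin in mol/L with molar extinction coefficients,
    water/fat as volume fractions with bulk absorption coefficients."""
    return (LN10 * (c_hbo2 * eps_hbo2 + c_hb * eps_hb)
            + f_water * mua_water + f_fat * mua_fat)

def mu_s_reduced(wavelength_nm, a=20.0, f_rayleigh=0.3, b_mie=1.0):
    """Reduced scattering [1/cm] as a Rayleigh/Mie mixture:
    a * (f * (lam/500nm)^-4 + (1 - f) * (lam/500nm)^-b)."""
    x = wavelength_nm / 500.0
    return a * (f_rayleigh * x**-4 + (1.0 - f_rayleigh) * x**-b_mie)

# Sampling one training scenario: vary chromophore content to modulate mu_a,
# and the Rayleigh/Mie ratio to modulate mu_s (placeholder numbers).
rng = np.random.default_rng(0)
mua = mu_a(c_hbo2=rng.uniform(1e-5, 1e-4), c_hb=rng.uniform(1e-5, 1e-4),
           f_water=0.6, f_fat=0.1,
           eps_hbo2=800.0, eps_hb=750.0,   # illustrative extinction values
           mua_water=0.02, mua_fat=0.01)
mus = mu_s_reduced(800.0, f_rayleigh=rng.uniform(0.0, 0.5))
print(f"mu_a = {mua:.3f} 1/cm, mu_s' = {mus:.1f} 1/cm")
```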

We further employed deep learning methods to recover artifact-free fluorophore concentration maps and true fluorescence lifetimes [5]. A ConvLSTM network, which combines spatial and temporal modeling, was designed for this task [9]. Additionally, a U-Net architecture served as the generator within this framework, retaining the essential spatial information for accurate depth map reconstruction. Training was conducted on 5,000 different simulated scenarios. The trained model was able to reconstruct fluorescence depth maps, effectively denoising them and delineating clear boundaries between tumor and healthy tissue. We achieved an MSE of less than 0.2 against the ground truth, indicating good pixel-to-pixel agreement, and a mean depth prediction error of less than 0.4 mm.
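Since the abstract gives no implementation details, here is a minimal PyTorch sketch of a generic ConvLSTM cell of the kind referenced [9]: convolutions replace the matrix multiplications of a standard LSTM, so each time gate of the photon arrival histogram updates a hidden state that keeps the sensor's 2D layout. The channel counts and toy input are assumptions; the U-Net generator and training loop are omitted.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """One ConvLSTM step: LSTM gating with convolutions, so the hidden
    state preserves spatial structure while integrating time gates."""

    def __init__(self, in_ch, hid_ch, kernel=3):
        super().__init__()
        # Single convolution producing all four gates (input, forget,
        # output, candidate cell) from the concatenated input and state.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel,
                               padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)   # update cell memory
        h = o * torch.tanh(c)           # emit new hidden state
        return h, c

# Toy run: T time gates of a 64x64 time-resolved image, folded sequentially
# into one spatial feature map that a downstream generator could consume.
T, B, H, W = 16, 2, 64, 64
cell = ConvLSTMCell(in_ch=1, hid_ch=8)
h = torch.zeros(B, 8, H, W)
c = torch.zeros(B, 8, H, W)
for _ in range(T):
    frame = torch.randn(B, 1, H, W)     # one time gate of the histogram stack
    h, c = cell(frame, (h, c))
print(h.shape)  # torch.Size([2, 8, 64, 64])
```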

Finally, the DL-assisted image reconstruction was tested experimentally with a SwissSPAD2 single-photon avalanche diode (SPAD) sensor combined with a picosecond diode laser, on a variety of realistic tissue phantoms and on ex vivo tissue [8]. Our model resolved phantoms with heterogeneous optical properties at depths of up to 10 mm, with a depth error of less than 0.5 mm relative to the true depth.
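For clarity, the two figures of merit reported above can be computed as below. This is a generic sketch with hypothetical array names, not the authors' evaluation code.

```python
import numpy as np

def evaluate(pred_conc, true_conc, pred_depth_mm, true_depth_mm):
    """Pixelwise MSE on (normalized) concentration maps and mean absolute
    depth error in millimeters, as reported in the abstract."""
    mse = np.mean((pred_conc - true_conc) ** 2)
    depth_err = np.mean(np.abs(pred_depth_mm - true_depth_mm))
    return mse, depth_err

# Hypothetical 64x64 reconstruction with phantom depths of up to 10 mm.
rng = np.random.default_rng(1)
true_depth = rng.uniform(0, 10, (64, 64))
pred_depth = true_depth + rng.normal(0, 0.3, (64, 64))
true_conc = rng.uniform(0, 1, (64, 64))
pred_conc = true_conc + rng.normal(0, 0.1, (64, 64))
mse, derr = evaluate(pred_conc, true_conc, pred_depth, true_depth)
print(f"MSE = {mse:.3f}, mean depth error = {derr:.2f} mm")
```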

In summary, we proposed an innovative deep learning network combined with a novel time-resolved imaging setup to mitigate light transport artifacts in FGS while augmenting the presented images with accurate depth information. We expect this paradigm shift to play a crucial role in ML-assisted tissue classification during cancer surgery and in emerging robotic FGS approaches.

Author

Shiru Wang
Dartmouth College