Collaborative Computational Project in Synergistic Reconstruction for Biomedical Imaging

Workshop on modern image reconstruction algorithms and practices for medical imaging

Monday, September 12, 2022 - 13:00 to Tuesday, September 13, 2022 - 16:00

G08 David Davis Lecture Theatre,
Roberts Building,
Torrington Place, London, WC1E 7JE.


Monday 13:00-18:00 (registration open from 12:30)
Tuesday 10:00-16:00 (coffee from 9:30)

On 12 and 13 September we will hold a two-day workshop at UCL (University College London) on modern image reconstruction algorithms and practices for medical imaging, concentrating on PET, MRI, and CT. The workshop is organised on the occasion of the visit of Prof Jeffrey Fessler (University of Michigan) to UCL and is intended to bring together researchers, present exciting new research directions, and foster future collaborative projects. The workshop is sponsored by CCP SyneRBI.

The workshop will consist of invited talks by members of the SyneRBI community, mixed in with breaks for networking and refreshments.

The meeting is intended primarily as an in-person event, provided COVID regulations and conditions allow, though there will be some provision for presenters and attendees who cannot attend in person.

Early Career Researchers who need assistance with travel and subsistence can send an email to with a motivation for why they need it and how they are related to SyneRBI, evidence of support from a supervisor or similar, and an estimate of the amount. The deadline for this is 29 August. Our executive committee will review these requests.

Zeljko Kereta, UCL Dept of Computer Science (UCL)
Kris Thielemans, Institute of Nuclear Medicine and Centre for Medical Image Computing (UCL)

Kjell Erlandsson (UCL) Utilizing multiplexing in multi-pinhole SPECT
Multi-pinhole (MPH) collimators are often used in small-animal SPECT systems. By using magnification, pinhole collimators can provide improved spatial resolution; on the other hand, the sensitivity can be relatively low. This can be compensated for by using multiple pinholes, usually combined with large monolithic detectors, with the field of view (FOV) of each pinhole covering part of the detector area. The individual FOVs can in principle overlap on the detector. This is known as multiplexing (MX) and leads to increased sensitivity and improved sampling. However, it also results in ambiguity regarding the line of response (LOR) corresponding to each detected event, which can lead to artefacts in the reconstructed images. For this reason, MX has usually been avoided in MPH system design. However, it has been shown that artefact-free images with improved signal-to-noise ratio (SNR) can be obtained with a combination of MX and non-MX data. An alternative way to utilize MPH collimation is to use minification in combination with high-resolution detectors, allowing for an increased number of pinholes. We have used this principle to design a stationary system for Molecular Breast Imaging (MBI), based on CZT detectors with depth-of-interaction (DOI) capability, and we have shown that, by using MX, it is possible to increase the effective sensitivity by a factor of two.
PDF video
Pawel Markiewicz (UCL) DPUK PET/MR harmonisation using test-retest scans and two amyloid tracers
The combination of positron emission tomography (PET) and magnetic resonance imaging (MRI) provides a complete assessment of patients’ brains for early diagnosis of Alzheimer’s disease (AD) and essential imaging biomarkers for stratification or as a response biomarker in clinical trials. Such trials are important to clarify which patient groups will benefit from emerging biological therapies, such as aducanumab and other antibodies against amyloid or tau deposits.

In the UK, the Medical Research Council therefore funded the installation of multiple hybrid PET/MR scanners, creating a unique national network that includes 8 PET/MR scanning sites within the Dementias Platform UK (DPUK) to perform advanced brain research and clinical trials. The aim of this ongoing study is to harmonise scanning across these scanners and quantify the measurement variability within each site (repeatability) and across sites (reproducibility).

Very good test-retest repeatability was observed for most participants (less than 10 points on the centiloid scale, median 4), similar to that previously reported for PET/CT. However, problems with attenuation correction can occur and cause larger test-retest differences; careful inspection of individual attenuation maps is therefore recommended.

PDF video
Daniel Deidda (NPL) Triple modality reconstruction PET-SPECT-CT: application to Y90 SIRT
The introduction of triple-modality scanners and the recent implementation of the first clinical triple-modality scanner in STIR enable investigation of triple-modality image reconstruction. Such a tool represents an important step toward improved dosimetry for theranostics, where the exploitation of multi-modality imaging can have an impact on treatment planning and follow-up. The hybrid kernelised expectation maximisation (HKEM) algorithm was used, as it allows the use of multiple sources of side information.

To demonstrate triple-modality image reconstruction, data from a NEMA phantom filled with yttrium-90 (Y90) were acquired and reconstructed. The data were acquired with the Mediso AnyScan SPECT/PET/CT.

A clinical assessment of the triple-modality reconstruction was carried out using 10 SPECT/PET/CT datasets from patients treated with selective internal radiation therapy (SIRT) and Y90 microspheres. The data were acquired at Oxford University Hospitals, using a GE Discovery 670 for SPECT/CT and a GE Discovery 710 for PET/CT.
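As a rough illustration of the kernelised EM idea underlying HKEM, the sketch below applies the MLEM multiplicative update to kernel coefficients rather than voxel values, with the image parameterised as x = Kα. This is a generic KEM toy with random matrices, not the STIR implementation; all names and sizes are illustrative.

```python
import numpy as np

def kem(A, K, y, n_iter=5000):
    """Generic kernelised EM (KEM) sketch: the image is x = K @ alpha,
    where K is a kernel matrix built from side information (e.g. an
    anatomical image), and the multiplicative MLEM update is applied
    to the coefficients alpha instead of the voxel values."""
    M = A @ K                           # effective system matrix
    sens = M.T @ np.ones(A.shape[0])    # sensitivity: back-projection of ones
    alpha = np.ones(K.shape[1])
    for _ in range(n_iter):
        ratio = y / (M @ alpha)         # measured data / forward projection
        alpha *= (M.T @ ratio) / sens   # multiplicative MLEM step
    return K @ alpha                    # reconstructed image

# Toy demo: a 1D "image", a Gaussian kernel standing in for side
# information, and a random non-negative matrix as a toy projector.
rng = np.random.default_rng(0)
grid = np.arange(6)
K = np.exp(-0.5 * (grid[:, None] - grid[None, :]) ** 2)  # kernel matrix
A = rng.uniform(0.1, 1.0, size=(8, 6))                   # toy projector
x_true = K @ rng.uniform(0.5, 2.0, size=6)
y = A @ x_true                                           # noise-free data
x_rec = kem(A, K, y)
```

With noise-free, consistent data the forward projection of the reconstruction approaches the measured data; the kernel parameterisation is what lets anatomical structure regularise the emission estimate.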

PDF video
Matthias Ehrhardt (Bath) Randomized Image Reconstruction
A popular algorithm for image reconstruction with non-differentiable priors is the primal-dual hybrid gradient (PDHG) algorithm proposed by Chambolle and Pock in 2011. In some scenarios it is beneficial to employ a stochastic version of this algorithm in which not all dual updates are executed simultaneously. In this talk we focus on the theory and applications of a stochastic version of PDHG coined SPDHG. It turns out that SPDHG has qualitative and quantitative convergence properties along the same lines as the deterministic PDHG. Numerical results in various medical imaging applications (PET, parallel MRI, motion-corrected CT) show a significant speed-up for the proposed method.
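A minimal sketch of the SPDHG iteration may help fix ideas: at each step only one randomly chosen dual block is updated, with an extrapolated running aggregate replacing the full dual variable. The toy problem here (least squares with a zero prior, rows split into subsets) and all names are illustrative, not the talk's actual experiments.

```python
import numpy as np

def spdhg(A_blocks, b_blocks, n_iter=4000, gamma=0.99, seed=0):
    """SPDHG sketch for min_x 0.5*||Ax - b||^2 with the rows of A split
    into subsets: f_i(z) = 0.5*||z - b_i||^2 and a zero prior g, so
    prox_{s f_i*}(v) = (v - s*b_i) / (1 + s) and prox_g is the identity."""
    rng = np.random.default_rng(seed)
    n = len(A_blocks)
    p = np.full(n, 1.0 / n)                     # uniform subset sampling
    norms = np.array([np.linalg.norm(Ai, 2) for Ai in A_blocks])
    sigma = gamma / norms                       # dual step sizes
    tau = gamma * p.min() / norms.max()         # primal step size
    x = np.zeros(A_blocks[0].shape[1])
    y = [np.zeros(Ai.shape[0]) for Ai in A_blocks]
    z = np.zeros_like(x)                        # z = sum_i A_i^T y_i
    zbar = z.copy()
    for _ in range(n_iter):
        x = x - tau * zbar                      # primal step (prox_g = id)
        i = rng.integers(n)                     # pick one dual block
        v = y[i] + sigma[i] * (A_blocks[i] @ x)
        y_new = (v - sigma[i] * b_blocks[i]) / (1.0 + sigma[i])
        delta = A_blocks[i].T @ (y_new - y[i])
        z = z + delta                           # update running aggregate
        zbar = z + delta / p[i]                 # extrapolation
        y[i] = y_new
    return x

# Toy usage: a 20x5 least-squares problem split into 4 row subsets.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
x_spdhg = spdhg(np.split(A, 4), np.split(b, 4))
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
```

Each iteration touches only one subset of the data, which is where the per-iteration cost saving over deterministic PDHG comes from.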
PDF video
Zeljko Kereta (UCL) On Stochastic Variance Reduction for Penalised PET Reconstruction
PET image reconstruction algorithms are often accelerated in early iterations by the use of subsets. However, these methods may exhibit limit-cycle behaviour at later iterations due to variations between subsets. Convergence can be achieved via a relaxed step-size sequence, but the heuristic selection of its parameters affects the quality of the image sequence and the convergence rate.

In this talk, we demonstrate the adaptation and application of a class of stochastic variance-reduced gradient algorithms (SAGA, SVRG, and SVREM) to PET image reconstruction and numerically compare their convergence performance to BSREM. These algorithms retain recently computed subset gradients in memory and use them in subsequent updates. The impact of the number of subsets, different preconditioners and step-size methods on the convergence of region-of-interest values within the reconstructed images is explored. We observe that, with constant preconditioning, the studied methods show reduced variation in voxel values between subsequent updates and are less reliant on step-size hyper-parameter selection than BSREM reconstructions. Furthermore, they can converge significantly faster to the penalised maximum-likelihood solution, particularly for low-count data.
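The variance-reduced gradient at the heart of SVRG can be sketched as follows. A smooth least-squares sum stands in here for the penalised Poisson log-likelihood, and all names, sizes, and step-size choices are illustrative.

```python
import numpy as np

def svrg(A_blocks, b_blocks, n_epochs=300, seed=0):
    """SVRG sketch for min_x (1/n) * sum_i 0.5*||A_i x - b_i||^2.
    Each epoch recomputes the full ("anchor") gradient once; the inner
    updates use one subset gradient corrected by the anchor, so the
    estimator stays unbiased while its variance shrinks as x approaches
    the anchor point."""
    rng = np.random.default_rng(seed)
    n = len(A_blocks)
    L = max(np.linalg.norm(Ai, 2) ** 2 for Ai in A_blocks)
    step = 0.5 / L                          # conservative constant step
    grad = lambda i, v: A_blocks[i].T @ (A_blocks[i] @ v - b_blocks[i])
    x = np.zeros(A_blocks[0].shape[1])
    for _ in range(n_epochs):
        x_anchor = x.copy()
        mu = sum(grad(i, x_anchor) for i in range(n)) / n  # full gradient
        for _ in range(n):                  # one pass of subset updates
            i = rng.integers(n)
            g = grad(i, x) - grad(i, x_anchor) + mu  # variance-reduced
            x = x - step * g
    return x

# Toy usage: the same kind of subset split as above.
rng = np.random.default_rng(1)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
x_svrg = svrg(np.split(A, 4), np.split(b, 4))
x_star = np.linalg.lstsq(A, b, rcond=None)[0]
```

SAGA replaces the per-epoch anchor with a stored table of the most recent gradient for each subset, which is the memory retention the abstract refers to.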

Evangelos Papoutsellis (STFC) Tomography reconstruction using the Core Imaging Library
In this talk we present the Core Imaging Library (CIL), an open-source, object-oriented framework for inverse problems in imaging, with a particular focus on tomographic reconstruction. CIL was developed by the Collaborative Computational Project in Tomographic Imaging (CCPi), a UK academic network which unites expertise in the field of Computed Tomography (CT). The goal of CIL is to provide a user-friendly interface that covers all the tomography steps, such as data loading, pre-processing, reconstruction, post-processing and visualization. Specifically, we focus on the reconstruction of full-size tomography datasets acquired with specific scanner geometries, e.g., parallel-beam and cone-beam, which require large memory and computational power. The CIL optimization framework provides both analytic and iterative reconstruction algorithms with a flexible mix-and-match setup for different regularizers and fitting terms, depending on the specific application. We will give an overview of the functionality of CIL and present case studies in X-ray CT, dynamic CT, hyperspectral CT, neutron CT and PET.
PDF video
Andrew Reader (KCL) Deep Learning for PET Image Reconstruction
Image reconstruction for positron emission tomography (PET) has developed over many decades, starting out with filtered backprojection methods, with advances coming from improved modelling of the data statistics and of the overall physics of the data acquisition and imaging process. However, high noise and limited spatial resolution have remained issues in PET, and state-of-the-art reconstruction has started to exploit other medical imaging modalities (such as MRI) to assist in reducing noise and enhancing PET spatial resolution. With the present motivation to reduce injected radiation doses and/or scanning times, there is an ever greater need to exploit as much information as possible (e.g. from other data) when reconstructing from low-count data. This talk will briefly summarise a few of the deep learning PET image reconstruction methods that are able to harness and exploit more information. Some deep learning research progress in the areas of joint PET-MR reconstruction, self-supervision and multiplexed (multi-tracer) PET imaging will then be discussed.
PDF video
Subhadip Mukherjee (Cambridge) Deep learning for image reconstruction in X-ray CT
Deep learning (DL) has shown enormous promise for solving imaging inverse problems in general, and the image reconstruction task in X-ray CT in particular. The goal of this talk is to introduce some of the notable recent DL-based approaches for X-ray CT, while highlighting our work on data-driven adversarial regularization (AR). Specifically, I will focus on (i) learned convex regularization that comes with classical well-posedness guarantees and (ii) a new optimal-transport-based unrolled AR framework that leads to computationally efficient and provably stable reconstruction. Both approaches are trained in an unsupervised manner, i.e., they do not require paired training examples. Time permitting, I will touch upon our recent work on (provably convergent) stochastic unrolling, which needs significantly fewer computational resources than conventional full-batch unrolling for CT.
PDF video
Andreas Hauptmann (Oulu) Addressing the scalability issue in CBCT: A multiscale approach
Recent advances in deep learning for tomographic reconstruction have shown great potential to create accurate, high-quality images with a considerable speed-up in reconstruction time. Unfortunately, the training of these networks can be computationally highly demanding. This is especially so when the projection model is used iteratively within the architecture, which renders the training task prohibitive. In this talk, we discuss how to obtain scalable learned iterative reconstructions for fully three-dimensional CBCT. We achieve this by introducing a multi-scale scheme in which early iterates are computed only on a coarse scale. This way, training and reconstruction times are governed only by the final full-resolution scale. Results are presented for experimental CBCT measurements of a biological specimen.
PDF video
Jennifer Steeden (UCL) Machine Learning for Image Reconstruction in Magnetic Resonance Imaging
Magnetic Resonance is an extremely valuable imaging technique; however, it is slow, often resulting in scan times of the order of an hour. There has been a vast amount of research into speeding up MRI scanning, including efficient non-Cartesian trajectories and data undersampling. There are many methods to remove the resulting undersampling artefacts, but these algorithms can be time-consuming, which often limits their usability in the clinical environment.

Recently, Machine Learning (ML) has been demonstrated for the reconstruction of MRI data. In ML, algorithms are trained to perform tasks by learning patterns from data rather than by explicit programming; once trained, execution of the algorithm is extremely fast. In this talk I will discuss our work on ML for the reconstruction of rapid cardiac MRI data, focussing on image-based networks. I will demonstrate the clinical translation of these techniques and their clinical benefits.

PDF video
Andreas Kofler (PTB) Data-Driven Regularization for Dynamic Cardiac MRI
In this talk, we consider different data-driven regularization techniques, ranging from more classical methods, e.g. dictionary learning and sparse coding, to more recent approaches, e.g. unrolled neural networks, and combinations thereof. The talk will have a particular focus on dynamic cardiac MRI reconstruction.
Jeff Fessler (Univ Michigan) Joint Optimization of Learning-Based Image Reconstruction and Sampling for MRI
Machine learning approaches to medical image reconstruction have attracted considerable recent interest, especially supervised approaches that use a corpus of training data. Accelerated MRI, where fewer k-space points than image voxels are acquired, is a natural setting for such reconstruction methods. Recently, machine learning methods for optimizing the k-space sampling have also attracted growing interest. This talk will summarize recent work in which we jointly optimize non-Cartesian k-space sampling, heeding physical constraints such as the gradient slew rate, and a learning-based image reconstruction method that originates from a large-scale optimization approach.

Joint work with Guanhua Wang, Tianrui Luo, Jon-Fredrik Nielsen, and Douglas C. Noll.

PDF video