Manual delineation of only one image in unseen databases is sufficient for accurate performance in automated multiple sclerosis lesion segmentation

Research Abstract

Background: Convolutional neural network (CNN) methods have been proposed for automated white matter lesion segmentation, improving on the performance of typical state-of-the-art methods. However, their accuracy decreases significantly when they are applied to image domains other than those used for training, showing a lack of adaptability to unseen imaging data and limiting their applicability in non-specialized hospitals. Aim: To analyze the effect of domain adaptation on multiple sclerosis (MS) lesion segmentation, investigating how transferable a CNN model is when applied to other, unseen image domains. Methods: An automated lesion segmentation method based on an 11-layer CNN classifier was first fully trained using 35 T1-w and FLAIR scans from the MS lesion segmentation challenges (MICCAI 2008 and 2016). Domain adaptation was then independently evaluated on two different datasets composed of 60 and 61 T1-w and FLAIR images from a clinical hospital and from the public ISBI2015 challenge, respectively. For each unseen dataset, the same source model was fine-tuned by re-training only the last layers using a single image (we tested images with different lesion loads). The Dice similarity coefficient (DSC) between the resulting segmentations and the manual lesion annotations was compared with that of the same model when fully trained on the target domain and with that of other methods such as LST. Results: On the clinical dataset, the model fully trained with data from the target domain achieved DSC=0.53. When the source model was used without re-adaptation, the performance dropped to DSC=0.25, while adapting the source model using a single image yielded DSC between 0.30 and 0.48, depending on the lesion load of the image used. In all cases, the adapted models showed a significant increase in accuracy with respect to LST (DSC=0.29).
On the ISBI2015 challenge, our fully trained CNN method was ranked 3rd among 59 methods, showing human-like segmentation performance. Interestingly, adapted models trained with only one image still yielded remarkably higher performance than other state-of-the-art methods such as LST or LesionTOADS, and performed very similarly to other CNN models trained on larger numbers of images. Conclusions: Domain adaptation allows the use of pre-trained CNNs in unseen clinical settings. A manual delineation of the lesions in only one image is sufficient to obtain accurate automated lesion segmentation performance. Disclosure: S. Valverde: nothing to disclose. M. Salem: nothing to disclose. M. Cabezas: nothing to disclose. D. Pareto: has received speaking honoraria from Novartis and Biogen. J.C. Vilanova: nothing to disclose. Lluís Ramió-Torrentà: has received compensation for consulting services and speaking honoraria from Biogen, Novartis, Bayer, Merck, Sanofi, Genzyme, Teva Pharmaceutical Industries Ltd, Almirall, Mylan. A. Rovira serves on scientific advisory boards for Biogen Idec, Novartis, Genzyme, and OLEA Medical, and has received speaker honoraria from Bayer, Genzyme, Sanofi-Aventis, Bracco, Merck-Serono, Teva Pharmaceutical Industries Ltd, OLEA Medical, Stendhal, Novartis and Biogen Idec. A. Oliver: nothing to disclose. J. Salvi: nothing to disclose. X. Lladó: nothing to disclose.
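The fine-tuning strategy described above (freezing the early layers of a pre-trained network and re-training only the last ones on a single target-domain image) can be sketched in a minimal NumPy example. The two-layer toy network, learning rate, and synthetic data below are illustrative assumptions, not the authors' actual 11-layer architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained network: a frozen feature layer (W1)
# and a trainable output layer (W2), mimicking "re-train only the last layers".
W1 = rng.normal(size=(4, 8))      # frozen "source-domain" feature extractor
W2 = rng.normal(size=(8, 1))      # output layer to adapt on the target domain

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = np.tanh(X @ W1)           # frozen features
    return H, sigmoid(H @ W2)     # per-sample lesion probability

def loss(p, y):
    eps = 1e-9                    # numerical safety for the logs
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# A single "target-domain image": a handful of labelled voxels (illustrative).
X = rng.normal(size=(16, 4))
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)

W1_before, W2_before = W1.copy(), W2.copy()
loss_before = loss(forward(X)[1], y)

for _ in range(300):              # adapt the last layer only
    H, p = forward(X)
    grad_W2 = H.T @ (p - y) / len(X)   # cross-entropy gradient w.r.t. W2
    W2 -= 0.2 * grad_W2                # W1 is deliberately never updated

loss_after = loss(forward(X)[1], y)   # lower than loss_before; W1 unchanged
```

The design point is simply that the frozen layers retain the source-domain knowledge while only a small number of parameters are re-estimated from one annotated image.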

Research Authors
Sergi Valverde, Mostafa Salem, Mariano Cabezas, Deborah Pareto, Joan C. Vilanova, Lluís Ramió-Torrentà, Àlex Rovira, Joaquim Salvi, Arnau Oliver, Xavier Lladó
Research Journal
Multiple Sclerosis Journal - ECTRIMS (JCR CN IF:5.649 Q1(23/199)), Berlin, Germany
Research Pages
pp. 121-327
Research Rank
3
Research Vol
Vol. 24
Research Year
2018

<i>Lesion synthesis for extending MRI training datasets and improving automatic multiple sclerosis lesion segmentation</i>

Research Abstract
Background: Image synthesis is gaining attention in many domains, including medical imaging. For instance, the generation of synthetic lesions can be used as a solution to the lack of large datasets with manually annotated multiple sclerosis (MS) lesions, which is one of the main limitations in training robust and generalisable supervised machine learning algorithms. Objectives: To propose a fully convolutional neural network (CNN) model for MS lesion synthesis in magnetic resonance images. Materials and methods: T2-FLAIR and T1-w images from a dataset of 65 patients with a clinically isolated syndrome or early relapsing-remitting MS were used to train a CNN able to synthesise new lesions. The inputs of the CNN were processed images without lesions, while the original images with lesions were its outputs. To obtain the input images, we computed a white matter hyperintensity (WMH) mask along with several intensity level masks that encoded the intensity profiles of the WMH voxels. Then, the WMH mask was filled with intensities resembling white matter. The CNN architecture performing the image synthesis consisted of two encoders (one per modality) that learned the latent representation of the input modalities, and two decoders that allowed the generation of new lesions in both modalities. To evaluate the synthesis, we tested a state-of-the-art MS lesion segmentation approach (Valverde et al. 2017) on an in-house dataset and the public ISBI2015 challenge dataset, measuring the performance in different scenarios, such as using synthetic images for data augmentation. Results: On the in-house dataset, when several synthetic images were added to a single original image, lesion segmentation sensitivity increased from 41% to 50% and the positive predictive value (PPV) from 53% to 65%. Repeating the experiment on the ISBI2015 dataset, the sensitivity increased from 44% to 51% and the PPV from 76% to 78%.
With the inclusion of a few original images along with the synthetic data, we were able to raise the detection performance to that of the segmentation algorithm fully trained on the entire available training set, yielding performance comparable to that of a human expert rater. Conclusions: The proposed CNN was able to generate T1-w and T2-FLAIR images with synthetic MS lesions. Combining original images with synthetic ones from the same domain increased lesion segmentation accuracy while also reducing the number of manually annotated images required. Disclosure: M. Salem: nothing to disclose. S. Valverde: nothing to disclose. M. Cabezas: nothing to disclose. D. Pareto: has received speaking honoraria from Novartis and Biogen. A. Oliver: nothing to disclose. J. Salvi: nothing to disclose. A. Rovira serves on scientific advisory boards for Novartis, Sanofi-Genzyme, Icometrix, SyntheticMR, Bayer, Biogen and OLEA Medical, and has received speaker honoraria from Bayer, Sanofi-Genzyme, Bracco, Merck-Serono, Teva Pharmaceutical Industries Ltd, Novartis, Roche and Biogen. X. Lladó: nothing to disclose.
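The sensitivity and PPV figures reported above follow from simple detection counts; as a reminder of the definitions, here is a generic sketch (not the authors' evaluation code), with the example counts chosen only to mirror the reported values:

```python
# Sensitivity and positive predictive value (PPV) from detection counts.
# tp, fp, fn: true-positive, false-positive and false-negative counts
# (voxel-wise or lesion-wise, depending on the evaluation).
def sensitivity(tp: int, fn: int) -> float:
    return tp / (tp + fn)    # fraction of the real lesions that were detected

def ppv(tp: int, fp: int) -> float:
    return tp / (tp + fp)    # fraction of the detections that are real

# e.g. 50 of 100 true lesions found, with 27 spurious detections:
print(sensitivity(50, 50), round(ppv(50, 27), 2))  # 0.5 0.65
```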
Research Authors
<b>Mostafa Salem</b>, Sergi Valverde, Mariano Cabezas, Deborah Pareto, Arnau Oliver, Joaquim Salvi, Àlex Rovira, Xavier Lladó
Research Journal
Multiple Sclerosis Journal - ECTRIMS (JCR CN IF:5.649 Q1(23/199)), Stockholm, Sweden
Research Pages
pp. 463-463
Research Publisher
SAGE PUBLICATIONS LTD
Research Rank
3
Research Vol
Vol. 25
Research Year
2019

<i>Detecting the appearance of new T2-w multiple sclerosis lesions in longitudinal studies using deep convolutional neural networks</i>

Research Abstract
Background: Magnetic resonance imaging (MRI) has become one of the most important clinical tools for diagnosing and monitoring multiple sclerosis (MS). In particular, new T2 lesions on brain MRI are considered a good biomarker for monitoring and predicting treatment response. Therefore, automated and accurate methods for the detection of new T2 lesions are needed. Objectives: To propose a fully convolutional neural network (CNN) to detect new T2 lesions in longitudinal brain MRI images. Materials and methods: Multi-channel 3T brain MRI scans acquired one year apart were obtained in 60 MS patients, including transverse T2-FLAIR, PD-w, T2 and T1 images. 36 of those patients presented new T2 lesions, which were visually and semi-automatically annotated by expert neuroradiologists; the remaining 24 cases had no new lesions. All images were pre-processed and co-registered by affine registration. Afterwards, a fully convolutional network whose inputs were the baseline and follow-up images was trained to detect new MS lesions. The first part of the network was a U-Net block that automatically learned the deformation fields (DFs) which nonlinearly registered the baseline image to the follow-up space. The learned DFs, together with the baseline and follow-up images, were then fed to a second block, another U-Net that performed the final detection and segmentation of the new T2 lesions. Results: We performed a leave-one-out cross-validation using the 36 patients with new T2 lesions. The model obtained a true positive fraction (TPF) of 82.67%, a false positive fraction (FPF) of 15.06%, and mean detection and segmentation Dice similarity coefficients of 0.79 and 0.52, respectively. Our model had significantly better results (p < 0.05) than other state-of-the-art approaches such as Sweeney et al. (2013), Cabezas et al. (2016) and Salem et al. (2018). Regarding the 24 cases with no new T2 lesions, a model trained with all 36 cases produced only 2 false positive detections.
The proposed CNN model was also faster at testing time than other state-of-the-art methods, since there is no need to perform a non-rigid registration. Conclusions: The proposed CNN approach provides better results than other state-of-the-art methods in terms of both sensitivity and specificity. In addition, the end-to-end learning framework avoids the use of complex processes such as non-rigid registration and the definition of hand-crafted image features. Disclosure: M. Salem: nothing to disclose. S. Valverde: nothing to disclose. M. Cabezas: nothing to disclose. D. Pareto: has received speaking honoraria from Novartis and Biogen. A. Oliver: nothing to disclose. J. Salvi: nothing to disclose. A. Rovira serves on scientific advisory boards for Novartis, Sanofi-Genzyme, Icometrix, SyntheticMR, Bayer, Biogen and OLEA Medical, and has received speaker honoraria from Bayer, Sanofi-Genzyme, Bracco, Merck-Serono, Teva Pharmaceutical Industries Ltd, Novartis, Roche and Biogen. X. Lladó: nothing to disclose.
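The Dice similarity coefficient used throughout these evaluations compares two binary masks; a minimal NumPy version follows (an illustrative sketch, not the authors' code; the convention of returning 1.0 when both masks are empty is an assumption):

```python
import numpy as np

def dice(seg: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    denom = seg.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0  # both empty: treat as perfect

# Two overlapping toy masks:
a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True   # 4 voxels
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True   # 6 voxels, 4 shared
# dice(a, b) = 2*4 / (4+6) = 0.8
```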
Research Authors
<b>Mostafa Salem</b>, Sergi Valverde, Mariano Cabezas, Deborah Pareto, Arnau Oliver, Joaquim Salvi, Àlex Rovira, Xavier Lladó
Research Journal
Multiple Sclerosis Journal - ECTRIMS (JCR CN IF:5.649 Q1(23/199)), Stockholm, Sweden
Research Pages
pp. 462-463
Research Publisher
SAGE PUBLICATIONS LTD
Research Rank
3
Research Vol
Vol. 25
Research Year
2019

<i>Projector Calibration Using Passive Stereo and Triangulation</i>

Research Abstract
In the past, the 3D shape reconstruction process was based on passive stereo, which does not require direct control of any illumination source, relying instead entirely on ambient light. Nowadays, 3D shape reconstruction is based on active stereo, which replaces one camera with a projector. The projector plays an important part in solving the correspondence problem: it projects coded patterns onto the scanned object, and by capturing the deformed patterns with the cameras, the correspondences between image pixels and projector columns and rows can be found easily. To do so, the projector must be calibrated. In this work, the problem of projector calibration is solved by passive stereo and triangulation. Our system consists of two cameras, a projector, and a planar board. A checkerboard pattern is projected onto the board and then captured by the two cameras. Using triangulation, the corresponding 3D points of the projected pattern are computed. In this way, having the 2D projected points in the projector frame and their 3D correspondences (calculated using triangulation), the system can be calibrated using a standard camera calibration method. A data projector has been calibrated by this method and accurate results have been achieved.
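The triangulation step (recovering a 3D point from its projections in two calibrated cameras) can be sketched with the standard linear (DLT) method; the two normalized projection matrices and the test point below are illustrative assumptions, not the paper's actual calibration:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image coords."""
    # Each view contributes two rows of the homogeneous system A X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # null vector = homogeneous 3D point
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two normalized cameras separated by a unit baseline along x:
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])

X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
# X_hat recovers X_true up to numerical precision
```

In the paper's setup, the 3D points recovered this way for the projected checkerboard corners, paired with their known 2D positions in the projector image, feed a standard camera calibration routine with the projector treated as an inverse camera.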
Research Authors
Yousef B. Mahdy, Khaled F. Hussain, and <b>Mostafa Salem</b>
Research Journal
International Journal of Future Computer and Communication
Research Pages
pp. 385-390
Research Rank
1
Research Vol
vol. 2, no. 5
Research Website
<a href= "https://doi.org/10.7763/IJFCC.2013.V2.191"> <font color="blue"> DOI: 10.7763/IJFCC.2013.V2.191</font></a>
Research Year
2013

<i>One-shot domain adaptation in multiple sclerosis lesion segmentation using convolutional neural networks</i>

Research Abstract
In recent years, several convolutional neural network (CNN) methods have been proposed for the automated white matter lesion segmentation of multiple sclerosis (MS) patient images, due to their superior performance compared with those of other state-of-the-art methods. However, the accuracies of CNN methods tend to decrease significantly when evaluated on different image domains compared with those used for training, which demonstrates the lack of adaptability of CNNs to unseen imaging data. In this study, we analyzed the effect of intensity domain adaptation on our recently proposed CNN-based MS lesion segmentation method. Given a source model trained on two public MS datasets, we investigated the transferability of the CNN model when applied to other MRI scanners and protocols, evaluating the minimum number of annotated images needed from the new domain and the minimum number of layers needed to re-train to obtain comparable accuracy. Our analysis comprised MS patient data from both a clinical center and the public ISBI2015 challenge database, which permitted us to compare the domain adaptation capability of our model to that of other state-of-the-art methods. In both datasets, our results showed the effectiveness of the proposed model in adapting previously acquired knowledge to new image domains, even when a reduced number of training samples was available in the target dataset. For the ISBI2015 challenge, our one-shot domain adaptation model trained using only a single case showed a performance similar to that of other CNN methods that were fully trained using the entire available training set, yielding a comparable human expert rater performance. We believe that our experiments will encourage the MS community to incorporate its use in different clinical settings with reduced amounts of annotated data. 
This approach could be meaningful not only in terms of the accuracy in delineating MS lesions but also in the related reductions in time and economic costs derived from manual lesion labeling.
Research Authors
Sergi Valverde, <b>Mostafa Salem</b>, Mariano Cabezas, Deborah Pareto, Joan C. Vilanova, Lluís Ramió-Torrentà, Àlex Rovira, Joaquim Salvi, Arnau Oliver, Xavier Lladó
Research Journal
NeuroImage: Clinical [Quality index: JCR N IF 3.943, Q1(3/14)]
Research Pages
pp. 101638
Research Publisher
Elsevier
Research Rank
1
Research Vol
vol. 21
Research Website
<a href= "https://doi.org/10.1016/j.nicl.2018.101638"> <font color="blue"> DOI: 10.1016/j.nicl.2018.101638</font></a>
Research Year
2018

<i>A supervised framework with intensity subtraction and deformation field features for the detection of new T2-w lesions in multiple sclerosis</i>

Research Abstract
Introduction: Longitudinal magnetic resonance imaging (MRI) analysis has an important role in multiple sclerosis diagnosis and follow-up. The presence of new T2-w lesions on brain MRI scans is considered a prognostic and predictive biomarker for the disease. In this study, we propose a supervised approach for detecting new T2-w lesions using features from image intensities, subtraction values, and deformation fields (DFs). Methods: Multi-channel brain MRI scans acquired one year apart were obtained for 60 patients, 36 of them with new T2-w lesions. Images from both time points were preprocessed and co-registered using multi-resolution affine registration, allowing their subtraction. In particular, the DFs between both images were computed with the Demons non-rigid registration algorithm. Afterwards, a logistic regression model was trained with features from image intensities, subtraction values, and DF operators. We evaluated the performance of the model following a leave-one-out cross-validation scheme. Results: In terms of detection, we obtained a mean Dice similarity coefficient of 0.77 with a true positive rate of 74.30% and a false positive detection rate of 11.86%. In terms of segmentation, we obtained a mean Dice similarity coefficient of 0.56. The performance of our model was significantly higher than that of state-of-the-art methods. Conclusions: The performance of the proposed method shows the benefits of using DF operators as features to train a supervised learning model. Compared to other methods, the proposed model decreases the number of false positives while increasing the number of true positives, which is relevant for clinical settings.
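The per-voxel feature construction described above (intensities, subtraction values, and deformation-field operators) can be sketched as follows; the particular feature choices here (raw subtraction and DF magnitude) are illustrative assumptions, not the paper's full operator set:

```python
import numpy as np

def voxel_features(baseline, followup, df):
    """Stack per-voxel features: intensities at both time points,
    their subtraction, and the deformation-field (DF) magnitude.
    baseline, followup: 3D intensity volumes; df: (..., 3) displacement field."""
    sub = followup - baseline              # new lesions show up as positive change
    df_mag = np.linalg.norm(df, axis=-1)   # one simple DF operator
    feats = np.stack([baseline, followup, sub, df_mag], axis=-1)
    return feats.reshape(-1, 4)            # one feature row per voxel

# Toy 2x2x2 volumes with one "new lesion" voxel brightening between scans:
base = np.zeros((2, 2, 2))
foll = base.copy(); foll[0, 0, 0] = 1.0
df = np.zeros((2, 2, 2, 3)); df[0, 0, 0] = [0.3, 0.0, 0.4]  # local deformation
F = voxel_features(base, foll, df)
# F has shape (8, 4); the lesion voxel's row is [0.0, 1.0, 1.0, 0.5]
```

Rows like these are what a logistic regression classifier would be trained on, with the manual new-lesion mask supplying the per-voxel labels.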
Research Authors
<b>Mostafa Salem</b>, Mariano Cabezas, Sergi Valverde, Deborah Pareto, Arnau Oliver, Joaquim Salvi, Àlex Rovira, Xavier Lladó
Research Journal
NeuroImage: Clinical [Quality index: JCR N IF 3.943, Q1(3/14)]
Research Pages
pp. 607-615
Research Publisher
Elsevier
Research Rank
1
Research Vol
vol. 17
Research Website
<a href= "https://doi.org/10.1016/j.nicl.2017.11.015"> <font color="blue"> DOI: 10.1016/j.nicl.2017.11.015</font></a>
Research Year
2017

<i>A fully convolutional neural network for new T2-w lesion detection in multiple sclerosis</i>

Research Abstract

Introduction: Longitudinal magnetic resonance imaging (MRI) has an important role in multiple sclerosis (MS) diagnosis and follow-up. Specifically, the presence of new T2-w lesions on brain MR scans is considered a predictive biomarker for the disease. In this study, we propose a fully convolutional neural network (FCNN) to detect new T2-w lesions in longitudinal brain MR images. Methods: One year apart, multichannel brain MR scans (T1-w, T2-w, PD-w, and FLAIR) were obtained for 60 patients, 36 of them with new T2-w lesions. Modalities from both temporal points were preprocessed and linearly coregistered. Afterwards, an FCNN, whose inputs were from the baseline and follow-up images, was trained to detect new MS lesions. The first part of the network consisted of U-Net blocks that learned the deformation fields (DFs) and nonlinearly registered the baseline image to the follow-up image for each input modality. The learned DFs together with the baseline and follow-up images were then fed to the second part, another U-Net that performed the final detection and segmentation of new T2-w lesions. The model was trained end-to-end, simultaneously learning both the DFs and the new T2-w lesions, using a combined loss function. We evaluated the performance of the model following a leave-one-out cross-validation scheme. Results: In terms of the detection of new lesions, we obtained a mean Dice similarity coefficient of 0.83 with a true positive rate of 83.09% and a false positive detection rate of 9.36%. In terms of segmentation, we obtained a mean Dice similarity coefficient of 0.55. The performance of our model was significantly better compared to the state-of-the-art methods (p < 0.05). Conclusions: Our proposal shows the benefits of combining a learning-based registration network with a segmentation network. Compared to other methods, the proposed model decreases the number of false positives. 
During testing, the proposed model operates faster than the other two state-of-the-art methods based on the DF obtained by Demons.

Research Authors
Mostafa Salem, Sergi Valverde, Mariano Cabezas, Deborah Pareto, Arnau Oliver, Joaquim Salvi, Àlex Rovira, Xavier Lladó
Research Journal
NeuroImage: Clinical [Quality index: JCR N IF 3.943, Q1(3/14)]
Research Pages
102149
Research Publisher
Elsevier
Research Rank
1
Research Vol
Vol. 25
Research Website
https://www.sciencedirect.com/science/article/pii/S2213158219304954
Research Year
2020

<i>Multiple Sclerosis Lesion Synthesis in MRI Using an Encoder-Decoder U-NET</i>

Research Abstract

Magnetic resonance imaging (MRI) synthesis has attracted attention due to its various applications in the medical imaging domain. In this paper, we propose generating synthetic multiple sclerosis (MS) lesions in MRI images with the final aim of improving the performance of supervised machine learning algorithms, thereby avoiding the problem of the lack of available ground truth. We propose a two-input two-output fully convolutional neural network model for MS lesion synthesis in MRI images. The lesion information is encoded as discrete binary intensity level masks passed to the model and stacked with the input images. The model is trained end-to-end without the need to manually annotate the lesions in the training set. We then generate synthetic lesions on healthy images via registration of patient images, which are subsequently used for data augmentation to increase the performance of supervised MS lesion detection algorithms. Our pipeline is evaluated on MS patient data from an in-house clinical dataset and the public ISBI2015 challenge dataset. The evaluation is based on measuring the similarities between the real and the synthetic images, as well as on lesion detection performance, segmenting both the original and synthetic images individually using a state-of-the-art segmentation framework. We also demonstrate the use of synthetic MS lesions generated on healthy images as data augmentation. We analyze a scenario of limited training data (one-image training) to demonstrate the effect of the data augmentation on both datasets. Our results show the effectiveness of using synthetic MS lesion images. For the ISBI2015 challenge, our one-image model trained using only a single image plus the synthetic data augmentation strategy achieved performance similar to that of other CNN methods that were fully trained using the entire training set, comparable to that of a human expert rater.

Research Authors
Mostafa Salem, Mariano Cabezas, Sergi Valverde, Deborah Pareto, Arnau Oliver, Joaquim Salvi, Àlex Rovira, Xavier Lladó
Research Journal
IEEE Access [Quality index: JCR CSIS IF 4.098, Q1(23/155)]
Research Pages
pp. 25171-25184
Research Publisher
IEEE
Research Rank
1
Research Vol
vol. 7
Research Website
https://ieeexplore.ieee.org/document/8645628
Research Year
2019