<i>Projector Calibration Using Passive Stereo and Triangulation</i>

Research Abstract
In the past, the 3D shape reconstruction process was based on passive stereo, which does not require direct control of any illumination source, relying instead entirely on ambient light. Nowadays, 3D shape reconstruction is based on active stereo, which replaces one camera with a projector. The projector plays an important part in solving the correspondence problem: it projects coded patterns onto the scanned object, and by capturing the deformed pattern with cameras, the correspondences between image pixels and projector rows and columns can be found easily. To do that, the projector must be calibrated. In this work, the problem of projector calibration is solved by passive stereo and triangulation. Our system consists of two cameras, a projector, and a planar board. A checkerboard pattern is projected onto the board and then captured by the two cameras. Using triangulation, the corresponding 3D points of the projected pattern are computed. In this way, having the 2D projected points in the projector frame and their 3D correspondences (calculated using triangulation), the system can be calibrated using a standard camera calibration method. A data projector has been calibrated by this method, and accurate results have been achieved.
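The triangulation step described above (recovering the 3D points of the projected pattern from two calibrated cameras) can be sketched with the standard linear (DLT) method. This is not the authors' implementation; the projection matrices and points below are toy values chosen for illustration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point X such that
    x1 ~ P1 @ X and x2 ~ P2 @ X, given 2D observations in two cameras."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two toy cameras: identity pose and a unit baseline along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 5.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_hat = triangulate(P1, P2, x1, x2)
```

With noise-free observations, the recovered point matches the true 3D point to numerical precision; the resulting 2D-3D pairs are then fed to a standard camera calibration routine.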
Research Authors
Yousef B. Mahdy, Khaled F. Hussain, and <b>Mostafa Salem</b>
Research Journal
International Journal of Future Computer and Communication
Research Pages
pp. 385-390
Research Rank
1
Research Vol
vol. 2, no. 5
Research Website
<a href="https://doi.org/10.7763/IJFCC.2013.V2.191"><font color="blue">DOI: 10.7763/IJFCC.2013.V2.191</font></a>
Research Year
2013

<i>One-shot domain adaptation in multiple sclerosis lesion segmentation using convolutional neural networks</i>

Research Abstract
In recent years, several convolutional neural network (CNN) methods have been proposed for the automated white matter lesion segmentation of multiple sclerosis (MS) patient images, due to their superior performance compared with those of other state-of-the-art methods. However, the accuracies of CNN methods tend to decrease significantly when evaluated on different image domains compared with those used for training, which demonstrates the lack of adaptability of CNNs to unseen imaging data. In this study, we analyzed the effect of intensity domain adaptation on our recently proposed CNN-based MS lesion segmentation method. Given a source model trained on two public MS datasets, we investigated the transferability of the CNN model when applied to other MRI scanners and protocols, evaluating the minimum number of annotated images needed from the new domain and the minimum number of layers needed to re-train to obtain comparable accuracy. Our analysis comprised MS patient data from both a clinical center and the public ISBI2015 challenge database, which permitted us to compare the domain adaptation capability of our model to that of other state-of-the-art methods. In both datasets, our results showed the effectiveness of the proposed model in adapting previously acquired knowledge to new image domains, even when a reduced number of training samples was available in the target dataset. For the ISBI2015 challenge, our one-shot domain adaptation model trained using only a single case showed a performance similar to that of other CNN methods that were fully trained using the entire available training set, yielding performance comparable to that of a human expert rater. We believe that our experiments will encourage the MS community to adopt this approach in different clinical settings with reduced amounts of annotated data.
This approach could be meaningful not only in terms of the accuracy in delineating MS lesions but also in the related reductions in time and economic costs derived from manual lesion labeling.
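The layer-wise re-training idea behind one-shot domain adaptation can be illustrated with a toy numpy sketch: a frozen pretrained feature layer and a single output layer re-fitted on one small annotated sample from the new domain. All names, sizes, and the two-layer model here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Source" model: a frozen feature layer W1 plus a trainable output layer w2.
W1 = rng.normal(size=(5, 8))   # pretrained weights, kept frozen
w2 = np.zeros(8)               # output layer, re-trained on the target domain

def features(X):
    return np.tanh(X @ W1)     # frozen representation

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One "annotated case" from the new domain (a handful of labeled voxels).
X_new = rng.normal(size=(40, 5))
y_new = (X_new[:, 0] + X_new[:, 1] > 0).astype(float)

# Re-train only the last layer: plain gradient descent on the logistic loss.
for _ in range(500):
    p = sigmoid(features(X_new) @ w2)
    grad = features(X_new).T @ (p - y_new) / len(y_new)
    w2 -= 0.5 * grad

acc = np.mean((sigmoid(features(X_new) @ w2) > 0.5) == (y_new > 0.5))
```

Only `w2` changes during adaptation; the frozen `W1` plays the role of the early convolutional layers whose learned representation transfers across domains.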
Research Authors
Sergi Valverde, <b>Mostafa Salem</b>, Mariano Cabezas, Deborah Pareto, Joan C. Vilanova, Lluís Ramió-Torrentà, Àlex Rovira, Joaquim Salvi, Arnau Oliver, Xavier Lladó
Research Journal
NeuroImage: Clinical [Quality index: JCR N IF 3.943, Q1 (3/14)]
Research Pages
pp. 101638
Research Publisher
Elsevier
Research Rank
1
Research Vol
vol. 21
Research Website
<a href="https://doi.org/10.1016/j.nicl.2018.101638"><font color="blue">DOI: 10.1016/j.nicl.2018.101638</font></a>
Research Year
2018

<i>A supervised framework with intensity subtraction and deformation field features for the detection of new T2-w lesions in multiple sclerosis</i>

Research Abstract
Introduction: Longitudinal magnetic resonance imaging (MRI) analysis has an important role in multiple sclerosis diagnosis and follow-up. The presence of new T2-w lesions on brain MRI scans is considered a prognostic and predictive biomarker for the disease. In this study, we propose a supervised approach for detecting new T2-w lesions using features from image intensities, subtraction values, and deformation fields (DF). Methods: Multi-channel brain MRI scans, acquired one year apart, were obtained for 60 patients, 36 of them with new T2-w lesions. Images from both temporal points were preprocessed and co-registered using multi-resolution affine registration, allowing their subtraction. In particular, the DFs between both images were computed with the Demons non-rigid registration algorithm. Afterwards, a logistic regression model was trained with features from image intensities, subtraction values, and DF operators. We evaluated the performance of the model following a leave-one-out cross-validation scheme. Results: In terms of detection, we obtained a mean Dice similarity coefficient of 0.77 with a true-positive rate of 74.30% and a false-positive detection rate of 11.86%. In terms of segmentation, we obtained a mean Dice similarity coefficient of 0.56. The performance of our model was significantly higher than that of state-of-the-art methods. Conclusions: The performance of the proposed method shows the benefits of using DF operators as features to train a supervised learning model. Compared to other methods, the proposed model decreases the number of false positives while increasing the number of true positives, which is relevant for clinical settings.
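The feature construction described in the Methods (intensity subtraction plus deformation-field operators) can be sketched as follows. The toy images, the specific DF operators chosen (divergence and magnitude), and all sizes are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def df_operators(df):
    """Simple deformation-field (DF) features on a 2D field df[..., 2]:
    divergence and magnitude, computed with finite differences."""
    dux_dx = np.gradient(df[..., 0], axis=1)
    duy_dy = np.gradient(df[..., 1], axis=0)
    divergence = dux_dx + duy_dy
    magnitude = np.linalg.norm(df, axis=-1)
    return divergence, magnitude

# Toy data: baseline/follow-up "images" and a DF that expands around a new lesion.
baseline = np.zeros((32, 32))
followup = baseline.copy()
followup[14:18, 14:18] = 1.0           # the new lesion
subtraction = followup - baseline      # intensity-subtraction feature

yy, xx = np.mgrid[0:32, 0:32]
df = np.stack([(xx - 16) * 0.05, (yy - 16) * 0.05], axis=-1)
div, mag = df_operators(df)

# Per-voxel feature vector: [subtraction, divergence, magnitude],
# ready to feed a logistic regression classifier.
X = np.stack([subtraction, div, mag], axis=-1).reshape(-1, 3)
```

Positive divergence flags voxels where the registration locally "grows" tissue between time points, which is exactly where new lesions tend to appear.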
Research Authors
<b>Mostafa Salem</b>, Mariano Cabezas, Sergi Valverde, Deborah Pareto, Arnau Oliver, Joaquim Salvi, Àlex Rovira, Xavier Lladó
Research Journal
NeuroImage: Clinical [Quality index: JCR N IF 3.943, Q1 (3/14)]
Research Pages
pp. 607-615
Research Publisher
Elsevier
Research Rank
1
Research Vol
vol. 17
Research Website
<a href="https://doi.org/10.1016/j.nicl.2017.11.015"><font color="blue">DOI: 10.1016/j.nicl.2017.11.015</font></a>
Research Year
2017

<i>A fully convolutional neural network for new T2-w lesion detection in multiple sclerosis</i>

Research Abstract

Introduction: Longitudinal magnetic resonance imaging (MRI) has an important role in multiple sclerosis (MS) diagnosis and follow-up. Specifically, the presence of new T2-w lesions on brain MR scans is considered a predictive biomarker for the disease. In this study, we propose a fully convolutional neural network (FCNN) to detect new T2-w lesions in longitudinal brain MR images. Methods: One year apart, multichannel brain MR scans (T1-w, T2-w, PD-w, and FLAIR) were obtained for 60 patients, 36 of them with new T2-w lesions. Modalities from both temporal points were preprocessed and linearly coregistered. Afterwards, an FCNN, whose inputs were from the baseline and follow-up images, was trained to detect new MS lesions. The first part of the network consisted of U-Net blocks that learned the deformation fields (DFs) and nonlinearly registered the baseline image to the follow-up image for each input modality. The learned DFs together with the baseline and follow-up images were then fed to the second part, another U-Net that performed the final detection and segmentation of new T2-w lesions. The model was trained end-to-end, simultaneously learning both the DFs and the new T2-w lesions, using a combined loss function. We evaluated the performance of the model following a leave-one-out cross-validation scheme. Results: In terms of the detection of new lesions, we obtained a mean Dice similarity coefficient of 0.83 with a true positive rate of 83.09% and a false positive detection rate of 9.36%. In terms of segmentation, we obtained a mean Dice similarity coefficient of 0.55. The performance of our model was significantly better compared to the state-of-the-art methods (p < 0.05). Conclusions: Our proposal shows the benefits of combining a learning-based registration network with a segmentation network. Compared to other methods, the proposed model decreases the number of false positives. 
During testing, the proposed model operates faster than the other two state-of-the-art methods based on the DF obtained by Demons.
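The role of the learned deformation field, resampling the baseline image into follow-up space before detection, can be illustrated with a minimal warping sketch. The nearest-neighbour sampling and toy images below are illustrative only; the network uses a differentiable warping layer, not this code.

```python
import numpy as np

def warp(image, df):
    """Warp a 2D image with a dense deformation field df[..., 2] = (dy, dx),
    using nearest-neighbour sampling: for each output pixel, read the input
    at the displaced location (the analogue of resampling the baseline
    into follow-up space)."""
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(yy + df[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xx + df[..., 1]).astype(int), 0, w - 1)
    return image[src_y, src_x]

img = np.zeros((8, 8))
img[2, 2] = 1.0                       # a bright "lesion" voxel
shift = np.zeros((8, 8, 2))
shift[..., 1] = -1.0                  # each output pixel samples one column left
warped = warp(img, shift)             # the bright voxel appears shifted right
```

In the paper's model, the first U-Net predicts `df` per modality and the warped baseline plus the follow-up image feed the second, detection U-Net, with both trained end-to-end under a combined loss.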

Research Authors
<b>Mostafa Salem</b>, Sergi Valverde, Mariano Cabezas, Deborah Pareto, Arnau Oliver, Joaquim Salvi, Àlex Rovira, Xavier Lladó
Research Journal
NeuroImage: Clinical [Quality index: JCR N IF 3.943, Q1 (3/14)]
Research Pages
pp. 102149
Research Publisher
Elsevier
Research Rank
1
Research Vol
vol. 25
Research Website
https://www.sciencedirect.com/science/article/pii/S2213158219304954
Research Year
2020

<i>Multiple Sclerosis Lesion Synthesis in MRI Using an Encoder-Decoder U-NET</i>

Research Abstract

Magnetic resonance imaging (MRI) synthesis has attracted attention due to its various applications in the medical imaging domain. In this paper, we propose generating synthetic multiple sclerosis (MS) lesions on MRI images with the final aim of improving the performance of supervised machine learning algorithms, thereby avoiding the problem of the lack of available ground truth. We propose a two-input two-output fully convolutional neural network model for MS lesion synthesis in MRI images. The lesion information is encoded as discrete binary intensity level masks passed to the model and stacked with the input images. The model is trained end-to-end without the need for manually annotating the lesions in the training set. We then perform the generation of synthetic lesions on healthy images via registration of patient images, which are subsequently used for data augmentation to increase the performance of supervised MS lesion detection algorithms. Our pipeline is evaluated on MS patient data from an in-house clinical dataset and the public ISBI2015 challenge dataset. The evaluation is based on measuring the similarities between the real and the synthetic images as well as on lesion detection performance, segmenting both the original and synthetic images individually using a state-of-the-art segmentation framework. We also demonstrate the use of synthetic MS lesions generated on healthy images as data augmentation. We analyze a scenario of limited training data (one-image training) to demonstrate the effect of the data augmentation on both datasets. Our results clearly show the effectiveness of using synthetic MS lesion images. For the ISBI2015 challenge, our one-image model trained using only a single image plus the synthetic data augmentation strategy showed a performance similar to that of other CNN methods that were fully trained using the entire training set, yielding performance comparable to that of a human expert rater.
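The lesion encoding described above (discrete binary intensity-level masks stacked with the input images) can be sketched roughly as follows; the uniform binning scheme, names, and sizes are illustrative assumptions, not the paper's exact encoding.

```python
import numpy as np

def intensity_level_masks(image, lesion_mask, levels):
    """Encode lesion intensity information as a stack of binary masks,
    one per discrete intensity level, restricted to the lesion area.
    (A sketch of 'discrete binary intensity level masks'.)"""
    edges = np.linspace(image.min(), image.max(), levels + 1)
    bins = np.clip(np.digitize(image, edges) - 1, 0, levels - 1)
    return np.stack([(bins == k) & lesion_mask for k in range(levels)], axis=0)

# Toy image with a 2x2 "lesion" region.
img = np.linspace(0.0, 1.0, 16).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
stack = intensity_level_masks(img, mask, levels=4)   # shape (levels, H, W)
```

Each lesion voxel lands in exactly one level mask, so the stack partitions the lesion area; channel-stacking these masks with the input image tells the generator both where to synthesize and roughly how bright each voxel should be.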

Research Authors
<b>Mostafa Salem</b>, Mariano Cabezas, Sergi Valverde, Deborah Pareto, Arnau Oliver, Joaquim Salvi, Àlex Rovira, Xavier Lladó
Research Journal
IEEE Access [Quality index: JCR CSIS IF 4.098, Q1(23/155)]
Research Pages
pp. 25171-25184
Research Publisher
IEEE
Research Rank
1
Research Vol
vol. 7
Research Website
https://ieeexplore.ieee.org/document/8645628
Research Year
2019

<i>A Hadoop Extension for Analysing Spatiotemporally Referenced Events</i>

Research Abstract
A spatiotemporally referenced event is a tuple that contains both a spatial reference and a temporal reference. The spatial reference is typically a point coordinate, and the temporal reference is a timestamp. The event payload can be the reading of a sensor (IoT systems), a user comment (geo-tagged social networks), a news article (GDELT), etc. Spatiotemporal event datasets are ever growing, and the requirements for their processing go beyond traditional client-server GIS architectures; Hadoop-like architectures should be used instead. Yet Hadoop does not provide the types and operations necessary for processing such datasets. In this paper, we propose a Hadoop extension (in fact, a SpatialHadoop extension) capable of performing analytics on big spatiotemporally referenced event datasets. The extension includes data types and operators that are integrated into the Hadoop core, to be used as natives. We further optimize querying by means of a spatiotemporal index. Experiments on the GDELT event dataset demonstrate the utility of the proposed extension.
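The event model described above can be sketched as a small data type plus the map-side predicate of a spatiotemporal range query. The class and function names are illustrative, not the extension's actual Hadoop-native types.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class STEvent:
    """A spatiotemporally referenced event: point coordinate, timestamp, payload."""
    x: float
    y: float
    t: datetime
    payload: str

def st_range_filter(events, bbox, t0, t1):
    """The map-side predicate of a spatiotemporal range query:
    keep events inside bbox = (xmin, ymin, xmax, ymax) during [t0, t1)."""
    xmin, ymin, xmax, ymax = bbox
    return [e for e in events
            if xmin <= e.x <= xmax and ymin <= e.y <= ymax and t0 <= e.t < t1]

events = [
    STEvent(31.2, 27.2, datetime(2017, 3, 1), "news article"),
    STEvent(2.35, 48.85, datetime(2017, 6, 1), "sensor reading"),
]
hits = st_range_filter(events, (30.0, 25.0, 35.0, 30.0),
                       datetime(2017, 1, 1), datetime(2018, 1, 1))
```

A spatiotemporal index (as proposed in the paper) lets the framework prune whole file splits whose bounding box and time interval miss the query, so this predicate runs only on candidate partitions.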
Research Authors
Mohamed S Bakli, Mahmoud A Sakr, Taysir Hassan A Soliman
Research Department
Research Journal
International Conference on Advanced Intelligent Systems and Informatics.
Research Pages
pp. 905-914
Research Publisher
Springer International Publishing
Research Rank
3
Research Vol
vol. 639
Research Website
https://link.springer.com/chapter/10.1007/978-3-319-64861-3_85
Research Year
2017

<i>A spatiotemporal algebra in Hadoop for moving objects</i>

Research Abstract
Spatiotemporal data represent the real-world objects that move in geographic space over time. The enormous numbers of mobile sensors and location tracking devices continuously produce massive amounts of such data. This leads to the need for scalable spatiotemporal data management systems. Such systems shall be capable of representing spatiotemporal data in persistent storage and in memory. They shall also provide a range of query processing operators that may scale out in a cloud setting. Currently, little research has been conducted to meet this requirement. This paper proposes a Hadoop extension with a spatiotemporal algebra. The algebra consists of moving object types added as Hadoop native types, and operators on top of them. The Hadoop file system has been extended to support parameter passing for files that contain spatiotemporal data, and for operators that can be unary or binary. Both the types and operators are accessible for the MapReduce jobs. Such an extension allows users to write Hadoop programs that can perform spatiotemporal analysis. Certain queries may call more than one operator for different jobs and keep these operators running in parallel. This paper describes the design and implementation of this algebra, and evaluates it using a benchmark that is specific to moving object databases.
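The core of such an algebra, a moving-point type with a temporal "at instant"-style accessor, can be sketched as follows. The class name and the piecewise-linear interpolation scheme are illustrative, not the extension's actual Hadoop-native types.

```python
from bisect import bisect_right

class MPoint:
    """A minimal 'moving point': timestamped (t, x, y) samples with
    linear interpolation between them, the basic building block of a
    moving-object data type."""
    def __init__(self, samples):
        self.samples = sorted(samples)   # list of (t, x, y) tuples

    def at(self, t):
        """Position at time t, linearly interpolated between samples."""
        ts = [s[0] for s in self.samples]
        i = bisect_right(ts, t)
        if i == 0 or i == len(ts):
            raise ValueError("t outside the definition time of the moving point")
        (t0, x0, y0), (t1, x1, y1) = self.samples[i - 1], self.samples[i]
        a = (t - t0) / (t1 - t0)
        return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))

trip = MPoint([(0.0, 0.0, 0.0), (10.0, 10.0, 0.0)])
pos = trip.at(5.0)   # halfway along the single segment
```

Serializing such units into the distributed file system and exposing operators like this accessor to MapReduce jobs is what allows spatiotemporal queries to scale out across the cluster.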
Research Authors
Mohamed S. Bakli, Mahmoud A. Sakr, Taysir Hassan A. Soliman
Research Department
Research Journal
Geo-spatial Information Science
Research Pages
pp. 102-114
Research Publisher
Taylor & Francis
Research Rank
1
Research Vol
vol. 21, no. 2
Research Website
https://www.tandfonline.com/doi/full/10.1080/10095020.2017.1413798
Research Year
2018