
Image Enhancement using E-spline Functions

Research Abstract
Exponential spline polynomials (E-splines) represent the best smooth transition between continuous and discrete domains. Because they are constructed from convolutions of exponential segments, there are many degrees of freedom for optimally choosing the E-spline most convenient for a specific application. In this paper, the parameters of these E-splines are optimally chosen to enhance the performance of image de-noising as well as image zooming schemes. The proposed technique is based on minimizing the total-variation function of the detail coefficients of the E-spline based wavelet decomposition. In image de-noising schemes, apart from E-spline parameter estimation, the thresholding levels of the detail coefficients are also optimally chosen. In zooming applications, the quality of the interpolated images is further improved and sharpened by applying the ICA technique to remove any dependency. Illustrative examples verify the image enhancement of the proposed E-spline scheme compared with existing approaches.
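A rough numpy sketch of the two operations the abstract combines, soft-thresholding of detail coefficients and a total-variation cost for choosing the threshold. This is illustrative only: a Haar split stands in for the paper's E-spline wavelet decomposition, and the grid search over thresholds is an assumption, not the authors' optimization.

```python
import numpy as np

def haar_split(x):
    """One-level Haar analysis: approximation and detail bands.
    (Stand-in for the E-spline wavelet decomposition in the paper.)"""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)  # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)  # high-pass (detail)
    return a, d

def soft_threshold(d, t):
    """Shrink detail coefficients toward zero by threshold t."""
    return np.sign(d) * np.maximum(np.abs(d) - t, 0.0)

def total_variation(d):
    """Total variation of a 1-D coefficient sequence: sum of |d[i+1] - d[i]|."""
    return np.sum(np.abs(np.diff(d)))

# Toy example: pick the threshold that minimizes the detail-band TV cost.
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 4 * np.pi, 64))
noisy = clean + 0.1 * rng.standard_normal(64)
a, d = haar_split(noisy)
best_t = min(np.linspace(0.0, 0.5, 11),
             key=lambda t: total_variation(soft_threshold(d, t)))
```

The same TV criterion would, in the paper's setting, also drive the choice of the E-spline parameters themselves.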
Research Authors
M. F. Fahmy, G. Fahmy and O. F. Fahmy
Research Department
Research Journal
IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Athens, Dec. 2013
Research Member
Research Pages
NULL
Research Publisher
NULL
Research Rank
3
Research Vol
NULL
Research Website
NULL
Research Year
2013

A Lifting Based System for Compression and Classification trade off in the JPEG2000 framework

Research Abstract
In this paper, we propose a novel design for a lifting based wavelet system that achieves the optimal trade-off between compression and classification performance. In addition, it can also achieve superior compression performance compared to existing wavelet kernels. The proposed system is based on bi-orthogonal filters and can operate in a scalable compression framework. In the proposed system, the trade-off point between compression and classification is determined by the system; however, the user can also fine-tune the relative performance using two controllers (one for compression and one for classification). Extensive simulations have been performed to demonstrate the superior compression and/or classification performance of our system in the context of the recent image compression standard, namely JPEG2000. Our simulation results show that the lifting based kernels generated from the proposed system are capable of achieving superior compression performance compared to the default kernels adopted in the JPEG2000 standard (with a classification rate of 70%). The generated kernels can also achieve a comparable compression quality with the JPEG2000 kernels while also providing a 99% classification rate. In other words, the proposed lifting based system achieves the trade-off between compression and classification performance in the wavelet domain.
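For context on the lifting machinery the system builds on: the abstract's kernels are the authors' own optimized designs, but the predict/update structure is the same as in the standard JPEG2000 reversible 5/3 kernel, sketched here in numpy (even-length integer signals, symmetric boundary extension assumed).

```python
import numpy as np

def lift53_forward(x):
    """One level of the JPEG2000 reversible 5/3 wavelet as lifting steps
    (predict, then update). Expects an even-length integer signal."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    right = np.append(even[1:], even[-1])    # symmetric extension
    d = odd - ((even + right) >> 1)          # predict: detail band
    left = np.append(d[0], d[:-1])           # symmetric extension
    s = even + ((left + d + 2) >> 2)         # update: smooth band
    return s, d

def lift53_inverse(s, d):
    """Undo the lifting steps in reverse order: exact integer inverse."""
    left = np.append(d[0], d[:-1])
    even = s - ((left + d + 2) >> 2)
    right = np.append(even[1:], even[-1])
    odd = d + ((even + right) >> 1)
    x = np.empty(len(s) + len(d), dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step is individually invertible, any choice of predict/update filters (including classification-aware ones like the paper's) yields a perfectly reversible transform.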
Research Authors
G. Fahmy, S. Panchanathan
Research Department
Research Journal
Journal of Visual Communication and Image Representation, vol. 15, issue 2, pp. 145-162, June 2004
Research Pages
NULL
Research Publisher
NULL
Research Rank
1
Research Vol
NULL
Research Website
NULL
Research Year
2004

Towards an Automated Dental Identification System (ADIS)

Research Abstract
Forensic odontology has long been carried out by forensic experts of law enforcement agencies for postmortem identification. We address the problem of developing an automated system for postmortem identification using dental records (dental radiographs). This automated dental identification system (ADIS) can be used by law enforcement agencies as well as military agencies throughout the United States to locate missing persons using databases of dental X-rays of human remains and dental scans of missing or wanted persons. Currently, this search and identification process is carried out manually, which makes it very time-consuming in mass disasters. We propose a novel architecture for ADIS, define the functionality of its components, and describe the techniques used in realizing these components. We also present the performance of each of these components using a database of dental images.
Research Authors
G. Fahmy, D. Nassar, E. Haj-Said, H. Chen, O. Nomir, J. Zhou, R. Howell, H. Ammar, M. Abdel-Mottaleb and A. Jain
Research Department
Research Journal
Journal of Electronic Imaging, vol. 14, issue 4, 043018, December 2005
Research Pages
NULL
Research Publisher
NULL
Research Rank
1
Research Vol
NULL
Research Website
NULL
Research Year
2005

Teeth Segmentation in Digitized Dental X-Ray Films using Mathematical Morphology

Research Abstract
Automating the process of postmortem identification of individuals using dental records is receiving increased attention. Teeth segmentation from dental radiographic films is an essential step for achieving highly automated postmortem identification. In this paper, we offer a mathematical morphology approach to the problem of teeth segmentation. We also propose a grayscale contrast stretching transformation to improve the performance of teeth segmentation. We compare and contrast our approach with other approaches proposed in the literature on both theoretical and empirical grounds. The results show that, in addition to its capability of handling bitewing and periapical dental radiographic views, our approach exhibits the lowest failure rate among all approaches studied.
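A numpy-only sketch of the two ingredients the abstract names: flat grayscale morphology (here a white top-hat, which keeps bright structures such as teeth that are narrower than the structuring element) and a sigmoid-style contrast stretch. The specific stretching transform and structuring elements are assumptions for illustration, not the paper's own.

```python
import numpy as np

def erode(img, k=3):
    """Flat grayscale erosion with a k x k square structuring element."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.full_like(img, np.inf, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out = np.minimum(out, padded[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def dilate(img, k=3):
    """Flat grayscale dilation with a k x k square structuring element."""
    p = k // 2
    padded = np.pad(img, p, mode='edge')
    out = np.full_like(img, -np.inf, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out = np.maximum(out, padded[dy:dy + img.shape[0], dx:dx + img.shape[1]])
    return out

def top_hat(img, k=3):
    """White top-hat: image minus its morphological opening."""
    return img - dilate(erode(img, k), k)

def stretch(img, m=0.5, e=4):
    """Sigmoid-style grayscale contrast stretching on intensities in [0, 1];
    m is the midpoint, e controls steepness."""
    return 1.0 / (1.0 + (m / np.maximum(img, 1e-9)) ** e)
```

In a segmentation pipeline, the stretch would be applied first to separate tooth and background intensity modes, with the morphology isolating individual bright tooth regions.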
Research Authors
Eyad Haj Said, Diaa M. Nassar, G. Fahmy and Hany Ammar
Research Department
Research Journal
IEEE Transactions on Information Forensics and Security, vol. 1, pp. 178-189, June 2006
Research Pages
NULL
Research Publisher
NULL
Research Rank
1
Research Vol
NULL
Research Website
NULL
Research Year
2006

Texture Characterization for Joint Compression and Classification Based on Human Perception

Research Abstract
Today’s multimedia applications demand sophisticated compression and classification techniques in order to store, transmit, and retrieve audio-visual information efficiently. Over the last decade, perceptually based image compression methods have been gaining importance. These methods take into account the abilities (and the limitations) of human visual perception (HVP) when performing compression. The upcoming MPEG-7 standard also addresses the need for succinct classification and indexing of visual content for efficient retrieval. However, there has been no research that has attempted to exploit the characteristics of the human visual system to perform both compression and classification jointly. One area of HVP that has unexplored potential for joint compression and classification is spatial frequency perception. Spatial frequency content that is perceived by humans can be characterized in terms of three parameters, which are: 1) magnitude; 2) phase; and 3) orientation. While the magnitude of spatial frequency content has been exploited in several existing image compression techniques, the novel contribution of this paper is its focus on the use of phase coherence for joint compression and classification in the wavelet domain. Specifically, this paper describes a human visual system-based method for measuring the degree to which an image contains coherent (perceptible) phase information, and then exploits that information to provide joint compression and classification. Simulation results that demonstrate the efficiency of this method are presented.
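The three spatial-frequency parameters the abstract lists (magnitude, phase, orientation) fall directly out of a Fourier decomposition, and phase coherence can be illustrated with a crude proxy: at a sharp feature the harmonics' phases align, while incoherent content scatters them. The sketch below is a simplified Fourier-domain illustration, not the paper's wavelet-domain, HVS-calibrated measure.

```python
import numpy as np

def spatial_frequency_params(patch):
    """Split a patch's spectrum into the three perceptual parameters:
    magnitude, phase, and orientation of each frequency component."""
    F = np.fft.fft2(patch)
    mag, phase = np.abs(F), np.angle(F)
    fy, fx = np.meshgrid(np.fft.fftfreq(patch.shape[0]),
                         np.fft.fftfreq(patch.shape[1]), indexing='ij')
    orient = np.arctan2(fy, fx)   # direction of each (fx, fy) component
    return mag, phase, orient

def phase_alignment(x):
    """Crude 1-D phase-coherence proxy: resultant length of the unit
    Fourier phase vectors (DC excluded). 1.0 means fully aligned phases,
    as at a sharp edge; incoherent phases give values near 0."""
    F = np.fft.fft(np.asarray(x, dtype=float))[1:]
    return float(np.abs(np.mean(np.exp(1j * np.angle(F)))))
```

A coder guided by such a measure could spend bits where phase is coherent (perceptible structure) and quantize more aggressively where it is not.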
Research Authors
G. Fahmy, J. Black and S. Panchanathan
Research Department
Research Journal
IEEE Transactions on Image Processing, vol. 15, pp. 1389-1396, June 2006
Research Pages
NULL
Research Publisher
NULL
Research Rank
1
Research Vol
NULL
Research Website
NULL
Research Year
2006

Nonblind and Quasiblind Natural Preserve Transform Watermarking

Research Abstract
This paper describes a new image watermarking technique based on the Natural Preserving Transform (NPT). The proposed watermarking scheme uses NPT to encode a grayscale watermarking logo image or text into a host image at any location. NPT brings a unique feature, which is uniformly distributing the logo across the host image in an imperceptible manner. The contribution of this paper lies in presenting two efficient non-blind and quasi-blind watermark extraction techniques. In the quasi-blind case, the extraction algorithm requires little information about the original image beyond what is already conveyed by the watermarked image. Moreover, the proposed scheme does not introduce visual quality degradation into the host image while still being able to extract a logo with a relatively large amount of data. The performance and robustness of the proposed technique are tested by applying common image-processing operations such as cropping, noise degradation, and compression. A quantitative measure is proposed to assess performance objectively; under this measure, the proposed technique outperforms most of the recent techniques in most cases. We also implemented the proposed technique on a hardware platform, a digital signal processor (DSK 6713). Results are illustrated to show the effectiveness of the proposed technique in different noisy environments.
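The key idea, spreading a logo uniformly over the host so no pixel carries it visibly, can be illustrated with a simplified orthonormal-transform analogue (this is not the NPT itself, whose exact construction is in the paper): place the logo in the DCT domain of the host, so every pixel receives a small share, and recover it non-blindly by subtracting the original and transforming back.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are basis vectors)."""
    k = np.arange(n)[:, None]
    C = np.cos(np.pi * (2 * np.arange(n)[None, :] + 1) * k / (2 * n))
    C[0] *= 1 / np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def embed(host, logo, alpha=0.05):
    """Spread the logo over the whole square host by placing it in the
    DCT domain: each pixel then carries a share of every logo value."""
    C = dct_matrix(host.shape[0])
    return host + alpha * (C.T @ logo @ C)

def extract(marked, host, alpha=0.05):
    """Non-blind extraction: subtract the original, transform back."""
    C = dct_matrix(host.shape[0])
    return (C @ (marked - host) @ C.T) / alpha
```

The quasi-blind variant in the paper removes the need for the full original `host`; here, by contrast, the original is required, which is what makes this sketch strictly non-blind.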
Research Authors
G. Fahmy, M. F. Fahmy and U. S. Mohamed
Research Department
Research Journal
EURASIP Journal on Advances in Signal Processing, vol. 2010, Article ID 452548
Research Member
Research Pages
NULL
Research Publisher
NULL
Research Rank
1
Research Vol
NULL
Research Website
NULL
Research Year
2010

Modified Efficient Fast Multiplication-Free Integer Transformation for the 2-D DCT H.265 Standard

Research Abstract
In this paper, an efficient one-dimensional (1-D) fast integer transform algorithm for the DCT matrix of the H.265 standard is proposed. Based on the symmetric property of the integer transform matrix and on matrix operations, along with a dyadic symmetry modification of the standard matrix, the efficient fast 1-D integer transform algorithm is developed. The computational complexity of the proposed fast integer transform is therefore smaller than that of the direct method. In addition to reducing computational complexity, the proposed algorithm improves transformation quality. With lower complexity and better transformation quality, the proposed fast algorithm is suitable for accelerating quality-demanding video coding computations.
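The symmetry the abstract exploits can be seen on the standard 4-point H.265/HEVC integer core transform (the paper further modifies the matrix via dyadic symmetry; the sketch below shows only the even/odd factorization of the unmodified matrix). The butterfly needs 6 multiplications instead of the 16 of a direct matrix-vector product.

```python
import numpy as np

# 4-point forward core transform matrix of H.265/HEVC (integer DCT approximation).
H4 = np.array([[64,  64,  64,  64],
               [83,  36, -36, -83],
               [64, -64, -64,  64],
               [36, -83,  83, -36]], dtype=np.int64)

def fast_dct4(x):
    """Even/odd butterfly factorization of H4 @ x: rows 0 and 2 depend only
    on the even-symmetric sums, rows 1 and 3 only on the odd differences."""
    x0, x1, x2, x3 = (int(v) for v in x)
    e0, e1 = x0 + x3, x1 + x2      # even part
    o0, o1 = x0 - x3, x1 - x2      # odd part
    return np.array([64 * (e0 + e1),
                     83 * o0 + 36 * o1,
                     64 * (e0 - e1),
                     36 * o0 - 83 * o1], dtype=np.int64)
```

The same even/odd decomposition extends recursively to the 8-, 16-, and 32-point HEVC transforms, which is where the complexity savings become substantial.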
Research Authors
M. N. Haggag, M. El-Sharkawy, and G. Fahmy
Research Department
Research Journal
Journal of Software Engineering and Applications, vol. 3, no. 8 , August 2010
Research Pages
NULL
Research Publisher
NULL
Research Rank
1
Research Vol
NULL
Research Website
NULL
Research Year
2010

E-spline Based Image Interpolators

Research Abstract
Exponential spline polynomials (E-splines) represent the best smooth transition between continuous and discrete domains. Because they are constructed from convolutions of exponential segments, there are many degrees of freedom for optimally choosing the E-spline most convenient for a specific application. In this paper, the parameters of these E-splines are optimally chosen to sharpen the performance of interpolated high-resolution (HR) images derived from a given decimated low-resolution image, whether noisy or noiseless. The proposed technique is based on minimizing the aliasing effects due to the high-frequency bands of the HR images. Illustrative examples verify the image enhancement of the proposed E-spline scheme compared with existing approaches.
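The construction the abstract opens with, an E-spline as a convolution of exponential segments, can be sketched numerically (a discretized illustration; the paper works with the continuous-domain kernels). With all exponents set to zero, the E-spline reduces to the classical polynomial B-spline, which gives a simple sanity check.

```python
import numpy as np

def e_spline(alphas, dt=1e-3):
    """Numerically construct an E-spline by convolving exponential
    segments exp(alpha * t) supported on [0, 1), one segment per entry
    of `alphas`. The result is sampled on a grid of spacing dt."""
    t = np.arange(0.0, 1.0, dt)
    kernel = np.exp(alphas[0] * t)
    for a in alphas[1:]:
        kernel = np.convolve(kernel, np.exp(a * t)) * dt
    return kernel
```

Each extra exponent widens the support by one unit and adds a degree of freedom, which is exactly the design space the paper searches when tuning the interpolator.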
Research Authors
M.F. Fahmy, G. Fahmy, and O. F. Fahmy
Research Department
Research Journal
IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), India, Dec. 2014
Research Member
Research Pages
NULL
Research Publisher
NULL
Research Rank
3
Research Vol
NULL
Research Website
NULL
Research Year
2014