Innovative Multi-Level Secure Steganographic
Scheme based on Pixel Value Difference

Research Abstract
Abstract. Steganography is a branch of the information security field; it aims to hide information in unremarkable cover media so as not to arouse an eavesdropper's suspicion. The secret message is hidden in such a way that no significant degradation can be detected in the quality of the original image. The aim of this paper is to introduce an efficient steganographic scheme for hiding data in gray-scale images. The scheme exploits a property of the human eye, which is more sensitive to changes in smooth areas than in edge areas, using pixel value differencing, and employs the LSB substitution technique as a fundamental stage. The experimental results show that the proposed method successfully achieves high embedding capacity while maintaining visual quality; in addition, it provides more secure data hiding by using selective pixel positions determined by a secret image (i.e., a key). Moreover, because the secret message is embedded in dynamically chosen LSBs, our scheme can effectively resist several image steganalysis techniques.
Research Authors
Marghny H. Mohamed, Naziha M. Al-Aidroos, and Mohamed A. Bamatraf
Research Department
Research Journal
International Journal in Foundations of Computer Science & Technology (IJFCST)
Research Rank
1
Research Vol
Vol. 2, No.6
Research Year
2012
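The LSB substitution stage described in the first abstract can be sketched in a few lines. This is a minimal illustration only: the key-driven position selection below uses a seeded PRNG as a stand-in for the paper's secret-image key, and the function names are hypothetical.

```python
import random

def embed_lsb(pixels, bits, key):
    """Hide a list of bits in the LSBs of pixels at key-selected positions."""
    stego = list(pixels)
    rng = random.Random(key)
    # The same key reproduces the same positions on the extraction side.
    positions = rng.sample(range(len(stego)), len(bits))
    for pos, bit in zip(positions, bits):
        stego[pos] = (stego[pos] & ~1) | bit  # overwrite the LSB only
    return stego

def extract_lsb(stego, n_bits, key):
    """Recover the hidden bits using the same key."""
    rng = random.Random(key)
    positions = rng.sample(range(len(stego)), n_bits)
    return [stego[pos] & 1 for pos in positions]
```

Because only the least significant bit of each selected pixel changes, no pixel value moves by more than 1, which is why the degradation is visually negligible.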

Data Hiding by LSB Substitution Using Genetic Optimal Key-Permutation

Research Abstract
Abstract. The least significant bit (LSB) embedding method is one of the most commonly used techniques; it targets the LSBs of the host image to hide the data. This paper addresses the three main steganography challenges (capacity, imperceptibility, and security) with a hybrid data hiding scheme that incorporates the LSB technique with a key-permutation method. The paper also proposes an optimal key-permutation method that uses genetic algorithms to select the best key. Both the normal and the optimized methods are tested on standard images, varying both the data size and the key space. The experimental results show that computation time decreases as the number of keys increases, while system security improves at the same time.
Research Authors
Marghny Mohamed, Fadwa Al-Afari and Mohamed Bamatraf
Research Department
Research Journal
International Arab Journal of e-Technology
Research Rank
2
Research Vol
Vol. 2, No. 1
Research Year
2011
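The genetic key-selection idea from the abstract above can be sketched as a small search over embedding-position permutations. This is a hedged illustration, not the paper's algorithm: the fitness function (how many cover LSBs already match the message, so fewer pixels change) and the mutation operator are assumptions chosen to keep the sketch short.

```python
import random

def distortion(cover, message, perm):
    """Fraction of message bits whose target pixel's LSB must change."""
    changed = sum(abs((cover[idx] & 1) - bit) for bit, idx in zip(message, perm))
    return changed / len(message)

def search_key(cover, message, generations=30, pop_size=10, seed=0):
    """Evolve a low-distortion permutation of embedding positions."""
    rng = random.Random(seed)
    n = len(cover)
    pop = [rng.sample(range(n), len(message)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: distortion(cover, message, p))
        survivors = pop[: pop_size // 2]  # keep the fitter half
        children = []
        for parent in survivors:
            child = parent[:]
            i = rng.randrange(len(child))
            replacement = rng.randrange(n)
            if replacement not in child:  # keep positions distinct
                child[i] = replacement
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: distortion(cover, message, p))
```

A key found this way tends to place message bits on pixels whose LSBs already agree with them, reducing the number of modified pixels.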

Hori-Vertical Distributed Frequent Itemsets Mining Algorithm on Heterogeneous Distributed Shared Memory System

Research Abstract
Abstract. The big challenge in discovering association rules is finding the largest frequent itemsets. Sequential algorithms do not scale, especially in terms of run-time performance, to very large databases; therefore, we must rely on high-performance parallel and distributed computing. We present a new parallel algorithm for frequent itemset mining, called the HoriVertical algorithm. The algorithm passes over the database only once and starts a new stage with the finished itemsets while other itemsets in the same stage are still being processed. The new algorithm is also based on partitioning the database both vertically and horizontally. We present performance results for our algorithm on various databases and compare it against well-known algorithms.
Research Authors
Marghny H. Mohamed and Hosam E. Refaat
Research Department
Research Journal
IJCSNS International Journal of Computer Science and Network Security
Research Pages
No. 11
Research Rank
1
Research Vol
Vol. 10
Research Year
2010
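The vertical database layout the HoriVertical abstract refers to can be illustrated with a tidset representation: each item maps to the set of transaction IDs containing it, and the support of an itemset is the size of the intersection of its members' tidsets. The sketch below shows that idea for item pairs only; it is a simplified, sequential illustration, not the paper's parallel algorithm.

```python
from itertools import combinations

def vertical_layout(transactions):
    """Map each item to the set of transaction IDs (tidset) containing it."""
    tidsets = {}
    for tid, items in enumerate(transactions):
        for item in items:
            tidsets.setdefault(item, set()).add(tid)
    return tidsets

def frequent_pairs(transactions, min_support):
    """Support of a pair = size of the intersection of the two tidsets."""
    tidsets = vertical_layout(transactions)
    frequent = {}
    for a, b in combinations(sorted(tidsets), 2):
        common = tidsets[a] & tidsets[b]
        if len(common) >= min_support:
            frequent[(a, b)] = len(common)
    return frequent
```

The appeal of the vertical layout is that support counting becomes set intersection, so each itemset can be finished independently of the others, which is what makes stage overlap and partitioning natural.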

Rules extraction from constructively trained neural networks based on genetic algorithms

Research Abstract
Abstract. The application of neural networks in data mining has become wider. Although neural networks may have complex structure, long training times, and results whose representation is not comprehensible, they tolerate noisy data well, achieve high accuracy, and are therefore preferable in data mining. On the other hand, it is an open question what the best way is to train neural networks and extract symbolic rules from them in domains such as classification. In this paper, we train neural networks by constructive learning and analyze the convergence rate of the error in networks with and without thresholds, learned by a constructive method, in order to obtain a simple network structure. The response of the ANN is acquired, but its result is a black box and not in an understandable form. It is frequently desirable to use the model backwards and identify sets of input variables that result in a desired output value; the large number of variables and the nonlinear nature of many models make finding such an optimal set of input variables difficult. We use a genetic algorithm to solve this problem. The method is evaluated on different public-domain data sets, with the aim of testing its predictive ability, and compared with standard classifiers; results show comparatively high accuracy.
Research Authors
Marghny H. Mohamed
Research Department
Research Journal
Neurocomputing
Research Rank
1
Research Year
2011

Image Retrieval Based on Content

Research Abstract
Content-based image retrieval systems have become a reliable tool for many image database applications. Image retrieval techniques have several advantages over simpler retrieval approaches such as text-based retrieval. Histogram-based algorithms are considered effective for retrieving color images. This paper proposes a content-based image retrieval technique that uses the CIELuv color space, multi-precision segmentation, and similarity matching. Multi-precision means that an image is divided into a number of sub-blocks, each with its associated color histogram. Experimental results show that the spatial distribution information recorded by multi-precision color histograms helps make similarity matching more precise.
Research Authors
Yousef B. Mahdy, Khaled M. Shaaban, and Ali S. Abd El-Rahim
Research Department
Research Journal
ICGST International Journal on Graphics, Vision and Image Processing
Research Rank
1
Research Year
2006
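The multi-precision histogram idea in the abstract above can be sketched as follows. This is a simplified illustration under stated assumptions: it works on a grayscale 2D list rather than CIELuv color, and compares images by histogram intersection summed over corresponding sub-blocks; the function names are hypothetical.

```python
def block_histograms(image, block, bins=4, levels=256):
    """Per-sub-block intensity histograms for a 2D list `image`."""
    h, w = len(image), len(image[0])
    hists = []
    for by in range(0, h, block):
        for bx in range(0, w, block):
            hist = [0] * bins
            for y in range(by, min(by + block, h)):
                for x in range(bx, min(bx + block, w)):
                    hist[image[y][x] * bins // levels] += 1
            hists.append(hist)
    return hists

def similarity(img_a, img_b, block=2):
    """Histogram intersection, summed over corresponding sub-blocks."""
    ha = block_histograms(img_a, block)
    hb = block_histograms(img_b, block)
    return sum(min(a, b) for pa, pb in zip(ha, hb) for a, b in zip(pa, pb))
```

Because each sub-block is matched against the sub-block at the same position, two images with the same global histogram but different spatial layouts score lower than a true match, which is the point of the multi-precision scheme.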

Support Vector Machines with Weighted Powered Kernels for Data Classification

Research Abstract
Abstract. Support Vector Machines (SVMs) are a popular data classification method with many diverse applications. SVM performance depends on choosing a suitable kernel function for a given problem. Using an appropriate kernel, the data are transformed into a higher-dimensional space in which they are separable by a hyperplane. Major challenges for SVMs are how to select an appropriate kernel and how to find near-optimal values of its parameters. Most studies use a single kernel, but real-world applications may require a combination of multiple kernels. In this paper, a new method called weighted powered kernels (WPK) for data classification is proposed. The proposed method combines three kernels to produce a new combined kernel and uses a scatter search approach to find near-optimal values of the weights, alphas, and kernel parameters associated with each kernel. To evaluate the performance of the proposed method, 11 benchmark data sets are used. Experiments and comparisons show that the method gives acceptable outcomes and has competitive performance relative to a single kernel and some other published methods.
Research Authors
Mohammed H. Afif, Abdel-Rahman Hedar,
Taysir H. Abdel Hamid, and Yousef B. Mahdy
Research Department
Research Journal
Advanced Machine Learning Technologies and Applications
Communications in Computer and Information Science
Research Pages
pp 369-378
Research Rank
1
Research Vol
Volume 322
Research Year
2012
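The kernel combination described in the WPK abstract can be illustrated with a weighted sum of powered base kernels. This is a hedged sketch: the choice of linear, RBF, and polynomial as the three base kernels, and all the fixed weights and powers below, are example assumptions, since in the paper those values are tuned by scatter search.

```python
import math

def linear(x, y):
    return sum(a * b for a, b in zip(x, y))

def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def poly(x, y, degree=2, c=1.0):
    return (linear(x, y) + c) ** degree

def weighted_powered_kernel(x, y, weights=(0.5, 0.3, 0.2), powers=(1, 1, 2)):
    """Combined kernel: K(x, y) = sum_i w_i * k_i(x, y) ** p_i."""
    kernels = (linear, rbf, poly)
    return sum(w * k(x, y) ** p for w, k, p in zip(weights, kernels, powers))
```

A combined kernel like this could be passed to an SVM implementation that accepts a callable or precomputed Gram matrix; the search then operates over the weight, power, and per-kernel parameter vector.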

SS-SVM (3SVM): A New Classification Method for Hepatitis Disease Diagnosis

Research Abstract
Abstract. In this paper, a new classification approach that combines support vector machines with a scatter search approach for hepatitis disease diagnosis, called 3SVM, is presented. The scatter search approach is used to find near-optimal values of the SVM parameters and its kernel parameters. The hepatitis data set is obtained from the UCI repository. Experimental results and comparisons show that 3SVM gives better outcomes and has competitive performance relative to other published methods in the literature, with an average accuracy rate of 98.75%.
Research Authors
Mohammed H. Afif, Abdel-Rahman Hedar, Taysir H. Abdel Hamid, Yousef B. Mahdy
Research Department
Research Journal
International Journal of Advanced Computer Science and Applications
Research Pages
No. 2
Research Rank
1
Research Vol
Vol. 4
Research Year
2013
