Image moments are image descriptors widely used in image processing, pattern recognition, computer vision, and multimedia security applications. In the era of big data, computing image moments imposes a huge memory demand, especially for large moment orders and/or high-resolution (megapixel) images. State-of-the-art moment computation methods successfully accelerate image moment computation for digital images of resolutions below 1K × 1K pixels; for higher resolutions, image moment computation remains problematic. Researchers have utilized GPU-based parallel processing to overcome this problem, but in practice the parallel computation of image moments on GPUs runs into the limited, non-extendable GPU memory, which is the main challenge. This paper proposes a recurrence-based method for computing the Polar Complex Exponential Transform (PCET) moments of fractional orders. The proposed method exploits the symmetry of the image kernel to reduce kernel computation: once a kernel value is computed in one quadrant, the three corresponding values in the remaining quadrants can be obtained trivially. Moreover, the method uses recurrence equations to compute the kernels, so the memory otherwise required to store pre-computed kernels is saved. Finally, we implemented the proposed method on a parallel GPU architecture. Because it saves the kernel memory, the proposed method overcomes the GPU memory limit. Experiments show that the proposed parallel-friendly, memory-efficient method is superior to state-of-the-art moment computation methods in both memory consumption and runtime. The proposed method computes the PCET moments of order 50 for a 2K × 2K-pixel image in 3.5 seconds, while the state-of-the-art method of comparison needs 7.0 seconds for the same image; the memory requirements of the proposed method and the method of comparison were 67.0 MB and 3.4 GB, respectively. The method of comparison could not compute the image moments for any image with a resolution higher than 2K × 2K pixels, whereas the proposed method computed the image moments for images up to 16K × 16K pixels.
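To make the quadrant-symmetry idea concrete, the following minimal NumPy sketch computes a single PCET moment by evaluating the radial kernel factor only once and deriving the angular factors of the other three quadrants from the first. It assumes the common PCET basis H_nm(r, θ) = exp(j2πn r^(2α)) exp(jmθ) on the unit disk, with α as the fractional-order parameter; the pixel mapping and normalization are illustrative, not the authors' exact formulation.

```python
import numpy as np

def pcet_moment(img, n, m, alpha=1.0):
    """Compute one PCET moment of fractional order alpha for a square image
    with even side length, exploiting four-quadrant kernel symmetry."""
    N = img.shape[0]
    half = N // 2
    # pixel-centre coordinates of the first quadrant, mapped to (0, 1]
    xs = (np.arange(half) + 0.5) / half
    X, Y = np.meshgrid(xs, xs)
    R = np.sqrt(X**2 + Y**2)
    T = np.arctan2(Y, X)
    mask = R <= 1.0                        # keep pixels inside the unit disk
    # kernel factors evaluated once, in the first quadrant only
    radial = np.exp(-2j * np.pi * n * R**(2 * alpha))
    angular = np.exp(-1j * m * T)
    K1 = radial * angular * mask
    # quadrant symmetry: r is unchanged; theta maps to pi - t, pi + t, -t,
    # so only the cheap angular factor changes between quadrants
    K2 = radial * np.exp(-1j * m * (np.pi - T)) * mask   # 2nd quadrant
    K3 = radial * np.exp(-1j * m * (np.pi + T)) * mask   # 3rd quadrant
    K4 = radial * np.conj(angular) * mask                # 4th quadrant
    # image quadrants, flipped so each aligns with its kernel block
    q1 = img[half:, half:]
    q2 = img[half:, :half][:, ::-1]
    q3 = img[:half, :half][::-1, ::-1]
    q4 = img[:half, half:][::-1, :]
    acc = (q1 * K1 + q2 * K2 + q3 * K3 + q4 * K4).sum()
    # normalisation over the unit disk (pixel area = (1/half)^2)
    return acc / np.pi * (1.0 / half) ** 2
```

A usage example under the same assumptions: `M = pcet_moment(img.astype(float), n=2, m=3, alpha=0.8)`. The full method additionally replaces the explicit exponentials with recurrence relations and maps the quadrant computation onto GPU threads, which is what removes the pre-computed-kernel memory.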
Hypervectors are holographic and generated at random with independent and identically distributed components. A hypervector holds all of the encoded data spread across its components as a distributed representation, so no single component is more responsible for storing any piece of information than another. Hypervectors are combined using operations akin to addition and multiplication, forming a system of numerical computing over vector spaces, and they are compared for similarity using a distance metric over that space. These operations can be composed into interesting computational behaviors with novel features that make hypervectors robust and efficient. This paper focuses on the use of hyperdimensional computing to identify the language of text samples by encoding sequences of letters into hypervectors. Recognizing the language of a given text is the first step in all kinds of language processing, such as text analysis, classification, and translation. High-dimensional vector models are popular in Natural Language Processing, where they are used to capture word meaning from word-usage statistics. In this work, the first task is high-dimensional computing classification on Arabic datasets comprising three corpora, namely Arabiya, Khaleej, and Akhbarona, using N-gram encoding. When utilizing the SANAD single-label Arabic news article datasets with 12-gram encoding, the accuracy of high-dimensional computing is 96.65%. With 6-gram encoding on the RTA dataset, it achieves an accuracy of 66.48%. On the ANT dataset with 12-gram encoding, high-dimensional computing gives an accuracy of 92.48%. The second task applies high-dimensional computing to Arabic language recognition for Levantine dialects, for which three datasets are utilized. The first is the SDC (Shami Dialects Corpus), which covers Jordanian, Lebanese, Palestinian, and Syrian dialects; it yields an accuracy of 82.34% with 7-gram encoding. The second is PADIC (Parallel Arabic Dialect Corpus), which contains Syrian and Palestinian Arabic dialects and yields an accuracy of 74.58% with 5-gram encoding. On the third dataset, MADAR (Multi-Arabic Dialect Applications and Resources), high-dimensional computing with 6-gram encoding provides an accuracy of 78.00%.
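As an illustration of the N-gram encoding step, the sketch below follows the standard hyperdimensional language-identification recipe: random bipolar letter hypervectors, permute-and-multiply binding of the letters of each N-gram, bundling by summation, and cosine-similarity classification. The dimensionality, alphabet, and toy training texts are assumptions for demonstration, not taken from the paper.

```python
import numpy as np

D = 10_000                                   # hypervector dimensionality
rng = np.random.default_rng(0)

# one random bipolar hypervector per letter (i.i.d. +/-1 components)
alphabet = "abcdefghijklmnopqrstuvwxyz "
item = {ch: rng.choice([-1, 1], size=D) for ch in alphabet}

def ngram_profile(text, n=3):
    """Encode a text into one profile hypervector: each n-gram is bound by
    multiplying position-rotated letter vectors (np.roll encodes position),
    and all n-gram vectors are bundled by summation."""
    profile = np.zeros(D)
    for i in range(len(text) - n + 1):
        gram = np.ones(D, dtype=int)
        for k, ch in enumerate(text[i:i + n]):
            gram *= np.roll(item[ch], k)      # permute-then-bind
        profile += gram
    return profile

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# usage: one profile per language from training text; a query is assigned
# to the language whose profile is most similar (toy data, not the corpora)
train = {"en": "the quick brown fox jumps over the lazy dog",
         "fr": "le renard brun saute par dessus le chien paresseux"}
profiles = {lang: ngram_profile(txt) for lang, txt in train.items()}
query = "a lazy brown dog sleeps"
print(max(profiles, key=lambda l: cosine(profiles[l], ngram_profile(query))))
```

The same recipe extends directly to the Arabic corpora by replacing the alphabet with Arabic letters and setting n to the N-gram sizes reported above.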
Indoor localization methods can help many sectors, such as healthcare centers, smart homes, museums, warehouses, and retail malls, improve their services. It is therefore crucial to look for low-cost methods that provide exact localization in indoor spaces. In this context, image-based localization methods can play an important role in estimating both the position and the orientation of a camera with respect to an object. Image-based localization faces many issues, such as variance in image scale and rotation, and its accuracy and speed (latency) are two critical factors. This paper proposes an efficient 6-DoF deep-learning model for image-based localization. The model incorporates a channel attention module and the Scale Pyramid Module (SPM); it not only enhances accuracy but also ensures real-time performance. In complex scenes, the channel attention module is employed to distinguish between the textures of the foreground and background. Our model adopts an SPM, a feature pyramid module, to deal with the image scale and rotation variance issues. Furthermore, the proposed model employs two regressors (two fully connected layers), one for position and the other for orientation, which increases the accuracy of the outcome. Experiments on standard indoor and outdoor datasets show that the proposed model achieves a significantly lower Mean Squared Error (MSE) for both position and orientation. On the indoor 7-Scenes dataset, the error is reduced to 0.19 m for position and 6.25° for orientation; on the outdoor Cambridge Landmarks dataset, it is reduced to 0.63 m for position and 2.03° for orientation. According to the findings, the proposed approach is superior to the baseline methods.
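For concreteness, here is a minimal PyTorch sketch of the two-regressor design: backbone features pass through a squeeze-and-excitation style block (one common realization of channel attention; the paper's exact module and the SPM are not reproduced here), are pooled, and feed two separate fully connected heads for position and orientation. The backbone channel count and the quaternion parameterization of orientation are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (one common choice)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x).view(x.size(0), -1, 1, 1)
        return x * w                                  # reweight feature channels

class PoseRegressor(nn.Module):
    """Attention over backbone features, then two separate regression heads."""
    def __init__(self, backbone_channels=512):
        super().__init__()
        self.attn = ChannelAttention(backbone_channels)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc_pos = nn.Linear(backbone_channels, 3)  # x, y, z position
        self.fc_ori = nn.Linear(backbone_channels, 4)  # quaternion orientation

    def forward(self, feats):                # feats: (B, C, H, W) backbone map
        f = self.pool(self.attn(feats)).flatten(1)
        pos = self.fc_pos(f)
        ori = F.normalize(self.fc_ori(f), dim=1)       # unit quaternion
        return pos, ori
```

Splitting position and orientation into separate heads lets each regressor specialize, which is the design choice the abstract credits for the accuracy gain; the two outputs are typically trained with a weighted sum of their losses.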
COVID-19 is one of the most serious infectious diseases of recent years due to its worldwide spread. Determining who is genuinely affected as the disease spreads more widely is challenging. More than 60% of affected individuals report having a dry cough. In many recent studies, diagnostic models have been developed using coughing and other breathing sounds. With the development of technology, body sounds are now collected using digital techniques for respiratory and cardiovascular tests. Early research on identifying COVID-19 from speech and diagnostic signs yielded encouraging findings. The developed framework uses an extensive, multi-group collection of airborne acoustic sound data to conduct an efficient assessment for COVID-19. An effective classification model is created to assess COVID-19 using deep learning methods. The MIT-Covid-19 dataset is used as the input, and a Wiener filter is used for pre-processing. After feature extraction with Mel-Frequency Cepstral Coefficients (MFCCs), classification is performed using a CNN-LSTM approach. The study compared the performance of the developed framework with other techniques such as CNN, GRU, and LSTM. The results revealed that the CNN-LSTM outperformed the other existing approaches, achieving an accuracy of 97.7%.
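The pipeline described above (Wiener-filter denoising, MFCC features, CNN-LSTM classifier) can be sketched as follows. This is a minimal illustration assuming mono 16 kHz recordings and 13 MFCCs; the layer sizes and binary label layout are illustrative choices, not the paper's exact configuration.

```python
import librosa
import torch
import torch.nn as nn
from scipy.signal import wiener

def extract_mfcc(path, sr=16000, n_mfcc=13):
    """Load a cough recording, denoise it, and extract MFCC features."""
    y, _ = librosa.load(path, sr=sr)
    y = wiener(y)                                  # Wiener-filter pre-processing
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (frames, n_mfcc)

class CNNLSTM(nn.Module):
    """1-D CNN over MFCC frames, followed by an LSTM and a binary classifier."""
    def __init__(self, n_mfcc=13, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mfcc, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2))
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)           # COVID-positive vs. negative
    def forward(self, x):                          # x: (batch, frames, n_mfcc)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)
        _, (h, _) = self.lstm(z)
        return self.head(h[-1])                    # logits from last LSTM state
```

The convolutional front end captures local spectral patterns within the cough, while the LSTM models how those patterns evolve over the recording, which is why the hybrid tends to beat a plain CNN, GRU, or LSTM on this task.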
Evaluating and forecasting stability under different conditions is essential, since stabilization is among the most significant characteristics that can be used to assess the functionality of a smart grid design. Intelligent methods for foreseeing stability are required to mitigate unintended instability in a smart grid design, owing to the rise in domestic and commercial construction and the incorporation of green energy into smart grids; at present, forecasting the stability of the smart grid is hard. In this framework, a smart grid with reliable mechanisms is implemented to meet fluctuating energy demands as well as to provide higher availability. The involvement of consumers and producers is one of the many factors influencing the grid's stability. This study presents a novel approach for predicting stability in smart grid systems using machine learning frameworks. The paper outlines a Multi-Layer Perceptron-Extreme Learning Machine (MLP-ELM) methodology to predict the stability of the smart grid, and additionally utilizes principal component analysis (PCA) for feature extraction. In addition to an empirical assessment and a comparison with various approaches, this article presents implementation results for smart grid stability. Simulation findings demonstrate that the suggested MLP-ELM approach outperforms traditional machine learning techniques, with accuracy reaching up to 95.8%, precision of 90%, recall of 88%, and an F-measure of 89%.
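To illustrate the PCA-plus-ELM pipeline, the sketch below reduces the grid features with scikit-learn's PCA and trains a plain Extreme Learning Machine (random hidden layer, closed-form least-squares readout). This is a generic ELM, not the authors' hybrid MLP-ELM; the feature layout, component count, and hidden-layer size are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

class SimpleELM:
    """Minimal Extreme Learning Machine: a random hidden layer with a
    closed-form least-squares output layer (no backpropagation)."""
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)       # random hidden activations
        Y = np.eye(int(y.max()) + 1)[y]        # one-hot class targets
        self.beta = np.linalg.pinv(H) @ Y      # output weights in closed form
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)

# usage sketch (X: per-node demand/response measurements, y: 0 = unstable,
# 1 = stable; these labels and the component count are assumed):
# X_red = PCA(n_components=8).fit_transform(X)
# clf = SimpleELM().fit(X_red[:train_n], y[:train_n])
# acc = (clf.predict(X_red[train_n:]) == y[train_n:]).mean()
```

Because the ELM output weights are solved in one least-squares step rather than by iterative training, the combination with PCA keeps both training and prediction fast, which matters when stability must be re-assessed as grid conditions fluctuate.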