
The First Assiut University Forum for high school students and their equivalents, introducing the study programs of the faculties of Assiut University and Assiut National University

Under the auspices and in the presence of Dr. Ahmed El-Minshawi, President of Assiut University and Assiut National University, and Major General Dr. Hesham Abu El-Nasr, Governor of Assiut, Assiut University witnessed today, Wednesday, 24 July, the launch of the "First Assiut University Forum" for high school students and their equivalents, which introduces the study programs of the faculties of Assiut University and Assiut National University. The forum is held under the supervision of Dr. Ahmed Abdel-Mawla, Vice President of the University for Education and Student Affairs; Dr. Nouby Mohamed Hassan, Vice President of the National University for Academic Affairs; and Dr. Mohamed Gaber Qassem, Vice Dean of the Faculty of Education for Education and Student Affairs and coordinator of the forum.

Dr. Abdel-Rahman Hedar, coordinator of the Innovators and Geniuses Support Fund (ISF) at Assiut University and advisor to the University President for Information Technology and Artificial Intelligence Affairs, gave an introduction to the Fund, which operates under the supervision of the Ministry of Higher Education and Scientific Research and in cooperation with several ministries and agencies. The Fund offers a wide range of specialised programs, along with scholarships and competitions for innovators and gifted students, to advance the development of both the individual and society. The most important of these are the Innovation Incentive (IC), the Enactus change-makers program, the Startups Olympiad, iGP for graduation projects originating from the market and industry, eGP for graduation projects leading to startups, and Bio-

At the conclusion of the meeting, the forum organizers invited students and parents to visit the introductory exhibition held as part of the forum's activities, showcasing the programs of Assiut's faculties and the national universities.

Parallel framework for memory-efficient computation of image descriptors for megapixel images

Research Abstract

Image moments are image descriptors widely used in image processing, pattern recognition, computer vision, and multimedia security applications. In the era of big data, computing image moments imposes a huge memory demand, especially for large moment orders and/or high-resolution (i.e., megapixel) images. State-of-the-art moment computation methods successfully accelerate image moment computation for digital images of resolutions smaller than 1K × 1K pixels; for higher resolutions, image moment computation becomes problematic. Researchers have used GPU-based parallel processing to overcome this problem, but in practice the parallel computation of image moments on GPUs runs into the limited, non-extendable GPU memory, which is the main challenge. This paper proposes a recurrence-based method for computing Polar Complex Exponential Transform (PCET) moments of fractional orders. The proposed method exploits the symmetry of the image kernel to reduce kernel computation: once a kernel value is computed in one quadrant, the three corresponding values in the remaining quadrants can be obtained trivially. Moreover, the proposed method uses recurrence equations to compute the kernels, so the memory needed to store pre-computed kernels is saved. Finally, we implemented the proposed method on the GPU parallel architecture. Because the kernel memory is no longer required, the proposed method overcomes the memory limit. The experiments show that the proposed parallel-friendly and memory-efficient method is superior to state-of-the-art moment computation methods in both memory consumption and runtime. The proposed method computes the PCET moments of order 50 for an image of size 2K × 2K pixels in 3.5 seconds, while the state-of-the-art method of comparison needs 7.0 seconds for the same image; the memory requirements of the proposed method and the method of comparison were 67.0 MB and 3.4 GB, respectively. The method of comparison could not compute image moments for any image with a resolution higher than 2K × 2K pixels, whereas the proposed method computed image moments for images of up to 16K × 16K pixels.
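
For illustration only, the following minimal NumPy sketch shows the kind of four-fold kernel symmetry the abstract describes: one kernel value is evaluated per pixel of a single quadrant and reused for the three mirrored pixels, so only a quarter of the kernel is ever computed and nothing is stored between moments. The radial basis exp(i·2π·n·r²), the unit-disk mapping, the normalisation, and all names are assumptions based on the standard integer-order PCET definition; the fractional-order and recurrence details of the paper are not reproduced here.

# Minimal sketch, NOT the authors' implementation: one PCET moment of an
# N x N image (N even), reusing each kernel evaluation for the four
# symmetric quadrant pixels. Radial basis and normalisation are assumed
# from the standard PCET definition.
import numpy as np

def pcet_moment(img: np.ndarray, n: int, m: int) -> complex:
    N = img.shape[0]
    half = N // 2

    # Pixel centres of the first quadrant (x > 0, y > 0) mapped into (0, 1).
    rows, cols = np.mgrid[0:half, half:N]
    x = (2 * cols + 1 - N) / N
    y = (N - 1 - 2 * rows) / N
    r2 = x * x + y * y
    inside = r2 <= 1.0                          # ignore pixels outside the unit disk
    theta = np.arctan2(y, x)

    radial = np.exp(2j * np.pi * n * r2)        # identical in all four quadrants
    ang = np.exp(1j * m * theta)                # angular factor, quadrant I
    sign = (-1) ** m                            # exp(+/- i*m*pi) for integer m

    # Conjugated kernels of the four mirrored pixels, derived from one value.
    k1 = np.conj(radial) * np.conj(ang)         # quadrant I  : angle  theta
    k2 = np.conj(radial) * sign * ang           # quadrant II : angle  pi - theta
    k3 = np.conj(radial) * sign * np.conj(ang)  # quadrant III: angle  pi + theta
    k4 = np.conj(radial) * ang                  # quadrant IV : angle -theta

    # Image samples of the four quadrants, aligned with the quadrant-I grid.
    q1 = img[0:half, half:N]
    q2 = img[0:half, 0:half][:, ::-1]
    q3 = img[half:N, 0:half][::-1, ::-1]
    q4 = img[half:N, half:N][::-1, :]

    acc = (q1 * k1 + q2 * k2 + q3 * k3 + q4 * k4)[inside].sum()
    return acc * (2.0 / N) ** 2 / np.pi         # pixel area and PCET constant

Because the radial factor depends only on r², it is identical for the four mirrored pixels; only the cheap angular factor changes, which is what makes evaluating a quarter of the kernel sufficient.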

Research Authors
Amr M Abdeltif, Khalid M Hosny, Mohamed M Darwish, Ahmad Salah, Kenli Li
Research Date
Research Department
Research Journal
Big Data Research
Research Pages
100398
Research Publisher
Elsevier
Research Vol
33
Research Website
https://www.sciencedirect.com/science/article/pii/S221457962300031X
Research Year
2023

High dimensional autonomous computing on Arabic language classification

Research Abstract

Hypervectors are holographic and are generated with independent, identically distributed random components. A hypervector holds the whole of the encoded data, spread evenly across all of its components as a distributed representation, so no position is more reliable than any other for storing a given piece of information. Hypervectors are combined with operations akin to addition, which changes the structure of numerical processing over vector spaces, and they are compared by measuring closeness with a distance metric over the vector space. These operations combine hypervectors into interesting computational behaviour with novel properties that make them robust and efficient. This paper focuses on the use of hyperdimensional computing to identify the language of text samples by encoding sequences of letters into hypervectors. Recognizing the language of a given text is the first step in all kinds of language processing, for example text analysis, classification, and translation. High-dimensional vector models are popular in Natural Language Processing and are used to capture word meaning from word statistics. In this work, the first task is high-dimensional computing classification on Arabic datasets comprising three collections: Arabiya, Khaleej, and Akhbarona. High-dimensional computing with N-gram encoding is applied to these datasets. Using the SANAD single-label Arabic news articles dataset with 12-gram encoding, high-dimensional computing achieves an accuracy of 0.9665. With 6-gram encoding on the RTA dataset, the accuracy is 0.6648. On the ANT dataset with 12-gram encoding, high-dimensional computing gives an accuracy of 0.9248. The second task applies high-dimensional computing to Arabic dialect identification, including Levantine dialects, using three datasets. The first dataset is the SDC (Shami Dialects Corpus), which covers Jordanian, Lebanese, Palestinian, and Syrian dialects; high-dimensional computing with 7-gram encoding achieves an accuracy of 0.8234 on it. The second dataset is PADIC (Parallel Arabic Dialect Corpus), which contains Syrian and Palestinian Arabic dialects; high-dimensional computing with 5-gram encoding achieves an accuracy of 0.7458. On the third dataset, MADAR (Multi-Arabic Dialect Applications and Resources), high-dimensional computing with 6-gram encoding achieves an accuracy of 0.7800.
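
For illustration only, the sketch below shows the standard N-gram hypervector encoding that this line of work builds on: random bipolar letter vectors, permutation to mark a letter's position, elementwise binding, bundling by addition, and cosine similarity against class prototypes. The dimensionality, the toy training phrases, and all names are illustrative assumptions, not details taken from the paper.

# Minimal sketch of N-gram hyperdimensional text classification (assumed
# standard formulation, not the paper's code).
import numpy as np

D = 10_000                                   # hypervector dimensionality (assumed)
rng = np.random.default_rng(0)
item_memory: dict[str, np.ndarray] = {}      # one random +/-1 vector per letter

def letter_hv(ch: str) -> np.ndarray:
    if ch not in item_memory:
        item_memory[ch] = rng.choice([-1, 1], size=D)
    return item_memory[ch]

def encode_text(text: str, n: int = 3) -> np.ndarray:
    """Bundle (sum) the bound hypervectors of all n-grams in the text."""
    acc = np.zeros(D)
    for i in range(len(text) - n + 1):
        gram = np.ones(D)
        for pos, ch in enumerate(text[i:i + n]):
            # permutation (np.roll) encodes the letter's position in the n-gram
            gram *= np.roll(letter_hv(ch), pos)
        acc += gram
    return acc

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy usage: build one prototype per class, then label a query by similarity.
train = {"dialect_a": ["shu hal akhbar", "kifak el yom"],
         "dialect_b": ["ezayak enaharda", "akhbarak eh"]}
prototypes = {label: sum(encode_text(t) for t in texts)
              for label, texts in train.items()}
query = encode_text("kifak")
print(max(prototypes, key=lambda lbl: cosine(prototypes[lbl], query)))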

Research Authors
George Samy Rady, Sara Salah Mohamed, Mamdouh Farouk Mohamed, Khaled F. Hussain
Research Date
Research Department
Research Journal
Computers and Electrical Engineering