
https://www.facebook.com/share/p/16Q5x1TjuB/
Training Courses Offered in Assiut Governorate

The Information Technology Institute announces the opening of registration for intensive training grants in distinguished technological specializations for Egyptian university graduates of 2016 through 2025, held at the institute's headquarters in Assiut Governorate. The courses are as follows:

- Full Stack Web Development Using MEARN
For more information about this specialization, please visit the following link:
https://drive.google.com/.../1FCUYqXP5ub7z8iWXgpv.../view...
- 2D Graphics Design
For more information about this specialization, please visit the following link:
https://drive.google.com/.../13qqxtSv0tEqwf.../view...
- Required documents and registration steps are explained in the registration link.
- The training courses will be conducted using a blended learning system.
- Registration begins on Saturday, November 1, 2025, and continues until Thursday, November 13, 2025.
- Registration is through the following link on the official website of the Information Technology Institute:
https://internal.iti.gov.eg/home
Occlusion artifacts significantly hinder light field (LF) image reconstruction, especially in complex scenes. We propose a spectrally normalized U-Net for LF occlusion removal, which begins by stacking LF views and extracting view-dependent features with a local feature encoder. To capture spatial complexity, ResASPP blocks enable multi-scale context aggregation, while channel attention enhances occlusion-related features. Spectral normalization is applied to all convolutional layers to improve training stability and generalization. The encoder-decoder structure with skip connections preserves fine details. Experimental results show that our method restores occluded regions more accurately than baseline approaches.
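For illustration, a minimal PyTorch sketch of the components named above (view stacking, spectrally normalized convolutions, a ResASPP-style block, channel attention, and a skip connection). Layer widths, the view count, and the block layout are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

def sn_conv(in_ch, out_ch, k=3, dilation=1):
    # convolution wrapped in spectral normalization for training stability
    pad = dilation * (k // 2)
    return spectral_norm(nn.Conv2d(in_ch, out_ch, k, padding=pad, dilation=dilation))

class ChannelAttention(nn.Module):
    # squeeze-and-excitation style re-weighting of occlusion-related channels
    def __init__(self, ch, r=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch // r, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // r, ch, 1), nn.Sigmoid())
    def forward(self, x):
        return x * self.fc(x)

class ResASPP(nn.Module):
    # residual block with parallel dilated convolutions for multi-scale context
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList([sn_conv(ch, ch, dilation=d) for d in (1, 2, 4)])
        self.fuse = sn_conv(3 * ch, ch, k=1)
    def forward(self, x):
        y = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        return x + self.fuse(y)

class OcclusionRemovalUNet(nn.Module):
    def __init__(self, n_views=81, base=64):  # e.g. a 9x9 light field (assumed)
        super().__init__()
        self.encode1 = nn.Sequential(sn_conv(3 * n_views, base), nn.ReLU(inplace=True),
                                     ResASPP(base), ChannelAttention(base))
        self.down = nn.Sequential(nn.MaxPool2d(2), sn_conv(base, 2 * base), nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose2d(2 * base, base, 2, stride=2)
        self.decode = nn.Sequential(sn_conv(2 * base, base), nn.ReLU(inplace=True),
                                    sn_conv(base, 3))  # occlusion-free central view
    def forward(self, views):           # views: (B, n_views, 3, H, W)
        x = views.flatten(1, 2)         # stack LF views along the channel axis
        e1 = self.encode1(x)
        d = self.up(self.down(e1))
        d = torch.cat([d, e1], dim=1)   # skip connection preserves fine detail
        return self.decode(d)
```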
Occlusion removal in light-field images remains a significant challenge, particularly when dealing with large occlusions. To address this challenge, an end-to-end learning architecture is proposed that combines CSPDarknet53 with a bidirectional feature pyramid network for efficient light-field occlusion removal. CSPDarknet53 acts as the backbone, providing robust and rich feature extraction across multiple scales, while the bidirectional feature pyramid network enables comprehensive feature integration through an advanced multi-scale fusion mechanism. To preserve efficiency without sacrificing the quality of the extracted features, the model uses separable convolutional blocks. A simple refinement module based on half-instance initialization blocks is integrated to capture both local details and global structure. The network's multi-perspective approach achieves near-complete occlusion removal, allowing it to handle occlusions of varying size and complexity. Extensive experiments were run on sparse and dense datasets with varying degrees of occlusion severity to assess performance. The results show significant improvements over current state-of-the-art techniques on the sparse dataset and competitive results on the dense dataset.
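A minimal PyTorch sketch of three of the building blocks named above: a depthwise-separable convolution, a BiFPN-style weighted fusion of two feature levels, and a refinement block that instance-normalizes half of its channels (interpreting "half-instance initialization" as a HIN-style block, which is an assumption). Channel widths are illustrative; this is not the published model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeparableConv(nn.Module):
    # depthwise + pointwise convolution: cheaper than a dense 3x3 convolution
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)
    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class BiFPNFuse(nn.Module):
    # learnable, normalized weights blend a lateral feature with an upsampled one
    def __init__(self, ch):
        super().__init__()
        self.w = nn.Parameter(torch.ones(2))
        self.conv = SeparableConv(ch, ch)
    def forward(self, lateral, top_down):
        top_down = F.interpolate(top_down, size=lateral.shape[-2:], mode="nearest")
        w = torch.relu(self.w)
        w = w / (w.sum() + 1e-4)
        return self.conv(w[0] * lateral + w[1] * top_down)

class HalfInstanceBlock(nn.Module):
    # instance-normalizes half of the channels, keeps the rest untouched,
    # then fuses both halves with a residual path for local/global detail
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.norm = nn.InstanceNorm2d(ch // 2, affine=True)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
    def forward(self, x):
        y = self.conv1(x)
        a, b = torch.chunk(y, 2, dim=1)
        y = torch.cat([self.norm(a), b], dim=1)
        return x + self.conv2(F.leaky_relu(y, 0.2))
```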
Accurate and early breast cancer detection is critical for improving patient outcomes. In this study, we propose PatchCascade-ViT, a novel self-supervised Vision Transformer (ViT) framework for automated BI-RADS classification of mammographic images. Unlike conventional deep learning approaches that rely heavily on annotated datasets, PatchCascade-ViT leverages Self Patch-level Supervision (SPS) to learn meaningful mammographic representations from unlabeled data, significantly enhancing classification performance. Our framework operates through a two-stage cascade classification process. In the first stage, the model differentiates non-cancerous from potentially cancerous mammograms using SelfPatch, an innovative self-supervised learning task that enhances patch-level feature learning by enforcing consistency among spatially correlated patches. The second stage refines the classification by distinguishing Scattered Fibroglandular from Heterogeneously and Extremely Dense breast tissue categories, enabling more precise breast cancer risk assessment. To validate the effectiveness of PatchCascade-ViT, we conducted extensive evaluations on a dataset of 4,368 mammograms across three BI-RADS classes. Our method achieved a system sensitivity of 85.01% and an F1-score of 84.90%, outperforming existing deep learning-based approaches. By integrating self-supervised learning with a cascade vision transformer architecture, PatchCascade-ViT reduces reliance on annotated datasets while maintaining high classification accuracy. These findings demonstrate its potential for enhancing breast cancer screening, aiding radiologists in early detection, and improving clinical decision-making.
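A minimal sketch of the two-stage cascade logic described above, not the authors' implementation: stage 1 separates non-cancerous from potentially cancerous mammograms, and stage 2 assigns the remaining cases to the density-based categories. The backbone here is a generic Transformer-encoder placeholder operating on patch tokens; the SelfPatch self-supervised pretraining is assumed to have already produced its weights and is not reproduced.

```python
import torch
import torch.nn as nn

class ViTClassifier(nn.Module):
    # stand-in for a patch-pretrained Vision Transformer with a linear head
    def __init__(self, embed_dim=768, num_classes=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(embed_dim, num_classes)
    def forward(self, patch_tokens):          # (B, n_patches, embed_dim)
        tokens = self.encoder(patch_tokens)
        return self.head(tokens.mean(dim=1))  # mean-pool patch tokens

stage1 = ViTClassifier(num_classes=2)  # non-cancerous vs. potentially cancerous
stage2 = ViTClassifier(num_classes=2)  # scattered fibroglandular vs. dense tissue

@torch.no_grad()
def cascade_predict(patch_tokens, threshold=0.5):
    # stage 1 gives the probability that a mammogram is potentially cancerous;
    # only those cases are passed on to the stage 2 density classifier
    p_cancerous = torch.softmax(stage1(patch_tokens), dim=-1)[:, 1]
    labels = []
    for i, p in enumerate(p_cancerous):
        if p < threshold:
            labels.append("non-cancerous")
        else:
            density = stage2(patch_tokens[i:i + 1]).argmax(dim=-1).item()
            labels.append(["scattered fibroglandular",
                           "heterogeneously/extremely dense"][density])
    return labels
```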
Optical character recognition (OCR) is a vital process that involves the extraction of handwritten or printed text from scanned or printed images, converting it into a format that can be understood and processed by machines. The automatic extraction of text through OCR plays a crucial role in digitizing documents, enhancing productivity, and preserving historical records. This paper offers an exhaustive review of contemporary applications, methodologies, and challenges associated with Arabic OCR. A thorough analysis is conducted on prevailing techniques utilized throughout the OCR process, with a dedicated effort to discern the most efficacious approaches that demonstrate enhanced outcomes. To ensure a thorough evaluation, a meticulous keyword-search methodology is adopted, encompassing a comprehensive analysis of articles relevant to Arabic OCR. In addition to presenting cutting-edge techniques and methods, this paper identifies research gaps within the realm of Arabic OCR. We shed light on potential areas for future exploration and development, thereby guiding researchers toward promising avenues in the field of Arabic OCR. The outcomes of this study provide valuable insights for researchers, practitioners, and stakeholders involved in Arabic OCR, ultimately fostering advancements in the field and facilitating the creation of more accurate and efficient OCR systems for the Arabic language.
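As a purely illustrative example of the OCR process the review surveys (not taken from the paper), the open-source Tesseract engine can be called on a scanned Arabic page through pytesseract; it assumes Tesseract and its Arabic language data ("ara") are installed, and "document.png" is a hypothetical input file.

```python
from PIL import Image
import pytesseract

# load a scanned page and recognize Arabic script; requires the "ara" traineddata
page = Image.open("document.png")
text = pytesseract.image_to_string(page, lang="ara")
print(text)
```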
Table detection in document images is a challenging problem due to diverse layouts, irregular structures, and embedded graphical elements. In this study, we present HTTD (Hierarchical Transformer for Table Detection), a cutting-edge model that combines a Swin-L Transformer backbone with advanced Transformer-based mechanisms to achieve superior performance. HTTD addresses three key challenges: handling diverse document layouts, including historical and modern structures; improving computational efficiency and training convergence; and demonstrating adaptability to non-standard tasks like medical imaging and receipt key detection. Evaluated on benchmark datasets, HTTD achieves state-of-the-art results, with precision rates of 96.98% on ICDAR-2019 cTDaR, 96.43% on TNCR, and 93.14% on TabRecSet. These results validate its effectiveness and efficiency, paving the way for advanced document analysis and data digitization tasks.
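A minimal PyTorch sketch in the spirit of a hierarchical-backbone plus Transformer detection head, not the published HTTD model: a stand-in convolutional backbone produces a feature map, learned object queries attend to it through a Transformer decoder, and small heads predict table boxes and confidences. The backbone, query count, and head sizes are illustrative assumptions; the real model uses a Swin-L backbone.

```python
import torch
import torch.nn as nn

class TableDetector(nn.Module):
    def __init__(self, dim=256, num_queries=50):
        super().__init__()
        # placeholder backbone; a Swin-L feature extractor would be used in practice
        self.backbone = nn.Sequential(
            nn.Conv2d(3, dim, 7, stride=4, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.queries = nn.Embedding(num_queries, dim)
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.box_head = nn.Linear(dim, 4)    # normalized (cx, cy, w, h) per query
        self.score_head = nn.Linear(dim, 2)  # table / no-object per query
    def forward(self, images):               # (B, 3, H, W) document images
        feats = self.backbone(images)                      # (B, dim, h, w)
        memory = feats.flatten(2).transpose(1, 2)          # (B, h*w, dim)
        queries = self.queries.weight.unsqueeze(0).expand(images.size(0), -1, -1)
        decoded = self.decoder(queries, memory)            # queries attend to features
        return self.box_head(decoded).sigmoid(), self.score_head(decoded)

# usage: boxes, scores = TableDetector()(torch.randn(1, 3, 512, 512))
```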