
FDSNet: dynamic multimodal fusion stage selection for autonomous driving via feature disagreement scoring

Research Abstract

Robust and efficient 3D perception is critical for autonomous vehicles operating in complex environments. Multi-sensor fusion, such as Camera+LiDAR, Camera+Radar, or all three modalities, significantly enhances scene understanding. However, most existing frameworks fuse data at a fixed stage, categorized as early fusion (raw data level), mid fusion (intermediate feature level), or late fusion (detection output level), neglecting semantic consistency across modalities. This static strategy may result in performance degradation or unnecessary computation under sensor misalignment or noise. In this work, we propose FDSNet (Feature Disagreement Score Network), a dynamic fusion framework that adaptively selects the fusion stage based on measured semantic consistency across sensor modalities. Each sensor stream (Camera, LiDAR, and Radar) independently extracts mid-level features, which are then transformed into a common Bird’s Eye View (BEV) representation, ensuring spatial alignment across modalities. To assess agreement, a Feature Disagreement Score (FDS) is computed at each BEV location by measuring statistical deviation across modality features. These local scores are aggregated into a global FDS value, which is compared against a threshold to determine the fusion strategy. A low FDS, indicating strong semantic consistency across modalities, triggers mid-level fusion for computational efficiency, whereas a high FDS activates late fusion to preserve detection robustness under cross-modal disagreement. We evaluate FDSNet on the nuScenes dataset across multiple configurations: Camera+Radar, Camera+LiDAR, and Camera+Radar+LiDAR. Experimental results demonstrate that FDSNet achieves consistent improvements over recent multimodal baselines, with gains of up to +3.0% in NDS and +2.6% in mAP on the validation set, and +2.1% in NDS and +1.6% in mAP on the test set, highlighting that dynamic stage selection provides both robustness and quantifiable advantages over static fusion strategies.
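The abstract describes the FDS mechanism only at a high level; the sketch below illustrates one plausible reading, assuming "statistical deviation" means the per-cell standard deviation of normalized BEV features across modalities, with a hypothetical threshold tau (the paper's actual formulation and threshold are not given here).

```python
import numpy as np

def select_fusion_stage(bev_feats, tau=0.5):
    """Illustrative Feature Disagreement Score (FDS) computation.

    bev_feats: (M, C, H, W) array -- M spatially aligned BEV feature maps
               (e.g. Camera, LiDAR, Radar branches).
    tau:       hypothetical decision threshold, not specified in the abstract.
    """
    # Normalize each modality so deviations are comparable across sensors.
    mu = bev_feats.mean(axis=(2, 3), keepdims=True)
    sigma = bev_feats.std(axis=(2, 3), keepdims=True) + 1e-6
    z = (bev_feats - mu) / sigma

    # Local FDS: deviation across modalities at each BEV cell, averaged
    # over channels -> one disagreement score per (H, W) location.
    local_fds = z.std(axis=0).mean(axis=0)

    # Global FDS: aggregate the local scores into a single scalar.
    global_fds = float(local_fds.mean())

    # Low disagreement -> cheap mid-level (feature) fusion;
    # high disagreement -> robust late (detection-level) fusion.
    return ("mid" if global_fds < tau else "late"), global_fds

# Toy usage: three modalities, 64 channels, a 128x128 BEV grid.
feats = np.random.randn(3, 64, 128, 128).astype(np.float32)
print(select_fusion_stage(feats))
```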

Research Authors
Asaad Mohammed, Hosny M. Ibrahim & Nagwa M. Omar
Research Image
Overview of the proposed FDSNet framework for adaptive sensor fusion in 3D object detection. The framework computes a Feature Disagreement Score (FDS) based on BEV features from Camera and LiDAR/Radar branches. Depending on the FDS value, the system dynamically selects between mid-level (feature fusion) and late-level (result fusion) strategies, enabling robust performance across varying sensor reliability and environmental conditions.
Research Journal
Scientific Reports
Research Website
https://doi.org/10.1038/s41598-025-25693-y
Research Year
2025

Optimizing RetinaNet anchors using differential evolution for improved object detection

Research Abstract

Object detection is a fundamental task in computer vision. Detectors fall into two primary types: one-stage detectors, known for their high speed and efficiency, and two-stage detectors, which offer higher accuracy but are often slower due to their more complex architecture. Balancing these two aspects has been a significant challenge in the field. RetinaNet, a premier single-stage object detector, is renowned for its remarkable balance between speed and accuracy. Its success is largely due to the groundbreaking focal loss function, which adeptly addresses the class imbalance prevalent in object detection tasks. This approach significantly enhances detection accuracy while maintaining high speed, making RetinaNet an ideal choice for a wide range of real-world applications. However, its performance decreases when applied to datasets containing objects with unusual characteristics, such as elongated or squat shapes. In such cases, the default anchor parameters may not fully meet the requirements of these specialized objects. To overcome this limitation, we present an enhancement to the RetinaNet model to improve its ability to handle variations in objects across different domains. Specifically, we propose an optimization algorithm based on Differential Evolution (DE) that adjusts anchor scales and ratios while determining the most appropriate number of these parameters for each dataset based on the annotated data. Through extensive experiments on datasets spanning diverse domains, such as the Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) dataset, the Unconstrained Face Detection Dataset (UFDD), the TomatoPlantFactoryDataset, and the widely used Common Objects in Context (COCO) 2017 benchmark, we demonstrate that our proposed method outperforms both the original RetinaNet and anchor-free methods by a considerable margin.
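The abstract likewise summarizes the DE search without implementation detail. The toy sketch below, built on SciPy's differential_evolution, assumes a fixed 3x3 grid of anchors and a shape-only best-IoU fitness over annotated box sizes; the paper additionally searches the number of scales and ratios per dataset, which this sketch omits.

```python
import numpy as np
from scipy.optimize import differential_evolution

def neg_mean_best_iou(params, gt_wh, base_size=32.0):
    """Negated average IoU between each annotated box and its best anchor.

    params: 3 anchor scales followed by 3 aspect ratios (hypothetical
            fixed-count parameterization). DE minimizes, hence the negation.
    gt_wh:  (N, 2) array of ground-truth box widths and heights.
    """
    scales, ratios = params[:3], params[3:]
    # Anchor (w, h) for every scale/ratio pair; IoU is computed shape-only,
    # as if each box and anchor share the same center.
    anchors = np.array([(base_size * s * np.sqrt(r), base_size * s / np.sqrt(r))
                        for s in scales for r in ratios])
    inter = (np.minimum(gt_wh[:, None, 0], anchors[None, :, 0]) *
             np.minimum(gt_wh[:, None, 1], anchors[None, :, 1]))
    union = ((gt_wh[:, 0] * gt_wh[:, 1])[:, None] +
             (anchors[:, 0] * anchors[:, 1])[None, :] - inter)
    return -np.mean((inter / union).max(axis=1))

# Toy stand-in for dataset statistics; in practice gt_wh comes from the
# annotation files of KITTI, UFDD, etc.
rng = np.random.default_rng(0)
gt_wh = rng.uniform(10, 120, size=(500, 2))

bounds = [(0.5, 2.0)] * 3 + [(0.2, 5.0)] * 3  # 3 scales, then 3 ratios
result = differential_evolution(neg_mean_best_iou, bounds, args=(gt_wh,),
                                maxiter=50, seed=0)
print("scales:", result.x[:3], "ratios:", result.x[3:])
```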

Research Authors
Asaad Mohammed, Hosny M. Ibrahim & Nagwa M. Omar
Research Journal
Scientific Reports
Research Website
https://doi.org/10.1038/s41598-025-02888-x
Research Year
2025

"Build Your CV & Boost Interview Performance"

In a fruitful collaboration with the Faculty of Computers and Information at Assiut University, the Information Technology Institute (ITI) participated in a workshop titled:

"Build Your CV & Boost Interview Performance"

The workshop aimed to equip participants with the skills and knowledge to present themselves professionally and meet the demands of the job market.

The workshop covered several important topics, including:

✔️ Writing a professional CV that reflects skills and experience

✔️ Avoiding common CV mistakes

✔️ Effective preparation for job interviews

✔️ Improving performance and building confidence during interviews

The workshop featured lively interaction and productive discussions, reflecting the institute's commitment to supporting students and graduates and enhancing their opportunities to secure suitable employment and build successful career paths.


Educational and recreational trip to Cairo

Under the patronage of:

Prof. Dr. Tayseer Hassan Abdel Hamid – Dean of the College

Prof. Dr. Khaled Fathi Hussein – Vice Dean for Education and Student Affairs

Dr. Magid Gad El Rab Askar – General Supervisor of the Trip

The Student Welfare Department, in cooperation with the College Student Union, organized an educational and recreational trip to Cairo, which included visits to:

The Grand Egyptian Museum

The Pyramids Area

Al-Muizz Street

This trip aimed to support tourism and develop cultural awareness among students. It introduced them to the grandeur of ancient Egyptian civilization and allowed them to view numerous rare artifacts and historical exhibits that tell the stories of thousands of years of Egyptian history. The trip contributed to strengthening national pride and instilling a sense of Egyptian identity.

At the end of the trip, the students expressed their happiness with this enriching experience, emphasizing the importance of organizing such activities that combine entertainment and knowledge, contributing to building students' character and developing their cultural awareness.

✍️ Director of Student Welfare: Mr. Hossam El-Din Mustafa

Congratulations to the students who achieved top positions in student activities.

Congratulations

Professor Dr. Tayseer Hassan Abdel Hamid – Dean of the College

Professor Dr. Khaled Fathi Hussein – Vice Dean for Education and Student Affairs

The Student Welfare Department held a ceremony to honor students who achieved top rankings in various student activities at the university and college levels.

Date: Tuesday, December 16, 2025

Venue: Professor Dr. Youssef Bassiouni Hall

Director of the Student Welfare Department: Mr. Hossam El-Din Mustafa

Wishing all our students continued progress, advancement, and lasting success


 
 