Abstract
The growing deployment of service robots in dynamic indoor environments necessitates robust navigation systems capable of coping with unpredictability and sensor noise. This paper presents a multimodal sensor fusion framework that integrates LiDAR, vision, and inertial sensors to enhance autonomous navigation in complex indoor settings. By leveraging the complementary strengths of these sensors, the proposed system addresses challenges such as occlusions, varying illumination, and cluttered pathways. Experimental validation demonstrates improved localization accuracy and path planning efficiency compared to single-sensor baselines. These findings suggest that sensor fusion is a critical enabler for the reliable, scalable deployment of service robots in diverse indoor scenarios.