Improving Multimodal Distillation for 3D Semantic Segmentation under Domain Shift

Overview of the multimodal distillation pipeline for 3D domain adaptation. MuDDoS adapts from an annotated source dataset to an unannotated target dataset in three steps. Step 1 is 2D-to-3D distillation using a frozen visual foundation model (DINOv2) to obtain aligned 3D representations on all datasets. Step 2 trains a classification head with source labels; the backbone is kept frozen to prevent the 3D representations from drifting and to maintain good performance on the target dataset. Step 3 refines the predictions with self-training via a classical teacher-student scheme.
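Step 1 aligns 3D point features with 2D features from the frozen VFM. The exact objective of the pipeline is not reproduced here; a minimal per-point sketch of a cosine-similarity distillation loss (the function name and shapes are illustrative assumptions) could look like:

```python
import numpy as np

def distill_loss(point_feats, pixel_feats, eps=1e-8):
    """Cosine-similarity distillation: pull each 3D point feature toward
    the frozen 2D VFM feature of the pixel it projects onto.
    point_feats, pixel_feats: (N, D) arrays of paired features.
    Returns the mean of (1 - cosine similarity), a scalar in [0, 2]."""
    p = point_feats / (np.linalg.norm(point_feats, axis=1, keepdims=True) + eps)
    q = pixel_feats / (np.linalg.norm(pixel_feats, axis=1, keepdims=True) + eps)
    return float(np.mean(1.0 - np.sum(p * q, axis=1)))
```

Minimizing such a loss over the lidar backbone, with DINOv2 kept frozen, is what yields 3D representations aligned across datasets.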
Semantic segmentation networks trained under full supervision for one type of lidar fail to generalize to unseen lidars without intervention. To reduce the performance gap under domain shifts, a recent trend is to leverage vision foundation models (VFMs), which provide robust features across domains. In this work, we conduct an exhaustive study to identify recipes for exploiting VFMs in unsupervised domain adaptation for semantic segmentation of lidar point clouds. Building upon unsupervised image-to-lidar knowledge distillation, our study reveals that: (1) the architecture of the lidar backbone is key to maximizing generalization performance on a target domain; (2) it is possible to pretrain a single backbone once and for all, and use it to address many domain shifts; (3) best results are obtained by keeping the pretrained backbone frozen and training an MLP head for semantic segmentation. The resulting pipeline achieves state-of-the-art results in four widely recognized and challenging settings.
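Finding (3), keeping the pretrained backbone frozen and training only a segmentation head, can be illustrated with a small sketch. The paper uses an MLP head; the linear head, function name, and hyperparameters below are simplifying assumptions, with the key point being that only the head weights are updated:

```python
import numpy as np

def train_head(frozen_feats, labels, n_classes, lr=0.1, epochs=200, seed=0):
    """Train a linear classification head by softmax cross-entropy on
    precomputed features. Only the head weights W are updated; the
    features never change, mimicking a frozen pretrained 3D backbone."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(frozen_feats.shape[1], n_classes))
    n = len(labels)
    for _ in range(epochs):
        logits = frozen_feats @ W
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        probs[np.arange(n), labels] -= 1.0            # dCE/dlogits
        W -= lr * frozen_feats.T @ probs / n          # head-only update
    return W
```

Because the backbone is untouched, the aligned representations obtained in step 1 are preserved on the target domain while the head learns from source labels.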
Qualitative results on N->K (top) and N->W (bottom). Points are colored according to the color assigned to their ground-truth label; points whose ground truth does not belong to the shown class are grayed out. The source-only model tends to over-predict vegetation and sometimes mistakes dense, partially occluded objects for other classes, e.g., a pedestrian instead of a motorcycle in the second example. MuDDoS partially or completely recovers the correct classes.
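Step 3's classical teacher-student self-training typically keeps the teacher as a slowly moving average of the student, so that pseudo-labels on the unannotated target data stay stable. The exact update rule is not detailed on this page, so the following is a generic sketch; the momentum value is an assumption:

```python
import numpy as np

def ema_update(teacher_params, student_params, momentum=0.999):
    """Exponential moving average (EMA) update of the teacher from the
    student, as in classical teacher-student self-training: the teacher
    changes slowly, providing stable pseudo-labels for the student."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]
```

At each training step, the student is updated by gradient descent on the teacher's pseudo-labels, then the teacher is refreshed with this EMA rule.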

Published: BMVC, 2025

Björn Michele
Ph.D. Student
