Abstract: Recent advancements in zero-shot semantic segmentation have demonstrated that the large-scale Contrastive Language-Image Pre-training (CLIP) model can effectively transfer multimodal ...