Mimic In-Context Learning for Multimodal Tasks
Yuchu Jiang, Jiale Fu, Chenduo Hao, Xinting Hu, Yingzhe Peng, Xin Geng, Xu Yang
Published in CVPR, 2025
MimIC is a framework that mimics in-context learning for multimodal tasks by injecting lightweight, query-conditioned shift vectors after each attention head. Applied to Idefics1-9B, MimIC improves accuracy by up to +3.46% on VQAv2 and +3.57% on OK-VQA, and raises CIDEr by +9.00 on image captioning, compared with standard 32-shot in-context learning. Moreover, MimIC effectively mitigates the hallucinations commonly introduced by conventional ICL approaches, while keeping inference cost comparable to zero-shot inference.
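The core idea, adding a query-conditioned shift to each attention head's output, can be illustrated with a minimal PyTorch sketch. The module name `QueryConditionedShift`, the gating scheme, and all shapes below are illustrative assumptions, not the authors' released implementation:

```python
import torch
import torch.nn as nn

class QueryConditionedShift(nn.Module):
    """Illustrative sketch: a lightweight shift applied to one attention head's output.

    The shift is computed from the query hidden states via a small linear map,
    approximating the effect that in-context demonstrations would have on the
    head's output. Names and shapes are hypothetical, not the paper's code.
    """

    def __init__(self, head_dim: int):
        super().__init__()
        # Small per-head projection producing the shift vector from the query.
        self.proj = nn.Linear(head_dim, head_dim, bias=True)
        # Learnable gate balancing the original head output and the shift.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, head_out: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # head_out, query: (batch, seq_len, head_dim)
        shift = self.proj(query)
        alpha = torch.sigmoid(self.gate)
        return (1 - alpha) * head_out + alpha * shift


# Usage: one such module would wrap each attention head of the frozen model.
if __name__ == "__main__":
    batch, seq_len, head_dim = 2, 16, 64
    layer_shift = QueryConditionedShift(head_dim)
    head_out = torch.randn(batch, seq_len, head_dim)   # output of one attention head
    query = torch.randn(batch, seq_len, head_dim)      # query states for the same head
    shifted = layer_shift(head_out, query)
    print(shifted.shape)  # torch.Size([2, 16, 64])
```

Because only these small per-head modules are trained while the backbone stays frozen, inference requires no demonstration tokens, which is why the overhead stays close to zero-shot.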
Recommended citation: Jiang Y, Fu J, Hao C, et al. Mimic In-Context Learning for Multimodal Tasks[J]. arXiv preprint arXiv:2504.08851, 2025.
Download Paper