Sitemap

A list of all the posts and pages found on the site. For you robots out there, an XML version is available for digesting as well.

Pages

Posts

Effective Writing – 18 Specific Ways to Improve Writing

58 minute read


After delving into the book “Style: The Basics of Clarity and Grace,” I distilled its essence into a set of key items (in the spirit of my once-favorite technical series, “Effective C++”). I hope these insights help readers hone their writing skills.

Portfolio

Publications

Fast Large Language Model Collaborative Decoding via Speculation

Jiale Fu*, Yuchu Jiang*, Junkai Chen, Jiaming Fan, Xin Geng, Xu Yang
Published in ICML, 2025

Collaborative decoding via Speculation (CoS) is a novel framework that accelerates ensembles of any number of LLMs without sacrificing performance, achieving 1.11x-2.23x speedups over standard ensemble techniques in two- and three-model settings.
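
For readers who want a concrete picture of the idea, below is a minimal Python sketch of speculation-based ensemble decoding: a cheap draft model proposes a short run of tokens, and the (more expensive) ensemble verifies them, keeping the longest agreeing prefix. This is a toy illustration under assumed names (`toy_lm`, `ensemble_next`, `speculative_ensemble_decode`), not the CoS algorithm itself, which operates on real model distributions and acceptance rules.

```python
VOCAB_SIZE = 10

def toy_lm(bias):
    """Return a toy 'LM': deterministically predicts (last token + bias)."""
    def next_token(context):
        last = context[-1] if context else 0
        return (last + bias) % VOCAB_SIZE
    return next_token

def ensemble_next(models, context):
    """Ensemble step: majority vote over the member models' predictions."""
    preds = [m(context) for m in models]
    return max(set(preds), key=preds.count)

def speculative_ensemble_decode(draft, members, prompt, steps=8, k=4):
    """Speculation sketch: a cheap draft model proposes k tokens; the
    (expensive) ensemble verifies them in order, accepting the longest
    agreeing prefix and substituting its own token on first disagreement."""
    out = list(prompt)
    while len(out) < len(prompt) + steps:
        # Draft phase: propose k tokens autoregressively with the cheap model.
        ctx = list(out)
        proposal = []
        for _ in range(k):
            tok = draft(ctx)
            proposal.append(tok)
            ctx.append(tok)
        # Verify phase: the ensemble checks each proposed token in order.
        for tok in proposal:
            target = ensemble_next(members, out)
            if tok == target:
                out.append(tok)      # accepted: draft agrees with ensemble
            else:
                out.append(target)   # rejected: take the ensemble token
                break                # and re-draft from here
    return out

draft = toy_lm(bias=1)
members = [toy_lm(bias=1), toy_lm(bias=1), toy_lm(bias=2)]
print(speculative_ensemble_decode(draft, members, prompt=[0]))
# -> [0, 1, 2, 3, 4, 5, 6, 7, 8]: every draft token is accepted here.
```

In a real implementation the ensemble would verify all k proposed tokens in a single forward pass, which is where the speedup comes from; the toy verifies them one by one for clarity.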

Recommended citation: Fu J, Jiang Y, Chen J, et al. Fast Large Language Model Collaborative Decoding via Speculation[J]. arXiv preprint arXiv:2502.01662, 2025.
Download Paper

Mimic In-Context Learning for Multimodal Tasks

Yuchu Jiang, Jiale Fu, Chenduo Hao, Xinting Hu, Yingzhe Peng, Xin Geng, Xu Yang
Published in CVPR, 2025

MimIC is a novel framework that mimics in-context learning for multimodal tasks by injecting lightweight, query-conditioned shift vectors after each attention head. Applied to Idefics1-9B, MimIC achieves up to +3.46% accuracy improvement on VQAv2, +3.57% on OK-VQA, and +9.00 CIDEr on image captioning, compared to standard 32-shot in-context learning. Moreover, MimIC effectively mitigates hallucination commonly introduced by conventional ICL approaches, while incurring inference overhead comparable to zero-shot inference.
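
The core mechanism described above is a learnable shift applied after each attention head, gated by the query. Below is a hypothetical PyTorch sketch of such a module; the name `HeadShift` and its sigmoid-gate parameterization are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class HeadShift(nn.Module):
    """Toy MimIC-style module: add a lightweight, query-conditioned
    shift vector to each attention head's output, standing in for the
    effect of in-context demonstrations."""
    def __init__(self, num_heads: int, head_dim: int):
        super().__init__()
        # One learnable base shift per head ...
        self.shift = nn.Parameter(torch.zeros(num_heads, head_dim))
        # ... plus a tiny query-conditioned gate controlling its strength.
        self.gate = nn.Linear(head_dim, 1)

    def forward(self, head_out: torch.Tensor, query: torch.Tensor) -> torch.Tensor:
        # head_out, query: (batch, num_heads, seq_len, head_dim)
        alpha = torch.sigmoid(self.gate(query))            # (B, H, S, 1)
        return head_out + alpha * self.shift[None, :, None, :]

# Toy usage: 2 heads of dimension 8, batch of 1, sequence length 4.
m = HeadShift(num_heads=2, head_dim=8)
h = torch.randn(1, 2, 4, 8)
q = torch.randn(1, 2, 4, 8)
print(m(h, q).shape)  # torch.Size([1, 2, 4, 8])
```

Because the extra parameters amount to one vector and one small linear gate per head, the added inference cost is negligible, consistent with the near-zero-shot overhead noted above.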

Recommended citation: Jiang Y, Fu J, Hao C, et al. Mimic In-Context Learning for Multimodal Tasks[J]. arXiv preprint arXiv:2504.08851, 2025.
Download Paper

d²Cache: Accelerating Diffusion-Based LLMs via Dual Adaptive Caching

Yuchu Jiang, Yue Cai, Xiangzhong Luo, Jiale Fu, Jiarui Wang, Chonghan Liu, Xu Yang
Published in ICLR, 2026

Diffusion-based large language models (dLLMs), despite their promising performance, still suffer from inferior inference efficiency. This is because dLLMs rely on bidirectional attention and cannot directly benefit from the standard key-value (KV) cache as autoregressive models (ARMs) do. To tackle this issue, we introduce Dual aDaptive Cache (d²Cache), which is a training-free approximate KV cache framework for accelerating dLLM inference. d²Cache features a two-stage fine-grained selection strategy to identify tokens and adaptively update their KV states at each decoding step, while caching the KV states of the remaining tokens for reuse. Furthermore, d²Cache naturally offers a more reliable decoding alternative, which can enable quasi left-to-right generation and mitigate premature overconfidence in tokens at the end of the sequence. Extensive experimental results on two representative dLLMs (i.e., LLaDA and Dream) demonstrate that d²Cache not only achieves substantial inference speedups, but also yields consistent improvements in generation quality.
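
To make the caching idea concrete, here is a toy NumPy sketch of a single adaptive-cache decoding step: recompute KV states for only a small budget of high-priority tokens and reuse the cached states for the rest. The names `d2cache_step` and `fake_kv`, and the score-based selection rule, are hypothetical simplifications, not the paper's two-stage strategy.

```python
import numpy as np

rng = np.random.default_rng(0)

def fake_kv(tokens: np.ndarray) -> np.ndarray:
    """Stand-in for a transformer layer's per-token KV computation (toy)."""
    return np.outer(tokens, np.ones(4))      # (seq_len, kv_dim)

def d2cache_step(tokens, kv_cache, scores, budget=2):
    """One adaptive-cache step: refresh the KV states of only the
    `budget` highest-scoring tokens (e.g., most uncertain ones) and
    keep the cached states for everyone else."""
    refresh = np.argsort(scores)[-budget:]   # indices of tokens to recompute
    # A real model would recompute only the selected tokens; the toy
    # computes everything and then copies just the selected rows.
    fresh = fake_kv(tokens)
    kv_cache[refresh] = fresh[refresh]       # adaptive in-place update
    return kv_cache

tokens = rng.integers(0, 10, size=6).astype(float)
kv_cache = fake_kv(np.zeros_like(tokens))    # a deliberately stale cache
scores = rng.random(6)                       # per-token refresh priority
kv_cache = d2cache_step(tokens, kv_cache, scores)
print(kv_cache)                              # only two rows were refreshed
```

In this framing, the trade-off is between how many tokens are refreshed per step (compute) and how stale the remaining KV states may become (quality); the abstract above describes d²Cache's actual selection strategy as two-stage and fine-grained.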

Recommended citation: Jiang Y, Cai Y, Luo X, et al. d²Cache: Accelerating Diffusion-Based LLMs via Dual Adaptive Caching[J]. arXiv preprint arXiv:2509.23094, 2025.
Download Paper

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.