d²Cache: Accelerating Diffusion-Based LLMs via Dual Adaptive Caching
Yuchu Jiang, Yue Cai, Xiangzhong Luo, Jiale Fu, Jiarui Wang, Chonghan Liu, Xu Yang
Published in ICLR, 2026
Diffusion-based large language models (dLLMs), despite their promising performance, still suffer from poor inference efficiency: because they rely on bidirectional attention, they cannot directly benefit from the standard key-value (KV) cache the way autoregressive models (ARMs) do. To tackle this issue, we introduce Dual aDaptive Cache (d²Cache), a training-free approximate KV cache framework for accelerating dLLM inference. d²Cache features a two-stage fine-grained selection strategy that identifies the tokens whose KV states need updating at each decoding step and adaptively refreshes them, while caching the KV states of the remaining tokens for reuse. Furthermore, d²Cache naturally offers a more reliable decoding alternative that enables quasi-left-to-right generation and mitigates premature overconfidence in tokens at the end of the sequence. Extensive experimental results on two representative dLLMs (i.e., LLaDA and Dream) demonstrate that d²Cache not only achieves substantial inference speedups but also yields consistent improvements in generation quality.
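To make the caching mechanism concrete, here is a minimal NumPy sketch of one adaptive-cache decode step. It is not d²Cache itself: the drift-based token selection, the `recompute_frac` budget, and the random projections `W_K`/`W_V` are illustrative assumptions standing in for the paper's two-stage selection strategy and a real model's trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)
SEQ_LEN, D = 16, 8
# Hypothetical key/value projections; a real dLLM would use its trained weights.
W_K = rng.standard_normal((D, D)) / np.sqrt(D)
W_V = rng.standard_normal((D, D)) / np.sqrt(D)

def decode_step(hidden, cache, recompute_frac=0.25):
    """One decode step with an approximate KV cache: recompute KV states
    only for the tokens whose hidden states drifted most since the last
    step (an illustrative stand-in for d2Cache's two-stage selection),
    and reuse the cached KV states for every other token."""
    if cache is None:
        # First step: no cache yet, so compute KV states for all tokens.
        k, v = hidden @ W_K, hidden @ W_V
        return k, v, {"k": k, "v": v, "hidden": hidden.copy()}

    # Selection: rank tokens by how much their hidden state changed.
    drift = np.linalg.norm(hidden - cache["hidden"], axis=-1)
    n_update = max(1, int(recompute_frac * len(hidden)))
    idx = np.argsort(drift)[-n_update:]

    # Adaptive update: refresh KV states for the selected tokens only;
    # the remaining tokens keep their cached KV states.
    k, v = cache["k"].copy(), cache["v"].copy()
    k[idx] = hidden[idx] @ W_K
    v[idx] = hidden[idx] @ W_V
    return k, v, {"k": k, "v": v, "hidden": hidden.copy()}

# Usage: run a few steps while some hidden states change, as they would
# across denoising iterations of a dLLM.
hidden, cache = rng.standard_normal((SEQ_LEN, D)), None
for _ in range(3):
    k, v, cache = decode_step(hidden, cache)
    hidden[rng.integers(0, SEQ_LEN, size=4)] += 0.5
```

The point of this design is that each step pays full KV computation only for a small, adaptively chosen subset of tokens, which is where the speedup over recomputing all bidirectional KV states comes from.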
Recommended citation: Jiang Y, Cai Y, Luo X, et al. d²Cache: Accelerating Diffusion-Based LLMs via Dual Adaptive Caching[J]. arXiv preprint arXiv:2509.23094, 2025.
Download Paper
