Announcement_19
🔔 Check out our latest works accepted at ICML 2026, South Korea 🇰🇷. They include (i) FOCA, a data-efficient vision-language-action model that learns to predict future world states in latent space and naturally supports action-free co-training with synthetic rollouts from a video world model; (ii) Token-level Bregman Preference Optimization, a novel Direct Preference Optimization (DPO) method that models preferences over next-token actions conditioned on state, rather than over complete output sequences. Code will be released soon!