Announcement_8
Excited to share our latest work 🎉:
(i) On Zero-Initialized Attention: Optimal Prompt and Gating Factor Estimation – We introduce a Mixture of Experts (MoE) perspective to explain the mechanism behind LLaMA-Adapter’s prompt learning.
(ii) MGPath – A novel multi-granular prompt learning method for few-shot whole-slide image (WSI) pathology prediction, leveraging foundation vision-language models.