STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models

- Transformer-based speech Self-Supervised Learning models carry large parameter sizes and computational costs
- STaR
  - Compresses a speech Self-Supervised Learning model by distilling speech temporal relations
  - In particular, transfers the temporal relations between speech frames to obtain a lightweight student (see the sketch below)
- Paper (ICASSP 2024): Paper Link

1. Intro..
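To make the summarized idea concrete, here is a minimal sketch of temporal-relation distillation, assuming the relation is taken as a frame-to-frame cosine-similarity matrix matched with an MSE loss; the names (`temporal_relation`, `star_distillation_loss`) and the specific relation and loss choices are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def temporal_relation(features: torch.Tensor) -> torch.Tensor:
    """Frame-to-frame relation matrix for (batch, time, dim) features.

    Assumed choice: cosine similarity between every pair of frames;
    the paper's actual relation measure may differ.
    """
    normed = F.normalize(features, dim=-1)      # unit-normalize each frame vector
    return normed @ normed.transpose(1, 2)      # (batch, time, time) similarities

def star_distillation_loss(student_feats: torch.Tensor,
                           teacher_feats: torch.Tensor) -> torch.Tensor:
    """Match the student's temporal relations to the teacher's."""
    with torch.no_grad():
        target = temporal_relation(teacher_feats)  # teacher relations are fixed
    pred = temporal_relation(student_feats)
    return F.mse_loss(pred, target)

# Toy usage: teacher and student may have different hidden dimensions
teacher_out = torch.randn(4, 100, 768)   # e.g., frames from a large SSL teacher
student_out = torch.randn(4, 100, 256)   # frames from a lightweight student
loss = star_distillation_loss(student_out, teacher_out)
print(loss.item())
```

One appeal of distilling a relation matrix rather than the features themselves is that the target is (time x time) and dimension-free, so the student's hidden size need not match the teacher's and no projection layer is required.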
