
DistilHuBERT: Speech Representation Learning by Layer-Wise Distillation of Hidden-Unit BERT

- Existing self-supervised speech representation learning methods require large memory and high pre-training cost.
- DistilHuBERT
  - A multi-task learning framework that directly distills hidden representations from HuBERT.
  - This reduces the size of HuBERT by 75%.
- Paper (ICASSP 2022): Paper Link

1. Introduction
Self-supervised learning for speech representations such as Wav2Vec ...
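To make the multi-task distillation idea summarized above more concrete, below is a minimal PyTorch sketch, not the paper's actual code: a shallow student encoder with one prediction head per distilled teacher layer, trained to match frozen HuBERT hidden states. The number of heads, the student depth, the L1 + cosine-similarity objective form, and all module names here are assumptions for illustration; the exact formulation is in the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DistillLoss(nn.Module):
    """L1 distance plus a cosine-similarity term between the student's
    prediction and the teacher's hidden state (illustrative objective)."""
    def __init__(self, lam: float = 1.0):
        super().__init__()
        self.lam = lam

    def forward(self, student: torch.Tensor, teacher: torch.Tensor) -> torch.Tensor:
        # student, teacher: (batch, time, dim)
        l1 = F.l1_loss(student, teacher)
        cos = F.cosine_similarity(student, teacher, dim=-1)  # (batch, time)
        # reward high cosine similarity via -log(sigmoid(cos))
        return l1 - self.lam * F.logsigmoid(cos).mean()

feat_dim, n_heads = 768, 3  # assumed hidden size and number of distilled layers

# Shallow student encoder shared across tasks (assumed 2 Transformer layers).
student_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=feat_dim, nhead=12, batch_first=True),
    num_layers=2,
)
# One prediction head per teacher layer being distilled (multi-task setup).
heads = nn.ModuleList([nn.Linear(feat_dim, feat_dim) for _ in range(n_heads)])
criterion = DistillLoss()

# Dummy tensors standing in for feature-extractor output and frozen HuBERT layers.
feats = torch.randn(4, 100, feat_dim)  # (batch, time, dim)
teacher_hidden = [torch.randn(4, 100, feat_dim) for _ in range(n_heads)]

shared = student_encoder(feats)
loss = sum(criterion(head(shared), t) for head, t in zip(heads, teacher_hidden))
loss.backward()

Because only the small shared encoder and the linear heads are trained, and the heads can be discarded after pre-training, this kind of setup is what allows the student to be much smaller than the original HuBERT.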