DiceHuBERT: Distilling HuBERT with a Self-Supervised Learning Objective

- A compact self-supervised learning-based speech foundation model is needed.
- DiceHuBERT leverages HuBERT's iterative self-distillation mechanism to directly replace the original model with a student model.
- By using the same objective as HuBERT pre-training, it eliminates the need for additional modules and architectural constraints.

Paper (INTERSPEECH 2025): Paper Link

1. Introduction
Self-Sup..
Paper/Representation
2025. 8. 28. 17:05
