FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning

- Self-supervised learning is limited in terms of computational cost.
- FitHuBERT
  - Improves inference time by using a Time-Reduction layer (a rough sketch is given after this excerpt)
  - Prevents performance degradation through Hint-based Distillation
- Paper (INTERSPEECH 2022): Paper Link

1. Introduction
Large-scale speech Self-Supervised Learning (SSL) can leverage speech-only data for pre-training ..
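
The excerpt only names the Time-Reduction layer, so the following is a minimal sketch of one common way to realize it: halving the temporal resolution by concatenating adjacent frames and projecting back to the model dimension. The module name, the concatenation-based formulation, and the stride of 2 are illustrative assumptions, not necessarily the exact operator used in FitHuBERT.

```python
import torch
import torch.nn as nn


class TimeReduction(nn.Module):
    """Halve the sequence length by concatenating adjacent frames
    and projecting the stacked features back to the model dimension."""

    def __init__(self, dim: int, stride: int = 2):
        super().__init__()
        self.stride = stride
        self.proj = nn.Linear(dim * stride, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim)
        b, t, d = x.shape
        # pad the time axis so its length is divisible by the stride
        pad = (-t) % self.stride
        if pad:
            x = nn.functional.pad(x, (0, 0, 0, pad))
        # group `stride` consecutive frames into one, then project back to `dim`
        x = x.reshape(b, -1, d * self.stride)  # (batch, time/stride, dim*stride)
        return self.proj(x)


# usage: halve a (batch, time, dim) sequence of HuBERT-style features
x = torch.randn(4, 100, 768)
reducer = TimeReduction(dim=768, stride=2)
y = reducer(x)  # (4, 50, 768)
```

Because every Transformer layer after this module sees half as many frames, self-attention cost drops roughly fourfold, which is the intuition behind trading width for depth while keeping inference time low.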