
LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT

- Self-supervised representation learning is hard to use in low-resource settings because of its storage-intensive Transformer.
- LightHuBERT prunes structured parameters using a once-for-all Transformer compression framework.
- Two-stage distillation transfers HuBERT's contextualized latent representations to the student (see the sketch after this list).
- Paper (INTERSPEECH 2022)
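A minimal PyTorch sketch of the two ideas above: sampling a structured subnet from a once-for-all supernet by slicing weight matrices, and regressing the sampled subnet's output onto the teacher's contextualized representations. All names here (`TinySupernet`, `SliceableLinear`, `WIDTH_CHOICES`, the single-layer architecture, the plain MSE objective) are illustrative assumptions, not LightHuBERT's actual implementation or its two-stage schedule.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

WIDTH_CHOICES = [256, 384, 512]   # candidate hidden widths (hypothetical)
TEACHER_DIM = 768                 # HuBERT-Base hidden size

class SliceableLinear(nn.Linear):
    """Linear layer whose active output width can be shrunk at run time."""
    def forward(self, x, out_dim=None):
        out_dim = out_dim or self.out_features
        in_dim = x.size(-1)
        # Use only a sub-matrix of the full weight -> a structured subnet.
        return F.linear(x, self.weight[:out_dim, :in_dim], self.bias[:out_dim])

class TinySupernet(nn.Module):
    def __init__(self, in_dim=80, max_dim=max(WIDTH_CHOICES)):
        super().__init__()
        self.proj = SliceableLinear(in_dim, max_dim)
        # One projection head per width to map into the teacher's space.
        self.head = nn.ModuleDict(
            {str(d): nn.Linear(d, TEACHER_DIM) for d in WIDTH_CHOICES}
        )

    def forward(self, x):
        d = random.choice(WIDTH_CHOICES)         # sample a subnet each step
        h = torch.relu(self.proj(x, out_dim=d))  # run only the sliced weights
        return self.head[str(d)](h)

def distill_loss(student_out, teacher_repr):
    # Regress the student onto the teacher's contextualized representation.
    return F.mse_loss(student_out, teacher_repr)

# Usage: one training step against frozen teacher features.
x = torch.randn(4, 100, 80)                 # (batch, frames, features)
teacher = torch.randn(4, 100, TEACHER_DIM)  # stand-in for HuBERT outputs
model = TinySupernet()
loss = distill_loss(model(x), teacher)
loss.backward()
```

The point of the slicing trick is that every subnet shares the supernet's weights, so after training one model you can deploy many sizes without retraining; the distillation target keeps each sampled subnet close to the teacher's representations.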