![](http://i1.daumcdn.net/thumb/C148x148/?fname=https://blog.kakaocdn.net/dn/cutchS/btsL3QxtA42/Jka0HvFL8g9D0zhDvh4Q5K/img.png)
ProsodyFlow: High-Fidelity Text-to-Speech through Conditional Flow Matching and Prosody Modeling with Large Speech Language Models
Reflecting diverse, natural prosody in Text-to-Speech is still a challenge.
ProsodyFlow
- Models prosodic features by combining a large self-supervised speech model with conditional flow matching
- Extracts acoustic features with a speech LLM, maps them into a prosody latent space, and then applies conditional flow ...
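To make the conditional flow matching idea concrete, here is a minimal sketch (not the authors' code) of an optimal-transport CFM training objective over prosody latents, where the conditioning vector stands in for features extracted by a speech LLM; all tensor shapes, dimensions, and module names are hypothetical.

```python
import torch
import torch.nn as nn

class ProsodyVectorField(nn.Module):
    """Predicts the flow-matching vector field v(x_t, t | cond)."""
    def __init__(self, latent_dim=64, cond_dim=256, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, x_t, t, cond):
        return self.net(torch.cat([x_t, cond, t], dim=-1))

def cfm_loss(model, x1, cond, sigma_min=1e-4):
    """OT-CFM loss: regress the velocity along a straight path
    from Gaussian noise x0 to the target prosody latent x1."""
    x0 = torch.randn_like(x1)                          # noise sample
    t = torch.rand(x1.size(0), 1)                      # random time in [0, 1]
    x_t = (1 - (1 - sigma_min) * t) * x0 + t * x1      # interpolated point
    target_v = x1 - (1 - sigma_min) * x0               # constant target velocity
    return ((model(x_t, t, cond) - target_v) ** 2).mean()

# Dummy usage: x1 = prosody latents, cond = speech-LLM acoustic features (assumed).
model = ProsodyVectorField()
x1, cond = torch.randn(8, 64), torch.randn(8, 256)
loss = cfm_loss(model, x1, cond)
loss.backward()
```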
![](http://i1.daumcdn.net/thumb/C148x148/?fname=https://blog.kakaocdn.net/dn/NOQjG/btsL2R9W1UA/RgRXYSDa9xGwzZ6eT7esS0/img.png)
StableVC: Style Controllable Zero-Shot Voice Conversion with Conditional Flow Matching
Zero-shot voice conversion has the following limitations:
- Style and timbre cannot be transferred independently to different unseen speakers
- Inference is slow due to autoregressive modeling or many sampling steps
- The quality and similarity of converted samples are still unsatisfactory
StableVC
- Decomposes speech into linguistic content, timbre, and style, and uses a conditional flow matching module to ...
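As a rough illustration of why flow matching helps inference speed, below is a hypothetical few-step Euler sampler for a flow-matching decoder conditioned on separate content, timbre, and style representations; the `vector_field` network and all shapes are placeholders rather than the actual StableVC architecture.

```python
import torch

@torch.no_grad()
def convert(vector_field, content, timbre, style, steps=10, mel_dim=80):
    """Few-step Euler integration of dx/dt = v(x, t | content, timbre, style),
    starting from Gaussian noise and ending at a mel-like acoustic target."""
    batch, frames = content.shape[:2]
    x = torch.randn(batch, frames, mel_dim)
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((batch, 1, 1), i * dt)
        x = x + dt * vector_field(x, t, content, timbre, style)
    return x

# Dummy usage with a trivial stand-in vector field.
field = lambda x, t, c, tb, st: torch.zeros_like(x)
mel = convert(field, content=torch.randn(2, 100, 256),
              timbre=torch.randn(2, 192), style=torch.randn(2, 128))
print(mel.shape)  # torch.Size([2, 100, 80])
```

Each Euler step is a single network call, so a handful of steps can replace the long token-by-token loop of an autoregressive decoder.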
![](http://i1.daumcdn.net/thumb/C148x148/?fname=https://blog.kakaocdn.net/dn/upay9/btsL2ISQm8Z/D6d1bsGTaC3M85osmBch01/img.png)
VoiceMixer: Adversarial Voice Style Mixup
Voice conversion is still limited because source speech and voice style are not sufficiently decomposed.
VoiceMixer
- Decomposes content and style through an information bottleneck built on self-supervised representation learning
- Achieves better generalization through adversarial feedback on each type of information
Paper (NeurIPS 2021): Paper Link
1. Introduction
Voice Conversion (VC) maintains the content information of the source speaker while ...
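A very rough sketch of the content/style decomposition idea: a content encoder whose output passes through an information bottleneck (here, temporal downsampling plus a narrow channel dimension), a style encoder that pools a global style vector, and a decoder that recombines them. This is an illustrative stand-in, not VoiceMixer's actual networks, and the adversarial feedback discriminators are omitted.

```python
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Bottleneck: downsample in time and shrink channels so that
    speaker/style information is squeezed out of the content code."""
    def __init__(self, in_dim=80, bottleneck=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_dim, 128, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(128, bottleneck, 5, stride=2, padding=2),
        )

    def forward(self, mel):            # mel: (B, 80, T)
        return self.conv(mel)          # (B, 16, T/4)

class StyleEncoder(nn.Module):
    """Global style vector via temporal average pooling."""
    def __init__(self, in_dim=80, style_dim=128):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, style_dim, 5, padding=2)

    def forward(self, mel):
        return self.conv(mel).mean(dim=-1)   # (B, style_dim)

class Decoder(nn.Module):
    def __init__(self, bottleneck=16, style_dim=128, out_dim=80):
        super().__init__()
        self.up = nn.ConvTranspose1d(bottleneck + style_dim, out_dim, 8, stride=4, padding=2)

    def forward(self, content, style):
        style = style.unsqueeze(-1).expand(-1, -1, content.size(-1))
        return self.up(torch.cat([content, style], dim=1))

# Conversion = content from the source + style from the target.
src, tgt = torch.randn(1, 80, 128), torch.randn(1, 80, 96)
out = Decoder()(ContentEncoder()(src), StyleEncoder()(tgt))
print(out.shape)  # torch.Size([1, 80, 128])
```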
![](http://i1.daumcdn.net/thumb/C148x148/?fname=https://blog.kakaocdn.net/dn/Fd37o/btsL1h2EkVQ/miRonI5HLCsNeH9qkBAnk1/img.png)
Generative Pre-trained Speech Language Model with Efficient Hierarchical Transformer
Speech language models are still limited in modeling the long acoustic sequences of neural audio codecs.
Generative Pre-trained Speech Transformer (GPST)
- Quantizes the audio waveform into two kinds of discrete speech representations and integrates them into a hierarchical transformer architecture
- Trained in an end-to-end unsupervised manner, so that diverse speaker ident...
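To give a feel for the hierarchical idea, below is a toy skeleton in which a global transformer runs over coarse tokens and a local transformer predicts the fine acoustic tokens of each frame conditioned on the global hidden state. Vocabulary sizes, dimensions, and the two-level split are assumptions for illustration, not GPST's actual configuration, and causal masks are omitted for brevity.

```python
import torch
import torch.nn as nn

class HierarchicalSpeechLM(nn.Module):
    def __init__(self, coarse_vocab=512, fine_vocab=1024, n_fine=4, d=256):
        super().__init__()
        self.coarse_emb = nn.Embedding(coarse_vocab, d)
        self.fine_emb = nn.Embedding(fine_vocab, d)
        enc = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2)
        self.global_tf = enc()   # attends across the whole coarse sequence
        self.local_tf = enc()    # attends only within one frame's fine tokens
        self.head = nn.Linear(d, fine_vocab)
        self.n_fine = n_fine

    def forward(self, coarse_tokens, fine_tokens):
        # coarse_tokens: (B, T); fine_tokens: (B, T, n_fine)
        g = self.global_tf(self.coarse_emb(coarse_tokens))           # (B, T, d)
        B, T, _ = g.shape
        # Local transformer sees [global state, fine tokens] of one frame at a time.
        local_in = torch.cat([g.unsqueeze(2),                         # (B, T, 1, d)
                              self.fine_emb(fine_tokens)], dim=2)     # (B, T, 1+n_fine, d)
        local_in = local_in.view(B * T, 1 + self.n_fine, -1)
        h = self.local_tf(local_in)[:, :-1]                           # one hidden state per fine token
        return self.head(h).view(B, T, self.n_fine, -1)               # next-token logits

model = HierarchicalSpeechLM()
logits = model(torch.randint(0, 512, (2, 50)), torch.randint(0, 1024, (2, 50, 4)))
print(logits.shape)  # torch.Size([2, 50, 4, 1024])
```

The point of the split is that the expensive global attention only runs over T coarse positions, while the fine tokens are handled by a small local model, instead of one flat transformer over T × n_fine codec tokens.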
![](http://i1.daumcdn.net/thumb/C148x148/?fname=https://blog.kakaocdn.net/dn/m7dRH/btsL1swibql/FcrMurIeNiUBgwSOp5ZQuk/img.png)
SpeechX: Neural Codec Language Model as a Versatile Speech Transformer
Speech models driven by audio-text prompts are limited in handling diverse tasks beyond text-to-speech.
SpeechX
- A speech model that supports a variety of tasks such as zero-shot Text-to-Speech, Speech Editing, Noise Suppression, and Target Speaker Extraction
- Introduces multi-task learning based on neural codec language modeling and task-dependent prompting
Paper (TASLP 2024): Paper Link
1. Introduction ...
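Task-dependent prompting essentially means prepending a task token to the text and acoustic prompt before codec language modeling. The sketch below only shows how such an input sequence might be assembled; the token ids and layout are invented for illustration and do not follow SpeechX's exact specification.

```python
# Hypothetical assembly of a task-dependent prompt for a codec language model.
# Special token ids and the sequence layout are assumptions, not SpeechX's spec.
TASK_TOKENS = {"tts": 0, "noise_suppression": 1, "speech_editing": 2,
               "target_speaker_extraction": 3}

def build_prompt(task, text_tokens, codec_prompt_tokens):
    """Return [<task>, text..., <sep>, acoustic prompt...] as one id sequence.
    The codec LM is then trained to continue this sequence with the
    codec tokens of the desired output speech."""
    SEP = 4  # hypothetical separator id
    return [TASK_TOKENS[task]] + list(text_tokens) + [SEP] + list(codec_prompt_tokens)

# e.g., zero-shot TTS: text/phoneme ids plus an enrolled speaker's codec tokens
seq = build_prompt("tts", text_tokens=[101, 102, 103],
                   codec_prompt_tokens=[900, 901, 902, 903])
print(seq)  # [0, 101, 102, 103, 4, 900, 901, 902, 903]
```

For noise suppression or target speaker extraction, the acoustic part of the prompt would instead carry the codec tokens of the noisy or mixed input speech, which is how a single model can serve multiple tasks.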
![](http://i1.daumcdn.net/thumb/C148x148/?fname=https://blog.kakaocdn.net/dn/Z0Sei/btsLMofsF6d/7pikKUHVUw4Xisxx8xkEyk/img.png)
FluentTTS: Text-dependent Fine-grained Style Control for Multi-style TTS
Neural text-to-speech models should be able to flexibly control local prosodic variation.
FluentTTS
- Predicts the fundamental frequency $F0$ of each text unit conditioned on an utterance-wise global style embedding
- Additionally estimates a multi-style embedding through a multi-style encoder that takes the global utterance-wise embedding and the local $F0$ embedding as input
Paper (INTERSPEECH 202...
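As a small illustration of text-dependent $F0$ prediction, the sketch below conditions a per-token $F0$ predictor on a broadcast utterance-level style embedding and then fuses that global embedding with a local $F0$ embedding; dimensions and module structure are placeholders rather than FluentTTS's actual design.

```python
import torch
import torch.nn as nn

class F0Predictor(nn.Module):
    """Predicts one F0 value per text token, conditioned on a global style vector."""
    def __init__(self, text_dim=256, style_dim=128, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + style_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, text_hidden, style):             # (B, N, text_dim), (B, style_dim)
        style = style.unsqueeze(1).expand(-1, text_hidden.size(1), -1)
        return self.net(torch.cat([text_hidden, style], dim=-1)).squeeze(-1)  # (B, N)

class MultiStyleEncoder(nn.Module):
    """Fuses the global style embedding with a local F0 embedding per token."""
    def __init__(self, style_dim=128, f0_bins=256, out_dim=256):
        super().__init__()
        self.f0_emb = nn.Embedding(f0_bins, style_dim)
        self.proj = nn.Linear(2 * style_dim, out_dim)

    def forward(self, style, f0_quantized):            # (B, style_dim), (B, N) long
        local = self.f0_emb(f0_quantized)               # (B, N, style_dim)
        glob = style.unsqueeze(1).expand_as(local)
        return self.proj(torch.cat([glob, local], dim=-1))   # (B, N, out_dim)

text_hidden, style = torch.randn(2, 20, 256), torch.randn(2, 128)
f0 = F0Predictor()(text_hidden, style)                  # (2, 20)
f0_q = f0.clamp(0, 255).long()                          # toy quantization into bins
multi_style = MultiStyleEncoder()(style, f0_q)
print(f0.shape, multi_style.shape)
```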