Word-Level Emotional Expression Control in Zero-Shot Text-to-Speech Synthesis
Most emotional Text-to-Speech systems struggle with word-level control.
WeSCon
- A self-training framework that controls emotion and speaking rate on top of a pre-trained zero-shot Text-to-Speech model
- Introduces a transition-smoothing strategy and a dynamic speed control mechanism to guide word-level expressive synthesis
- At inference, a dynamic emotional attention bias mechan…
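The general idea behind an attention bias mechanism like the one named above can be illustrated in isolation: an additive bias on the attention logits steers decoding toward particular key frames, e.g. those aligned to an emotionally emphasized word. A minimal numpy sketch; the shapes, the bias pattern, and the helper name are hypothetical, not the paper's actual mechanism:

```python
import numpy as np

def attention_with_bias(q, k, v, bias):
    """Scaled dot-product attention with an additive bias on the logits."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d) + bias          # bias shape: (T_q, T_k)
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v

rng = np.random.default_rng(0)
T, d = 6, 8
q, k, v = rng.normal(size=(3, T, d))

# Hypothetical bias: boost attention toward key frames 2..3, standing in
# for the frames aligned to a word whose emotion should be emphasized.
bias = np.zeros((T, T))
bias[:, 2:4] = 2.0

out = attention_with_bias(q, k, v, bias)
print(out.shape)  # (6, 8)
```

Making the bias "dynamic" would mean recomputing it per step from the word-level alignment rather than fixing it as here.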
TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language Modeling
Existing speech tokenizers suffer from high frame rates, dependence on auxiliary pre-trained models, and complex training processes.
TaDiCodec
- Performs end-to-end optimization of quantization and reconstruction with a Diffusion AutoEncoder
- Integrates text guidance into the diffusion decoder to achieve optimal compression
Paper (NeurIPS 2025): Paper Link
1. Introduction…
BlockDecoder: Boosting ASR Decoders with Context and Merger Modules
In attention-based encoder-decoder models, the decoder generates the Automatic Speech Recognition output autoregressively.
- In particular, the initial layers build textual context, and the later layers merge acoustic and textual information.
BlockDecoder
- Introduces a purely text-based text encoder and a merger that combines their information
- Reuses the encoder representations and the text encod…
Shallow Flow Matching for Coarse-to-Fine Text-to-Speech Synthesis
Flow Matching-based Text-to-Speech models can be improved.
Shallow Flow Matching (SFM)
- Constructs an intermediate state along the Flow Matching path from a coarse representation
- Introduces an orthogonal projection to adaptively determine the temporal position of that state
Paper (NeurIPS 2025): Paper Link
1. Introduction
Flow Matching (FM)-based models such as VoiceBox, ReFlow-TTS, and VoiceFlow…
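The orthogonal-projection idea can be sketched for the common linear FM path $x_t = (1 - t)\,x_0 + t\,x_1$: the temporal position $t$ of the intermediate state is the projection coefficient of the coarse representation onto the segment from $x_0$ to $x_1$. A minimal numpy sketch under that linear-path assumption (the function name and shapes are illustrative, not the paper's exact formulation):

```python
import numpy as np

def project_onto_fm_path(c, x0, x1):
    """Orthogonally project a coarse representation c onto the linear
    Flow Matching path x_t = (1 - t) * x0 + t * x1; return (t, x_t)."""
    d = x1 - x0
    t = float(np.dot(c - x0, d) / np.dot(d, d))  # projection coefficient
    t = min(max(t, 0.0), 1.0)                    # keep t on the path
    return t, (1.0 - t) * x0 + t * x1

rng = np.random.default_rng(0)
x0 = rng.normal(size=16)   # noise sample (path start)
x1 = rng.normal(size=16)   # target feature (path end)
# A coarse estimate lying roughly 70% of the way along the path:
c = 0.7 * x1 + 0.3 * x0 + 0.01 * rng.normal(size=16)

t, xt = project_onto_fm_path(c, x0, x1)
print(round(t, 2))  # close to 0.7
```

Starting ODE integration from $(t, x_t)$ instead of $(0, x_0)$ is what makes the matching "shallow": the coarse representation already covers the early part of the path.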
FocalCodec: Low-Bitrate Speech Coding via Focal Modulation Networks
Existing neural codecs suffer from high bitrates and semantic/acoustic information loss.
FocalCodec
- Compresses speech with a single binary codebook based on focal modulation
- Preserves semantic/acoustic information, achieving strong performance on various downstream tasks
Paper (NeurIPS 2025): Paper Link
1. Introduction
Speech language models such as AudioLM and AudioGen use token-based sp…
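The single-binary-codebook idea can be illustrated generically: each latent dimension is thresholded into one bit, so a D-dimensional latent maps to one of 2^D codes without a learned nearest-neighbor codebook lookup. A minimal numpy sketch (the function and shapes are hypothetical; the real codec uses a learned encoder and a straight-through estimator so gradients flow through the hard threshold):

```python
import numpy as np

def binary_quantize(z):
    """Quantize a latent vector with a single binary codebook: each
    dimension becomes a bit, so a D-dim latent yields one of 2**D codes."""
    bits = (z > 0).astype(np.int64)          # hard binary code
    index = int("".join(map(str, bits)), 2)  # integer code index
    code = np.where(bits == 1, 1.0, -1.0)    # +/-1 vector fed to the decoder
    return bits, index, code

z = np.array([0.3, -1.2, 0.8, 0.05])
bits, index, code = binary_quantize(z)
print(bits)   # [1 0 1 1]
print(index)  # 11
print(code)   # [ 1. -1.  1.  1.]
```

Because the codebook is implicit in the thresholding, the bitrate is simply D bits per quantized frame.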
SSAMBA: Self-Supervised Audio Representation Learning with Mamba State Space Model
Transformer architectures for audio representation learning have quadratic complexity in memory and inference time.
SSAMBA
- Applies Mamba, a State Space Model, to self-supervised audio representation learning
- Uses bidirectional Mamba to capture complex audio patterns and learn robust audio representations from unlabeled datasets
Paper (SLT 20…
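The bidirectional trick is independent of the particular SSM: a causal scan only sees the past, so the sequence is also processed in reverse and the two passes are combined. A toy numpy sketch with a simple exponential-moving-average recurrence standing in for a Mamba block (this stand-in is an assumption for illustration, not Mamba's actual selective scan):

```python
import numpy as np

def causal_scan(x, alpha=0.9):
    """Toy causal recurrence h_t = alpha * h_{t-1} + x_t, a stand-in for
    an SSM block; each output position sees only past inputs."""
    h = np.zeros(x.shape[-1])
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        h = alpha * h + x[t]
        out[t] = h
    return out

def bidirectional_scan(x):
    """Run the causal scan forward and on the time-reversed sequence,
    then sum, so every position sees both past and future context."""
    fwd = causal_scan(x)
    bwd = causal_scan(x[::-1])[::-1]
    return fwd + bwd

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 4))   # (time, feature) spectrogram-patch-like input
y = bidirectional_scan(x)
print(y.shape)  # (10, 4)
```

Both passes remain linear in sequence length, which is the complexity advantage over self-attention noted above.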
SSAST: Self-Supervised Audio Spectrogram Transformer
Transformers can be applied to audio tasks.
SSAST
- Improves the Audio Spectrogram Transformer through Self-Supervised Learning
- Applies pre-training based on joint discriminative and generative masked spectrogram patch modeling
Paper (AAAI 2022): Paper Link
1. Introduction
Pure self-attention-based models such as the Audio Spectrogram Transformer (AST) require more training data than conventional CNN-based models…
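The data-preparation side of masked spectrogram patch modeling can be sketched: split the spectrogram into non-overlapping patches and hide a random subset, then train the model to identify (discriminative) and reconstruct (generative) the hidden patches. A minimal numpy sketch; patch size, mask ratio, and zero-masking are illustrative assumptions (the paper uses a learnable mask token, not zeros):

```python
import numpy as np

def mask_spectrogram_patches(spec, patch=(16, 16), mask_ratio=0.5, seed=0):
    """Split a spectrogram into non-overlapping patches and hide a random
    subset; the model is trained to classify and reconstruct them."""
    F, T = spec.shape
    pf, pt = patch
    n_f, n_t = F // pf, T // pt                      # patch grid
    rng = np.random.default_rng(seed)
    n_mask = int(mask_ratio * n_f * n_t)
    masked_ids = rng.choice(n_f * n_t, size=n_mask, replace=False)
    out = spec.copy()
    for idx in masked_ids:
        i, j = divmod(idx, n_t)
        out[i*pf:(i+1)*pf, j*pt:(j+1)*pt] = 0.0      # mask token in practice
    return out, masked_ids

spec = np.random.default_rng(1).normal(size=(128, 256))  # (mel bins, frames)
masked, ids = mask_spectrogram_patches(spec)
print(len(ids))  # 64: half of the 8 x 16 = 128 patches are masked
```

The reconstruction loss is computed only on the masked patch positions, which is what makes large unlabeled audio corpora usable for pre-training.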
EmoVoice: LLM-based Emotional Text-to-Speech Model with Freestyle Text Prompting
Text-to-Speech models still fall short in emotional expression.
EmoVoice
- Supports fine-grained freestyle natural language emotion control by leveraging a Large Language Model
- Outputs phoneme tokens and audio tokens in parallel to improve content consistency
Paper (MM 2025): Paper Link
1. Introduction
Emotion-controllable Text-to-Speech (TTS) models…
HierSpeech++: Bridging the Gap Between Semantic and Acoustic Representation of Speech by Hierarchical Variational Inference for Zero-Shot Speech Synthesis
Zero-shot speech synthesis has limitations in inference speed and robustness.
HierSpeech++
- Improves naturalness with a hierarchical synthesis framework
- Introduces a Text-to-Vec framework that generates self-supervised/$F0$ representations from text representations and prosody prompts, and 16k…
