Codec BPE is an implementation of Acoustic BPE (Shen et al., 2024), extended for RVQ-based Neural Audio Codecs such as EnCodec (Défossez et al., 2022), DAC (Kumar et al., 2023), Mimi (Défossez et al., 2024), and FunCodec (Du et al., 2024). It is built on top of the HuggingFace Tokenizers library.
Codec BPE flattens multi-level codes from Residual Vector Quantizers (RVQ) and converts them into unicode strings for tokenization into compressed token sequences. For example, a single Codec BPE token might represent a 4-gram of codes from 4 codebooks representing a single acoustic unit, a 6-gram comprising a whole acoustic unit and half of the next one, or even an 8-gram representing two whole acoustic units. Depending on the codec, vocab size, and type of audio, this can yield savings of 2-5x in sequence length compared to directly modeling the flattened codebooks.
Codec BPE can also be used with single-level codecs such as XCodec2 (Ye et al., 2025), WavTokenizer (Ji et al., 2024), SimVQ (Zhu et al., 2024), MagiCodec (Song et al., 2025), and NeuCodec (Julian et al., 2025). In this case, a single Codec BPE token could represent one or more codes where each code represents a whole acoustic unit.
Using Codec BPE allows efficient audio language modeling with multi-level codecs to be done with vanilla LLM architectures, meaning no custom architecture is needed to deal with modeling the RVQ. Your model will already be compatible with the full ecosystem of training and inference tools available for HuggingFace Transformers, such as vLLM and Ollama!
2025-12-01
- Added support for NeuCodec, a new high-quality single-level codec with a 50 Hz framerate! NeuCodec extends XCodec2 with inference speedups, an upsampling decoder, and a commercially permissive license. Use `--codec_model neuphonic/neucodec` when encoding audio with `codec_bpe.audio_to_codes` to encode using the NeuCodec model. See here for a usage example.
2025-06-22
- Added support for MagiCodec, a new streaming single-level codec with a 50 Hz framerate! Use `--codec_model MagiCodec-50Hz-Base` when encoding audio with `codec_bpe.audio_to_codes` to encode using the MagiCodec model. See here for a usage example.
Older updates
- See CHANGELOG.md for a complete list of updates.
```bash
pip install codec-bpe
```

If you want to use the `--codec_type funcodec` or `--codec_model alibaba-damo/...` options with `codec_bpe.audio_to_codes`, run:

```bash
pip install codec-bpe[funcodec]
```

If you want to use the `--codec_type xcodec2` or `--codec_model HKUSTAudio/xcodec2` options with `codec_bpe.audio_to_codes`, run:

```bash
pip install codec-bpe[xcodec2]
```

If you want to use the `--codec_type wavtokenizer` or `--codec_model wavtokenizer-*` options with `codec_bpe.audio_to_codes`, run:

```bash
pip install codec-bpe[wavtokenizer]
# WavTokenizer is not an installable package so you need to clone the repository into your working directory manually:
cd your/working/dir
git clone https://github.com/jishengpeng/WavTokenizer.git
# Note: WavTokenizer requirements are all version pinned and include both training and inference dependencies.
# I recommend either using a dedicated environment or cherry-picking the requirements you need for inference and installing them manually.
# For example, I had no issue running inference with latest versions of torch, numpy, and transformers.
pip install -r WavTokenizer/requirements.txt
```

If you want to use the `--codec_type simvq` or `--codec_model simvq_*` options with `codec_bpe.audio_to_codes`, run:

```bash
pip install codec-bpe[simvq]
# SimVQ is not an installable package so you need to clone the repository into your working directory manually:
cd your/working/dir
git clone https://github.com/youngsheen/SimVQ.git
pip install -r SimVQ/requirements.txt
```

If you want to use the `--codec_type magicodec` or `--codec_model MagiCodec-50Hz-Base` options with `codec_bpe.audio_to_codes`, run:

```bash
pip install codec-bpe[magicodec]
# MagiCodec is not an installable package so you need to clone the repository into your working directory manually:
cd your/working/dir
git clone https://github.com/Ereboas/MagiCodec.git
cd MagiCodec
# Follow setup instructions for MagiCodec here: https://github.com/Ereboas/MagiCodec#env-setup
```

If you want to use the `--codec_type neucodec` or `--codec_model neuphonic/neucodec` options with `codec_bpe.audio_to_codes`, run:

```bash
pip install codec-bpe[neucodec]
```

| Model | Sample Rate (kHz)* | Framerate (Hz)* | Max Codebooks | Codebook Size | Max Bandwidth (kbps)* | Training Domain |
|---|---|---|---|---|---|---|
| 🤗 EnCodec 24khz | 24 | 75 | 32 | 1024 | 24 | General |
| 🤗 DAC 44khz | 44.1 | 86.1328125 | 9 | 1024 | 7.8 | General |
| 🤗 DAC 24khz | 24 | 75 | 32 | 1024 | 24 | General |
| 🤗 DAC 16khz | 16 | 50 | 12 | 1024 | 6 | General |
| 🤗 Mimi | 24 | 12.5 | 32 | 2048 | 4.4 | Speech |
| 🤗 XCodec2 | 16 | 50 | 1 | 65536 | 0.8 | Speech |
| 🤗 FunCodec zh_en-general-16k-nq32ds640 | 16 | 25 | 32 | 1024 | 8 | General |
| 🤗 FunCodec zh_en-general-16k-nq32ds320 | 16 | 50 | 32 | 1024 | 16 | General |
| 🤗 FunCodec en-libritts-16k-nq32ds640 | 16 | 25 | 32 | 1024 | 8 | Audiobooks |
| 🤗 FunCodec en-libritts-16k-nq32ds320 | 16 | 50 | 32 | 1024 | 16 | Audiobooks |
| 🤗 WavTokenizer-small-600-24k-4096 | 24 | 40 | 1 | 4096 | 0.48 | Speech |
| 🤗 WavTokenizer-small-320-24k-4096 | 24 | 75 | 1 | 4096 | 0.9 | Speech |
| 🤗 WavTokenizer-medium-speech-320-24k-4096 | 24 | 75 | 1 | 4096 | 0.9 | Speech |
| 🤗 WavTokenizer-medium-music-audio-320-24k-4096 | 24 | 75 | 1 | 4096 | 0.9 | General |
| 🤗 WavTokenizer-large-600-24k-4096 | 24 | 40 | 1 | 4096 | 0.48 | General |
| 🤗 WavTokenizer-large-320-24k-4096 | 24 | 75 | 1 | 4096 | 0.9 | General |
| 🤗 simvq_4k | 24 | 75 | 1 | 4096 | 0.9 | Speech |
| 🤗 simvq_8k | 24 | 75 | 1 | 8192 | 0.975 | Speech |
| 🤗 simvq_65k | 24 | 75 | 1 | 65536 | 1.2 | Speech |
| 🤗 simvq_262k | 24 | 75 | 1 | 262144 | 1.35 | Speech |
| 🤗 MagiCodec-50Hz-Base | 16 | 50 | 1 | 131072 | 0.85 | Audiobooks |
| 🤗 NeuCodec | 16 | 50 | 1 | 65536 | 0.8 | Speech |
| 🤗 Distill-NeuCodec | 16 | 50 | 1 | 65536 | 0.8 | Speech |
* Sample Rate (kHz) is the sampling rate of the audio input to the codec.
* Framerate (Hz) is the number of timesteps (acoustic units of size num_codebooks) per second output by the codec.
* Bandwidth (kbps) = framerate (Hz) x num_codebooks x log2(codebook_size) / 1000.
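For example, plugging the table values for EnCodec 24 kHz restricted to 4 codebooks into this formula (a quick standalone check, not package code):

```python
import math

# values from the table: EnCodec 24 kHz restricted to 4 codebooks (the 3 kbps setting)
framerate = 75          # timesteps per second
num_codebooks = 4
codebook_size = 1024    # log2(1024) = 10 bits per code

bandwidth_kbps = framerate * num_codebooks * math.log2(codebook_size) / 1000
print(bandwidth_kbps)   # 3.0
```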
Use your codec of choice (e.g., EnCodec, DAC, Mimi, XCodec2, FunCodec, WavTokenizer, SimVQ) to encode your audio into a torch tensor or numpy array of codes of shape (num_codebooks, length), then use the provided converter methods to convert to and from unicode strings.
Note: In the Acoustic BPE paper, a single-level codec was used (HuBERT + k-means), where each encoded timestep consisted of a single code which was converted to a single unicode character. Here, we support multi-level codecs based on Residual Vector Quantizers. If num_codebooks > 1, a flattening pattern is used to interleave all codebooks into a single level before mapping to unicode. For example, if 4 codebooks are used then each encoded timestep would consist of 4 codes (one from each codebook) and would be converted to a unicode 4-gram.
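To make the flattening concrete, here is a rough, self-contained illustration of how a single timestep of codes from 4 codebooks could be packed into a unicode 4-gram. The offset and per-codebook layout below are illustrative assumptions, not necessarily the package's exact scheme; in practice, rely on codes_to_chars and chars_to_codes, as in the example that follows.

```python
# Illustration only: one plausible way to map a timestep of RVQ codes to unicode.
# The actual conversion is handled by codec_bpe.codes_to_chars / chars_to_codes.
CODEBOOK_SIZE = 1024
UNICODE_OFFSET = 0x4E00  # CJK Unified Ideographs, the default block (per the Acoustic BPE paper)

def timestep_to_chars(codes_at_t):
    # codes_at_t holds one code per codebook for a single timestep, e.g. [17, 830, 5, 1002].
    # Giving each codebook its own span of CODEBOOK_SIZE characters keeps codes from colliding.
    return "".join(
        chr(UNICODE_OFFSET + cb_index * CODEBOOK_SIZE + code)
        for cb_index, code in enumerate(codes_at_t)
    )

print(timestep_to_chars([17, 830, 5, 1002]))  # one unicode 4-gram = one acoustic unit
```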
Example: audio language modeling using EnCodec 24 kHz at 3 kbps (4 codebooks):
```python
import torch
import librosa
import soundfile as sf
from transformers import (
EncodecModel,
AutoModelForCausalLM,
AutoProcessor,
AutoTokenizer,
)
from codec_bpe import codes_to_chars, chars_to_codes
# load a Codec BPE tokenizer and compatible language model
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("output/my_tokenizer")
model = AutoModelForCausalLM.from_pretrained("output/my_model").to(device)
# load the EnCodec model
encodec_modelname = "facebook/encodec_24khz"
encodec_model = EncodecModel.from_pretrained(encodec_modelname).to(device)
encodec_processor = AutoProcessor.from_pretrained(encodec_modelname)
# (1) encode audio using EnCodec
audio, sr = librosa.load("some_audio.mp3", sr=encodec_model.config.sampling_rate, mono=True)
inputs = encodec_processor(raw_audio=audio, sampling_rate=sr, return_tensors="pt").to(device)
with torch.no_grad():
encoded_audio = encodec_model.encode(**inputs, bandwidth=3.0).audio_codes[0, 0]
# (2) convert the audio codes to a unicode string and tokenize it
unicode_str = codes_to_chars(encoded_audio, codebook_size=encodec_model.config.codebook_size)
inputs = tokenizer(unicode_str, return_tensors="pt").to(device)
# (3) generate tokens from the model
outputs = model.generate(**inputs, do_sample=True, max_new_tokens=300)
# (4) detokenize the output back into a unicode string and convert it back to audio codes
unicode_str_2 = tokenizer.decode(outputs[0], skip_special_tokens=False)
encoded_audio_2 = chars_to_codes(
unicode_str_2,
num_codebooks=encoded_audio.shape[0],
codebook_size=encodec_model.config.codebook_size,
return_tensors="pt",
).to(device)
# (5) decode the generated audio using EnCodec
with torch.no_grad():
audio_2 = encodec_model.decode(encoded_audio_2.unsqueeze(0).unsqueeze(0), [None]).audio_values[0, 0]
sf.write("some_audio_output.wav", audio_2.cpu().numpy(), sr)
```

To train a tokenizer from audio files:
- Use your codec of choice (e.g., EnCodec, DAC, Mimi, XCodec2, FunCodec, WavTokenizer, SimVQ) to encode each audio file into a directory of numpy arrays (.npy files):
```bash
# encode audio files using EnCodec 24 kHz at 3 kbps (4 codebooks)
python -m codec_bpe.audio_to_codes \
    --audio_path path/to/audio \
    --codec_model facebook/encodec_24khz \
    --bandwidth 3.0 \
    --batch_size 8

# encode audio files using first 4 codebooks of DAC 44kHz
python -m codec_bpe.audio_to_codes \
    --audio_path path/to/audio \
    --codec_model descript/dac_44khz \
    --n_quantizers 4 \
    --batch_size 8

# encode audio files using first 6 codebooks of Mimi (24kHz)
python -m codec_bpe.audio_to_codes \
    --audio_path path/to/audio \
    --codec_model kyutai/mimi \
    --n_quantizers 6 \
    --batch_size 8

# encode audio files using XCodec2 (16kHz, there is only 1 codebook)
python -m codec_bpe.audio_to_codes \
    --audio_path path/to/audio \
    --codec_model HKUSTAudio/xcodec2 \
    --batch_size 1 # XCodec2 only supports batch size 1 for now.

# encode audio files using FunCodec (16kHz) at 1.5 kbps (6 codebooks)
python -m codec_bpe.audio_to_codes \
    --audio_path path/to/audio \
    --codec_model alibaba-damo/audio_codec-encodec-zh_en-general-16k-nq32ds640-pytorch \
    --bandwidth 1500 \
    --batch_size 8

# encode audio files using WavTokenizer at 0.9 kbps (24kHz -> 75Hz, only 1 codebook of 4096 codes)
python -m codec_bpe.audio_to_codes \
    --audio_path path/to/audio \
    --codec_model wavtokenizer-large-320-24k-4096 \
    --batch_size 8

# encode audio files using SimVQ at 0.9 kbps (24kHz -> 75Hz, only 1 codebook of 4096 codes)
python -m codec_bpe.audio_to_codes \
    --audio_path path/to/audio \
    --codec_model simvq_4k \
    --batch_size 8

# encode audio files using SimVQ at 0.9 kbps in tiny chunks of 80ms with a 400ms context to simulate streaming encoding
python -m codec_bpe.audio_to_codes \
    --audio_path path/to/audio \
    --codec_model simvq_4k \
    --batch_size 128 \
    --chunk_size_secs 0.08 \
    --context_secs 0.4

# encode audio files using MagiCodec at 0.85 kbps (16kHz -> 50Hz, only 1 codebook of 131072 codes)
python -m codec_bpe.audio_to_codes \
    --audio_path path/to/audio \
    --codec_model MagiCodec-50Hz-Base \
    --batch_size 8

# encode audio files using MagiCodec at 0.85 kbps in tiny chunks of 80ms with a 1s context to simulate streaming encoding
python -m codec_bpe.audio_to_codes \
    --audio_path path/to/audio \
    --codec_model MagiCodec-50Hz-Base \
    --batch_size 128 \
    --chunk_size_secs 0.08 \
    --context_secs 1.0

# encode audio files using NeuCodec at 0.8 kbps (16kHz -> 50Hz, only 1 codebook of 65536 codes)
python -m codec_bpe.audio_to_codes \
    --audio_path path/to/audio \
    --codec_model neuphonic/neucodec \
    --batch_size 1 # NeuCodec only supports batch size 1 for now.
```
- Train the tokenizer. For example, to use the first 4 codebooks of EnCodec 24 kHz, run:
```bash
python -m codec_bpe.train_tokenizer \
    --codes_path output/codes/encodec_24khz/30.0s_0.0s/mono \
    --chunk_size_secs 30 \
    --vocab_size 30000 \
    --pad_token "<pad>"
```

Here:
- `chunk_size_secs` specifies the number of timesteps (in seconds) that get converted to unicode and returned to the underlying Tokenizers trainer at a time.
- `vocab_size` specifies the number of tokens (including the base vocabulary of individual unicode characters) that you want your tokenizer to have. The base vocabulary size is `num_codebooks` x `codebook_size`. For example, the command above would yield a tokenizer with a base vocabulary of 4096 individual unicode character tokens, each representing a single code from a single codebook, and 25,904 merged "ngram" tokens (see the quick check below).
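A quick check of those numbers (plain arithmetic, nothing package-specific):

```python
# vocabulary budget for the command above
num_codebooks = 4      # EnCodec 24 kHz at 3 kbps
codebook_size = 1024
vocab_size = 30000

base_vocab = num_codebooks * codebook_size    # 4096 single-code character tokens
merged_ngrams = vocab_size - base_vocab       # 25904 learned "ngram" tokens
print(base_vocab, merged_ngrams)              # 4096 25904
```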
By default, the following additional arguments are automatically initialized from the `codec_info.json` file output by `codec_bpe.audio_to_codes`:

- `num_codebooks` specifies how many codebooks should be used (in a flattened pattern) when converting each timestep to unicode. For example, EnCodec 24 kHz uses 2 codebooks at 1.5 kbps, 4 codebooks at 3 kbps, 8 codebooks at 6 kbps, etc. Note: when encoding the audio files, you should use at least as many codebooks as you plan to specify here.
- `codebook_size` specifies the size of the codebook. EnCodec 24 kHz uses a codebook size of 1024.
- `codec_framerate` specifies the framerate (number of timesteps per second) of the codec. EnCodec 24 kHz generates 75 timesteps per second.
You may also pass these arguments explicitly. For example:
```bash
python -m codec_bpe.train_tokenizer \
    --codes_path output/codes/encodec_24khz/30.0s_0.0s/mono \
    --num_codebooks 4 \
    --codebook_size 1024 \
    --codec_framerate 75 \
    --chunk_size_secs 30 \
    --vocab_size 30000 \
    --pad_token "<pad>"
```

This is useful if you are using audio codes that you generated with a tool other than the `codec_bpe.audio_to_codes` script, or if you wish to use a lower number of codebooks for training the tokenizer than you used for encoding the audio files.

See train_tokenizer.py for a complete list of supported arguments.
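Once training finishes, you can sanity-check how much the tokenizer compresses a flattened code sequence. The snippet below is a minimal sketch: the tokenizer directory, the .npy filename, and the codebook size of 1024 are placeholder assumptions for the EnCodec setup above.

```python
import numpy as np
from transformers import AutoTokenizer
from codec_bpe import codes_to_chars

# placeholder paths: point these at your trained tokenizer and at one of the
# .npy files produced by codec_bpe.audio_to_codes (shape: (num_codebooks, length))
tokenizer = AutoTokenizer.from_pretrained("output/my_tokenizer")
codes = np.load("output/codes/encodec_24khz/30.0s_0.0s/mono/some_file.npy")

# flatten the codes to unicode and tokenize
unicode_str = codes_to_chars(codes, codebook_size=1024)  # 1024 = EnCodec 24 kHz codebook size
num_tokens = len(tokenizer(unicode_str)["input_ids"])

num_codes = codes.shape[0] * codes.shape[1]
print(f"{num_codes} flattened codes -> {num_tokens} tokens ({num_codes / num_tokens:.1f}x shorter)")
```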
The max_token_codebook_ngrams argument can be used to control how many codes can be merged into a single Codec BPE token. This is useful to avoid repetitive patterns in the audio manifesting as redundant tokens in the vocabulary. For example, if long segments of silence exist in the training audio then you may end up with hundreds of tokens that just represent different lengths of silence.
To avoid this, you can set max_token_codebook_ngrams to the maximum number of codebook ngrams (whole acoustic units) you want to allow a single token to represent. For example, if you set max_token_codebook_ngrams = 2 while num_codebooks is set to 4, then a single Codec BPE token may only hold up to 8 codes:
```bash
python -m codec_bpe.train_tokenizer \
--codes_path output/codes/encodec_24khz/30.0s_0.0s/mono \
--chunk_size_secs 30 \
--vocab_size 30000 \
--pad_token "<pad>" \
    --max_token_codebook_ngrams 2
```

It is highly recommended to set this argument to a value <= 2 (or <= 4 if num_codebooks is 1) to ensure that your vocab_size budget gets distributed across diverse acoustic patterns in your training data.
If you are using a codec with a very large codebook size (e.g. XCodec2, which has a codebook size of 65536), you may need to adjust the unicode_offset argument for codec_bpe.train_tokenizer to avoid the non-printable surrogate range 0xD800-0xDFFF:
```bash
python -m codec_bpe.train_tokenizer \
--codes_path output/codes/xcodec2/30.0s_0.0s/mono \
--chunk_size_secs 30 \
--vocab_size 80000 \
--pad_token "<pad>" \
--max_token_codebook_ngrams 4 \
    --unicode_offset 0xE000
```

Setting max_token_codebook_ngrams = 0 will skip tokenizer training and simply output a base vocabulary of num_codebooks x codebook_size tokens, each representing a single code from a single codebook. This is useful if you want to directly model individual codes from the flattened codebooks instead of combining them into n-grams.
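For intuition on the unicode_offset adjustment above, here is a small standalone check (not part of codec_bpe) showing that a 65536-entry codebook starting at the default CJK offset would run through the surrogate range, while 0xE000 clears it:

```python
# quick standalone check: does a base vocabulary of num_codebooks x codebook_size
# characters starting at a given offset overlap the surrogate range?
SURROGATES = range(0xD800, 0xE000)

def overlaps_surrogates(unicode_offset, num_codebooks, codebook_size):
    base_vocab = range(unicode_offset, unicode_offset + num_codebooks * codebook_size)
    return base_vocab.start < SURROGATES.stop and SURROGATES.start < base_vocab.stop

print(overlaps_surrogates(0x4E00, 1, 65536))  # True  -> chr() would hit unencodable code points
print(overlaps_surrogates(0xE000, 1, 65536))  # False -> safe
```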
You may want to train a new Codec BPE tokenizer and then export its trained vocabulary to an existing Transformers tokenizer, for example to extend the Llama, Mistral, or Qwen tokenizers for multimodal text-audio language modeling.
Suppose you have trained your Codec BPE tokenizer and saved it to output/encodec_bpe_4cb_30k, and you want to extend the Mistral-7B-v0.1 tokenizer with its vocabulary. Run:
```bash
python -m codec_bpe.extend_tokenizer \
--existing_tokenizer mistralai/Mistral-7B-v0.1 \
--codec_bpe_tokenizer output/encodec_bpe_4cb_30k \
    --additional_special_tokens "<audio>" "</audio>" # optional
```

This will simply add every token in output/encodec_bpe_4cb_30k/tokenizer.json to the mistralai/Mistral-7B-v0.1 tokenizer as a special token and save a copy of the latter. Any additional tokens specified with --additional_special_tokens will be appended to the existing tokenizer's additional special token list.
If the added Codec BPE unicode tokens would conflict with existing tokens in the vocabulary, you can override the default unicode offset using the unicode_offset argument for codec_bpe.train_tokenizer. By default, unicode characters from the CJK Unified Ideographs block are used, following the Acoustic BPE paper. You can set unicode_offset to a different value (e.g. 0xE000) to start from a different unicode block that won't conflict with your existing vocabulary.
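Once extended, the audio tokens can be mixed freely with text. A minimal sketch, assuming the extended tokenizer was saved to a hypothetical output/extended_tokenizer directory and that the Codec BPE tokenizer was trained with 4 codebooks of size 1024:

```python
import numpy as np
from transformers import AutoTokenizer
from codec_bpe import codes_to_chars

# hypothetical path: point this at wherever the extended tokenizer copy was saved
tokenizer = AutoTokenizer.from_pretrained("output/extended_tokenizer")

# build a short audio span from random codes (4 codebooks of size 1024, matching the examples above)
codes = np.random.randint(0, 1024, size=(4, 25))
audio_str = codes_to_chars(codes, codebook_size=1024)

# mix text and audio in a single prompt; the audio characters tokenize via the added Codec BPE tokens
prompt = f"Transcribe the following clip: <audio>{audio_str}</audio>"
print(tokenizer.tokenize(prompt)[:10])
```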
