
Contextualized Chunk Embeddings

voyage-context-3 is a contextualized chunk embedding model that produces chunk vectors capturing the full document context, without any manual metadata or context augmentation. This yields higher retrieval accuracy than standard embeddings, both with and without such augmentation, while being simpler, faster, and cheaper. The model serves as a drop-in replacement for standard embeddings, requires no downstream workflow changes, and is less sensitive to the choice of chunking strategy.

To learn more, see the blog post.

| Model | Context Length | Dimensions | Description |
| --- | --- | --- | --- |
| voyage-context-3 | 32,000 tokens | 1024 (default), 256, 512, 2048 | Contextualized chunk embeddings optimized for general-purpose and multilingual retrieval quality. |


For a tutorial on using contextualized chunk embeddings, see Semantic Search with Voyage AI Embeddings.
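The minimal sketch below assumes the voyageai Python package and its contextualized_embed method; exact parameter and response field names may differ depending on your client version. Each inner list passes one document's chunks together, so every chunk vector is computed with the full document in context.

```python
import voyageai

# Assumes VOYAGE_API_KEY is set in the environment.
vo = voyageai.Client()

# Each inner list holds the chunks of one document; the model embeds every
# chunk with awareness of the other chunks in the same document.
documents = [
    ["Chunk 1 of document A ...", "Chunk 2 of document A ..."],
    ["Chunk 1 of document B ..."],
]

result = vo.contextualized_embed(
    inputs=documents,
    model="voyage-context-3",
    input_type="document",
    output_dimension=1024,  # 256, 512, 1024 (default), or 2048
)

# One result per document, one embedding per chunk.
for doc_result in result.results:
    for chunk_embedding in doc_result.embeddings:
        print(len(chunk_embedding))  # 1024
```

At query time, the same method can be called with input_type="query" and a single-chunk input; the resulting query vector is compared against the stored chunk vectors with your usual vector search workflow.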
