Embedding generation and vector storage abstraction
Generates embeddings for documents/nodes using pluggable embedding providers (OpenAI, Hugging Face, local models) and stores them behind a unified vector store interface that abstracts over multiple backends (Pinecone, Weaviate, Milvus, FAISS, Chroma, etc.). The abstraction layer lets applications switch vector stores without code changes and handles batching, retry logic, and metadata indexing.
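A minimal sketch of what such an abstraction might look like. The names here (VectorStore, InMemoryStore, add, query) are illustrative assumptions, not the component's actual API: the point is that application code depends only on the abstract interface, so any backend subclass can be swapped in without changes.

```python
from abc import ABC, abstractmethod
from math import sqrt

class VectorStore(ABC):
    """Hypothetical unified interface; real backends would wrap
    Pinecone, Weaviate, Milvus, FAISS, Chroma, and so on."""

    @abstractmethod
    def add(self, ids, vectors, metadata=None):
        """Upsert vectors with optional per-item metadata."""

    @abstractmethod
    def query(self, vector, top_k=1):
        """Return the top_k (id, metadata) pairs most similar to vector."""

class InMemoryStore(VectorStore):
    """Toy local backend standing in for a real integration."""

    def __init__(self):
        self._rows = {}  # id -> (vector, metadata)

    def add(self, ids, vectors, metadata=None):
        metadata = metadata or [{} for _ in ids]
        for i, v, m in zip(ids, vectors, metadata):
            self._rows[i] = (v, m)

    def query(self, vector, top_k=1):
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = sqrt(sum(x * x for x in a))
            nb = sqrt(sum(x * x for x in b))
            return dot / (na * nb)
        ranked = sorted(self._rows.items(),
                        key=lambda kv: cosine(vector, kv[1][0]),
                        reverse=True)
        return [(i, m) for i, (_, m) in ranked[:top_k]]

def index_documents(store: VectorStore, docs, embed):
    """Application code: written once against the interface,
    unchanged when the concrete backend is swapped."""
    store.add([d["id"] for d in docs],
              [embed(d["text"]) for d in docs],
              [{"source": d["id"]} for d in docs])
```

Swapping `InMemoryStore` for any other `VectorStore` subclass requires no change to `index_documents`, which is the "zero-code switching" property described above.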
Unique: Provides a unified VectorStore interface that abstracts 10+ vector database backends, enabling zero-code switching between providers. Handles embedding batching, retry logic, and metadata propagation automatically. Supports both cloud and local embedding models through a pluggable EmbedModel interface.
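The embedding side could be sketched the same way: a pluggable model interface plus a driver that batches inputs and retries transient failures with exponential backoff. All names here (EmbedModel, embed_batch, embed_with_retry) are assumptions for illustration, and FakeHashEmbed is a stand-in for a real provider such as OpenAI or a local Hugging Face model.

```python
import time
from abc import ABC, abstractmethod

class EmbedModel(ABC):
    """Hypothetical pluggable embedding interface."""

    @abstractmethod
    def embed_batch(self, texts):
        """Return one embedding vector per input text."""

class FakeHashEmbed(EmbedModel):
    """Deterministic toy model standing in for a cloud or local provider."""

    def embed_batch(self, texts):
        return [[float(len(t)), float(sum(map(ord, t)) % 97)] for t in texts]

def embed_with_retry(model, texts, batch_size=2, retries=3, backoff=0.01):
    """Split texts into batches; retry each batch on failure,
    doubling the sleep between attempts (exponential backoff)."""
    out = []
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        for attempt in range(retries):
            try:
                out.extend(model.embed_batch(batch))
                break
            except Exception:
                if attempt == retries - 1:
                    raise  # exhausted retries: surface the error
                time.sleep(backoff * 2 ** attempt)
    return out
```

Because batching and retries live in the driver rather than in each provider, a new `EmbedModel` implementation only needs to supply `embed_batch`.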
vs alternatives: Broader vector store coverage and smoother provider switching than LangChain's vectorstore integrations; more consistent abstraction across backends than using raw vector store SDKs directly.