Why not vector RAG?
Vector RAG prefilters chunks by similarity score before your LLM ever sees them. If the relevant passage scores low, it's silently dropped. Context Pool never prefilters — it reads every chunk.
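The difference can be sketched in a few lines (hypothetical chunk texts and similarity scores; neither function is Context Pool's actual implementation):

```python
def vector_rag_select(chunks, scores, k=2):
    """Vector RAG: keep only the top-k chunks by similarity score.
    A relevant chunk with a low score is silently dropped."""
    ranked = sorted(zip(chunks, scores), key=lambda pair: pair[1], reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

def context_pool_select(chunks, scores):
    """Exhaustive scan: every chunk reaches the LLM; scores are ignored."""
    return list(chunks)

chunks = ["boilerplate", "indemnity clause", "relevant but oddly worded"]
scores = [0.9, 0.8, 0.1]  # hypothetical similarity scores

print(vector_rag_select(chunks, scores))    # the low-scoring chunk is gone
print(context_pool_select(chunks, scores))  # all three chunks survive
```

The low-scoring third chunk never reaches the model under top-k prefiltering, which is exactly the failure mode exhaustive scanning avoids.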
Benchmarks
We measured Context Pool against vector RAG baselines on a synthetic legal contract dataset. The results confirm what the architecture predicts: exhaustive scanning finds answers that similarity prefiltering misses.
New in Context Pool
v1.3.0 · March 2026
Stay up to date with the latest features and improvements. Every release makes document analysis more powerful.
How Context Pool works
Four deterministic phases. No semantic shortcuts. Every document, every chunk, every time.
Everything you need
Batteries included. From OCR to citations to a production-ready Docker setup.
Your model, your choice
Switch providers by changing one line in config.yaml. No code changes needed.
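A sketch of what such a one-line switch might look like (the keys and provider names below are assumptions for illustration, not the project's documented schema):

```yaml
# config.yaml — hypothetical example
llm:
  provider: openai   # change this one line, e.g. to "anthropic" or "ollama"
  model: gpt-4o      # model name follows the chosen provider's conventions
```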
Up and running in minutes
Docker Compose is the fastest path. Local dev and API-only modes also supported.
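A minimal Compose sketch of that path; the image name and port are assumptions, not the project's published values:

```yaml
# docker-compose.yml — illustrative only
services:
  context-pool:
    image: contextpool/context-pool:latest  # hypothetical image name
    ports:
      - "8080:8080"                         # hypothetical default port
```

With a file like this in place, `docker compose up -d` starts the service in the background.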
First-class programmatic access
Every feature available in the UI is accessible via REST API and WebSocket. Build your own integrations.
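As a sketch of what a REST call might look like (the endpoint path, payload field, and port are assumptions for illustration, not the documented API):

```python
import json
import urllib.request

def build_query_request(base_url: str, question: str) -> urllib.request.Request:
    """Prepare a POST to a hypothetical /api/v1/query endpoint."""
    payload = json.dumps({"question": question}).encode()
    return urllib.request.Request(
        f"{base_url}/api/v1/query",  # hypothetical route
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_query_request("http://localhost:8080",
                          "Which clauses mention indemnity?")
print(req.full_url)      # http://localhost:8080/api/v1/query
print(req.get_method())  # POST
```

Sending the request (e.g. with `urllib.request.urlopen`) would then stream the answer back; the WebSocket interface would be the natural fit for incremental results.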
Built for high-stakes document work
When missing a relevant passage is not an option, exhaustive scanning pays off.
