
User Guide

This guide walks you through every feature of Beyond Retrieval v2 from a user's perspective.


Getting Started

Opening the Application

  • Local development: Navigate to http://localhost:3000
  • Hosted deployment: Navigate to the domain provided by your administrator

Signing In

If authentication is enabled, you'll see a sign-in page. If it's bypassed (common in development), you'll land directly on the Dashboard.

The Dashboard

The Dashboard shows a grid of all notebooks you have access to. From here you can create new notebooks, open existing ones, and access Global Settings from the sidebar.


Notebooks

Notebooks are the core organizational unit. Each notebook is an independent workspace with its own documents, conversations, search tools, and settings.

Creating a Notebook

  1. Click Create Notebook on the Dashboard.
  2. Fill in:
| Field | Description |
| --- | --- |
| Title | A descriptive name (e.g., "Q4 Legal Contracts") |
| Icon | An emoji to distinguish this notebook |
| Embedding Model | The model for converting text to vectors (permanent after creation) |
| Database Type | Cloud (hosted Supabase) or Local (Docker Supabase) |
| Storage Provider | Where uploaded files are stored |

Choosing an Embedding Model

| Model | Provider | Dimensions | Notes |
| --- | --- | --- | --- |
| text-embedding-3-small | OpenAI (via OpenRouter) | 1536 | Best quality; recommended |
| text-embedding-004 | Google (via OpenRouter) | 768 | Good quality, lower dimensions |
| nomic-embed-text | Ollama | 768 | Free, runs locally |

Immutable Choice

The embedding model is locked after notebook creation and cannot be changed. Choose carefully.
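The lock exists because every vector in a notebook must live in a single embedding space: similarity math is only defined between vectors of the same dimension, and even same-sized vectors from different models aren't comparable. A minimal sketch of why mixing models breaks retrieval:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity is only defined for vectors of equal dimension."""
    if len(a) != len(b):
        raise ValueError(f"dimension mismatch: {len(a)} vs {len(b)}")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Chunks embedded with text-embedding-3-small live in a 1536-dim space;
# a query embedded with nomic-embed-text yields a 768-dim vector.
stored_chunk = [0.1] * 1536   # e.g. text-embedding-3-small
new_query = [0.1] * 768       # e.g. nomic-embed-text

try:
    cosine_similarity(stored_chunk, new_query)
except ValueError as e:
    print(e)  # dimension mismatch: 1536 vs 768
```

Switching models would therefore mean re-embedding every document in the notebook, which is why the choice is permanent.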


Documents

Uploading Files

Supported formats: PDF, DOCX, TXT, MD, CSV, XLSX, XLS, PPTX, HTML

  1. Navigate to the Documents page inside your notebook.
  2. Drag and drop files or click the upload area.
  3. After uploading, click Ingest to start processing.

Ingestion Settings

| Setting | Options | Default | Purpose |
| --- | --- | --- | --- |
| Parser | Docling Parser, Mistral OCR | Docling | How text is extracted |
| Chunking Strategy | Recursive, Docling Hybrid, Agentic | Recursive | How text is split |
| Chunk Size | 100–2000 chars | 600 | Target chunk size |
| Chunk Overlap | 0–500 chars | 200 | Overlap between chunks |
| Context Augmentation | On / Off | Off | Enriches chunks during ingestion |
| Multimodal Processing | On / Off | Off | Describes images with AI |
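How Chunk Size and Chunk Overlap interact can be pictured with a naive sliding-window splitter. This is only an illustration: the real Recursive strategy also prefers to split on paragraph and sentence boundaries.

```python
def chunk_text(text: str, chunk_size: int = 600, overlap: int = 200) -> list[str]:
    """Naive character-window splitter illustrating the Chunk Size /
    Chunk Overlap settings. Each chunk starts (chunk_size - overlap)
    characters after the previous one, so neighbors share `overlap` chars."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, max(len(text) - overlap, 1), step)]

doc = "x" * 1400
chunks = chunk_text(doc)
print(len(chunks))              # 3 chunks, starting at offsets 0, 400, 800
print([len(c) for c in chunks]) # [600, 600, 600]
```

Larger overlap means a sentence cut at a chunk boundary is still fully contained in the neighboring chunk, at the cost of storing (and embedding) more duplicated text.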

Document Status

| Status | Meaning |
| --- | --- |
| Pending | Waiting to be processed |
| Parsing | Text extraction in progress |
| Processing | Chunks being created and embedded |
| Success | Ready for search and chat |
| Error | Processing failed; click Retry to reprocess |

OneDrive Integration

If configured, click the OneDrive button on the Documents page to browse SharePoint folders and import files directly.


Chat (RAG Mode)

Starting a Conversation

  1. Open a notebook → Chat page.
  2. Click + New Chat.
  3. Type your question and press Enter.

The system searches your documents, retrieves relevant chunks, and generates a cited answer with numbered references like [1], [2].
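That retrieve-then-generate loop can be sketched as follows. `vector_search` and `llm` are hypothetical stand-ins for the backend services, not the app's real API; the point is how chunks get numbered so the model can cite them.

```python
def answer_question(question: str, vector_search, llm, top_k: int = 5) -> str:
    # 1. Retrieve the most relevant chunks for the question.
    chunks = vector_search(question, limit=top_k)

    # 2. Number each chunk so the model can cite it as [1], [2], ...
    context = "\n\n".join(f"[{i}] {c}" for i, c in enumerate(chunks, start=1))

    # 3. Ask the model to answer using only the numbered sources.
    prompt = (
        "Answer using only the numbered sources below and cite them "
        f"like [1].\n\nSources:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)

# Toy stand-ins, just to show the data flow:
fake_search = lambda q, limit: ["Contract term is 24 months.", "Renewal is automatic."][:limit]
fake_llm = lambda p: "The term is 24 months [1] and renews automatically [2]."
print(answer_question("How long is the contract?", fake_search, fake_llm))
```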

Citations

Click on citation numbers to see the source chunk text. Click the chunk to open the full document viewer with highlighted lines.

Personas

| Persona | Style |
| --- | --- |
| Professional | Formal and concise |
| Funny | Witty and light-hearted |
| Mentor | Educational, explains the "why" |
| Storyteller | Narrative and engaging |
| Clear | Simple, plain language |
| Custom | Your own instructions |

Languages

Ten languages are supported: English, German, Spanish, French, Italian, Portuguese, Dutch, Russian, Chinese, and Japanese.

Language Mode:

  • Auto-detect — mirrors the language of your question
  • Manual — always responds in the selected language

Feedback

Give each response a thumbs up or down. This feedback feeds the LLM Judge for quality tracking and caching decisions.


Search Playground

Test and compare retrieval strategies without using chat.

Available Strategies

| Strategy | Best For |
| --- | --- |
| Fusion (default) | General-purpose queries |
| Semantic | Conceptual similarity |
| Full-Text | Exact terms and phrases |
| Cache | Repeated questions |
| Contextual | AI-enhanced chunks |
| Agentic | Complex/ambiguous questions |
| Smart Router | Automatic strategy selection |
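The guide doesn't specify how Fusion merges semantic and full-text results; a common technique for combining ranked lists is Reciprocal Rank Fusion, sketched here purely as an illustration:

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked result lists: each document scores
    sum(1 / (k + rank)) over the lists it appears in, so items ranked
    highly by multiple retrievers float to the top."""
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["chunk-a", "chunk-b", "chunk-c"]   # vector-similarity order
full_text = ["chunk-b", "chunk-d", "chunk-a"]  # keyword-match order
print(reciprocal_rank_fusion([semantic, full_text]))
# chunk-b leads: it ranks well in both lists
```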

Compare Mode

Run up to 4 strategies on the same query side-by-side to see which retrieves the most relevant content.


AI Enhancer

Enriches document chunks with AI-generated context for better search accuracy.

Workflow

  1. Not Enhanced — Select files to enhance
  2. Enhancing — AI processes chunks in parallel (progress updates every 4s)
  3. Published — Enhanced chunks are live in the vector store

Click Enhance → wait for all chunks to show "success" → click Publish.

Rate Limits

If chunks keep failing, lower the concurrency setting. Defaults: OpenRouter 10, OpenAI 5, Ollama 3.
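Capping concurrency is the standard way to stay under provider rate limits; a sketch using an `asyncio.Semaphore`, where `enhance` is a hypothetical stand-in for the real per-chunk LLM call:

```python
import asyncio

async def enhance_all(chunks, enhance, concurrency: int = 10):
    """Run `enhance` over all chunks, never more than `concurrency` at once.
    Lowering `concurrency` (e.g. to 3) trades speed for fewer rate-limit errors."""
    sem = asyncio.Semaphore(concurrency)

    async def worker(chunk):
        async with sem:            # blocks while `concurrency` calls are in flight
            return await enhance(chunk)

    # gather() preserves input order in its results
    return await asyncio.gather(*(worker(c) for c in chunks))

# Toy enhancer standing in for the real LLM call:
async def fake_enhance(chunk):
    await asyncio.sleep(0)
    return f"enhanced:{chunk}"

results = asyncio.run(enhance_all(["c1", "c2", "c3"], fake_enhance, concurrency=3))
print(results)  # ['enhanced:c1', 'enhanced:c2', 'enhanced:c3']
```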


Intelligence Settings

Per-notebook AI configuration:

| Setting | Description |
| --- | --- |
| Provider | OpenRouter (recommended), OpenAI Direct, or Ollama |
| Model | Select from available models for your provider |
| Temperature | 0.0 (deterministic) to 1.0 (creative) |
| RAG Strategy | Default retrieval strategy for chat |
| Language Mode | Auto-detect or Manual |
| LLM Judge | Background quality evaluation (on/off) |

System Monitor

Health overview and maintenance tools:

  • Health Score (0–100) — based on duplicates, orphans, and enhanced chunks
  • Cleanup Tools — Remove Duplicates, Remove Orphans
  • Document Statistics — Total, Indexed, Errors, Pending
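The guide doesn't give the Health Score formula; a hypothetical sketch of a 0–100 score driven by the same inputs (duplicates, orphans, enhanced coverage) might look like this:

```python
def health_score(total: int, duplicates: int, orphans: int, enhanced: int) -> int:
    """Hypothetical scoring sketch (NOT the app's documented formula):
    start at 100, penalize duplicate and orphan chunks, and credit
    enhanced-chunk coverage. Clamped to the 0-100 range."""
    if total == 0:
        return 100
    penalty = 100 * (duplicates + orphans) / total
    bonus = 10 * enhanced / total
    return max(0, min(100, round(100 - penalty + bonus)))

print(health_score(total=1000, duplicates=50, orphans=20, enhanced=300))  # 96
```

Whatever the exact weighting, running Remove Duplicates and Remove Orphans shrinks the penalty terms, which is why cleanup raises the score.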

Global Settings (Admin Only)

  • API Keys — Configure OpenRouter, OpenAI, Mistral, Cohere keys
  • Database Type — Switch between Cloud and Local Supabase
  • Storage Provider — Supabase, S3, Local, or None

Sharing

  1. Open notebook → sharing settings
  2. Generate Invite Link with access level (Admin or Chat Only)
  3. Share the link — recipients sign in and accept to gain access

Admins can view and revoke access and deactivate invite codes.


Troubleshooting

| Problem | Solution |
| --- | --- |
| "No API key configured" | Add your key in Global Settings |
| Documents stuck in Pending | Check that the backend is running and API keys are set |
| Wrong language responses | Switch to Manual language mode |
| Low health score | Run cleanup tools in System Monitor |
| Enhancement chunks failing | Lower the concurrency setting |