
Quickstart

Get from zero to your first search result in under 5 minutes.

Sign up to get your API key:

curl -X POST https://api.useragex.com/api/auth/signup \
  -H "Content-Type: application/json" \
  -d '{
    "email": "dev@example.com",
    "password": "your-secure-password",
    "name": "Jane Developer"
  }'

Save your api_key — it is shown only once. If you lose it, you can regenerate it from the dashboard.
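For the remaining steps you'll send that key as a bearer token on every request. A minimal Python sketch of the header construction, assuming you store the key in an environment variable (the name AGEX_API_KEY is an arbitrary choice here, not something the API mandates):

```python
import os

def auth_headers(api_key=None):
    """Build the Authorization header used by every subsequent request.

    Reading the key from the environment keeps it out of shell history
    and source control.
    """
    key = api_key or os.environ["AGEX_API_KEY"]
    return {"Authorization": f"Bearer {key}"}
```
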

A knowledge base (KB) is a logical collection of documents. All searches are scoped to a single KB.

curl -X POST https://api.useragex.com/api/v1/knowledge-bases \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "name": "Product Docs", "description": "All product documentation" }'

Note the id — you’ll use it in the next steps.
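If you're scripting this, you can pull the id out of the parsed JSON. The sketch below assumes the create response wraps its payload in a top-level "data" object, mirroring the search response shown later; it falls back to a bare object in case it doesn't:

```python
def extract_id(response_json):
    """Pull the `id` out of a create response.

    Assumes a {"data": {...}} envelope like the search response;
    tolerates an unwrapped object as a fallback.
    """
    body = response_json.get("data", response_json)
    return body["id"]
```
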

Upload a file to your knowledge base. Processing happens asynchronously.

curl -X POST https://api.useragex.com/api/v1/knowledge-bases/KB_ID/documents \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -F "file=@product-guide.pdf" \
  -F 'metadata={"department": "engineering", "version": 2}'

The document is returned immediately with status: "pending". It then moves through: pending → parsing → chunking → embedding → ready.

If you already have text content, skip the file upload:

curl -X POST https://api.useragex.com/api/v1/knowledge-bases/KB_ID/documents/text \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "text": "Machine learning is a subset of AI...",
    "name": "ml-notes.txt",
    "metadata": { "source": "api" }
  }'
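The same call can be sketched in Python using only the standard library. The helper below builds the request without sending it, so you can inspect the payload first; calling urllib.request.urlopen(req) would actually submit it. The function name and structure are illustrative, not part of any official client:

```python
import json
import urllib.request

API = "https://api.useragex.com/api/v1"

def create_text_document(kb_id, api_key, text, name, metadata=None):
    """Build (but do not send) the text-ingestion request.

    Mirrors the JSON body of the /documents/text curl call above.
    """
    body = {"text": text, "name": name}
    if metadata is not None:
        body["metadata"] = metadata
    return urllib.request.Request(
        f"{API}/knowledge-bases/{kb_id}/documents/text",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```
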

Poll the document status until it reaches ready:

curl https://api.useragex.com/api/v1/knowledge-bases/KB_ID/documents/DOC_ID \
  -H "Authorization: Bearer YOUR_API_KEY"
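In a script you'd wrap that GET in a loop with a timeout rather than polling by hand. A generic sketch: get_status is any zero-argument callable that returns the document's current status string (for example, a wrapper around the curl request above), so the loop itself needs no network access:

```python
import time

def wait_until_ready(get_status, timeout=120.0, interval=2.0):
    """Poll get_status() until it returns "ready" or the timeout elapses.

    interval is the sleep between polls; raises TimeoutError if the
    document never becomes ready within `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == "ready":
            return "ready"
        time.sleep(interval)
    raise TimeoutError("document did not become ready in time")
```
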

Run a natural language search against your knowledge base:

curl -X POST https://api.useragex.com/api/v1/knowledge-bases/KB_ID/search \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "How do I configure authentication?",
    "top_k": 5,
    "rerank": true
  }'
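If you build the search body programmatically, a small helper keeps the parameters in one place. This mirrors the JSON body above; top_k bounds how many chunks come back and rerank toggles the second-stage reranking shown in the example:

```python
def search_payload(query, top_k=5, rerank=True):
    """Mirror the JSON body of the search call above."""
    if not query.strip():
        raise ValueError("query must be non-empty")
    return {"query": query, "top_k": top_k, "rerank": rerank}
```
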

The response includes ranked chunks with relevance scores, chunk metadata (page numbers, section headings), and document metadata:

{
  "data": {
    "query": "How do I configure authentication?",
    "results": [
      {
        "chunk_id": "chk_k1l2m3n4o5",
        "document_id": "doc_f6g7h8i9j0",
        "document_name": "product-guide.pdf",
        "text": "To configure authentication, navigate to Settings > Auth and enable...",
        "score": 0.94,
        "metadata": {
          "chunk_index": 3,
          "start_page": 5,
          "end_page": 5,
          "section_heading": "Authentication Setup"
        },
        "document_metadata": {
          "department": "engineering",
          "version": 2
        }
      }
    ],
    "usage": {
      "embedding_tokens": 12,
      "rerank_applied": true,
      "chunks_searched": 34
    }
  }
}

Feed these results into your LLM as context: that's retrieval-augmented generation (RAG).
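One way to do that is to flatten the ranked chunks into a single context string, labelling each chunk with its source document and score so answers stay traceable. A minimal sketch operating on the response shape shown above (the formatting is a choice, not an API requirement):

```python
def build_context(search_response, max_chunks=5):
    """Concatenate ranked chunks into a context block for an LLM prompt.

    Expects the {"data": {"results": [...]}} shape of the search
    response; each chunk is prefixed with its document name and score.
    """
    parts = []
    for r in search_response["data"]["results"][:max_chunks]:
        parts.append(f'[{r["document_name"]} (score {r["score"]:.2f})]\n{r["text"]}')
    return "\n\n".join(parts)
```
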