Documentation Index
Fetch the complete documentation index at: https://getalchemystai.com/docs/llms.txt
Use this file to discover all available pages before exploring further.
Quick Diagnosis
API Errors
Common errors (Add)
| Status Code | Error Type | Common Cause | Recommended Fix |
|---|---|---|---|
| 400 | BadRequest | Invalid JSON or missing required fields (e.g., source). | Validate your payload against the schema. Ensure documents array is not empty. |
| 401 | Unauthorized | Invalid or missing ALCHEMYST_AI_API_KEY. | Check your .env file. Ensure you aren’t using a Test key in Prod. |
| 403 | Forbidden | Accessing a scope (e.g., user_123) you don’t have permission for. | Verify the api_key or user_id matches the authenticated session. Make sure the key is still valid. |
| 409 | Conflict | Document with the same fileName already exists. | Follow Pattern 2: Document Updates to update or replace existing documents safely. |
| 413 | PayloadTooLarge | Document exceeds the 50MB limit or batch size > 100. | Split your documents array into chunks of 50 and retry. |
| 429 | RateLimit | Exceeded 1000 requests/minute. | Implement exponential backoff (default in SDK). Contact support for higher limits. |
| 422 | Unprocessable | The fileType in metadata doesn’t match the content, or the content is unparseable. | Ensure fileType matches the actual binary content (e.g., don’t send PDF bytes as text/plain). |
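The table above can be turned into actionable messages at the call site. A minimal sketch (the error-to-advice mapping is illustrative only, and reading a `status` field off the thrown error is an assumption about the SDK's error shape — check your SDK version):

```javascript
// Map an HTTP status from a failed add() call to the recommended fix
// from the table above. Illustrative helper, not part of the SDK.
function adviceForAddError(status) {
  switch (status) {
    case 400: return "Validate the payload; ensure `documents` is not empty.";
    case 401: return "Check ALCHEMYST_AI_API_KEY in your .env file.";
    case 403: return "Verify the api_key/user_id matches the authenticated session.";
    case 409: return "Update or replace the existing document instead of re-adding it.";
    case 413: return "Split the documents array into chunks of 50 and retry.";
    case 422: return "Ensure fileType matches the actual binary content.";
    case 429: return "Back off exponentially and retry.";
    default:  return "Unexpected status; inspect the response body.";
  }
}
```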
Common errors (Search)
| Status Code | Error Type | Common Cause | Recommended Fix |
|---|---|---|---|
| 400 | BadRequest | Missing or malformed search parameters. | Ensure query is provided and properly formatted. Validate your payload against the schema. |
| 401 | Unauthorized | Invalid or missing ALCHEMYST_AI_API_KEY. | Confirm the request includes a valid API key. |
| 403 | Forbidden | Searching a scope you don’t have access to. | Make sure the key is still valid. Ensure the search scope is accessible to the current user or org. |
| 429 | RateLimit | Too many search requests. | Add throttling or cache repeated searches. |
| 422 | Unprocessable | Documents were not indexed correctly. | Verify documents were successfully ingested before searching. |
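For the 429 case, exponential backoff can be sketched as below. The delay schedule and the retry wrapper are illustrative (the SDK is said to back off by default); reading `err.status` is an assumption about the error shape:

```javascript
// Delay doubles per attempt, capped so retries don't stall indefinitely.
function backoffDelayMs(attempt, baseMs = 500, capMs = 8000) {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// Retry a request on 429, waiting backoffDelayMs between attempts.
async function withRetry(fn, maxAttempts = 5) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      // `status` on the error is an assumption about the SDK's error shape.
      if (err?.status !== 429 || attempt + 1 >= maxAttempts) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
}
```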
Application-Level Issues
Slow Ingestion
Diagnosis:
```javascript
// Sequential adds: ~30 seconds for 1000 docs
for (let i = 0; i < 1000; i++) {
  await alchemyst.add(documents[i]);
}
```

```javascript
// Bulk add: ~3 seconds for 1000 docs (10x faster!)
// `documents` is an array of content objects
const documents = [];
documents.push({
  content: "file content",
  metadata: { // optional
    file_name: "file_name",
    file_type: "pdf/txt/json",
    group_name: ["group1", "group2"],
  },
});
await alchemyst.add(documents);
```
Common Causes:
- Document size exceeds the 50 MB maximum
- Large number of documents uploaded sequentially
- Token limit reached
Solutions:
- Make sure each document is smaller than 50 MB
- Upload documents via bulkAdd (faster)
- Check that you have sufficient tokens
- For bulk operations, read here
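Splitting a large upload into batches can be sketched as below. The `chunk` helper is illustrative, and passing a batch array to `alchemyst.add` follows the bulk-add example above:

```javascript
// Split a documents array into batches of `size` (50 here, per the
// recommendation above; the hard limit is 100 per request).
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Hypothetical usage:
// for (const batch of chunk(documents, 50)) {
//   await alchemyst.add(batch);
// }
```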
Slow Searches (> 1 second)
Quick Checks:
```javascript
// Measure your query time
const start = Date.now();
const result = await alchemyst.v1.context.search({
  query: "your query",
  groupName: ["a", "b", "c", "d", "e", "f", "g"] // 7 filters!
});
console.log(`Search took ${Date.now() - start}ms`);
```
Common Causes:
- 7+ groupName tags (exponential overhead)
- No minimum_similarity_threshold set
- 10K+ documents in scope
Solutions:
- Reduce to 3-5 groupName tags max
- Set minimum_similarity_threshold: 0.5
- Use namespaces to shard data
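The first two fixes can be applied mechanically before issuing a search. A sketch, assuming parameter names that mirror the examples in this guide (they may differ from the actual SDK types):

```javascript
// Cap groupName filters and default the similarity threshold before
// searching. Illustrative helper, not part of the SDK.
function tuneSearchParams(params, maxTags = 5) {
  return {
    ...params,
    // Keep at most 3-5 groupName tags to avoid filter overhead.
    groupName: params.groupName ? params.groupName.slice(0, maxTags) : undefined,
    // Prune low-relevance matches early.
    minimum_similarity_threshold: params.minimum_similarity_threshold ?? 0.5,
  };
}
```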
Retrieval Quality Issues
No Results Returned
Quick Checks:
```javascript
// 1. Verify documents exist
const docs = await alchemyst.v1.context.view();
console.log(`Total docs: ${docs.length}`);

// 2. Try a broader search
const result = await alchemyst.v1.context.search({
  query: "your query",
  similarity_threshold: 0.5, // lower threshold
  groupName: ["broad_tag"] // fewer filters
});
```
Common Causes:
- High minimum_similarity_threshold set
- groupName filters don’t intersect
- Query is irrelevant or overly specific
Solutions:
- Verify documents exist on the platform
- Reduce similarity_threshold from 0.6-0.7 to 0.5-0.6
- Remove overly specific groupNames
- Try a more general query; make sure it isn’t too specific
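Rather than guessing one lower threshold, a search can be retried with a progressively looser schedule. A sketch of generating that schedule (the helper and its defaults are illustrative):

```javascript
// Produce a descending list of thresholds to retry with, e.g.
// 0.7 -> 0.6 -> 0.5 -> 0.4. Rounding avoids floating-point drift
// (in JS, 0.7 - 0.1 === 0.5999999999999999).
function wideningThresholds(start, floor, step = 0.1) {
  const out = [];
  for (let t = start; t >= floor - 1e-9; t -= step) {
    out.push(Math.round(t * 100) / 100);
  }
  return out;
}
```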
Common Scenarios
Scenario: Queries Return Too Many Fragments
- Problem: Search returns 50+ small chunks
- Cause: Over-segmentation (see Anti-Pattern 1)
- Solution: Combine related content into cohesive documents
Scenario: Metadata Bloat
- Problem: Storing too much or redundant information in metadata
- Cause: Metadata bloating (see Anti-Pattern 2)
- Solution: Limit metadata to the fields you query or filter by; put everything else in the content itself.
| Feature | Limit / Specification |
|---|---|
| Max File Size | 50 MB per document |
| Max Batch Size | 100 documents per add() request |
| Supported File Types | .pdf, .txt, .docx, .md, .json, .csv |
| Token Limit | 8,192 tokens per document chunk |
| Metadata Fields | Max 20 keys per document; Values must be string or number |
| Indexing Speed | ~120 records/sec (Text), ~5 sec/page (OCR PDF) |
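Several of these limits can be checked client-side before calling add(), which turns 413/422 failures into immediate, descriptive errors. A sketch, with the limit constants copied from the table above (treat them as documentation, not an API guarantee):

```javascript
const MAX_FILE_BYTES = 50 * 1024 * 1024; // 50 MB per document
const MAX_BATCH = 100;                   // documents per add() request
const MAX_METADATA_KEYS = 20;            // keys per document

// Pre-flight validation of a batch; returns a list of problems (empty = OK).
function validateBatch(docs) {
  const errors = [];
  if (docs.length > MAX_BATCH) {
    errors.push(`Batch of ${docs.length} exceeds ${MAX_BATCH} documents`);
  }
  docs.forEach((doc, i) => {
    if (Buffer.byteLength(doc.content, "utf8") > MAX_FILE_BYTES) {
      errors.push(`Document ${i} exceeds 50 MB`);
    }
    const meta = doc.metadata ?? {};
    if (Object.keys(meta).length > MAX_METADATA_KEYS) {
      errors.push(`Document ${i} has more than ${MAX_METADATA_KEYS} metadata keys`);
    }
    for (const [key, value] of Object.entries(meta)) {
      if (typeof value !== "string" && typeof value !== "number") {
        errors.push(`Document ${i} metadata "${key}" must be a string or number`);
      }
    }
  });
  return errors;
}
```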
Still Stuck?
- Check our Discord for community help