Add Context to Alchemyst AI

This guide shows you how to store context in Alchemyst AI and reuse it in your LLM or agent - step by step.

What you’ll build

By the end of this guide, you will:
  • Store context (memory) in Alchemyst
  • Reuse that context in future requests
  • Retrieve relevant context and feed it to LLMs

Prerequisites

You’ll need:
  • An Alchemyst AI account - sign up
  • Your ALCHEMYST_AI_API_KEY
  • Python 3.9+ or Node.js 18+

Step 1: Install the SDK

npm install @alchemystai/sdk

Step 2: Initialize the client

import AlchemystAI from '@alchemystai/sdk';

const client = new AlchemystAI({
    apiKey: process.env.ALCHEMYST_AI_API_KEY,
});
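
The client reads your key from the environment. If you want the script to fail fast when the key is missing, a small guard helps (this check is purely optional and not part of the SDK):

// Fail fast if the API key was never exported.
if (!process.env.ALCHEMYST_AI_API_KEY) {
  throw new Error('ALCHEMYST_AI_API_KEY is not set. Export it before running this script.');
}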

Step 3: Create the documents array

The documents array holds the raw content you want Alchemyst to store. Documents, PDFs, and other sources must first be converted into text strings and then uploaded.

interface AlchemystDocument {
  content: string;
  metadata?: {             // optional
    file_name?: string;
    file_type?: string;
    group_name?: string[];
  };
}

const docs_array: AlchemystDocument[] = [];

docs_array.push({
  content: "file content",
  metadata: {
    file_name: "file_name",
    file_type: "pdf/txt/json",
    group_name: ["group1", "group2"],
  },
});
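
As one illustration of converting a source into a text string, here is a minimal sketch that reads a local text file with Node's fs module and pushes it into the array (the file name and group name are placeholders):

import { readFileSync } from 'node:fs';

// Read a local file as UTF-8 text and store it as a document.
// 'notes.txt' and 'my-notes' are placeholders - substitute your own.
const fileText = readFileSync('notes.txt', 'utf-8');

docs_array.push({
  content: fileText,
  metadata: {
    file_name: 'notes.txt',
    file_type: 'txt',
    group_name: ['my-notes'],
  },
});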

Step 4: Add context (store memory)

This step sends your documents to the context processor, which turns them into searchable nodes.
await client.v1.context.add({
  documents: docs_array,
  context_type: 'resource', // one of: resource | conversation | instruction
  source: 'web-upload',
  scope: 'internal',        // one of: internal | external
  // [OPTIONAL] metadata can be null. Use it for filtering later.
  metadata: {
    fileName: 'notes.txt',
    fileType: 'text/plain',
    lastModified: new Date().toISOString(),
    fileSize: 1024,
    groupName: ['group1', 'group2'],
  },
});
console.log("Context successfully added.");

What just happened?

  • The data was stored in Alchemyst’s context layer
  • It can now be searched by query and used to supply the LLM with relevant context

Step 5: Search through context

Retrieve the most relevant stored context before invoking the model.
const { contexts } = await client.v1.context.search({
  query: "user_query",
  similarity_threshold: 0.8,
  minimum_similarity_threshold: 0.5,
  scope: 'internal',
  metadata: null,
});
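
Before wiring the results into a prompt, you can sanity-check what came back. The snippet below assumes each result exposes a content field, matching how Step 6 reads it:

// Log each retrieved context so you can verify relevance.
(contexts ?? []).forEach((ctx, i) => {
  console.log(`Result ${i + 1}:`, ctx.content ?? JSON.stringify(ctx));
});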

Step 6: Ingest the retrieved context into LLM prompts

Convert the retrieved context into a structured prompt before sending it to the LLM.
const promptText = contexts?.length
  ? `Use the context below to answer the question. If the context is insufficient, say so.

${contexts.map((c, i) => `Context ${i + 1}: ${c.content ?? JSON.stringify(c)}`).join('\n\n')}

Question: ${userQuestion}`
  : userQuestion; // no context retrieved - fall back to the raw question

const result = await model.generateContent(promptText);
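
The model object above is any LLM client whose generateContent method accepts a prompt string. As one possible setup, it could be a Gemini model from Google's @google/generative-ai package (the model name here is illustrative):

import { GoogleGenerativeAI } from '@google/generative-ai';

// One way to create the `model` used above; any client with a
// compatible generateContent method will do.
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({ model: 'gemini-1.5-flash' });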

That’s it - you’re all set. With context in place, the model can now generate accurate, context-aware responses.

What Alchemyst does automatically

  • Stores context securely
  • Retrieves relevant memory
  • Keeps behavior consistent across sessions

You don’t need:

  • Custom vector stores
  • Manual prompt stuffing
  • Memory orchestration logic

Troubleshooting and Errors

Errors while adding or searching context usually relate to payload structure, limits, or permissions. Refer to Step 4 (add context) and Step 5 (search through context) above for correct usage.
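
If you need to inspect a failure programmatically, wrap the call in a try/catch. The exact error shape depends on your SDK version, so treat this as a sketch:

try {
  await client.v1.context.add({
    documents: docs_array,
    context_type: 'resource',
    source: 'web-upload',
    scope: 'internal',
    metadata: null,
  });
} catch (err) {
  // Surface the payload or permission problem; error fields vary by SDK version.
  console.error('Adding context failed:', err);
}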

Next Steps: Go Deeper with the SDK & APIs

Get up and running with our dedicated SDKs or integrate directly via our API.

Cookbooks & Community Examples

Step-by-step cookbooks and real projects from the community to help you build faster with Alchemyst.

Need Help?

If you get stuck or want to share feedback:
  • Browse the Guides and API docs on this site.
  • Search the documentation for targeted answers.
  • Join our Discord server for real-time help.