Add Memory to Your LLM with Alchemyst AI
This guide shows you how to add memory to your LLM applications using Alchemyst AI with Vercel’s AI SDK - step by step.
What you’ll build
By the end of this guide, you will:
- Wrap Vercel AI SDK functions with Alchemyst memory
- Store conversation context automatically
- Retrieve relevant memory across sessions
- Build context-aware AI applications
Prerequisites
You’ll need:
- An Alchemyst AI account - sign up
- Your ALCHEMYST_AI_API_KEY
- Node.js 18+
Note that this tutorial is TypeScript-only, because the AI SDK is officially available only for JS/TS.
Step 1: Install the SDKs
```bash
npm install ai @alchemystai/aisdk @alchemystai/sdk
```
Step 2: Initialize Alchemyst with AI SDK
```typescript
import { generateText } from 'ai';
import { withAlchemyst } from '@alchemystai/aisdk';

// Wrap the AI SDK function with Alchemyst memory
const generateTextWithMemory = withAlchemyst(generateText, {
  apiKey: process.env.ALCHEMYST_AI_API_KEY,
  // Optional configuration
  withMemory: true,
});
```
Step 3: Generate text with automatic memory storage
The memory integration automatically stores conversation history in Alchemyst’s context layer.
```typescript
// Basic usage - memory is stored automatically
const { text } = await generateTextWithMemory({
  model: "google/gemini-3-flash",
  prompt: 'What is gravity?',
  userId: "user_123",     // Required: identifies the user
  sessionId: "convo_456", // Required: groups related messages
});

console.log(text);
```
What just happened?
- The conversation was automatically stored in Alchemyst’s memory layer
- Future queries from the same userId and sessionId will have access to this context
- Memory retrieval happens automatically on subsequent calls
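To make the scoping concrete, here is a minimal sketch of how a (userId, sessionId) pair identifies one conversation’s memory. This is purely illustrative - Alchemyst manages this internally, and `memoryScope` is a hypothetical helper, not part of any SDK:

```typescript
// Illustrative only: Alchemyst scopes stored context by the (userId, sessionId)
// pair. This hypothetical helper just shows the shape of that scoping key.
function memoryScope(userId: string, sessionId: string): string {
  if (!userId || !sessionId) {
    // Mirrors the requirement that both identifiers are present
    throw new Error('Both userId and sessionId are required for memory operations.');
  }
  return `${userId}:${sessionId}`;
}
```

Two calls that share the same key read and write the same conversation memory; change either identifier and you get a fresh context.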
Step 4: Continue conversations with context
Subsequent messages in the same conversation automatically retrieve relevant context.
```typescript
// Follow-up question - automatically retrieves context
const { text: followUp } = await generateTextWithMemory({
  model: "google/gemini-3-flash",
  prompt: 'How does it relate to Einstein?',
  userId: "user_123",
  sessionId: "convo_456", // Same conversation
});

// The AI now has context about gravity from the previous exchange
console.log(followUp);
```
Step 5: Stream responses with memory
For streaming responses, use streamText instead of generateText.
```typescript
import { streamText } from 'ai';
import { withAlchemyst } from '@alchemystai/aisdk';

const streamTextWithMemory = withAlchemyst(streamText, {
  apiKey: process.env.ALCHEMYST_AI_API_KEY,
});

const { textStream } = await streamTextWithMemory({
  model: "google/gemini-3-flash",
  prompt: 'Explain quantum mechanics',
  userId: "user_123",
  sessionId: "convo_789",
});

// Process the stream
for await (const chunk of textStream) {
  process.stdout.write(chunk);
}
```
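If you also need the complete reply after streaming finishes (for logging or display), a small helper can accumulate the chunks while still writing them out as they arrive. This is a generic sketch over any `AsyncIterable<string>`, not an Alchemyst API:

```typescript
// Collect a text stream into one string while still echoing each chunk
// to the terminal as it arrives.
async function collectStream(stream: AsyncIterable<string>): Promise<string> {
  let full = '';
  for await (const chunk of stream) {
    process.stdout.write(chunk);
    full += chunk;
  }
  return full;
}
```

You could pass `textStream` from the example above straight into `collectStream` and keep the returned string for your own records.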
Step 6: Update or delete memories
Manage conversation memory as needed.
```typescript
import AlchemystAI from '@alchemystai/sdk';

const client = new AlchemystAI({
  apiKey: process.env.ALCHEMYST_AI_API_KEY,
});

// Update a specific memory
await client.v1.context.memory.update({
  userId: "user_123",
  sessionId: "convo_456",
  messageId: "msg_001",
  content: "Updated content",
});

// Delete entire conversation
await client.v1.context.memory.delete({
  userId: "user_123",
  sessionId: "convo_456",
});
```
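The delete call removes one (userId, sessionId) pair at a time. For something like a data-deletion request you would loop over every session you have recorded for that user. Here is a sketch under the assumption that you track session ids in your own application - `forgetUser` is a hypothetical helper, and the `deleteMemory` callback would wrap the delete call shown above:

```typescript
// Sketch: forget every conversation recorded for one user.
// Assumes your app keeps its own list of session ids; deleteMemory would
// wrap client.v1.context.memory.delete from the example above.
async function forgetUser(
  userId: string,
  sessionIds: string[],
  deleteMemory: (args: { userId: string; sessionId: string }) => Promise<void>,
): Promise<number> {
  let deleted = 0;
  for (const sessionId of sessionIds) {
    await deleteMemory({ userId, sessionId });
    deleted += 1;
  }
  return deleted;
}
```

Injecting the delete function keeps the helper easy to test with a stub before pointing it at the live client.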
That’s it - you’re all set. With memory in place, your AI can now maintain context across conversations and sessions.
Want to learn about user profiling for AI consumer applications? Alchemyst’s memory layer enables sophisticated user profiling that enhances personalization - tracking preferences, adapting communication styles, and building rich user profiles across sessions. To learn more about implementing user profiling with memory, check out our User Profiling Guide.
Advanced: Multi-user conversations
Handle group conversations where multiple users participate.
```typescript
// User 1 asks a question
await generateTextWithMemory({
  model: "anthropic/claude-sonnet-4.5",
  prompt: 'What are the best practices for React?',
  userId: "user_alice",
  sessionId: "team_discussion_001",
});

// User 2 follows up
await generateTextWithMemory({
  model: "anthropic/claude-sonnet-4.5",
  prompt: 'Can you elaborate on hooks?',
  userId: "user_bob",
  sessionId: "team_discussion_001", // Same conversation
});

// Both users' messages are stored in the same conversation context
```
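Because both users share one sessionId, retrieved context interleaves their messages. One simple convention - an assumption on our part, not an SDK feature - is to prefix each prompt with its author so the model can attribute statements in a shared session:

```typescript
// Hypothetical convention: tag each prompt with its author before sending,
// so retrieved context in a shared session stays attributable.
function attribute(userId: string, prompt: string): string {
  return `[${userId}] ${prompt}`;
}

attribute("user_bob", 'Can you elaborate on hooks?');
// → '[user_bob] Can you elaborate on hooks?'
```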
What Alchemyst does automatically
- Stores conversation history by user and conversation
- Retrieves relevant context across sessions
- Maintains conversation flow and coherence
- Handles memory cleanup and optimization
You don’t need
- Custom memory stores
- Manual context window management
- Session state handling
- Memory deduplication logic
Configuration Options
Customize how Alchemyst handles memory:
```typescript
const generateTextWithMemory = withAlchemyst(generateText, {
  apiKey: process.env.ALCHEMYST_AI_API_KEY,

  // Memory retrieval settings
  similarityThreshold: 0.8,        // Higher = more relevant results
  minimumSimilarityThreshold: 0.5, // Minimum relevance cutoff
  scope: 'internal',               // 'internal' | 'external'

  // Storage settings
  contextType: 'conversation', // 'resource' | 'conversation' | 'instructions'
  source: 'chat-application',  // Source identifier

  // Advanced options
  metadata: {
    groupName: ['production'], // ["default"] by default
    version: '1.0',
  },
});
```
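Since both thresholds are similarity scores, a quick sanity check can catch misconfigurations before you ship them. This is an illustrative sketch - `validateThresholds` is a hypothetical helper, not part of the Alchemyst SDK:

```typescript
// Illustrative sanity check for the retrieval settings above: both values are
// similarity scores, and the minimum cutoff should never exceed the main threshold.
function validateThresholds(
  similarityThreshold: number,
  minimumSimilarityThreshold: number,
): void {
  for (const t of [similarityThreshold, minimumSimilarityThreshold]) {
    if (t < 0 || t > 1) {
      throw new Error('Thresholds are similarity scores and must lie in [0, 1].');
    }
  }
  if (minimumSimilarityThreshold > similarityThreshold) {
    throw new Error('minimumSimilarityThreshold must not exceed similarityThreshold.');
  }
}
```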
Troubleshooting and Errors
Error: Missing userId or sessionId
- Both userId and sessionId are required for memory operations
- Solution: Always provide both parameters when calling wrapped functions
Error: Memory not retrieving
- Check that similarityThreshold isn’t set too high
- Verify the same userId and sessionId are used
- Use client.v1.context.memory.search() to test retrieval directly
Error: Too much context retrieved
- Raise the similarityThreshold so only the most relevant memories are retrieved
- Implement pagination for long conversations
- Consider splitting into multiple conversation IDs
Next Steps: Go Deeper with Memory
Get up and running with dedicated SDKs and advanced memory features.
Memory Use Cases
Real-world applications powered by Alchemyst memory.
Need Help?
If you get stuck or want to share feedback:
- Browse the Guides and API docs on this site.
- Search the documentation for targeted answers.
- Join our Discord server for real-time help.