
What You’re Building

An AI that remembers user preferences and past conversations automatically.

Without Memory:
// Day 1
await llm.generate("I'm vegan and allergic to peanuts");
// Day 2
await llm.generate("Give me a recipe");
// AI: "What dietary restrictions do you have?" ❌
With Memory:
// Day 1
await generateWithMemory({ prompt: "I'm vegan", userId: "alice" });
// Day 2
await generateWithMemory({ prompt: "Give me a recipe", userId: "alice" });
// AI: "Here's a vegan recipe: ..."  ✅ Remembered automatically

Prerequisites

  • Alchemyst account (sign up)
  • Your ALCHEMYST_AI_API_KEY
  • Node.js 18+ or Python 3.9+
Time to complete: 15 minutes

Choose Your Approach


Quick Start: TypeScript

Step 1: Set Up Memory

import { generateText } from 'ai';
import { withAlchemyst } from '@alchemystai/aisdk';

// Wrap AI SDK with automatic memory
const generateTextWithMemory = withAlchemyst(generateText, {
  apiKey: process.env.ALCHEMYST_AI_API_KEY,
});
What this does:
  1. Retrieves past conversations for this user
  2. Includes them in the prompt automatically
  3. Stores the new conversation
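The retrieve → include → store loop above can be sketched with a plain in-memory array standing in for the Alchemyst backend. Everything here is illustrative only: `fakeModel` replaces the real LLM call, and the store is just a local variable, not the SDK's persistent storage.

```typescript
// Minimal sketch of the retrieve → include → store loop. Illustrative
// only: an in-memory array replaces the Alchemyst backend, and
// `fakeModel` stands in for the real LLM call.
type Memory = { userId: string; content: string };
const store: Memory[] = [];

async function fakeModel(prompt: string): Promise<string> {
  // Pretend LLM: reports how many remembered lines preceded the prompt.
  const lines = prompt.split("\n");
  return `Answered "${lines[lines.length - 1]}" with ${lines.length - 1} remembered line(s)`;
}

async function generateWithMemory(opts: { prompt: string; userId: string }) {
  // 1. Retrieve past conversations for this user
  const past = store.filter((m) => m.userId === opts.userId);

  // 2. Include them in the prompt automatically
  const fullPrompt = [...past.map((m) => m.content), opts.prompt].join("\n");
  const text = await fakeModel(fullPrompt);

  // 3. Store the new conversation for next time
  store.push({ userId: opts.userId, content: `User: ${opts.prompt}` });
  return { text };
}
```

The second call for the same `userId` sees the first conversation in its prompt, which is the behavior the wrapper automates for you.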

Step 2: Use Memory

// First conversation: User shares preference
const response1 = await generateTextWithMemory({
  model: "openai:gpt-4",
  prompt: "I'm vegan and allergic to peanuts",
  userId: "alice",
  sessionId: "profile_setup"
});

console.log(response1.text);
// Output: "Got it! I'll remember you're vegan and have a peanut allergy."

// Later: Different session, AI remembers
const response2 = await generateTextWithMemory({
  model: "openai:gpt-4",
  prompt: "Give me a dinner recipe",
  userId: "alice",
  sessionId: "cooking_monday"
});

console.log(response2.text);
// Output: "Here's a vegan stir-fry without peanuts: ..."
// ✅ Remembered from different conversation!
Memory works across:
  • Different sessions (profile_setup → cooking_monday)
  • Different topics (preferences → recipes)
  • Days or weeks apart

Complete TypeScript Example

import { generateText } from 'ai';
import { withAlchemyst } from '@alchemystai/aisdk';

const generateTextWithMemory = withAlchemyst(generateText, {
  apiKey: process.env.ALCHEMYST_AI_API_KEY,
});

async function main() {
  // Day 1: Learn preference
  const response1 = await generateTextWithMemory({
    model: "openai:gpt-4",
    prompt: "I love science fiction movies",
    userId: "bob",
    sessionId: "preferences"
  });
  console.log("AI:", response1.text);
  // Output: "Great! I'll remember you enjoy sci-fi films."

  // Day 2: Use preference
  const response2 = await generateTextWithMemory({
    model: "openai:gpt-4",
    prompt: "Recommend a movie",
    userId: "bob",
    sessionId: "movie_night"
  });
  console.log("AI:", response2.text);
  // Output: "How about Interstellar? You mentioned you love sci-fi."
}

main();
Verify it worked:
  1. Visit platform.getalchemystai.com/context
  2. You should see stored conversations for user “bob”
  3. Click to view memory contents

Quick Start: Python

Step 1: Store and Retrieve Memories

Start with this simplified version to understand the basics:
import os
from alchemyst_ai import AlchemystAI

alchemyst = AlchemystAI(api_key=os.environ.get("ALCHEMYST_AI_API_KEY"))

# Store a memory
alchemyst.v1.context.memory.add({
    "user_id": "alice",
    "session_id": "preferences",
    "content": "User said: I'm vegan and allergic to peanuts"
})

print("✅ Memory stored!")

# Later: Retrieve memories
result = alchemyst.v1.context.memory.search(
    user_id="alice",
    session_id="preferences"
)

if result and hasattr(result, 'memories'):
    for memory in result.memories:
        print(f"Found: {memory.content}")
        # Output: Found: User said: I'm vegan and allergic to peanuts
Expected Output:
✅ Memory stored!
Found: User said: I'm vegan and allergic to peanuts

Step 2: Full Integration with OpenAI

Now integrate with OpenAI for complete chat functionality:
import os
from alchemyst_ai import AlchemystAI
import openai

alchemyst = AlchemystAI(api_key=os.environ.get("ALCHEMYST_AI_API_KEY"))
openai_client = openai.OpenAI()

def chat_with_memory(prompt: str, user_id: str, session_id: str):
    """Chat function that remembers past conversations"""
    
    # 1. Get past conversations
    memory = alchemyst.v1.context.memory.search(
        user_id=user_id,
        session_id=session_id,
        limit=10
    )
    
    # 2. Build message history
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    
    # Add past memories to context
    if memory and hasattr(memory, 'memories'):
        for mem in memory.memories:
            if hasattr(mem, 'content'):
                messages.append({"role": "assistant", "content": mem.content})
    
    # Add current prompt
    messages.append({"role": "user", "content": prompt})
    
    # 3. Generate response with full context
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=messages
    )
    
    assistant_message = response.choices[0].message.content
    
    # 4. Store this conversation for next time
    alchemyst.v1.context.memory.add({
        "user_id": user_id,
        "session_id": session_id,
        "content": f"User: {prompt}\nAssistant: {assistant_message}"
    })
    
    return assistant_message

# Test it
response = chat_with_memory(
    prompt="I'm vegan",
    user_id="alice",
    session_id="profile"
)
print(response)
# Output: "Got it! I'll remember you're vegan."

response2 = chat_with_memory(
    prompt="Give me a recipe",
    user_id="alice", 
    session_id="cooking"
)
print(response2)
# Output: "Here's a vegan pasta recipe: ..."
Expected Output:
Got it! I'll remember you're vegan.
Here's a vegan pasta recipe: ...

Understanding userId and sessionId

These two parameters control memory scope:
await generateTextWithMemory({
  prompt: "...",
  userId: "alice",      // WHO is talking
  sessionId: "cooking"  // WHAT conversation thread
});

Real-World Examples

Use Case           | userId       | sessionId                 | Why
Customer support   | customer_123 | ticket_456                | Track support history per customer
Personal assistant | user_alice   | daily_planning_2024_02_01 | Separate daily planning sessions
Team collaboration | user_bob     | project_alpha_sprint_3    | Isolate project discussions by sprint
Multi-user chat    | user_charlie | team_standup_2024_w05     | Group conversations by topic and time
Pro Tip: Use descriptive sessionIds like "recipe_planning_2024_02" instead of "session_1" for easier debugging and analytics.
Rule: Same userId + same sessionId = same conversation thread

Advanced Features

Stream Responses with Memory

For real-time chat experiences:
import { streamText } from 'ai';
import { withAlchemyst } from '@alchemystai/aisdk';

const streamTextWithMemory = withAlchemyst(streamText, {
  apiKey: process.env.ALCHEMYST_AI_API_KEY,
});

async function streamChat() {
  const { textStream } = await streamTextWithMemory({
    model: "openai:gpt-4",
    prompt: "Tell me about quantum mechanics",
    userId: "user_123",
    sessionId: "physics_101"
  });

  // Process stream chunk by chunk
  for await (const chunk of textStream) {
    process.stdout.write(chunk);
  }
}

streamChat();
Expected Output:
Quantum mechanics is the branch of physics... [streams in real-time]
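If you want to see how the `for await` consumption pattern behaves without calling a real model, a mock async-iterable stream works the same way as the AI SDK's `textStream` above. The generator below is purely illustrative:

```typescript
// Illustrative only: a mock async-iterable text stream, consumed the
// same way as the AI SDK's `textStream`.
async function* mockTextStream(text: string, chunkSize = 8) {
  for (let i = 0; i < text.length; i += chunkSize) {
    yield text.slice(i, i + chunkSize);
  }
}

async function consume(): Promise<string> {
  let out = "";
  for await (const chunk of mockTextStream("Quantum mechanics is the branch of physics...")) {
    out += chunk; // in a real app: process.stdout.write(chunk)
  }
  return out;
}
```

Reassembling the chunks yields the full text, confirming that chunk-by-chunk processing loses nothing.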

Manage Memory

Update or delete conversations as needed:
import AlchemystAI from '@alchemystai/sdk';

const client = new AlchemystAI({
  apiKey: process.env.ALCHEMYST_AI_API_KEY,
});

// Update specific memory
await client.v1.context.memory.update({
  userId: "alice",
  sessionId: "profile_setup",
  messageId: "msg_001",
  content: "Updated: I'm vegan and gluten-free"
});

// Delete a specific conversation
await client.v1.context.memory.delete({
  userId: "alice",
  sessionId: "profile_setup"
});

// Delete ALL memories for a user (use with caution!)
await client.v1.context.memory.delete({
  userId: "alice"
});

console.log("✅ Memory updated/deleted");

Multi-User Conversations

Handle group chats where multiple users participate in the same thread:
// User 1 starts discussion
await generateTextWithMemory({
  model: "openai:gpt-4",
  prompt: "What are React hooks best practices?",
  userId: "alice",
  sessionId: "team_discussion_001"
});

// User 2 joins same discussion
await generateTextWithMemory({
  model: "openai:gpt-4",
  prompt: "Can you elaborate on useEffect?",
  userId: "bob",
  sessionId: "team_discussion_001"  // ← Same session = shared context
});

// User 1 continues - AI has full thread context
await generateTextWithMemory({
  model: "openai:gpt-4",
  prompt: "What about custom hooks?",
  userId: "alice",
  sessionId: "team_discussion_001"
});

// AI has full thread context regardless of who asks
Key insight: Using the same sessionId across different userId values creates a shared memory space for team conversations.
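This shared-memory behavior can be sketched with entries keyed by `sessionId`, each tagged with the `userId` that wrote it. The sketch is illustrative only; the real SDK manages this server-side.

```typescript
// Sketch of shared-session memory: entries are keyed by sessionId, so
// every participant in a session retrieves the full thread.
// Illustrative only; the real SDK handles this server-side.
type Entry = { userId: string; content: string };
const threads = new Map<string, Entry[]>();

function remember(sessionId: string, userId: string, content: string) {
  const thread = threads.get(sessionId) ?? [];
  thread.push({ userId, content });
  threads.set(sessionId, thread);
}

function threadContext(sessionId: string): string[] {
  // Anyone with the same sessionId sees the same context,
  // regardless of which userId stored each entry.
  return (threads.get(sessionId) ?? []).map((e) => `${e.userId}: ${e.content}`);
}
```

After alice and bob both post to `team_discussion_001`, either of them retrieves both messages as context.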

Configuration Options

Customize memory retrieval and storage behavior:
const generateTextWithMemory = withAlchemyst(generateText, {
  apiKey: process.env.ALCHEMYST_AI_API_KEY,

  // Memory retrieval settings
  similarityThreshold: 0.8,           // How relevant (0-1)
  minimumSimilarityThreshold: 0.5,    // Absolute minimum cutoff
  scope: 'internal',                  // 'internal' | 'external'

  // Storage settings
  contextType: 'conversation',
  source: 'chat-app',

  // Organization (optional)
  metadata: {
    groupName: ['production', 'app-v2'],
    environment: 'production',
    version: '2.0'
  },
});

Configuration Reference

Parameter                  | Value      | When to Use
similarityThreshold        | 0.5        | Broad, exploratory searches
                           | 0.7        | Recommended default - balanced
                           | 0.9        | Precise matches only
minimumSimilarityThreshold | 0.5        | Never return results below this
scope                      | 'internal' | Your app's private data
                           | 'external' | Public/shared knowledge
Recommendation: Start with similarityThreshold: 0.7. Lower to 0.5 if you get no results, raise to 0.9 if results are too broad.
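One plausible reading of how the two thresholds interact is sketched below: prefer matches at or above `similarityThreshold`, but never return anything below `minimumSimilarityThreshold`. The fallback behavior and the `Scored` shape are assumptions for illustration, not the SDK's documented internals.

```typescript
// Hypothetical sketch of the two-threshold filter. The fallback
// semantics (use weaker matches only when no strong ones exist, never
// below the hard floor) are an assumption, not the SDK's internals.
type Scored = { content: string; score: number };

function filterMemories(
  results: Scored[],
  similarityThreshold: number,
  minimumSimilarityThreshold: number
): Scored[] {
  const strong = results.filter((r) => r.score >= similarityThreshold);
  if (strong.length > 0) return strong;
  // No strong matches: fall back to weaker ones, respecting the floor.
  return results.filter((r) => r.score >= minimumSimilarityThreshold);
}
```

Under this reading, lowering `similarityThreshold` widens what counts as a strong match, while `minimumSimilarityThreshold` guarantees truly irrelevant memories never reach the prompt.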

Troubleshooting

Problem: Missing userId or sessionId

Error Message:
{
  "error": "userId and sessionId are required",
  "code": "MISSING_PARAMETERS"
}
Cause: Both parameters are required for memory operations.
Fix: Always provide both:
await generateTextWithMemory({
  model: "openai:gpt-4",
  prompt: "Hello",
  userId: "user_123",    // ✅ Required
  sessionId: "chat_456"  // ✅ Required
});
Problem: AI Doesn't Remember

Symptoms: AI doesn't remember past conversations.
Causes:
  1. Threshold too high
  2. Wrong userId/sessionId
  3. Memory wasn’t stored correctly
Fixes:
1. Lower threshold:
similarityThreshold: 0.6  // Instead of 0.9
2. Verify exact same IDs:
// IDs must match EXACTLY (case-sensitive)
userId: "user_123"      // ❌ Not "user_124" or "User_123"
sessionId: "chat_456"   // ❌ Not "chat_457" or "Chat_456"
3. Test retrieval directly:
const memories = await client.v1.context.memory.search({
  userId: "user_123",
  sessionId: "chat_456"
});
console.log("Found memories:", memories.memories?.length);
console.log("Memory content:", memories.memories);
Expected Output:
Found memories: 2
Memory content: [
  { content: "User: I'm vegan\nAssistant: Got it!" },
  { content: "User: Give me a recipe\nAssistant: Here's a vegan..." }
]
Problem: AI Remembers the Wrong Things

Symptoms: AI references unrelated past conversations or gets confused.
Causes:
  1. Threshold too low
  2. Mixing unrelated conversations in same session
Fixes:
1. Raise threshold:
similarityThreshold: 0.85  // More strict
2. Use separate sessions by topic:
// ✅ Good - separate by topic
sessionId: "physics_homework"
sessionId: "cooking_recipes"
sessionId: "movie_recommendations"

// ❌ Bad - everything mixed
sessionId: "general_chat"
3. Limit memory retrieval:
// Only retrieve last 5 memories instead of 10
limit: 5
Problem: Memory Fails to Store

Error Message:
{
  "error": "Failed to store memory",
  "code": "STORAGE_ERROR"
}
Common Causes:
  1. Invalid API key
  2. Rate limit exceeded
  3. Content too large
Fixes:
1. Verify API key:
console.log("API Key set:", !!process.env.ALCHEMYST_AI_API_KEY);
// Should output: API Key set: true
2. Check rate limits:
  • Free tier: 100 operations/day
  • Pro tier: Unlimited
3. Reduce content size:
// Keep memory entries under 10KB each
const content = longText.slice(0, 10000);  // Truncate if needed
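Note that `slice(0, 10000)` counts characters, not bytes, so multi-byte text can still exceed 10KB after truncation. A byte-aware helper (illustrative, not part of the SDK) avoids that and never splits a multi-byte character:

```typescript
// Truncate by encoded UTF-8 byte length rather than character count,
// without splitting a multi-byte character. Illustrative helper, not
// part of the Alchemyst SDK.
function truncateToBytes(text: string, maxBytes: number): string {
  const encoder = new TextEncoder();
  if (encoder.encode(text).length <= maxBytes) return text;
  let out = "";
  let bytes = 0;
  for (const ch of text) {
    const len = encoder.encode(ch).length;
    if (bytes + len > maxBytes) break;
    out += ch;
    bytes += len;
  }
  return out;
}
```

`TextEncoder` is available globally in Node.js 18+, matching this guide's prerequisites.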

Best Practices

1. Session Naming Convention

// ✅ Good - descriptive and structured
sessionId: "support_ticket_2024_02_001"
sessionId: "recipe_planning_vegan_week_5"
sessionId: "project_alpha_sprint_3_planning"

// ❌ Bad - hard to debug
sessionId: "session1"
sessionId: "chat"
sessionId: "abc123"
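A small helper can keep these IDs consistent across your codebase. The function name and format below are hypothetical conventions for illustration, not part of the SDK:

```typescript
// Hypothetical helper for building descriptive, structured sessionIds
// like those above. Name and format are illustrative conventions,
// not part of the SDK.
function makeSessionId(topic: string, date: Date, suffix?: string): string {
  const y = date.getUTCFullYear();
  const m = String(date.getUTCMonth() + 1).padStart(2, "0");
  const d = String(date.getUTCDate()).padStart(2, "0");
  const parts = [topic, `${y}_${m}_${d}`];
  if (suffix) parts.push(suffix);
  // Normalize to lowercase snake_case for consistent filtering later.
  return parts.join("_").toLowerCase().replace(/[^a-z0-9_]+/g, "_");
}
```

For example, `makeSessionId("support_ticket", new Date(), "001")` yields an ID in the `support_ticket_2024_02_01_001` style shown above.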

2. Memory Retention Limits

// ✅ Good - limit memory lookback
limit: 10  // Last 10 conversations

// ❌ Bad - retrieving everything
limit: 1000  // Too much context, slows down LLM
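Beyond a fixed `limit`, you can cap retrieved memories by an approximate token budget so prompt size stays predictable. The sketch below uses the common 4-characters-per-token heuristic, which is an estimate, not an exact tokenizer:

```typescript
// Cap retrieved memories by a rough token budget, keeping the most
// recent ones. The 4-chars-per-token estimate is a common heuristic,
// not an exact tokenizer; illustrative only.
function trimToBudget(memories: string[], maxTokens: number): string[] {
  const kept: string[] = [];
  let tokens = 0;
  // Walk newest-first so the most recent context survives.
  for (const mem of [...memories].reverse()) {
    const estimate = Math.ceil(mem.length / 4);
    if (tokens + estimate > maxTokens) break;
    kept.unshift(mem); // restore chronological order
    tokens += estimate;
  }
  return kept;
}
```

This keeps the freshest context while dropping older memories first, rather than truncating arbitrarily.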

3. Privacy-Aware Memory

// Store memories with appropriate scope
const generateTextWithMemory = withAlchemyst(generateText, {
  apiKey: process.env.ALCHEMYST_AI_API_KEY,
  scope: 'internal',  // ✅ Private user data
  metadata: {
    dataClassification: 'PII',  // Track sensitive data
    retentionPolicy: '90days'
  }
});

4. Error Handling

// ✅ Good - handle failures gracefully
try {
  const response = await generateTextWithMemory({
    model: "openai:gpt-4",
    prompt: userInput,
    userId: user.id,
    sessionId: conversation.id
  });
  return response.text;
} catch (error) {
  console.error("Memory error:", error);
  // Fallback: Generate without memory
  return await generateText({
    model: "openai:gpt-4",
    prompt: userInput
  });
}

Verify Your Setup

After implementing memory, verify it’s working:

1. Check Platform UI

  1. Visit platform.getalchemystai.com/context
  2. Filter by your userId
  3. You should see stored memories with timestamps
  4. Click to view conversation content

2. Test with Code

// Store a test memory
await client.v1.context.memory.add({
  userId: "test_user",
  sessionId: "test_session",
  content: "Test memory: The user likes pizza"
});

// Retrieve it immediately
const result = await client.v1.context.memory.search({
  userId: "test_user",
  sessionId: "test_session"
});

console.log("Test passed:", result.memories?.length === 1);
// Expected Output: Test passed: true

What’s Next?

Add Context Search

Combine memory with document search for powerful RAG

Vercel AI SDK Guide

Deep dive into AI SDK integration

TypeScript SDK Reference

Complete API reference for TypeScript

Python SDK Reference

Complete API reference for Python

Learn Advanced Patterns

User Profiling

Build rich user profiles from memory

Memory Use Cases

Customer support, personal assistants, chatbots

Memory API Reference

Complete REST API documentation

Sample Projects

Community-built memory applications

Need Help?

Discord Community

Get real-time help from our community

Documentation

Browse guides and API references

Email Support

Contact our support team

GitHub Issues

Report bugs or request features