Two EdTech Deployments, 45,000 Calls, One Pattern

JK Shah Classes and Unacademy ran different use cases at different price points. The results show the same pattern.


One case study proves a product works. Two case studies prove a pattern. When skeptics dismiss a single deployment as cherry-picked, two independent deployments with consistent results across different use cases, different price points, and different lead volumes are harder to wave away. This article pulls both deployments together and identifies the patterns that hold.

The Setup: Two Clients, Two Use Cases, One Kathan Voice OS

[Data table has been removed during migration]

Different objectives. Different conversational structures. Different price points. Same enterprise voice OS. Same context engine. The results tell a consistent story — and that consistency is the evidence. This is a testament to the platform being built in India, for the world.

[Stat card removed]

Pattern 1: Connection Rates Are 10–15 Points Above AI-Enhanced Benchmarks

JK Shah: 38.7%. Unacademy: 35.2%. The industry AI-enhanced ceiling sits at 20–25%. Traditional cold calling connects at 12–15%. Both deployments on Kathan's voice OS cleared the AI-enhanced benchmark by 10–15 percentage points. The gap is consistent across use cases.

The common variable is context-aware agents that adapt their opening seconds to what they know about the person. JK Shah's agent referenced the student's course interest and preferred language. Unacademy's agent referenced the learner's specific program and engagement history. Both opened with relevance instead of a generic script. Both connected at rates that stateless systems cannot reach.
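As a minimal sketch of what "adapting the opening seconds to context" means in practice, the snippet below composes an opener from whatever fields are on file for a lead. The field names, template wording, and function are illustrative assumptions, not Kathan's actual context schema or API.

```python
# Illustrative only: field names and phrasing are hypothetical,
# not Kathan's actual context schema or API.
def build_opener(lead: dict) -> str:
    """Compose a call opening from whatever context is on file for the lead."""
    parts = [f"Hi {lead['name']},"]
    if course := lead.get("course_interest"):
        # A lead with known course interest gets a relevant, specific opener.
        parts.append(f"you recently looked at our {course} program,")
    parts.append("do you have a minute to talk about next steps?")
    return " ".join(parts)

# A lead with rich context produces a specific opener; a bare lead
# degrades gracefully to the generic question.
opener = build_opener({"name": "Asha", "course_interest": "CA Foundation"})
```

The point of the sketch is the branch: a stateless system can only ever emit the generic line, while a context-backed agent emits the specific one whenever the data exists.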

[Data table has been removed during migration]

Pattern 2: Lead Freshness and Context Depth Correlate with Performance

JK Shah's Gujarat retargets (57.3%) outperformed cold outreach (37–38%). Unacademy's Campaign 1 (fresh leads, 45.5%) outperformed Campaign 4 (staler leads, 23.7%). When the agent has more context and the lead is more recent, performance spikes. When either is weak, performance drops — but still exceeds industry norms.

[Data table has been removed during migration]

The pattern is clear: context depth and lead freshness are multiplicative. High context + fresh leads = peak performance. But even low context + stale leads on the Alchemyst enterprise voice OS (कथन) still matches or exceeds the industry AI ceiling. The context layer sets a higher floor, not just a higher ceiling.
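The multiplicative relationship, and the higher floor, can be expressed as a toy model. All coefficients below are invented for illustration; they are not fitted to the campaign data above.

```python
# Toy model of the pattern described above. The coefficients are
# invented for illustration, not fitted to campaign data.
INDUSTRY_AI_CEILING = 0.25  # upper edge of the 20-25% benchmark range

def expected_success(base: float, context_depth: float, freshness: float) -> float:
    """Success rate as a multiplicative function of context and freshness.

    base            -- floor rate the context layer guarantees
    context_depth   -- multiplier >= 1.0 for how much the agent knows
    freshness       -- multiplier >= 1.0 for how recent the lead is
    """
    return base * context_depth * freshness

# Rich context on fresh leads compounds well past the industry ceiling.
peak = expected_success(base=0.24, context_depth=1.6, freshness=1.5)
# Thin context on stale leads falls back to the floor, not to zero.
floor = expected_success(base=0.24, context_depth=1.0, freshness=1.0)
```

Because the factors multiply, improving either one lifts the result, and the base term is what keeps even the worst case near the industry ceiling.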

Pattern 3: Cost Per Outcome Beats Every Alternative

₹24.93 per qualified enrollment interaction. ₹10.79 per NPS response. Both are fractions of the BPO equivalent. The cost advantage isn't from cheaper telephony — JK Shah used ₹9/min, Unacademy used ₹3/min. It's from the context layer reducing wasted call time and increasing conversion per connected call.

[Data table has been removed during migration]

The cost-per-outcome math works at both price points. This is important for prospects evaluating voice AI across different budget tiers. Whether you're running a ₹9/min enrollment campaign or a ₹3/min feedback campaign, the Kathan OS makes the economics work by eliminating the waste that inflates cost in stateless systems.
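The arithmetic behind cost per outcome is worth making explicit. In the sketch below, only the ₹9/min telephony rate comes from the article; the average call length, connect rate, and conversion rate are hypothetical placeholders.

```python
def cost_per_outcome(rate_per_min: float,
                     avg_min_per_dial: float,
                     connect_rate: float,
                     conversion_per_connect: float) -> float:
    """Rupees spent per successful outcome.

    Cost of one dial divided by the expected outcomes per dial.
    """
    cost_per_dial = rate_per_min * avg_min_per_dial
    outcomes_per_dial = connect_rate * conversion_per_connect
    return cost_per_dial / outcomes_per_dial

# Hypothetical inputs: only the Rs. 9/min rate is from the article.
enrollment_cost = cost_per_outcome(rate_per_min=9.0,
                                   avg_min_per_dial=1.0,
                                   connect_rate=0.387,
                                   conversion_per_connect=0.9)
```

The formula also shows where the context layer attacks cost: connect rate and conversion sit in the denominator, so raising either one directly divides the cost per outcome, independent of the per-minute rate.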

[Stat card removed]

Pattern 4: Qualitative Data Comes Free with the Conversation

JK Shah captured objection types, language preferences, and callback requests as structured data. Unacademy captured NPS scores alongside qualitative feedback about specific courses, modules, and feature requests. Neither deployment required a separate data collection step. The conversation itself was the data pipeline.

This is a structural advantage of voice AI over email or SMS surveys. When a learner tells the agent "I gave a 6 because Module 4's video quality was poor," that's simultaneously an NPS data point, a product feedback signal, and a churn risk indicator. The voice agent captures all three in a single interaction. A traditional approach would require three separate tools.
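The "one utterance, three signals" idea can be sketched as a single structured record extracted from one learner response. The schema, field names, and keyword-matching heuristic are illustrative assumptions, not Kathan's actual data model.

```python
from dataclasses import dataclass, field

# Illustrative schema: field names are hypothetical, not Kathan's data model.
@dataclass
class CallSignal:
    nps_score: int                 # the survey data point
    product_feedback: str          # routed to the course/product team
    churn_risk: bool               # flagged for retention follow-up
    tags: list[str] = field(default_factory=list)

def extract(utterance: str, score: int) -> CallSignal:
    """Naive extraction: one learner utterance yields three signals at once."""
    return CallSignal(
        nps_score=score,
        product_feedback=utterance,
        churn_risk=score <= 6,                                   # detractor range
        tags=["module-feedback"] if "Module" in utterance else [],
    )

signal = extract("I gave a 6 because Module 4's video quality was poor", 6)
```

A survey tool, a feedback form, and a churn model would each capture one of these fields; the conversation captures all three in a single pass.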

Pattern 5: The Context Layer Is the Differentiator, Not the Voice

Both deployments used the same Context Engine. Both used context arithmetic to scope, filter, and rank information at call time. The voice quality mattered — but it was table stakes. Every serious voice AI vendor has acceptable TTS quality in 2026. The measurable performance gap came from agents that knew who they were calling and why.

"The voice is the interface. The context is the intelligence. Two deployments, two use cases, one consistent finding: the agents that carry memory outperform the agents that don't. By 10–15 percentage points. Every time."

The Aggregate Numbers

[Data table has been removed during migration]

These are production numbers, not pilot metrics: more than 45,000 calls across 36 campaigns for two independent clients. The consistency across deployments in connection rates, success rates, and cost efficiency is the strongest evidence that the Kathan context layer delivers repeatable results, not one-off wins.

What This Means for Your Evaluation

[Diagnostic box removed]

If you've dismissed voice AI based on a single vendor's underwhelming pilot, or if you're skeptical that any voice OS can consistently outperform industry benchmarks, the data from two independent EdTech deployments tells a different story. Start a 48-hour pilot with Alchemyst Kathan and add your own data point to the pattern.

Ready to build your next AI agent?