You Spent ₹3 Lakh on Voice AI and Got 200 Leads. Where Did the Money Go?

A forensic breakdown of why most voice AI budgets underperform — and how cont...


A mid-size EdTech company budgets ₹3,00,000 for an outbound campaign with Alchemyst Kathan. They upload 20,000 leads, configure a single script in English, and run for two weeks. The results arrive: 12% connection rate. Average call duration: 18 seconds — most hang up before the agent finishes the intro. 200 leads marked as "interested." Cost per lead: ₹1,500. The CFO asks what happened. Nobody has a good answer.


Where the Money Actually Went

The ₹3 lakh didn't disappear into a black hole. It was spent methodically — on the wrong things. Here's the forensic breakdown:


85% of the budget was wasted — not because the voice AI is ineffective, but because the deployment was context-blind. Let's walk through each failure:

Failure 1: One Language for a Multilingual Country

The lead list included prospects from Gujarat, Karnataka, Telangana, Maharashtra, Tamil Nadu, and Delhi. The script was configured in English only. Connection rates in Gujarat and Karnataka tanked — not because the leads were bad, but because an English call to a Gujarati-speaking parent about their child's CA coaching feels foreign. They hang up in 3 seconds.

In the JK Shah deployment, Alchemyst's Kathan engine operated in 12+ Indian languages — Hindi, Tamil, Telugu, Gujarati, Kannada, Marathi, Bengali, Malayalam, Punjabi, Odia, Assamese, and Urdu. Language selection wasn't configured per campaign; it was determined per lead based on region and metadata signals. Gujarat retarget campaigns hit 57.3% connection rates. The same leads, in English, would have connected at under 15%.
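The per-lead selection described above can be sketched as a lookup that prefers explicit lead metadata over a regional default. This is an illustrative sketch, not Kathan's actual API; the field names and the region-to-language map are hypothetical assumptions.

```python
# Hypothetical per-lead language routing. The region map and the
# lead fields ("preferred_language", "region") are illustrative.
REGION_LANGUAGE = {
    "Gujarat": "Gujarati",
    "Karnataka": "Kannada",
    "Telangana": "Telugu",
    "Maharashtra": "Marathi",
    "Tamil Nadu": "Tamil",
    "Delhi": "Hindi",
}

def pick_language(lead: dict, default: str = "English") -> str:
    """Choose the call language per lead, preferring explicit metadata
    (e.g. the language the lead used on a form) over a regional default."""
    if lead.get("preferred_language"):
        return lead["preferred_language"]
    return REGION_LANGUAGE.get(lead.get("region", ""), default)

print(pick_language({"region": "Gujarat"}))  # Gujarati
```

In a real deployment the mapping would come from lead enrichment data rather than a hard-coded table, but the principle is the same: the language decision happens per lead, not per campaign.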

Failure 2: Warm Leads Treated Like Cold Names

Among the 20,000 leads were people who had visited the company's website, attended a webinar, or filled out an inquiry form. These leads had demonstrated intent. They should have received a different opening and a different script. Instead, they got the same generic pitch as names purchased from a third-party list.

Context engineering solves this by feeding the Kathan agent prior interaction data at call time. A lead who attended a webinar on "CA Foundation 2026" gets an opening that references the webinar. A lead who filled a form asking about fees gets a call that leads with pricing. The agent's approach matches the lead's stage in the funnel.
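A minimal sketch of that idea: derive the opening line from the lead's interaction history at call time. The event types and field names below are hypothetical, not a real Kathan schema.

```python
def build_opening(lead: dict) -> str:
    """Pick a call opening from the lead's prior interactions.
    Event shapes ("webinar", "form") are illustrative assumptions."""
    events = lead.get("history", [])
    for e in events:
        if e["type"] == "webinar":
            # Reference the webinar the lead actually attended.
            return f"You joined our webinar on {e['topic']}, so I wanted to follow up."
        if e["type"] == "form" and e.get("asked") == "fees":
            # Lead already asked about pricing: lead with it.
            return "You asked about our fees, so let me walk you through the pricing."
    # No prior interaction: fall back to a cold opening.
    return "I'm calling about our upcoming courses."

lead = {"history": [{"type": "webinar", "topic": "CA Foundation 2026"}]}
print(build_opening(lead))
```

The point is not the string templates; it's that the opening is computed from what the system already knows, instead of being fixed at campaign setup.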

Failure 3: One Script for Two Objectives

The single script tried to cover both career guidance and course enrollment in the same flow. A lead interested in career guidance needs a consultative conversation. A lead ready to enroll needs logistics. Forcing both through the same script means neither gets served well. The voice agent can't adapt because it doesn't know what the lead cares about.

With context-aware dynamic script branching, the Kathan OS selects the right conversation flow based on the lead's profile and campaign objective. Career guidance leads get discovery questions. Enrollment-ready leads get a direct path to registration.
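Dynamic branching can be sketched as a small dispatch over the lead profile; the stage labels and flow names here are hypothetical, not Kathan's actual configuration.

```python
def select_flow(lead: dict) -> str:
    """Route a lead to a conversation flow by objective.
    Stage/interest labels are illustrative assumptions."""
    if lead.get("stage") == "enrollment_ready":
        return "enrollment"   # logistics: batch dates, fees, registration
    if lead.get("interest") == "career_guidance":
        return "discovery"    # consultative questions before any pitch
    return "qualification"    # unknown intent: qualify first
```

One script per objective, selected at dial time, is what lets each lead get the conversation it actually needs.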

Failure 4: No Retargeting

Leads who picked up but said "call me later" or "I'm busy right now" were never called again. In a two-week campaign, these warm leads — people who actually answered and engaged — were left on the table. No callback was scheduled. This campaign treated every dial as a one-shot opportunity.

In the JK Shah deployment, a core part of the strategy developed by our team in India was retargeting. Leads who expressed interest but didn't convert were automatically re-engaged with context from the prior call. The result: retarget campaigns connected at 42.7% and converted at 21.3%, significantly outperforming cold campaigns.

Spending Smarter: The Unacademy NPS Campaign

Waste isn't just about high-level connection rates; it's about the unit economics of every conversation. For their NPS feedback campaign, part of a deployment handling over 500,000 calls daily, Unacademy spent a total of ₹11,963 to have 1,109 meaningful conversations, collecting detailed qualitative feedback from their learners. This was a targeted, efficient operation.


The key was a lower per-minute rate combined with a highly effective, context-aware agent. By focusing on cost-per-outcome (a completed NPS survey) rather than just cost-per-minute, Unacademy achieved its objective with a fraction of the budget that a less intelligent voice OS would have required. This demonstrates that a higher price tag doesn't guarantee better results; the intelligence of the agent does.
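The cost-per-outcome framing is just spend divided by completed outcomes; running the numbers from both campaigns above makes the gap concrete.

```python
def cost_per_outcome(total_spend: float, outcomes: int) -> float:
    """Unit economics: total spend divided by completed outcomes."""
    return total_spend / outcomes

# Unacademy NPS campaign: ₹11,963 for 1,109 completed conversations
print(round(cost_per_outcome(11_963, 1_109), 2))   # 10.79 per conversation

# Opening EdTech scenario: ₹3,00,000 for 200 "interested" leads
print(round(cost_per_outcome(300_000, 200), 2))    # 1500.0 per lead
```

Roughly ₹11 per completed survey versus ₹1,500 per lead: same category of channel, two orders of magnitude apart on the metric that matters.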

The Same ₹3 Lakh, With Context

Now model the same budget with context engineering applied:



"The same ₹3 lakh budget, deployed with the Kathan (कथन) OS's context engineering, would have generated 2,400+ meaningful conversations instead of 200 leads. The technology isn't the problem. The architecture is."

If your voice AI budget is underperforming, the diagnosis is almost always the same: the agent lacks the context it needs to have relevant conversations. Better scripts won't fix it. Better voices won't fix it. Better lead lists won't fix it. Context engineering fixes it — by ensuring every dial is informed by everything the system already knows about that lead.

Start a pilot with Alchemyst Kathan, built in India for the world, and see the difference context makes on your next campaign.
