Alchemyst at 2026 - the year ahead

A raw, unfiltered (maybe random) stream of thoughts on behalf of our team.

Written by Anuran Roy

Summary

2025 was a huge year for us - for both our product and our mission. The world finally "got" why AI needs context and memory, and people woke up from the dream that AGI will be achieved through compute alone. Here's what we did in 2025 - and how we're looking at 2026.


2025 was a huge year for us - for both our product and our mission. The much-awaited GPT-5 arrived, Claude reigned supreme at coding, Google finally caught up in the AI race, and Elon entered the chat with xAI. The China-US AI wars heated up, while Europe quietly snuck up with Mistral and Aleph Alpha.

But something in AI felt different...

The reckoning

There were fewer people holding their breath for the next big "thing" in AI - model improvements became commonplace, but the craze died down. In 2024 we were used to people waiting for model releases with custom waitlists and pop-up reminders - the hype on which mass consumer apps like ChatGPT were built. Amidst all of this, there was a quiet win for people like us - the ones who kept shouting that the dream of AGI cannot be solved with compute ALONE.

The explosion of "context engineering"

The YC Summer School of 2025 had a stellar lineup of speakers - Sam Altman (if you don't know he's the CEO of OpenAI, you've been living under a rock), Elon Musk, Bill Gates, Satya Nadella, and so on. Amidst all of them, the talk that stood out most was Andrej Karpathy's. Later, in a tweet full of substance (yes, the usual Karpathy sensei - pardon my fanboying), he finally said it: "+1 for context engineering over prompt engineering".

The world finally understood and woke up to the fact:

 Context, not content, is king

The context buzz

The second half of 2025 was dominated by the new buzzword in town, "Context Engineering". As with most trends, the hype caught on, with startups raising rounds left and right on the promise of context engineering. At a projected CAGR of over 38%, the space is outpacing the growth of almost every other sector in AI so far.

When the hype dies down...

Substance remains. People had a lot of theories, many of which led to tangential research in different directions - some thriving, some not so much. The buzz around context went cold for a few weeks, but it sprang back to life in December.

Starting with this fantastic read by Jaya Gupta (Partner at Foundation Capital) that starts with:

Agents are cross-system and action-oriented. The UX of work is separating from the underlying data plane. Agents become the interface, but something still has to be canonical underneath.

Then came a bunch of other articles, the most notable of which is the fine piece on Customer Relationship Context Graphs (aka CRCGs) by Ishaan Chhabra. Context became truly mainstream.

Across almost all of them, a few common ideas prevailed:

  • How do we reimagine systems of record for the Agentic Era, across multiple business surfaces?
  • When the LLM is a black box, how do we add a Proof of Decision (PoD) layer that AI agents can decide on tractably?

That's where the fundamental thesis of Alchemyst kicks in so well. "A verifiable context graph across all data sources" sounds trivial at first, but it explodes once you consider the following pain points:

  • How do you maintain determinism while segmenting information into context data points that eventually form a graph? A lot of products in the market throw data at an LLM without accounting for possibly fudged IDs, numbers, and so on.
  • How do you resolve conflicting data points? Version control for data is not as simple as version control for code.
  • What about missing data? How do you fill the gaps? Coincidentally, this is the area where LLM-powered knowledge graph products hallucinate the most.
  • How do you incorporate business nuances into retrieval? Let's be honest - business-agnostic retrieval isn't what you want; business-aware retrieval is what you need.
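To make the first two pain points concrete, here's a minimal sketch in Python. All names here (`ContextPoint`, `resolve`) are hypothetical and not Alchemyst's actual schema or API - the point is only to show how a deterministic content hash and explicit versioning let you detect and resolve conflicts instead of letting an LLM silently merge (or invent) identifiers:

```python
import hashlib
from dataclasses import dataclass


@dataclass(frozen=True)
class ContextPoint:
    """One node in a context graph: a fact plus verifiable provenance."""
    source: str   # where the fact came from, e.g. "crm:deal-123"
    content: str  # the fact itself, as extracted
    version: int = 1  # bumped when the same source emits new content

    @property
    def fingerprint(self) -> str:
        # Deterministic ID: hashing source + content means the same input
        # always yields the same node - no model-fudged identifiers.
        raw = f"{self.source}|{self.content}".encode()
        return hashlib.sha256(raw).hexdigest()[:16]


def resolve(a: ContextPoint, b: ContextPoint) -> ContextPoint:
    """Naive conflict resolution: for the same source, keep the higher
    version instead of letting a model guess which fact is current."""
    if a.source != b.source:
        raise ValueError("not a conflict: different sources")
    return a if a.version >= b.version else b


old = ContextPoint("crm:deal-123", "ACV is $40k", version=1)
new = ContextPoint("crm:deal-123", "ACV is $55k", version=2)
assert resolve(old, new) is new
assert old.fingerprint == ContextPoint("crm:deal-123", "ACV is $40k").fingerprint
```

Real systems would of course need richer conflict policies than "highest version wins" (timestamps, trust scores per source, human review), but even this toy version shows why the problem is a data-plumbing problem, not a prompting problem.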

What did we build in 2025?

The good parts

Okay, now I'm wearing the hat of a techie, not a founder. 2025 was by no means a smooth ride for us. From a beta product, infuriated customers, and broken pipelines - we came a long way. A quick run-through of what we did, in no particular order:

  • Stabilized the platform, scaled down our costs, and improved our cost efficiency.
  • Achieved state-of-the-art performance in September, had a setback in November, and came back stronger with a best-in-class performance-to-cost ratio in December.
  • Introduced SDKs, MCPs and integrations with notable frameworks.
  • Built comprehensive documentation.
  • Landed our first enterprise customers, built gradual distribution with on-premise readiness.

Lessons learnt

But not all was great - there were a lot of bad patches as well:

  • Lost a few customers by failing to solve reliability issues - which taught us a lesson about maintenance windows.
     
  • Initially we tried to focus on a bit of everything, leaving us spread too thin. At one point the entire team burnt out, and quite a few valued members left. I had to face the worst possible nightmare for an early-stage founder - a team devoid of morale.
     
  • Trying to deploy en masse without evaluative benchmarks led us down loads of rabbit holes that we then had to dig ourselves out of. Ultimately, building a product that people want is of paramount importance.
     
  • Not focusing on our strengths, and trying to cover up too many of our weaknesses, taught us a lesson in transparency - no software is a silver bullet.

Looking at 2026...

Now that our thesis has consolidated - that the world will treat context as an invariant - we're coming up with loads of exciting new features that will serve businesses like no one else. A few that we've already shipped:

  • Collaborative context sharing: Why limit ourselves to SharePoint-esque data sharing, when we have a completely new surface of interactions through LLMs and AI Agents?
  • Data connectors: It's a hassle to work with data, but to fix that, we also need to fix how to bring that data to our platform. Data connectors are, and will be, an effort in that direction.

...And a lot more to come. We'll keep on innovating, and we're just getting started.

To a much more exciting 2026 ahead!
