LLMs for Analytics: Honest Take After a Year of Building

March 4, 2026  ·  Data & AI  ·  7 min read


I want to be careful not to write the usual "AI is changing everything!" post. You've read those. They're mostly written by people who've spent more time thinking about AI than actually integrating it into analytical workflows.

I've spent the last year doing the second thing, at Razorpay and in side projects. Here's what I actually think.


Let's start with what doesn't work

LLMs don't fix bad data. This sounds obvious, but I've seen teams reach for AI tooling as a way to avoid the harder work of data quality — and it doesn't work. The model confidently answers the wrong questions with the wrong data.

They don't reason causally. Ask an LLM "did this campaign cause the revenue lift?" and it will give you a confident, well-structured answer. That answer is a guess dressed up as analysis. For causal questions you still need experimental design, proper controls, and causal inference. The LLM can help you write the code for that — it cannot replace the thinking.

They also don't know your business. They don't know that your December data is always noisy because of year-end cleaning, or that user_id 0 is a test account, or that "activated" means something different in your product than the industry definition. That institutional context lives in people's heads and in your code, and no amount of prompting gets around its absence.


Where they actually help — five things I use regularly

1. EDA, faster. Exploratory data analysis is the most important and most tedious part of the job. Loading data, checking distributions, looking for nulls and outliers, finding obvious correlations. With a good coding assistant, I do this in a third of the time I used to. The value isn't just speed — it's that I end up exploring more. When iteration is cheap, you ask more questions.
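The first pass I mean is the boring one: shape, nulls, distributions, outliers. A minimal sketch in pandas, with a made-up table standing in for your data:

```python
import numpy as np
import pandas as pd

# Hypothetical events table; swap in your own data source.
df = pd.DataFrame({
    "user_id": [1, 2, 2, 3, None],
    "amount": [120.0, 85.5, np.nan, 4000.0, 60.0],
})

# The routine first-pass checks: shape, nulls, distribution summary.
print(df.shape)
print(df.isna().sum())          # null counts per column
print(df["amount"].describe())  # quick distribution summary

# Flag values more than 3 standard deviations from the mean.
amt = df["amount"].dropna()
outliers = amt[(amt - amt.mean()).abs() > 3 * amt.std()]
```

Nothing here is clever — that's the point. An assistant writes this boilerplate instantly, so you run it on every table instead of only the ones you suspect.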

2. SQL for non-technical stakeholders. The highest-leverage near-term use case, and the one I'd recommend most teams start with. Product managers and growth leads have data questions they never ask because the friction of "file a ticket with analytics" is too high. A decent Text-to-SQL tool collapses that friction. The analytics team's job shifts from writing queries to building the system and handling the hard questions. That's the right direction.
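Most of what separates usable text-to-SQL from a toy is grounding the model in your actual schema rather than letting it guess column names. A sketch of that prompt assembly — the function name and rules are my own, not a standard API:

```python
def build_sql_prompt(question: str, schema: str) -> str:
    """Assemble a text-to-SQL prompt grounded in the real schema.

    Passing the schema explicitly stops the model from inventing
    plausible-looking table and column names.
    """
    return f"""You write read-only SQL for business users.

Schema:
{schema}

Question: {question}

Rules:
- SELECT statements only, no DDL or DML.
- Use only tables and columns from the schema above.
- Return the SQL and nothing else."""

# Usage, with any LLM client (e.g. the Gemini SDK used below):
# sql = model.generate_content(build_sql_prompt(question, schema)).text
```

In production you'd also validate the generated SQL before running it (read-only role, query allowlist, row limits) — the prompt rules alone are not a security boundary.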

3. Automated metric narration. Weekly business reviews. Monthly reports. Anomaly alerts. Someone has to translate numbers into sentences — and it's usually the analyst, which is a waste of their time for routine reporting. I've built a system at work that takes the week's key metrics, passes them to Gemini with enough context, and generates a 3-4 sentence Slack summary automatically. Took a day to build. Saves a few hours a week.

def narrate_metric(metric_name, current, previous, context, model):
    """Generate a short, Slack-ready summary of a weekly metric.

    `model` is a Gemini-style client exposing generate_content().
    """
    if previous == 0:
        # No baseline to compare against; don't divide by zero.
        return f"{metric_name}: {current:,.0f} this week (no prior-week baseline)."

    pct = ((current - previous) / previous) * 100
    direction = "up" if pct >= 0 else "down"

    prompt = f"""Analytics assistant. Write 2-3 sentences on this weekly metric.

Metric: {metric_name}
This week: {current:,.0f} ({direction} {abs(pct):.1f}% vs last week)
Context: {context}

Be specific. Flag if change exceeds 10%. No jargon. No filler."""

    return model.generate_content(prompt).text

4. Classifying free-text data at scale. This one unlocks something genuinely new. Support tickets, NPS comments, cancellation reasons — we've always collected this data and almost never analysed it systematically because it's time-consuming to classify manually. LLMs make this fast and cheap. I once classified 40,000 support tickets into 12 reason categories in about 20 minutes. The business had been asking for that analysis for two years.
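The shape of that kind of job is simple: a constrained prompt per ticket, plus a guard for when the model wanders off the category list. A sketch — the categories and function names here are illustrative, and in practice you'd batch and parallelise the calls:

```python
# Hypothetical reason categories; yours will differ.
CATEGORIES = ["billing", "bug", "feature_request", "account_access", "other"]

def build_classify_prompt(ticket_text: str) -> str:
    """Constrain the model to exactly one label from a fixed list."""
    return (
        "Classify this support ticket into exactly one category from: "
        + ", ".join(CATEGORIES)
        + ". Reply with the category only.\n\nTicket: "
        + ticket_text
    )

def classify(tickets, model):
    """Label each ticket via a Gemini-style client (generate_content)."""
    results = []
    for t in tickets:
        label = model.generate_content(build_classify_prompt(t)).text.strip()
        # Guard against the model inventing a category.
        results.append(label if label in CATEGORIES else "other")
    return results
```

The guard matters more than it looks: on tens of thousands of rows, even a 1% rate of invented labels will quietly corrupt your category counts.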

5. Code documentation and review. Analytics codebases are badly documented. LLMs are good at explaining what a complex SQL transformation does and adding useful comments. Not glamorous, but it matters — especially for anything that other people need to maintain.
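A minimal version of this is, again, just a prompt wrapper — the wording below is my own sketch, not a standard recipe:

```python
def build_explain_prompt(sql: str) -> str:
    """Ask a model to document a SQL transformation for maintainers."""
    return (
        "You document analytics SQL for the next maintainer.\n\n"
        "Explain what this query does in plain English, then rewrite it "
        "with inline comments on every non-obvious step. "
        "Do not change the logic.\n\n"
        "SQL:\n" + sql
    )

# Usage: explanation = model.generate_content(build_explain_prompt(query)).text
```

"Do not change the logic" is the important line — without it, models will happily "improve" the query while documenting it.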


The shift in what the job looks like

Here's what I notice changing in my own work: I spend less time on the mechanical translation parts — question to query, data to chart, numbers to sentences — and more time on the harder stuff. Defining what question is even worth asking. Catching the places where data is misleading. Designing experiments properly.

I think this is the real change. Not that AI makes analysts redundant — it makes the boring parts of the job smaller, which means more time for the parts that actually require judgment.

Whether that's a good thing depends on whether you find the boring parts of analytics frustrating or comfortable. Honestly, some people prefer the predictability of writing a known query to the ambiguity of doing genuine analytical thinking. That's fine — but it's worth being honest about which camp you're in, because the job is shifting.


Where to start if you haven't yet

Don't start with the most ambitious thing. Start with the most repetitive thing your team does — the weekly report that takes 3 hours and follows the same structure every time. Automate that first. Get comfortable with the tooling. Then expand.

The teams I've seen get the most value from LLMs aren't the ones chasing the most sophisticated use cases. They're the ones who identified the boring, high-volume work and made it disappear — freeing people to think harder about the actual problems.