FP&A · Strategic Planning · Python · Monte Carlo · NumPy · Plotly
Every quarter, the finance and strategy team needed to present revenue forecasts for 12, 24, and 36 months to the board. The existing approach was a spreadsheet with three fixed scenarios — bear, base, bull — where each scenario was a single point estimate derived by manually adjusting a few key assumptions.
The problem: single-point forecasts communicate false precision. A "₹500Cr ARR by Q4" projection carries no information about how confident you are, what assumptions drive that number, or what range of outcomes is plausible. Board decisions made on these numbers often carried unquantified risk.
Instead of three fixed scenarios, we built a simulation engine where each key driver is modelled as a probability distribution. Running 10,000 simulations samples from those distributions and produces a full distribution of revenue outcomes — including percentile ranges, tail risks, and sensitivity rankings.
Key drivers modelled (parameters as in the code below):
- Monthly new users: Normal(25,000, 4,000), acquisition volume around the historical run rate
- Activation rate: Beta(8, 2), the share of new users who become active (mean ≈ 0.8)
- Monthly churn rate: Beta(2, 18), skewed toward low values (mean ≈ 0.1)
- ARPU: Lognormal(μ = 4.5, σ = 0.4), right-skewed revenue per active user
```python
import numpy as np

def simulate_revenue(months: int = 24, n_sims: int = 10_000) -> np.ndarray:
    """Return an array of shape (n_sims, months) with monthly revenue."""
    rng = np.random.default_rng(seed=42)  # fixed seed for reproducible board decks
    # Sample every driver once per simulation-month
    monthly_new_users = rng.normal(loc=25_000, scale=4_000, size=(n_sims, months))
    activation_rate = rng.beta(a=8, b=2, size=(n_sims, months))
    churn_rate = rng.beta(a=2, b=18, size=(n_sims, months))
    arpu = rng.lognormal(mean=4.5, sigma=0.4, size=(n_sims, months))

    revenue = np.zeros((n_sims, months))
    active_base = np.zeros(n_sims)
    for t in range(months):
        new_activated = monthly_new_users[:, t] * activation_rate[:, t]
        active_base = active_base * (1 - churn_rate[:, t]) + new_activated
        revenue[:, t] = active_base * arpu[:, t]
    return revenue

sims = simulate_revenue(months=36, n_sims=10_000)
# sims.sum(axis=1) gives total 36-month revenue per simulation
```
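One immediate payoff of a full outcome distribution: a point question like "will we hit the target?" becomes a probability statement. A minimal sketch, restating the simulator so the snippet runs standalone; the `target` value is purely illustrative, not a figure from the actual plan:

```python
import numpy as np

def simulate_revenue(months: int = 24, n_sims: int = 10_000) -> np.ndarray:
    """Return an array of shape (n_sims, months) with monthly revenue."""
    rng = np.random.default_rng(seed=42)
    monthly_new_users = rng.normal(loc=25_000, scale=4_000, size=(n_sims, months))
    activation_rate = rng.beta(a=8, b=2, size=(n_sims, months))
    churn_rate = rng.beta(a=2, b=18, size=(n_sims, months))
    arpu = rng.lognormal(mean=4.5, sigma=0.4, size=(n_sims, months))
    revenue = np.zeros((n_sims, months))
    active_base = np.zeros(n_sims)
    for t in range(months):
        active_base = (active_base * (1 - churn_rate[:, t])
                       + monthly_new_users[:, t] * activation_rate[:, t])
        revenue[:, t] = active_base * arpu[:, t]
    return revenue

sims = simulate_revenue(months=36, n_sims=10_000)
total = sims.sum(axis=1)  # total 36-month revenue per simulation

target = 5.5e8  # illustrative threshold; substitute the board's actual target
p_hit = (total >= target).mean()  # fraction of simulated futures that clear it
print(f"P(36-month revenue >= target) = {p_hit:.1%}")
```

The same one-liner answers downside questions too, e.g. `(total <= floor).mean()` for the probability of landing below a covenant or cash-planning floor.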
From the 10,000 simulations we extracted:
```python
import pandas as pd

monthly_revenue = sims  # shape: (10_000, 36)
summary = pd.DataFrame({
    'month': range(1, 37),
    'p10': np.percentile(monthly_revenue, 10, axis=0),
    'p50': np.percentile(monthly_revenue, 50, axis=0),
    'p90': np.percentile(monthly_revenue, 90, axis=0),
})
```
```python
# Sensitivity: correlate each driver's samples with total revenue
total_rev = monthly_revenue.sum(axis=1)
# Spearman rank correlation of each input vs. output
# (run for each sampled driver array)
```
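To make that sensitivity step concrete, here is one way to produce the tornado-chart inputs: reduce each sampled driver array to its per-simulation mean and rank-correlate that with total revenue. The per-sim-mean reduction is an assumption on my part (one reasonable way to collapse the `(n_sims, months)` driver arrays to one value per simulation), and the Spearman helper is a pure-NumPy version that assumes no ties, which holds for continuous draws:

```python
import numpy as np

def simulate_with_drivers(months: int = 36, n_sims: int = 10_000, seed: int = 42):
    """Run the simulation and also return the sampled driver arrays."""
    rng = np.random.default_rng(seed)
    drivers = {
        'new_users':  rng.normal(loc=25_000, scale=4_000, size=(n_sims, months)),
        'activation': rng.beta(a=8, b=2, size=(n_sims, months)),
        'churn':      rng.beta(a=2, b=18, size=(n_sims, months)),
        'arpu':       rng.lognormal(mean=4.5, sigma=0.4, size=(n_sims, months)),
    }
    revenue = np.zeros((n_sims, months))
    active_base = np.zeros(n_sims)
    for t in range(months):
        active_base = (active_base * (1 - drivers['churn'][:, t])
                       + drivers['new_users'][:, t] * drivers['activation'][:, t])
        revenue[:, t] = active_base * drivers['arpu'][:, t]
    return revenue, drivers

def spearman(x: np.ndarray, y: np.ndarray) -> float:
    """Spearman rank correlation (no tie handling; fine for continuous draws)."""
    rx = x.argsort().argsort()  # ranks 0..n-1
    ry = y.argsort().argsort()
    return float(np.corrcoef(rx, ry)[0, 1])

revenue, drivers = simulate_with_drivers()
total_rev = revenue.sum(axis=1)

# Rank drivers by |rho|: this ordering is the tornado chart, widest bar first
ranking = sorted(
    ((name, spearman(arr.mean(axis=1), total_rev))
     for name, arr in drivers.items()),
    key=lambda kv: abs(kv[1]), reverse=True)
for name, rho in ranking:
    print(f"{name:12s} rho = {rho:+.2f}")
# Expect churn to carry a negative rho and to rank at or near the top,
# consistent with the retention finding below.
```

Sorting by |rho| and plotting horizontal bars (widest at the top) gives the tornado chart directly.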
The tornado chart from the sensitivity analysis surfaced a non-obvious insight: monthly churn rate had roughly 2× the impact on 36-month revenue of the new-user acquisition rate. A 1pp improvement in churn was worth more than a 10% increase in acquisition volume.
This directly influenced Q3 planning: the retention budget was increased by 30% and acquisition efficiency targets were tightened, rather than simply growing top-of-funnel spend.