Financial Models in Practice · Part 15 of 16
Monte Carlo DCF: Adding Probability and Uncertainty to Your Valuation
Your DCF model for WidgetCo spits out an implied share price of £8.47. You have run a sensitivity table — varying WACC and terminal growth — and the range comes back at £6.01 to £18.81. You present this to your manager and the immediate question is: “Which end of that range is more likely?”
A standard sensitivity table cannot answer that. It treats every combination of WACC and growth as equally probable. In reality, some combinations are far more likely than others, and the assumptions you are varying are not independent of each other — a higher revenue growth rate tends to correlate with a tighter margin profile and a slightly higher risk premium. The sensitivity table ignores all of that.
Monte Carlo simulation is the tool that addresses this limitation. Instead of asking “what happens at each specific combination of inputs?”, it asks “if we draw our assumptions randomly from realistic probability distributions, and repeat that process ten thousand times, what does the distribution of possible valuations look like?” The result is not a single number or a table of point estimates — it is a full probability distribution of enterprise value, with percentiles you can quote directly to a client, investment committee, or examiner.
This technique is used routinely on commodity trading risk desks, in corporate treasury for stress-testing capital structures, and in sophisticated PE firms for validating acquisition assumptions. Once you understand the mechanics, it takes roughly an afternoon to layer onto a DCF model you have already built.
The Problem With Single-Point Estimates
Return to the WidgetCo base case. The five-year DCF, with an 8% WACC and a 2.5% terminal growth rate, gives an enterprise value of approximately £9,270,000 and an implied equity value of £8,470,000 — £8.47 per share on 1 million shares outstanding with £800,000 of net debt.
That number is the output of one specific combination of assumptions. Change revenue growth from 8% to 10% and WACC from 8.0% to 7.5% and the value changes materially. But those two changes do not move independently in the real world. A company growing faster is usually perceived as riskier by the market, not less risky — so a higher growth rate might warrant a slightly higher WACC, not a lower one. The sensitivity table, which varies one or two inputs at a time while holding everything else fixed, misses these interactions entirely.
There is a second limitation: the sensitivity table implicitly assumes that the extreme values in your range — say, 10% growth and 7.5% WACC simultaneously — are possible scenarios deserving the same weight as the central case. In practice, that combination might be an outlier: plausible but unlikely. The analyst presenting that range has no way of communicating relative likelihood.
Monte Carlo handles both issues. By drawing each assumption from a specified distribution on every iteration, it naturally reflects the relative likelihood of each outcome. Assumptions that cluster tightly around a central value (low standard deviation) will rarely produce extreme results; assumptions with wide uncertainty will drive more dispersion in the output. And if you specify correlations between inputs, the model respects those too.
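To make the correlation point concrete, here is one way to draw correlated inputs, sketched in NumPy. The −0.4 correlation between revenue growth and EBIT margin is purely illustrative (the article does not specify a figure); it encodes the intuition that faster growth tends to come with tighter margins:

```python
import numpy as np

rng = np.random.default_rng(0)

# Means and standard deviations for (revenue growth, EBIT margin),
# matching the article's distribution table. The -0.4 correlation
# is an illustrative assumption, not a calibrated number.
means = [0.08, 0.15]
sds = np.array([0.03, 0.02])
corr = np.array([[1.0, -0.4],
                 [-0.4, 1.0]])
cov = corr * np.outer(sds, sds)   # covariance = correlation scaled by SDs

draws = rng.multivariate_normal(means, cov, size=10_000)
growth, margin = draws[:, 0], draws[:, 1]

# The sample correlation of the draws recovers roughly -0.4
print(np.corrcoef(growth, margin)[0, 1])
```

Every simulation run then reads its growth and margin pair from the same row of `draws`, so high-growth runs systematically land on lower-margin draws.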
Assigning Distributions to Assumptions
The first task in any Monte Carlo model is replacing each point estimate in your Assumptions tab with a probability distribution. You are not changing your central view — you are adding an explicit description of your uncertainty around that view.
For WidgetCo, the key uncertain assumptions and their distributions are:
| Assumption | Base case | Distribution | Parameters |
|---|---|---|---|
| Revenue growth (Yr 2–5) | 8% | Normal | Mean = 8%, SD = 3% |
| EBIT margin | 15% | Normal | Mean = 15%, SD = 2% |
| WACC | 8.0% | Normal | Mean = 8.0%, SD = 0.8% |
| Terminal growth rate | 2.5% | Triangular | Min = 1.5%, Mode = 2.5%, Max = 3.5% |
| CapEx % of revenue | 6% | Normal | Mean = 6%, SD = 1% |
Choosing the right distribution requires thought:
Normal distribution is the right choice when uncertainty is roughly symmetric around a central estimate and you have no strong reason to expect skew. Revenue growth rates and EBIT margins tend to fit well — they can move in either direction, and extreme values become progressively less likely as you move away from the mean.
Triangular distribution suits assumptions where you have a clear minimum, most-likely value, and maximum, but limited data to estimate a standard deviation. Terminal growth rate is a classic candidate: most analysts agree it should sit somewhere between long-run inflation and GDP growth (roughly 1.5–3.5%), with the mode around 2.5%.
Log-normal distribution is appropriate for assumptions that cannot go negative and exhibit right-skewed uncertainty — CapEx as a fraction of revenue, for example, or a company’s EV/EBITDA multiple.
Uniform distribution is the right choice when you have very limited information and all values within a range are roughly equally plausible. Use it sparingly; it tends to produce wider output distributions than are really warranted.
One practical constraint: the terminal growth rate must always be less than WACC, otherwise the Gordon Growth Model blows up. In the simulation, clip any draw where terminal growth meets or exceeds WACC to avoid division by zero or negative terminal values.
Running the Simulation in Excel
Excel is surprisingly capable for Monte Carlo analysis on a model of this size. The approach uses NORM.INV(RAND(), mean, sd) to generate random draws from a normal distribution, and a Data Table to repeat the calculation thousands of times without any macros.
Step 1 — Create a simulation row. Add a dedicated section in your model (or a new tab) with one cell per assumption. In each cell, enter the distribution formula rather than a hard-coded number:
Revenue growth: =NORM.INV(RAND(), 8%, 3%)
EBIT margin: =NORM.INV(RAND(), 15%, 2%)
WACC: =NORM.INV(RAND(), 8.0%, 0.8%)
CapEx % revenue: =NORM.INV(RAND(), 6%, 1%)
Terminal growth: =MIN(NORM.INV(RAND(), 2.5%, 0.5%), WACC_cell - 0.5%)
For the triangular terminal growth rate, Excel does not have a built-in TRIANG.INV function. The workaround is to use the inverse CDF formula directly, or use the Normal approximation shown above with a clipping constraint.
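For completeness, the triangular inverse CDF mentioned above is short enough to write out. A Python sketch (the same formula can be built in Excel with a helper cell holding `RAND()` and a nested `IF`):

```python
import math
import random

def triangular_inv(u, lo, mode, hi):
    """Inverse CDF of the triangular distribution: maps u in [0, 1] to a draw."""
    fc = (mode - lo) / (hi - lo)     # CDF value at the mode
    if u < fc:
        return lo + math.sqrt(u * (hi - lo) * (mode - lo))
    return hi - math.sqrt((1 - u) * (hi - lo) * (hi - mode))

# Terminal growth draw: min 1.5%, mode 2.5%, max 3.5%
g = triangular_inv(random.random(), 0.015, 0.025, 0.035)
```

Feeding a uniform random number through this function gives exact triangular draws, so the normal approximation with clipping becomes unnecessary.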
Step 2 — Link your FCF model to the simulation row. The DCF calculation should reference these simulation cells rather than the main Assumptions tab. Do not hard-code any assumption directly into a formula.
Step 3 — Create an output cell. One cell should display the single output metric you want to track across simulations — in this case, implied share price.
Step 4 — Set up the Data Table. In a blank column, enter row numbers from 1 to 1,000 (or 2,000 for better precision). Select a range that includes those row numbers and one adjacent column for the output. Go to Data → What-If Analysis → Data Table. Leave the Row Input Cell blank. Set the Column Input Cell to any blank cell that does not affect the model. Click OK.
Excel now recalculates the entire sheet once per row, generating a new RAND() draw each time and recording the output share price. You end up with 1,000 independent valuations.
Step 5 — Calculate percentile statistics. Use PERCENTILE.INC() to summarise the 1,000 outputs:
| Percentile | Implied Share Price |
|---|---|
| P10 (bear) | £5.82 |
| P25 | £6.94 |
| P50 (median) | £8.51 |
| P75 | £10.43 |
| P90 (bull) | £12.76 |
Step 6 — Build a histogram. Use Excel’s FREQUENCY() function or the built-in histogram chart to visualise the distribution. The shape of the histogram is itself informative — more on that below.
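If you prefer to inspect the shape outside Excel, NumPy's histogram binning does the same job as `FREQUENCY()`. A sketch on stand-in data (the log-normal draws below are illustrative, not the WidgetCo outputs):

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-in for 10,000 simulated share prices (right-skewed, illustrative only)
prices = rng.lognormal(np.log(8.5), 0.3, 10_000)

# Bin the outputs into 20 buckets and print a crude text histogram
counts, bin_edges = np.histogram(prices, bins=20)
for count, left in zip(counts, bin_edges[:-1]):
    print(f"£{left:6.2f} | {'#' * (count // 50)}")
```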
Monte Carlo Simulation — Pseudocode Walkthrough
For those who want to implement this in Python (using NumPy) or in any other scripting environment, here is the complete algorithm in plain language. The logic is identical whether you are running it in Excel, Python, or R — only the syntax differs.
    SET n_simulations = 10,000

    FOR each simulation (repeat 10,000 times):

        DRAW random assumptions:
            revenue_growth ~ Normal(mean = 8%, sd = 3%) → clip to [−10%, +25%]
            ebit_margin ~ Normal(mean = 15%, sd = 2%)
            wacc ~ Normal(mean = 8.0%, sd = 0.8%) → clip to [4%, 15%]
            terminal_g ~ Triangular(min = 1.5%, mode = 2.5%, max = 3.5%)
            constrain: terminal_g < wacc

        PROJECT 5-year revenue:
            base_revenue = £5,000 (all figures in £ thousands)
            revenue[t] = revenue[t−1] × (1 + revenue_growth) for t = 1 … 5

        CALCULATE FCF each year:
            NOPAT[t] = revenue[t] × ebit_margin × (1 − tax rate)
            FCF[t] = NOPAT[t] × 85% (simplified CapEx / working capital adjustment)

        DISCOUNT to present value:
            PV_FCF = Σ FCF[t] / (1 + wacc)^t for t = 1 … 5

        TERMINAL VALUE:
            TV = FCF[5] × (1 + terminal_g) / (wacc − terminal_g)
            PV_TV = TV / (1 + wacc)^5

        ENTERPRISE VALUE:
            EV = PV_FCF + PV_TV

        EQUITY VALUE & SHARE PRICE:
            equity_value = EV − net_debt (£800)
            implied_price = equity_value / shares_outstanding (1,000)

        STORE implied_price

    SUMMARISE 10,000 results:
        Report: P10, P25, P50 (median), P75, P90
        Plot: histogram of implied_price, with a vertical dashed line at the mean
Walk through the logic iteration by iteration: in simulation run 1, revenue growth might be drawn as 6.2%, WACC as 8.4%, and terminal growth as 2.1%. The model calculates FCF for each of the five years using those specific values, discounts them, adds the terminal value, and records an implied share price of, say, £7.14. In run 2, the draws might be 9.8%, 7.6%, and 2.9%, producing a share price of £11.43. After 10,000 runs, you have 10,000 different share prices, each a legitimate consequence of assumptions drawn from your specified distributions.
The clipping constraints are important in practice. Drawing revenue growth of −15% or WACC of 2% is theoretically possible from a normal distribution but economically implausible for WidgetCo. Setting hard bounds prevents a small number of extreme draws from distorting the tails of your output distribution.
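The pseudocode translates almost line-for-line into vectorised NumPy. A sketch under the article's assumptions — note that the 25% tax rate is a placeholder of mine (the post leaves WidgetCo's tax rate unstated), so the resulting percentiles will not exactly match the table above:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Parameters from the article; the tax rate is an assumed placeholder
tax_rate = 0.25
base_revenue = 5_000        # £ thousands
net_debt = 800              # £ thousands
shares = 1_000              # thousands of shares (i.e. 1 million shares)

# Draw every simulation's assumptions at once, with the clipping bounds
growth = np.clip(rng.normal(0.08, 0.03, n), -0.10, 0.25)
margin = rng.normal(0.15, 0.02, n)
wacc = np.clip(rng.normal(0.08, 0.008, n), 0.04, 0.15)
term_g = rng.triangular(0.015, 0.025, 0.035, n)
term_g = np.minimum(term_g, wacc - 0.005)    # enforce terminal_g < WACC

# Project five years of revenue and FCF (n simulations x 5 years)
years = np.arange(1, 6)
revenue = base_revenue * (1 + growth[:, None]) ** years
nopat = revenue * margin[:, None] * (1 - tax_rate)
fcf = nopat * 0.85                           # simplified CapEx / WC adjustment

# Discount the explicit period and the Gordon Growth terminal value
pv_fcf = (fcf / (1 + wacc[:, None]) ** years).sum(axis=1)
tv = fcf[:, -1] * (1 + term_g) / (wacc - term_g)
pv_tv = tv / (1 + wacc) ** 5

ev = pv_fcf + pv_tv
price = (ev - net_debt) / shares             # £ thousands / thousand shares = £ per share

p10, p50, p90 = np.percentile(price, [10, 50, 90])
print(f"P10 £{p10:.2f}  P50 £{p50:.2f}  P90 £{p90:.2f}")
```

Because every array operation is vectorised, the explicit `FOR` loop in the pseudocode disappears: the 10,000 simulations run as a handful of array expressions.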
Interpreting and Presenting the Output
The output histogram will almost always show a right skew for a DCF simulation. This is not an artefact of the model — it reflects a genuine asymmetry in the valuation. On the downside, equity value is bounded at zero: even if every assumption goes wrong, the share price cannot fall below £0. On the upside, there is no corresponding ceiling. A combination of strong growth, margin expansion, and a low discount rate can produce very high valuations. The distribution stretches further to the right than to the left.
This asymmetry is one of the reasons analysts using only symmetric sensitivity tables understate the potential upside of growth businesses. The Monte Carlo captures it naturally.
For communicating results to a non-technical audience, the percentile summary table is your best tool. A clear way to frame it:
“Based on our Monte Carlo analysis across 10,000 simulations, the median implied share price is £8.51. There is a 10% probability the value falls below £5.82 (P10 bear case) and a 10% probability it exceeds £12.76 (P90 bull case). The current market price of £7.50 sits in the 30th to 35th percentile of our distribution — meaning roughly two-thirds of simulated outcomes produce a higher value.”
That framing is far more actionable than “our sensitivity range is £6 to £19.” It attaches probabilities to specific outcomes.
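Given the array of simulated prices, the percentile rank of the current market price is a one-line empirical-CDF calculation. A sketch (the log-normal array below is a stand-in for your actual simulation outputs):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for 10,000 simulated share prices (illustrative right-skewed shape)
prices = rng.lognormal(np.log(8.5), 0.3, 10_000)

market_price = 7.50
# Fraction of simulated outcomes below the market price = empirical percentile rank
percentile_rank = (prices < market_price).mean() * 100
print(f"Market price sits at roughly the {percentile_rank:.0f}th percentile")
```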
It is also worth being clear about what Monte Carlo is not. It is not a replacement for scenario analysis. Scenarios are specific, internally consistent stories — a recession scenario changes multiple assumptions in coordinated, economically logical ways. Monte Carlo samples randomly from the full joint distribution of assumptions without imposing those internal consistencies. The two approaches are complementary: use scenario analysis to stress-test specific narratives, and Monte Carlo to characterise the full distribution of outcomes.
Finally, the width of your output distribution is only as good as your input distributions. If you assume a standard deviation of 3% for revenue growth but the actual historical volatility of comparable companies is 8%, your P10–P90 range will be too narrow and you will understate the true uncertainty. Calibrate your distribution parameters against real data wherever possible.
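One simple calibration check: pool the historical annual growth rates of comparable companies and compare their standard deviation with the SD you assumed. The comp figures below are made up for illustration:

```python
import numpy as np

# Hypothetical historical annual revenue growth for three comparable companies
comps_growth = np.array([
    [0.05, 0.12, 0.03, 0.09, 0.07],   # Comp A
    [0.15, 0.02, 0.11, 0.06, 0.10],   # Comp B
    [0.08, 0.04, 0.13, 0.01, 0.09],   # Comp C
])

# Sample SD across all comp-years, to sanity-check the 3% SD
# assumed for revenue growth in the simulation
observed_sd = comps_growth.flatten().std(ddof=1)
print(f"Observed SD of comp growth: {observed_sd:.1%}")
```

If the observed figure is materially wider than your assumed SD, your simulated P10–P90 range is too narrow and should be re-parameterised.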
Key Takeaways
- Single-point DCF estimates present false precision; the sensitivity table improves on this but treats all combinations as equally likely and ignores correlations between inputs
- Monte Carlo simulation replaces point estimates with probability distributions and runs the DCF thousands of times, producing a full distribution of possible valuations with probabilities attached
- The Excel approach — NORM.INV(RAND(), mean, sd) combined with the Data Table trick — works well for models of this complexity without requiring any specialist software or macros
- Right skew is typical in DCF Monte Carlo outputs: equity value is bounded at zero on the downside but has unlimited theoretical upside
- Present the P10/P50/P90 range alongside the point estimate; this communicates uncertainty in a form that is both statistically rigorous and immediately interpretable by a non-technical audience
Practice
Run the Excel Monte Carlo for 1,000 simulations on the WidgetCo DCF, using the assumption distributions from this post. Record the P10 and P90 implied share prices. Then change the standard deviation of the revenue growth assumption from 3% to 5% and re-run — observe how the P10–P90 spread widens and note how the median shifts. As a further extension, add a correlation between revenue growth and EBIT margin (higher growth tends to compress margins due to investment spend) by using a conditional formula for the margin draw, and compare the output distribution to the uncorrelated version.