CVaR, short for Conditional Value at Risk, is a risk measure that tells you the average loss you can expect during the worst-case scenarios for an investment or portfolio. If you set CVaR at the 95% confidence level, it calculates the average loss across the worst 5% of outcomes. It goes by several other names in finance: expected shortfall (ES), average value at risk (AVaR), and expected tail loss (ETL). They all refer to the same concept.
How CVaR Relates to Value at Risk
To understand CVaR, it helps to first understand Value at Risk (VaR), the measure it builds on. VaR gives you a single threshold: the loss level that your portfolio won’t exceed with some probability, typically 95% or 99%. For example, a one-day 95% VaR of $1 million means there’s a 95% chance your losses won’t exceed $1 million on any given day.
The problem with VaR is that it tells you nothing about what happens in the remaining 5% of cases. Your losses beyond that threshold could be $1.1 million or $50 million, and VaR treats both situations identically. It’s a fence that marks where the danger zone begins but doesn’t tell you what’s inside it.
CVaR crosses that fence. It calculates the average of all losses that fall beyond the VaR threshold. So if your 95% VaR is $1 million, your 95% CVaR might be $2.3 million, meaning that when losses do exceed $1 million (the worst 5% of days), they average $2.3 million. This makes CVaR far more informative about extreme losses, which is exactly the kind of risk that can wipe out a fund or trigger a financial crisis.
A Simple Example
Imagine you have 100 daily return observations for a portfolio, and you want to calculate CVaR at the 95% confidence level. You’d sort those returns from worst to best and isolate the bottom 5%, which is the five worst days. Say those five daily losses were $800,000, $1.2 million, $1.5 million, $2 million, and $3.5 million. Your CVaR is simply the average of those five values: $1.8 million.
VaR at the same confidence level would just be the boundary point, roughly $800,000 in this case. CVaR captures the full severity of the tail by averaging all the losses beyond that point. In continuous distributions (which real-world portfolios more closely resemble), the calculation uses an integral rather than a simple average, but the logic is identical: find the cutoff, then average everything worse than it.
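The worked example above translates directly into a historical-simulation calculation. This is a minimal sketch: the five worst losses come from the example, and the other 95 observations are hypothetical filler, not market data.

```python
def historical_var_cvar(losses, level=0.95):
    """Compute historical VaR and CVaR from positive loss amounts.

    VaR is the boundary of the worst (1 - level) fraction of outcomes;
    CVaR is the average of everything at or beyond that boundary.
    """
    k = int(len(losses) * (1 - level))       # size of the tail (5 of 100 here)
    tail = sorted(losses, reverse=True)[:k]  # the k worst losses
    return tail[-1], sum(tail) / k           # (VaR, CVaR)

# 100 hypothetical daily losses: the five from the example plus 95 milder days
observations = [800_000, 1_200_000, 1_500_000, 2_000_000, 3_500_000] + [100_000] * 95
var_95, cvar_95 = historical_var_cvar(observations)
print(var_95, cvar_95)  # 800000 1800000.0, matching the example
```

Sorting once and slicing the top of the list keeps the cutoff and the tail average in one place, which mirrors how historical-simulation CVaR is usually described: find the boundary, then average everything worse.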
Why CVaR Is Considered a Better Risk Measure
Risk measures in finance are evaluated against a set of four mathematical properties, collectively called “coherence.” A coherent risk measure must satisfy all four:
- Subadditivity: The risk of two portfolios combined should never exceed the sum of their individual risks. Diversification should never make measured risk worse.
- Positive homogeneity: If you double your position, your measured risk doubles.
- Monotonicity: If one portfolio always loses at least as much as another, it should show at least as much measured risk.
- Translation invariance (the risk-free condition): Adding a guaranteed return to a portfolio reduces measured risk by that exact amount.
CVaR satisfies all four properties. VaR fails the subadditivity test, which is the most practically important one. There are real cases where VaR tells you that combining two portfolios increases your measured risk, even though diversification should do the opposite. This creates perverse incentives: a bank using VaR might look less risky by splitting a portfolio across separate desks rather than managing it as a whole. CVaR never produces this contradiction, which is why regulators and risk managers increasingly prefer it.
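The subadditivity failure is easy to reproduce with a standard textbook setup (two independent bonds, each with a hypothetical 4% default probability; none of these numbers come from the article). Each bond alone has a 95% VaR of zero, because its 4% default chance fits inside the 5% tail, yet the combined portfolio defaults with probability 7.84%, pushing its VaR up to a full bond loss. CVaR shows no such jump.

```python
def var_cvar(scenarios, level=0.95):
    """Exact VaR/CVaR for a discrete loss distribution [(probability, loss), ...]."""
    tail_mass = 1.0 - level
    remaining, cvar_sum, var = tail_mass, 0.0, 0.0
    for prob, loss in sorted(scenarios, key=lambda s: -s[1]):  # worst losses first
        take = min(prob, remaining)      # probability mass counted in the tail
        cvar_sum += take * loss
        remaining -= take
        if remaining <= 1e-12:           # tail mass exhausted: this loss is the VaR
            var = loss
            break
    return var, cvar_sum / tail_mass

p, L = 0.04, 100.0  # hypothetical default probability and loss per bond
single = [(p, L), (1 - p, 0.0)]
combined = [(p * p, 2 * L), (2 * p * (1 - p), L), ((1 - p) ** 2, 0.0)]

var_1, cvar_1 = var_cvar(single)
var_2, cvar_2 = var_cvar(combined)
print(var_1, var_2)    # 0.0 and 100.0: VaR(A+B) > VaR(A) + VaR(B)
print(cvar_1, cvar_2)  # ~80 and ~103.2: CVaR(A+B) <= CVaR(A) + CVaR(B)
```

The asymmetry is the whole story: combining the bonds takes VaR from $0 + $0 to $100, while CVaR goes from $80 + $80 to only about $103.2, so diversification is rewarded, not punished.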
CVaR in Banking Regulation
The shift from VaR to CVaR isn’t just academic preference. The Basel III banking reforms, which set capital requirements for major banks worldwide, moved the market risk framework to an expected shortfall methodology. The updated rules, known as the Fundamental Review of the Trading Book (FRTB), require banks to use expected shortfall rather than VaR when calculating how much capital they need to hold against potential trading losses. The FDIC has described this as “a more robust methodology to capitalize for potential tail risks.”
Common confidence levels used in practice range from 95% to 99.9%. The 97.5% level is standard under the Basel framework for market risk. Higher confidence levels focus on rarer, more extreme events but require more data to estimate reliably.
Limitations of CVaR
CVaR is harder to estimate than VaR, especially at high confidence levels like 99%. Because it relies on averaging losses in the tail, you need enough data points in that tail to produce a stable estimate. With a 99% CVaR, you’re averaging only the worst 1% of observations. If you have 1,000 data points, that’s just 10 observations driving the entire estimate, which means small changes in the data can swing the result significantly.
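A quick simulation illustrates that instability (a sketch with synthetic normally distributed returns, not a calibration to any real portfolio): re-estimating CVaR on many independent 1,000-point samples shows the 99% estimate, which rests on only 10 tail observations, scattering noticeably more than the 95% estimate built from 50.

```python
import random
import statistics

random.seed(42)

def historical_cvar(returns, level):
    """Average the worst (1 - level) fraction of returns, as a positive loss."""
    k = max(1, int(len(returns) * (1 - level)))
    tail = sorted(returns)[:k]        # the k worst returns
    return -statistics.mean(tail)

# Re-estimate CVaR on 200 independent samples of 1,000 daily returns each
est_95, est_99 = [], []
for _ in range(200):
    sample = [random.gauss(0, 0.01) for _ in range(1000)]
    est_95.append(historical_cvar(sample, 0.95))
    est_99.append(historical_cvar(sample, 0.99))

# Relative spread (coefficient of variation) of each estimator
spread_95 = statistics.stdev(est_95) / statistics.mean(est_95)
spread_99 = statistics.stdev(est_99) / statistics.mean(est_99)
print(spread_95, spread_99)  # the 99% estimate is noticeably noisier
```

Even with well-behaved normal data, the 99% estimator's noise comes almost entirely from which handful of extreme draws happen to land in each sample; with real fat-tailed returns the effect is worse.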
CVaR also assumes you can accurately model the distribution of returns. In practice, financial returns have fatter tails than normal distributions predict, so historical simulation (using actual past returns rather than a theoretical bell curve) tends to produce more reliable CVaR estimates. But even historical data has gaps: a 20-year dataset may not contain a pandemic-level shock or a sudden market structure breakdown.
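One way to see the fat-tail problem (a sketch using synthetic Student-t returns, not real market data): fit a normal distribution to heavy-tailed returns, and its closed-form CVaR understates the tail compared with simply averaging the worst historical observations.

```python
import math
import random
import statistics

random.seed(7)
nd = statistics.NormalDist()

# Synthetic heavy-tailed daily returns: Student-t with 3 degrees of freedom,
# built as Z / sqrt(V / df) with V ~ chi-square(df). Purely illustrative.
df = 3
returns = []
for _ in range(10_000):
    z = random.gauss(0.0, 1.0)
    v = random.gammavariate(df / 2.0, 2.0)  # chi-square(df) draw
    returns.append(z / math.sqrt(v / df))

level = 0.99
k = int(len(returns) * (1 - level))         # 100 tail observations

# Historical CVaR: average the worst 1% of returns, reported as a positive loss
hist_cvar = -statistics.mean(sorted(returns)[:k])

# Parametric CVaR under a fitted normal: -mu + sigma * pdf(z_a) / (1 - a)
mu, sigma = statistics.mean(returns), statistics.stdev(returns)
z_a = nd.inv_cdf(level)
norm_cvar = -mu + sigma * nd.pdf(z_a) / (1 - level)

print(round(hist_cvar, 2), round(norm_cvar, 2))  # historical vs normal-fit 99% CVaR
```

The gap between the two numbers is the model risk the paragraph above describes: the normal fit matches the data's overall spread but has no way to represent how extreme the worst 1% of days actually are.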
Despite these challenges, CVaR remains the preferred tail risk measure in both regulatory and institutional settings because it answers the question that actually matters during a crisis: not “where does the pain start?” but “how bad does it get?”