Catastrophe Modeling: How It Works, Key Components, and Software

Hurricane Andrew made landfall in South Florida on August 24, 1992, and wiped out $20 billion in insured property in a single day. Eleven insurers went bankrupt. The industry had badly underestimated its exposure because loss estimates relied on historical averages and expert judgment rather than probabilistic simulation.

That failure gave rise to catastrophe modeling, a discipline that uses physics-based simulation, engineering data, and financial logic to estimate losses from natural disasters before they happen. Three decades later, cat models support virtually every reinsurance treaty, regulatory capital requirement, and catastrophe bond issued globally.

What Is Catastrophe Modeling?

Catastrophe modeling (also called cat modeling) is the process of using computer simulations to estimate the probability and financial impact of catastrophic events. A cat model generates thousands of synthetic disaster scenarios, applies them to a portfolio of buildings and infrastructure, and calculates the resulting economic and insured losses.

The field emerged in the late 1980s when Karen Clark (later founder of AIR Worldwide) built the first commercial hurricane model. After Andrew exposed the industry’s blind spots, adoption accelerated rapidly. By the mid-2000s, cat models had become standard tools for insurers, reinsurers, banks, and regulators.

Modern cat models cover earthquakes, hurricanes, floods, wildfires, severe convective storms, and even terrorism. They produce risk metrics that drive pricing, capital allocation, and regulatory compliance across the global insurance market, and they are increasingly being adapted to a risk landscape reshaped by climate change.

The Four Modules of a Catastrophe Model

Every catastrophe model follows the same four-module architecture, regardless of vendor or peril. Each module takes the output of the previous one and transforms it into a more actionable form.

| Module | What It Does | Key Inputs | Key Outputs |
| --- | --- | --- | --- |
| Hazard | Simulates thousands of potential events | Historical catalogs, climate data, geophysical data | Event set with intensity footprints |
| Vulnerability | Estimates physical damage from event intensity | Building characteristics, damage curves, construction type | Damage ratios by structure type |
| Exposure | Describes properties and values at risk | Location, replacement value, occupancy, stories | Portfolio inventory |
| Financial | Applies insurance terms to damage | Policy limits, deductibles, reinsurance treaties | Insured losses, AAL, EP curves |

Hazard Module

The hazard module generates a stochastic event set: a catalog of tens of thousands of synthetic disasters representing a time horizon of 10,000 to 100,000 years. For hurricane models, this means simulating storm tracks, central pressures, wind fields, and rainfall. For flood models, it means translating precipitation into river flow and inundation depth.

Each event in the catalog comes with an intensity footprint showing how severe the hazard is at every location. A hurricane footprint maps peak wind speed across its path. A flood footprint maps water depth across the floodplain.
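The event-generation idea can be sketched in a few lines. This is a toy illustration, not a vendor algorithm: annual event counts are drawn from a Poisson process with an assumed landfall rate, and each synthetic storm gets an illustrative peak wind speed (the rate and the wind distribution are made-up parameters).

```python
import random

random.seed(42)

ANNUAL_RATE = 1.7  # assumed mean landfalling storms per year (illustrative)

def sample_year():
    """Return the synthetic events for one simulated year.

    Poisson event counts are generated via sequential exponential
    waiting times; any year whose cumulative time exceeds 1.0 is done.
    """
    events, t = [], random.expovariate(ANNUAL_RATE)
    while t < 1.0:
        # Heavy-tailed peak wind draw (illustrative, not calibrated)
        wind_mph = 75 + random.paretovariate(3.0) * 20
        events.append({"peak_wind_mph": round(wind_mph, 1)})
        t += random.expovariate(ANNUAL_RATE)
    return events

# A 10,000-year stochastic catalog
catalog = [sample_year() for _ in range(10_000)]
n_events = sum(len(year) for year in catalog)
print(f"{n_events} events over 10,000 simulated years "
      f"({n_events / 10_000:.2f} per year)")
```

A production hazard module would then attach a spatial intensity footprint to each event; here each event carries only a single scalar intensity for brevity.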

Vulnerability Module

The vulnerability module translates hazard intensity into physical damage. It relies on damage curves (also called fragility functions or vulnerability functions) that map an intensity measure (wind speed, flood depth, ground acceleration) to a damage ratio.

For flood catastrophe modeling, the standard damage curves come from FEMA’s HAZUS (U.S. buildings, 33 occupancy types) and the European Commission’s JRC (global coverage). Each curve accounts for building characteristics like occupancy type, number of stories, basement presence, and first floor height.

The output is a damage ratio for every building in the portfolio at every hazard intensity. A commercial warehouse might sustain 15% damage at 2 feet of flooding and 45% at 6 feet, while a residential structure with an elevated first floor might sustain only 5% at the same 2-foot depth.
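The damage-curve lookup itself is straightforward linear interpolation. The sketch below uses made-up curve points in the HAZUS style (depth in feet mapped to a damage ratio), not actual HAZUS values:

```python
# Illustrative depth-damage curve: (water depth in ft, damage ratio).
# Points are invented for this example, not taken from HAZUS tables.
WAREHOUSE_CURVE = [(0, 0.00), (1, 0.07), (2, 0.15), (4, 0.30), (6, 0.45), (10, 0.60)]

def damage_ratio(depth_ft, curve):
    """Linearly interpolate a damage ratio from a depth-damage curve."""
    if depth_ft <= curve[0][0]:
        return curve[0][1]
    for (d0, r0), (d1, r1) in zip(curve, curve[1:]):
        if depth_ft <= d1:
            return r0 + (r1 - r0) * (depth_ft - d0) / (d1 - d0)
    return curve[-1][1]  # cap at the deepest point on the curve

replacement_value = 2_000_000
for depth in (2, 6):
    loss = damage_ratio(depth, WAREHOUSE_CURVE) * replacement_value
    print(f"{depth} ft of flooding -> ${loss:,.0f} ground-up loss")
```

Running this yields a 15% damage ratio at 2 feet and 45% at 6 feet, matching the warehouse example above; a different occupancy type would simply swap in a different curve.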

Damage estimation report showing building-level vulnerability assessment with HAZUS damage ratios and structural loss breakdown. Source: Continuuiti.

Exposure Module

The exposure module holds the inventory of properties at risk: their locations, replacement values, occupancy types, construction materials, and structural characteristics. Data quality in this module directly affects model accuracy. Incomplete exposure data, or data geocoded only to postal-code centroids rather than exact addresses, can shift loss estimates by 20-40%.

Modern exposure databases track total insured value (TIV), building age, roof type, cladding material, number of stories, and whether the structure has been retrofitted. The more granular the data, the more accurate the vulnerability module’s damage estimates.
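A minimal exposure record might look like the sketch below. The field names are illustrative, not a vendor schema, and the occupancy codes follow the HAZUS convention mentioned above:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ExposureRecord:
    """One property in the portfolio. Field names are illustrative."""
    latitude: float
    longitude: float
    total_insured_value: float        # TIV in USD
    occupancy: str                    # e.g. "RES1", "COM4" in HAZUS terms
    construction: str                 # e.g. "wood frame", "reinforced concrete"
    num_stories: int
    year_built: Optional[int] = None  # missing fields force model defaults
    first_floor_height_ft: float = 0.0

portfolio = [
    ExposureRecord(25.77, -80.19, 450_000, "RES1", "wood frame", 1, 1998, 1.5),
    ExposureRecord(25.79, -80.22, 2_000_000, "COM4", "reinforced concrete", 3),
]
tiv = sum(r.total_insured_value for r in portfolio)
print(f"Portfolio TIV: ${tiv:,.0f}")
```

Note how the second record omits `year_built`: in a real model that gap would be filled with a regional default, which is exactly the data-quality risk described above.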

Financial Module

The financial module applies insurance and reinsurance contract terms to the ground-up damage. It layers deductibles, policy limits, sublimits, co-insurance, and reinsurance treaty structures on top of the raw loss to produce the final insured loss for each event.

The output is a set of risk metrics: average annual loss (AAL), exceedance probability (EP) curves, and tail risk measures. These numbers drive every downstream decision from pricing to capital allocation.
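The core financial transformation (net a deductible and limit out of each event's ground-up loss, then average across simulated years) can be sketched as follows. All dollar figures, the 5% event frequency, and the loss distribution are invented for illustration:

```python
import random

# Illustrative policy terms
DEDUCTIBLE = 50_000
LIMIT = 1_000_000
YEARS = 10_000

def insured_loss(ground_up):
    """Apply policy terms: losses below the deductible are retained
    by the policyholder, and the insurer never pays above the limit."""
    return min(max(ground_up - DEDUCTIBLE, 0.0), LIMIT)

random.seed(7)
annual_losses = []
for _ in range(YEARS):
    hit = random.random() < 0.05  # 5% of simulated years see an event
    gross = random.uniform(100_000, 3_000_000) if hit else 0.0
    annual_losses.append(insured_loss(gross))

aal = sum(annual_losses) / YEARS  # average annual loss
print(f"AAL: ${aal:,.0f}")
```

A real financial module layers sublimits, co-insurance, and reinsurance treaty structures on top of this per-event netting, but the shape of the calculation is the same.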

The four modules of a catastrophe model: from hazard simulation through financial loss calculation. Source: Continuuiti.

How Is Catastrophe Modeling Used?

Cat models serve different stakeholders with different questions, but the underlying simulation engine is the same. What changes is which output metrics matter and how they feed into decisions.

| Stakeholder | Primary Use | Key Metrics |
| --- | --- | --- |
| Insurers | Pricing, underwriting, reserve setting | AAL, PML, rate-on-line |
| Reinsurers | Treaty pricing, portfolio optimization | EP curves, TVaR, PML |
| Banks and lenders | Mortgage portfolio climate risk | CVaR, expected annual loss |
| Corporates | TCFD disclosure, supply chain risk | Physical risk scores, AAL |
| Regulators | Solvency assessment, rate review | Aggregate EP, capital adequacy |
| Governments | Emergency planning, infrastructure investment | Scenario losses, return period losses |

Banks increasingly use cat model outputs for climate stress testing. Under frameworks like TCFD and the ECB’s climate stress tests, lenders must quantify how physical climate risk affects their mortgage and commercial real estate portfolios. Climate value at risk calculations translate cat model outputs into financial terms that fit into existing risk management frameworks.

Key Output Metrics

Cat models produce several standard metrics. Understanding what each one measures and how it gets used separates practitioners who can interpret model results from those who just read the summary page.

| Metric | Definition | Typical Use |
| --- | --- | --- |
| AAL (Average Annual Loss) | Expected loss per year averaged across all simulations | Pricing, budgeting, TCFD disclosure |
| EP Curve | Probability that losses exceed a given threshold | Reinsurance purchasing, capital modeling |
| PML | Loss at a specific return period (e.g., 1-in-250 year) | Underwriting limits, cat bond sizing |
| TVaR (Tail Value at Risk) | Average loss beyond a given percentile | Solvency capital requirements |
| OEP vs AEP | Occurrence vs aggregate exceedance probability | Single-event vs annual cumulative risk |

The probable maximum loss (PML) metric deserves special attention because it appears in nearly every reinsurance negotiation and catastrophe bond prospectus. PML is simply the loss value read from the EP curve at a specified return period, typically 250 or 475 years. Its counterpart, estimated maximum loss (EML), represents the expected loss when loss-prevention and protection systems function normally.
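Reading PML and TVaR off a simulated loss distribution is just quantile arithmetic. The sketch below uses a synthetic, skewed annual-loss vector purely for illustration; a real model would supply the losses:

```python
# Synthetic annual losses (USD), skewed so the tail dominates.
# Sorted descending: index 0 is the worst simulated year.
annual_losses = sorted(
    (i ** 2 * 1_000 for i in range(1, 10_001)),
    reverse=True,
)
years = len(annual_losses)

def pml(return_period):
    """Loss exceeded with probability 1/return_period per year:
    the value at the matching rank of the sorted losses."""
    rank = max(years // return_period, 1)
    return annual_losses[rank - 1]

def tvar(return_period):
    """Average of the worst `years // return_period` annual losses,
    i.e. the mean of the tail beyond the PML threshold."""
    rank = max(years // return_period, 1)
    tail = annual_losses[:rank]
    return sum(tail) / len(tail)

print(f"1-in-250 PML:  ${pml(250):,.0f}")
print(f"1-in-250 TVaR: ${tvar(250):,.0f}")
```

TVaR always sits at or above PML for the same return period, which is why solvency regimes that care about tail severity prefer it.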

Perils Covered by Catastrophe Models

Not all perils are equally well modeled. Hurricane and earthquake models have three decades of refinement behind them. Flood and wildfire models are improving rapidly but still carry more uncertainty, especially at high resolutions.

| Peril | Modeling Maturity | Key Challenge |
| --- | --- | --- |
| Hurricane / Typhoon | High | Track uncertainty, intensity at landfall |
| Earthquake | High | Fault rupture timing, cascading effects |
| Flood (river + coastal) | Medium-High | Terrain resolution, climate change effects |
| Wildfire | Medium | Ember transport, urban-wildland interface |
| Severe convective storm | Medium | Hail size prediction, tornado intensity |
| Terrorism / Man-made | Low | Intent modeling, attack scenario design |

Major Catastrophe Modeling Companies

The catastrophe modeling market has been dominated by three vendors for most of its history. New entrants are challenging that concentration with open-source frameworks and API-first approaches.

| Company | Founded | Key Platform | Approach |
| --- | --- | --- | --- |
| Verisk (AIR) | 1987 | Touchstone | Licensed platform, 100+ models globally |
| Moody's RMS | 1988 | Intelligent Risk Platform | Cloud-native SaaS |
| CoreLogic | 1990s | CoreLogic CAT Models | Integrated with property data |
| Oasis LMF | 2012 | Open-source framework | Community-contributed models |
| Continuuiti | 2024 | Damage Estimation API | API-first, pay-per-estimate |

The traditional model has been annual licensing fees running into six or seven figures, which prices out smaller insurers and non-insurance users like banks and corporates. Newer entrants like Continuuiti offer API-first damage estimation, letting developers embed vulnerability analysis directly into their applications without licensing a full platform. The free flood damage calculator demonstrates this approach with HAZUS and JRC curves accessible through a browser.

Damage estimation API documentation showing endpoint structure, HAZUS and JRC curve sources, and request format. Source: Continuuiti.

How Climate Change Is Reshaping Catastrophe Models

Traditional cat models rest on an assumption of stationarity: the idea that future catastrophe frequency and severity will resemble the historical record. Climate change breaks that assumption.

Hurricane Andrew caused $20 billion in insured losses in 1992. Hurricane Ian caused over $50 billion in 2022. Munich Re reported $120 billion in global insured losses from natural disasters in 2024 alone. The trend line is unmistakable, and it challenges models calibrated to 20th-century data.

Model vendors are responding by incorporating climate projections from CMIP6/SSP scenarios into their hazard modules. Forward-looking models can now simulate how hurricane wind speeds, flood depths, and wildfire behavior might shift under different warming pathways through 2050 and beyond.

For organizations running their own risk analyses, climate model outputs from NASA NEX-GDDP-CMIP6 provide the raw projections that feed into these adjusted hazard modules. The challenge is translating global climate projections into local hazard intensity at the building level.
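One common adjustment is to rescale event frequencies in the hazard catalog by peril-severity class. The sketch below is purely illustrative: the baseline rates and the 1.2x uplift for major storms are made-up numbers, not CMIP6 results.

```python
# Baseline annual landfall rates by severity class (invented figures)
baseline_rates = {
    "cat1-2": 1.20,  # weaker landfalls per year
    "cat3-5": 0.50,  # major landfalls per year
}

# Assumed climate uplift: frequency shifts toward major storms.
# A real adjustment would be derived from downscaled CMIP6/SSP output.
uplift = {"cat1-2": 1.00, "cat3-5": 1.20}

adjusted = {k: rate * uplift[k] for k, rate in baseline_rates.items()}
for k in baseline_rates:
    print(f"{k}: {baseline_rates[k]:.2f} -> {adjusted[k]:.2f} events/yr")
```

Frequency scaling is only one lever; vendors also adjust intensity distributions and spatial footprints, which is harder because it requires downscaling coarse climate grids to building-level hazard.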

Catastrophe Modeling Challenges

Cat models are powerful but imperfect. Several structural limitations affect their reliability:

Data quality. Exposure data is often incomplete. Geocoding to postal code centroids rather than exact addresses can shift flood loss estimates by 20-40%. Missing building characteristics force the model to use defaults, which may not reflect actual construction.

Model disagreement. Different vendors produce different results for the same portfolio. Divergences of 20-50% on AAL are common for complex commercial portfolios, especially for perils like flood and wildfire where models are still maturing.

Emerging risks. Cyber-physical attacks, pandemic business interruption, and supply chain cascading failures fall outside the scope of traditional cat models. These correlated, non-geographic risks require fundamentally different modeling approaches.

Regulatory fragmentation. Regulators in different jurisdictions accept different models, apply different loading factors, and require different output metrics. A model approved by the Florida Commission on Hurricane Loss Projection Methodology may need adjustments to satisfy Lloyd's or the European Insurance and Occupational Pensions Authority.

Frequently Asked Questions

How would you explain the catastrophe modelling process?

Catastrophe modeling runs through four modules. The hazard module simulates thousands of potential events. The vulnerability module applies damage curves to estimate physical damage at each location. The exposure module maps all properties and their values. The financial module applies insurance terms to produce metrics like average annual loss and exceedance probability curves.

What are the elements of catastrophe modeling?

Four core elements: hazard (generates synthetic disasters), vulnerability (converts intensity to damage using curves like HAZUS), exposure (inventories properties and values), and financial (applies deductibles, limits, and reinsurance structures to produce insured loss figures).

What is the difference between catastrophe modeling and risk modeling?

Catastrophe modeling is a specialized subset of risk modeling focused on low-frequency, high-severity natural disaster events. It uses physics-based simulation of natural phenomena. Risk modeling is broader, covering attritional losses, liability, credit, and operational risk using statistical and actuarial methods.

What software is used for catastrophe modeling?

The dominant platforms are Verisk’s Touchstone, Moody’s Intelligent Risk Platform, and CoreLogic’s CAT Models. Oasis LMF offers an open-source framework. API-based alternatives from companies like Continuuiti provide the vulnerability module (damage curves) without requiring a full licensed platform.

How is climate change affecting catastrophe models?

Climate change breaks the assumption that future disasters will mirror historical patterns. Insured losses have risen from $20 billion (Hurricane Andrew, 1992) to over $50 billion (Hurricane Ian, 2022). Vendors now integrate CMIP6/SSP climate projections into hazard modules to simulate shifting risk through 2050.

The Future of Catastrophe Modeling

Catastrophe modeling has evolved from a niche actuarial tool into the backbone of how the global insurance industry measures, prices, and transfers risk. Whether you are an insurer pricing hurricane exposure, a bank stress-testing a mortgage portfolio, or a corporate reporting physical risk under TCFD, catastrophe modeling translates physical hazards into the financial language that drives decisions. The models will keep getting better. The question for most organizations is not whether to use them, but how to access the right level of sophistication for their specific needs.

Govind Balachandran

Govind Balachandran is the founder of Continuuiti. He writes extensively on climate risk and operational risk intelligence for enterprises. Previously, he spent 7+ years in enterprise risk management, building and deploying third-party risk management and due diligence solutions across 100+ enterprises.