Whereas catastrophe models were once a niche actuarial tool utilised by a small number of specialist catastrophe re/insurance companies, their use has spread widely through the industry. Helen Yates reports
Catastrophe models combine science, engineering, computing and the actuarial profession. Although they were developed in the 1980s, when computer modelling was used for the first time to try to measure cat loss potential, their practical use by insurers did not really come about until roughly a decade later – specifically after Hurricane Andrew in 1992 and the Northridge Earthquake in 1994.
Both were significant cat events which saw several insurers become insolvent as a result of having too many eggs in one basket. These events suggested there was a better way of measuring and managing catastrophe exposure. “There was an understanding in the 1990s that the old actuarial methods didn’t work – science and engineering treatment was needed to be able to quantify the losses from natural hazards,” says Dr Milan Simic, managing director of AIR Worldwide.
Lessons were learned after 9/11 when classes of business as diverse as property, aviation and fine art were all subject to substantial losses. This kind of correlation had not previously been considered and the models were adapted accordingly.
More issues came to light during the 2004 storm season, when hurricanes Charley, Frances, Ivan and Jeanne made landfall along the Florida coast in quick succession. The high activity witnessed that season suggested the long-term view of hurricane risk might need updating, but this could not be done in time for the 2005 storm season. Cat modellers were also questioning some of their assumptions, including the impact of clustering – when storms follow a similar path to those that went before them, exacerbating damage already done.
But the real lessons came after Katrina in 2005. When Katrina overcame the flood defences in New Orleans it inundated over 80 per cent of the city, leading to more than 1,000 deaths and causing widespread devastation. Although Katrina had made landfall as a Category 3 storm with wind speeds up to 200kph (a storm the city’s levees were thought to be able to withstand), it produced a disproportionately high storm surge as it moved inland and had other characteristics not captured by the models.
There was a backlash against the models, with insurers claiming they had failed to predict the extent of the loss, and frustration at the vague loss estimates being issued by the vendor cat modelling agencies. “What the modelling firms were doing was trying to come out very quickly with a number that could help,” says Paul Miller, head of international catastrophe management at Aon Benfield Analytics. “So they were coming out with some wide ranges and that opened them up to criticism. They’ve done a much better job recently to take a bit more time and then come out with a narrower range.
“Cat models are not there to perfectly recreate an historic event on the fly or to be used as a single event analysis tool,” he continues. “They’re there as statistical or financial modelling solutions that give you the probability of exceeding a certain monetary loss in a given year.
I think a lot was being expected of the modelling companies, sometimes because of a lack of understanding on the part of the user.” Many felt that users of the models had been naive in their assumptions about what the models could cover, and that they had relied on them too blindly. The fact that the cat models did not account for losses such as business interruption, demand surge and onshore flooding meant they were never going to capture an event like Katrina in its entirety.
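To make the exceedance-probability idea Miller describes concrete, here is a minimal sketch in Python (the loss distribution and all figures are invented for illustration; real cat models derive annual losses from hazard, vulnerability and financial modules rather than a simple statistical draw):

```python
import numpy as np

# Hypothetical simulated annual aggregate losses in US$m, standing in for the
# output of a many-thousand-year catastrophe model simulation.
rng = np.random.default_rng(seed=1)
annual_losses = rng.lognormal(mean=2.0, sigma=1.5, size=10_000)

def exceedance_probability(losses, threshold):
    """Empirical probability that the annual loss exceeds the given threshold."""
    return float(np.mean(losses > threshold))

# Probability of exceeding a US$100m loss in a given year (one point on the EP curve)
print(exceedance_probability(annual_losses, 100.0))

# The corresponding 1-in-250-year loss is the 99.6th percentile of annual losses
print(np.percentile(annual_losses, 99.6))
```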
“One event that has seen a lot of dust kicked up is Katrina,” says AIR’s Simic. “That’s simply because people running the models did not understand, or were not told, what perils and risks were not covered by the models, for example flooding from the failure of the levees.
“The other thing we uncovered was the quality of the exposure data across the industry, and following Katrina and other events we launched a major initiative to improve data,” he continues. “The models need replacement values and in the US we have this common problem with under-insurance – for example, if you put in a value of US$100,000 when it’s really worth US$500,000 – so it’s no wonder your loss estimates are quite different from reality.”
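Simic’s under-insurance point is simple arithmetic: because modelled losses are driven by the replacement value entered into the model, an under-declared value understates the loss in proportion. A rough illustration following his numbers (real models apply full vulnerability curves rather than a flat damage ratio):

```python
# Illustrative only: a flat damage ratio applied to declared vs true replacement value.
damage_ratio = 0.30        # assume 30% of the building value is destroyed in the event

declared_value = 100_000   # value entered into the model (US$)
true_value = 500_000       # actual replacement cost (US$)

modelled_loss = damage_ratio * declared_value   # US$30,000
actual_loss = damage_ratio * true_value         # US$150,000

print(f"Modelled loss ${modelled_loss:,.0f} vs actual loss ${actual_loss:,.0f}")
```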
IMPROVING SCIENCE
Today, the modelling agencies are working more closely with their clients to make sure property characteristics – including location, construction type, occupancy class, age and height – are properly determined. “Unless you’ve coded that your residential risk has a pool enclosure the model is not going to recognise that the building has experienced a significant loss,” says Simic. “There were a number of residential buildings that had those and the majority of the loss came from that pool enclosure and yet they were not properly coded in the exposure information.”
He thinks the models are getting better all the time and that model users are also improving their understanding. “It is a relatively young industry and it takes time to build a pool of talented people who understand models, can interpret them properly and who have the strength of character to interpret them properly to their managers and underwriters.”
After Katrina, all three cat modelling agencies introduced near-term hurricane models to reflect a heightened level of hurricane risk. While Atlantic storm seasons have so far failed to produce the losses witnessed in 2004 and 2005, they have not been completely benign. The 2010 storm season – with 12 named hurricanes – had the second highest number of hurricanes since records began in 1852 (with only 2005 exceeding it), according to RMS. While the US coastline was spared any major damage, Mexico experienced widespread flooding and other major storms dissipated in the mid-Atlantic.
UNMODELLED PERILS
The next big challenge as the industry improves its existing toolset is to look at unmodelled perils. With Solvency II and the industry’s widespread adoption of practices such as enterprise risk management (ERM) it is no longer enough to model a select handful of natural hazards. In recent years, the modelling agencies have expanded their repertoire to include risks such as pandemics, terrorism and mortality.
“Solvency II says you need to understand all your risks, whether it is Somalian pirates or South African mining risks – you have to factor them in,” says Simic. “And people increasingly want to blend different results from different views of risk – they also want to blend different perceptions of risk from the same model such as long-term view versus SST-conditioned view of hurricane risk, or time-dependent versus time-independent view of earthquake risk.”
Insurance and reinsurance companies are increasingly looking to blend the results of those modelled analyses with information across the rest of their portfolio, explains Simic. “It’s all to do with capturing the exposures. If you know that you have certain marine exposure off the coast of Somalia, you can put that into the model and by applying a simple PML ratio can combine that with your other worldwide risks that you are modelling in a more formal way.”
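One way to read the PML-ratio approach Simic outlines: for exposures the formal models do not cover, apply a judgement-based probable maximum loss percentage to the sum insured and carry the result alongside the modelled figures. A minimal sketch with hypothetical numbers (a straight addition is shown; in practice, assumptions about correlation between the books would matter):

```python
# Hypothetical figures for illustration only.
modelled_pml = 250_000_000        # e.g. a 1-in-250-year loss from the cat model (US$)

# Unmodelled marine exposure, handled with a simple PML ratio
marine_sum_insured = 40_000_000   # total exposed limit (US$)
marine_pml_ratio = 0.15           # judgement-based probable maximum loss ratio

unmodelled_pml = marine_sum_insured * marine_pml_ratio

# Conservative combination: simply add the unmodelled estimate to the modelled PML
combined_pml = modelled_pml + unmodelled_pml
print(f"Combined PML estimate: ${combined_pml:,.0f}")
```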
There is clearly a long way yet to go with the industry’s development of catastrophe models. And most likely, there will be plenty more pitfalls along the way. “Let’s not forget how young a science cat modelling is,” says Aon Benfield’s Miller.
“It is 20-odd years old and that means every time an event occurs there will always be something which means the models can and should be refined further. I recognise that’s frustrating for all of us because you’re constantly trying to change your view on risk.”
He gives Hurricane Ike as an example. This 2008 storm had such a wide footprint (despite being a Category 2 hurricane) that it took much longer to dissipate as it moved inland, causing greater damage than had been anticipated. “Anybody who’s just using their cat model out of the box is going to be wrong and the modelling firms would say that as well. The modelling firms can’t be expected to do and cover everything.”
He thinks the models have improved over time as lessons have been learnt from major events and as the technology has improved. “They’ve got better because there are more observations – the science has improved and the computing has allowed more iterations to be run... There is also a better view of science as a result of more recorded information and actually much more loss data. Because the great thing about losses is first, you get to test the model and second, you get to refine your view on vulnerability. The models will continue to change – sometimes dramatically – over time and anyone who wants the modelling view to be constant will be disappointed.”