Actuarial pricing, capital modelling and reserving

Pricing Squad

Issue 4 -- June 2016

Welcome back to Pricing Squad!

Pricing Squad is a newsletter for fellow pricing practitioners and actuaries in general insurance. Enjoy, and let me know your comments and ideas for future issues.

Today's issue is about by-peril loss cost GLMs, and why most of the time, you probably don't need them.

Do you need by-peril GLMs?

Most personal lines insurers model loss cost by peril.

For example, a private motor loss cost can be modelled with eight models, namely one frequency and one severity model for each of four major perils: own damage, third-party property damage, third-party bodily injury, and fire and theft. Additional minor perils and the propensity for large claims are also frequently modelled.
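As a toy illustration of this structure (with made-up numbers, not figures from any portfolio discussed here), the by-peril approach combines each peril's frequency and severity predictions into a total expected loss cost:

```python
# Hypothetical per-peril predictions for a single policy: frequency is
# expected claims per policy-year, severity is expected cost per claim.
# All numbers are illustrative assumptions, not real model output.
perils = {
    "own_damage":       {"frequency": 0.12, "severity": 1800.0},
    "tp_property":      {"frequency": 0.05, "severity": 2500.0},
    "tp_bodily_injury": {"frequency": 0.01, "severity": 15000.0},
    "fire_and_theft":   {"frequency": 0.02, "severity": 3000.0},
}

def loss_cost(perils):
    """Total expected loss cost: sum over perils of frequency x severity."""
    return sum(p["frequency"] * p["severity"] for p in perils.values())

print(loss_cost(perils))
```

Each frequency and each severity above would in practice come from its own GLM, which is where the eight (or more) models come from.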

Household business has reportedly been modelled with up to 32 GLMs!

Real-life example

An actual private motor portfolio was modelled using 11 GLMs, resulting in 597 estimated parameters in total. It took one and a half months to make these models. These were good quality models, built and used by a reputable company and peer-reviewed by a dominant UK GI pricing consultancy.

Then a single burning cost model was fitted to the same data with only 119 estimated parameters. It took two hours to make this cursory single model.

Finally, out-of-sample data was scored using the 11 GLMs and the single burning cost model to produce the comparison chart below.

The y-axis shows predicted claims divided by actual claims, minus one. The blue line is for the single-model prediction and the red line is for the 11 GLMs (by peril).

The x-axis represents 20 buckets of equal out-of-sample exposures. Exposure on the left-hand side is where the single model predicts higher claims than the 11 GLMs; the exposure on the right is where the opposite is true. For policies in the middle, the two predictions are about the same.

The feeling I get when I look at the chart is that overall both predictions have similar accuracy. If anything, the single model might be marginally better (the average absolute error for the single model is 14% and for 11 GLMs it is 16%), but neither prediction is significantly better than the other.

If a single model predicts as well or better than 11 GLMs, why waste time building 11 GLMs?

What went wrong?

People think they need by-peril modelling to generate more informed selections of factors and interactions for individual models.

In reality, the sheer number of modelling decisions made peril by peril means that more errors and omissions are likely to occur, even when experienced actuaries do the work. Peer reviews are only partly effective in detecting those errors and omissions.

In addition, slicing the data for use with multiple by-peril models means that each by-peril model is based upon less data than a single burning-cost model would be. This further reduces the quality of each by-peril model.

And, pragmatically speaking, why would you need to know the cost of, say, third-party property damage separately from third-party bodily injury, unless you were going to sell one of those separately (which you are not)?


Simplicity is good.

What to do?

Regularly compare your by-peril models to a good single burning-cost model. Do this comparison out-of-sample, of course, in case your by-peril models are over-fitted.
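One minimal way to set up the out-of-sample part, sketched here with hypothetical records: fit both models on earlier years and score them only on the most recent year, so neither model is judged on data it was fitted to.

```python
# Hypothetical policy records (illustrative only): id plus accident year.
records = [{"policy_id": i, "year": 2013 + i % 3} for i in range(100)]

# Time-based holdout: fit on earlier years, compare predictions on the
# latest year. A random split also works, but a time split additionally
# tests whether the models hold up on a genuinely later period.
train = [r for r in records if r["year"] < 2015]
test = [r for r in records if r["year"] == 2015]

print(len(train), len(test))
```

Both the by-peril suite and the single burning-cost model would then be fitted on `train` and compared on `test` only.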

If your findings match mine, try to communicate this within your organisation. You might need extra time to convince those colleagues who are invested in heavy GLM modelling.

That's it! Thank you for reading.

Can we help you?

If you are interested in new pricing ideas to radically simplify your current analytical procedures and deliver reduced loss ratio quickly, or if you are simply looking for an actuarial contractor, please get in touch.

Thank you for reading and have a great day,
Jan Iwanik, FIA PhD

Copyright © 2016 Jan Iwanik, All rights reserved. You are receiving this email because you are subscribed to our updates. We publish data and analysis for informational and educational purposes only. You can unsubscribe from this list by emailing us.