Actuaries and others who support actuaries need to think about data in ways we have not thought about before. New disclosure requirements under LDTI require different information and different levels of granularity in our data.

Actuaries are end users of data. We use it in our financial models. When we implement a major financial reporting change like US GAAP Long-Duration Targeted Improvements (LDTI) or IFRS 17, we redesign the data to meet the requirements of the new standard. That is a requirement, but it is also an opportunity: if we are redesigning the data anyway, what do we want it to look like? We want accessibility: tables or output from our IT process and data fabric that connect directly to our actuarial software. We want adaptability, so we can change the data if future requirements change. And we want applicable data, meaning data that supports our actuarial software. I use AXIS from Moody's Analytics. It is organized around product contractual features and assumptions, so within the data I need to be able to differentiate plans of insurance and their product features, and within each plan I need information about the insured and the underlying insurance so I can apply the correct assumptions. Under GAAP LDTI, we need to update assumptions at least once a year, so we need data not only as input to our software but also as experience-study input to help us update those assumptions. Our data needs to be auditable, and the process of getting the data is much easier to work with if it is automated.
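To make the "applicable data" point concrete, here is a minimal sketch of a liability extract record that could feed both a valuation model and an experience study. The field names and the example values are purely illustrative assumptions, not the actual AXIS input layout.

```python
from dataclasses import dataclass

# Hypothetical liability extract record. Field names are illustrative
# only; a real extract would follow the valuation software's layout.
@dataclass
class PolicyRecord:
    policy_id: str
    plan_code: str        # differentiates plans of insurance
    product_features: tuple  # e.g. riders or guarantees on the plan
    issue_year: int
    issue_age: int
    sex: str
    risk_class: str       # supports mapping assumptions to the insured
    face_amount: float

# The same record can drive model input (plan code, features) and an
# experience study (issue age, sex, risk class).
record = PolicyRecord("P001", "WL100", ("waiver",), 2021, 45, "F", "NS", 250_000.0)
print(record.plan_code)  # WL100
```

The design choice is that one extract serves both downstream uses, which is what makes annual assumption updates workable.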


We need to be able to group results into cohorts that did not exist before the standard was updated. We need to apply controls to parts of the process that were not controlled before. The LDTI standard, and what has been written about it, is pretty good about spelling out some of the requirements, but less good about others; there are some hidden things in there. If our companies issue annuities or other insurance products with embedded options, called market risk benefits in the standard, those are called out explicitly: we need to value the market risk benefits, and the liability data needs to contain that information at the level we need. That is a very obvious business requirement for the data. What may be less obvious is that there are other things we as actuaries need to value, and we have to think those through and pull them through the process, because they do not jump out onto our list of business requirements.
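The new groupings mentioned above can be illustrated with a small sketch. LDTI requires aggregating contracts into cohorts no broader than annual issue-year groups; the policy data below is made up for illustration.

```python
from collections import defaultdict

# Minimal sketch: rolling policies up into annual issue-year cohorts,
# one of the groupings LDTI introduces. Data is illustrative only.
policies = [
    {"id": "A1", "issue_year": 2021, "reserve": 1200.0},
    {"id": "A2", "issue_year": 2021, "reserve": 800.0},
    {"id": "B1", "issue_year": 2022, "reserve": 500.0},
]

cohorts = defaultdict(float)
for p in policies:
    cohorts[p["issue_year"]] += p["reserve"]

print(dict(cohorts))  # {2021: 2000.0, 2022: 500.0}
```

The point is that the source data must carry the cohort key (here, issue year) on every record, or the grouping cannot be produced at all.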

If we look at the discount rate for LDTI, there are three requirements. The first is to use a discount rate based on upper-medium-grade fixed-income investments. If you look that up, it is pretty straightforward: Moody's Investors Service rates bonds, single-A is upper medium grade, and it is easy to get yields for those. We can try to make it more complicated, but it is not complicated. The standard also says to use observable inputs whenever we can; that is also straightforward. But there is a third requirement hidden within those two: use an investment yield that reflects the duration characteristics of the liability. That is a loaded requirement, because to pick investments that reflect the duration characteristics of the liability, we need to know the duration of the liability. For a product like term insurance, where the cash flows may not have a big connection to interest rates, that is a straightforward calculation. But consider something between insurance protection and an investment product, one that does not hit the GAAP guidance for an investment contract or a limited-pay product, such as a whole life product where the premiums and the insurance coverage are both lifetime. If it is priced and sold under a particular interest rate scenario, we might see a response to interest rates. So there are options that the policyholder and the owners have that impact the duration of the liability.
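The duration calculation itself is simple once the cash flows are known. Here is a sketch using Macaulay duration at a flat yield; the level 10-year cash-flow stream and the 4% rate are illustrative assumptions, not from any rate source.

```python
# Macaulay duration of projected liability cash flows at a flat yield.
# cash_flows[t] is the payment at the end of year t + 1.
def macaulay_duration(cash_flows, rate):
    pv_weighted = sum((t + 1) * cf / (1 + rate) ** (t + 1)
                      for t, cf in enumerate(cash_flows))
    pv = sum(cf / (1 + rate) ** (t + 1)
             for t, cf in enumerate(cash_flows))
    return pv_weighted / pv

flows = [100.0] * 10  # illustrative level 10-year cash-flow stream
print(round(macaulay_duration(flows, 0.04), 2))  # 5.18
```

The hidden requirement is that the rate used here should itself come from single-A investments matching this duration, which is what forces the iteration described below.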


And we need that information packaged in our data. To model that, we might need an iterative process: run the model once to calculate the duration, and once we know the duration, get the discount rates and calculate the liability. My point is that the policyholder's options need to be reflected in our data so that the models can use them.

The biggest challenge I see companies deal with is just trying to get things right, either the first time or even the second or third time, and not letting it become a constantly iterative process of gathering business requirements and specifications for the data and working through the data. If somebody does not get it perfect, that is pretty common. What we do not want is an ongoing loop where we get a set of data we think is what we want, put it into our model, see the results, and realize we need to make major changes, over and over. Avoiding that is probably the biggest challenge I see. Certainly, the tools we see today allow changes to the data, but having it be an ongoing process that impacts the project timeline is a challenge.
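The iterative process described at the start of this section can be sketched as a fixed-point loop: duration depends on the discount rate, and the discount rate depends on duration. The yield-by-duration function below is a made-up single-A curve, and the cash flows are illustrative.

```python
# Sketch of the duration/discount-rate iteration. All inputs are
# hypothetical; a production model would use a real single-A curve.
def present_value(cash_flows, rate):
    return sum(cf / (1 + rate) ** (t + 1) for t, cf in enumerate(cash_flows))

def duration(cash_flows, rate):
    weighted = sum((t + 1) * cf / (1 + rate) ** (t + 1)
                   for t, cf in enumerate(cash_flows))
    return weighted / present_value(cash_flows, rate)

def single_a_yield(dur):
    # Hypothetical upward-sloping curve: yield picked by duration.
    return 0.03 + 0.001 * min(dur, 10.0)

def solve_liability(cash_flows, tol=1e-6, max_iter=50):
    rate = single_a_yield(5.0)  # initial guess at the duration
    for _ in range(max_iter):
        dur = duration(cash_flows, rate)
        new_rate = single_a_yield(dur)
        if abs(new_rate - rate) < tol:
            break
        rate = new_rate
    return rate, present_value(cash_flows, rate)

rate, liability = solve_liability([100.0] * 10)
print(round(rate, 4), round(liability, 2))
```

In practice the loop converges quickly because duration is not very sensitive to small rate changes, but the data still has to carry the policyholder-option information that drives the cash flows in the first place.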