The content of my talk is largely reflected in its title, which should nevertheless leave the audience some surprise as to which theorem I would consider a candidate for such an honour.
Prima facie, valuing a longevity-contingent claim that provides guaranteed income for life should be a relatively straightforward operation: one selects a mortality basis and a proper discount curve, and the remainder is left to expectations. And yet, history is strewn with examples of mispriced annuities. From early attempts by the English King Henry VIII to securitize cashflows from the dissolution of the monasteries in the 16th century, to European tontines in the 17th and 18th centuries, and of course the North American variable annuity companies that teetered on the precipice of bankruptcy in the early 21st century, it seems annuities can be a very tricky business. Motivated by these examples and recent controversies, in this talk I will review the pricing of plain, compound and decorative annuities, discuss the economic rationale for their existence, and conclude with some observations on why Mother Nature would prefer that retirees pool longevity risk.
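As a minimal sketch of the "straightforward" valuation step, the following prices a whole-life annuity as the sum of discounted survival probabilities. The Gompertz parameters, flat discount rate, and payment convention are illustrative assumptions, not the basis used in the talk.

```python
import math

def gompertz_survival(x, t, m=86.3, b=9.5):
    """P(alive at age x+t | alive at x) under a Gompertz law.
    m (modal age) and b (dispersion) are illustrative values,
    not a calibrated mortality basis."""
    return math.exp(math.exp((x - m) / b) * (1.0 - math.exp(t / b)))

def annuity_apv(x, r=0.03, max_age=120):
    """Actuarial present value of 1 per year, paid at the end of each
    surviving year: sum over t of v^t * tPx, with flat discount rate r."""
    v = 1.0 / (1.0 + r)
    return sum(v**t * gompertz_survival(x, t) for t in range(1, max_age - x + 1))

price = annuity_apv(65)  # APV of a life annuity issued at age 65
```

The mispricing stories in the talk arise precisely because the inputs to this two-line calculation (mortality basis and discount curve) are the hard part.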
Scenario optimization using multiperiod stochastic linear programming is a very good approach to asset and asset-liability management (ALM). This talk discusses the author's experiences in building and implementing large-scale ALM models, and compares the difficulties and advantages of this approach with those of alternatives. The basic idea is to generate future scenarios for the uncertain asset returns and liability commitments, and then to optimize risk-adjusted final wealth subject to the various constraints on the activities. The model's risk measure is built from shortfalls relative to deterministic and stochastic targets, convexly weighted over time, where the weights depend on the type of shortfall and when it might occur. The objective is thus the maximization of a concave function, approximated piecewise linearly. Successfully implemented applications are discussed as the approach progressed over time; these include the Russell-Yasuda Kasai and InnoALM models in insurance, pension fund and bank portfolio management, futures trading and other areas. Models for institutions are simpler than models for individuals. Computing is no longer a key issue; the most difficult parts are scenario generation and selling the models.
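The shape of the criterion (expected final wealth minus convexly weighted shortfall below a target) can be sketched in a toy one-period version. The return distributions, liability target, and penalty weight below are made-up illustrations; the real models are large multiperiod linear programs, so a grid search over one allocation fraction stands in for the LP solver here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scenario set: one-period gross returns for two assets
# (parameters are illustrative, not calibrated).
n_scen = 2000
stock = rng.lognormal(mean=0.06, sigma=0.20, size=n_scen)
bond = rng.lognormal(mean=0.02, sigma=0.05, size=n_scen)

w0, liability, penalty = 100.0, 95.0, 4.0  # initial wealth, target, shortfall weight

def objective(alpha):
    """Concave piecewise-linear criterion: expected final wealth minus
    a weighted expected shortfall below the liability target."""
    wealth = w0 * (alpha * stock + (1 - alpha) * bond)
    shortfall = np.maximum(liability - wealth, 0.0)
    return wealth.mean() - penalty * shortfall.mean()

# Grid search over the fraction invested in the risky asset.
alphas = np.linspace(0.0, 1.0, 101)
best = max(alphas, key=objective)
```

Raising `penalty`, or weighting shortfalls by when they occur, tilts the optimum toward the safer asset, which is the basic mechanism of the risk measure described above.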
Vine copulas have proven to be flexible dependence models, able to capture tail-dependence patterns as they occur in financial and insurance data. The model's power is driven by the ability to construct a d-dimensional dependence model from a collection of bivariate models through appropriate conditioning. This pair-copula construction makes it possible to work with high-dimensional data. We will discuss the basic construction principle and give a review of its applications to finance. We also highlight current advances in the areas of quantile regression and two-part modelling in insurance. More information about papers and software can be found at vine-copula.org.
Aas, K., Czado, C., Frigessi, A., & Bakken, H. (2009). Pair-copula constructions of multiple dependence. Insurance: Mathematics and Economics, 44(2), 182-198.
Aas, K. (2016). Pair-copula constructions for financial applications: A review. Econometrics, 4(4), 43.
Kraus, D., & Czado, C. (2017). D-vine copula based quantile regression. Computational Statistics & Data Analysis, 110, 1-18.
Yang, L., & Czado, C. (2019). Two-part D-vine copula models for longitudinal insurance claim data. Submitted.
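The pair-copula construction can be illustrated by sampling from a 3-dimensional D-vine built from bivariate Clayton copulas, using the conditional-distribution (h-function) recursion of Aas et al. (2009). The parameter values are arbitrary examples chosen only to show the mechanics.

```python
import numpy as np

def h(u, v, theta):
    """Clayton h-function h(u|v) = dC(u,v)/dv."""
    return v**(-(theta + 1)) * (u**(-theta) + v**(-theta) - 1)**(-(theta + 1) / theta)

def h_inv(w, v, theta):
    """Inverse of the Clayton h-function in its first argument."""
    return ((w * v**(theta + 1))**(-theta / (theta + 1)) - v**(-theta) + 1)**(-1 / theta)

def sample_dvine3(n, th12, th23, th13, seed=1):
    """Sample n points from a 3-dimensional D-vine with Clayton pair
    copulas C12, C23 and the conditional pair C13|2."""
    rng = np.random.default_rng(seed)
    w = np.clip(rng.random((n, 3)), 1e-10, 1 - 1e-10)
    u1 = w[:, 0]
    u2 = h_inv(w[:, 1], u1, th12)   # invert F(u2 | u1)
    a = h(u1, u2, th12)             # F(u1 | u2)
    b = h_inv(w[:, 2], a, th13)     # F(u3 | u2), via the conditional pair
    u3 = h_inv(b, u2, th23)         # invert F(u3 | u2)
    return np.column_stack([u1, u2, u3])

u = sample_dvine3(5000, th12=2.0, th23=2.0, th13=0.5)
```

Only bivariate building blocks appear in the code; the conditioning step is what stitches them into a trivariate model, which is exactly the construction principle the talk reviews.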
In this talk, I will discuss economic approaches to evaluating the social cost of carbon, i.e., the present value of the flow of climate damages generated over the next few centuries by one more ton of CO2 emitted today. What discount rates should we use for this task? What is the risk profile of climate damages? I will combine standard asset-pricing models and integrated assessment models to measure the impact of carbon emissions on intergenerational welfare, using the principles of utilitarian ethics.
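The discounting question can be made concrete with a back-of-the-envelope calculation under the Ramsey rule r = delta + gamma * g. Every number below (time preference, risk aversion, growth rate, and the damage path) is a hypothetical assumption for illustration, not a figure from the talk.

```python
import math

# Ramsey discount rate: pure time preference + risk aversion * growth.
delta, gamma, g = 0.01, 2.0, 0.015
r = delta + gamma * g  # 4% per year under these assumptions

# Hypothetical damage path in $/tCO2/year, ramping up over "the next
# few centuries" as the extra ton's warming effect accumulates.
horizon = 300
damage_per_ton = [0.5 * (1 - math.exp(-0.01 * t)) for t in range(1, horizon + 1)]

# Social cost of carbon: present value of the damage flow.
scc = sum(d / (1 + r)**t for t, d in enumerate(damage_per_ton, start=1))
```

The sensitivity of `scc` to `delta` and `gamma` is the crux: small changes in the discounting assumptions move the present value of multi-century damages by large factors.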
Choosing an appropriate model to describe and quantify the randomness of losses is a classical topic in non-life insurance. Historically and across different types of portfolios, the spectrum ranges from little to quite substantial understanding of the causal mechanisms involved, and data sets range from very scarce, as in some instances of natural catastrophe insurance, all the way to so abundant that the mere need for a model is questioned, as one observes in the era of big data. In this talk I will discuss some general aspects of modelling and the challenges involved, present some recent results on dealing with the situation of limited data, and illustrate the findings on reinsurance data sets.
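As a minimal sketch of the limited-data situation, the following fits a lognormal severity model to a handful of claims by maximum likelihood (which is available in closed form for the lognormal). The claim amounts are fabricated for illustration and are not from the reinsurance data sets mentioned in the talk.

```python
import math

# A small, made-up claims sample (in currency units).
claims = [1200.0, 3400.0, 560.0, 15000.0, 2100.0, 880.0, 4700.0]

# Lognormal MLE: mu and sigma are the mean and standard deviation
# of the log-claims.
logs = [math.log(c) for c in claims]
mu = sum(logs) / len(logs)
sigma = math.sqrt(sum((x - mu)**2 for x in logs) / len(logs))

# Implied expected severity E[X] = exp(mu + sigma^2 / 2).
expected_severity = math.exp(mu + sigma**2 / 2)
```

With only seven observations the tail behaviour is essentially unidentified, which is exactly the kind of challenge the talk's results on limited data address.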