<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.loc.gov/MARC21/slim http://www.loc.gov/standards/marcxml/schema/MARC21slim.xsd">
<record>
<leader>00000cab a2200000 4500</leader>
<controlfield tag="001">MAP20220026109</controlfield>
<controlfield tag="003">MAP</controlfield>
<controlfield tag="005">20221003155655.0</controlfield>
<controlfield tag="008">221003e20220905bel|||p |0|||b|eng d</controlfield>
<datafield tag="040" ind1=" " ind2=" ">
<subfield code="a">MAP</subfield>
<subfield code="b">spa</subfield>
<subfield code="d">MAP</subfield>
</datafield>
<datafield tag="084" ind1=" " ind2=" ">
<subfield code="a">6</subfield>
</datafield>
<datafield tag="100" ind1="1" ind2=" ">
<subfield code="0">MAPA20220008556</subfield>
<subfield code="a">Skovgaard Bjerre, Dorethe</subfield>
</datafield>
<datafield tag="245" ind1="1" ind2="0">
<subfield code="a">Tree-based machine learning methods for modeling and forecasting mortality</subfield>
<subfield code="c">Dorethe Skovgaard Bjerre</subfield>
</datafield>
<datafield tag="520" ind1=" " ind2=" ">
<subfield code="a">Machine learning has recently entered the mortality literature in order to improve the forecasts of stochastic mortality models. This paper proposes using two pure tree-based machine learning models, random forests and gradient boosting, applied to differenced log-mortality rates to produce more accurate mortality forecasts. These forecasts are compared with forecasts from traditional stochastic mortality models and with forecasts from random forest and gradient boosting variants of those stochastic models. The comparisons are based on the Model Confidence Set procedure. The results show that the pure tree-based models significantly outperform all other models in the majority of cases considered. To address the interpretability concerns associated with machine learning models, we demonstrate how to extract information about the relationships uncovered by the tree-based models. For this purpose, we consider variable importance, partial dependence plots, and variable split conditions. Results from the in-sample fit suggest that tree-based models can be very useful tools for detecting patterns within and between variables that are not commonly identifiable with traditional methods.
</subfield>
</datafield>
<datafield tag="650" ind1=" " ind2="4">
<subfield code="0">MAPA20170005476</subfield>
<subfield code="a">Machine learning</subfield>
</datafield>
<datafield tag="650" ind1=" " ind2="4">
<subfield code="0">MAPA20080555306</subfield>
<subfield code="a">Mortality</subfield>
</datafield>
<datafield tag="650" ind1=" " ind2="4">
<subfield code="0">MAPA20080579258</subfield>
<subfield code="a">Actuarial calculation</subfield>
</datafield>
<datafield tag="773" ind1="0" ind2=" ">
<subfield code="w">MAP20077000420</subfield>
<subfield code="g">05/09/2022 Volume 52 Number 3 - September 2022, p. 765-787</subfield>
<subfield code="x">0515-0361</subfield>
<subfield code="t">Astin bulletin</subfield>
<subfield code="d">Belgium : ASTIN and AFIR Sections of the International Actuarial Association</subfield>
</datafield>
</record>
</collection>