
Predicting High-Cost Health Insurance Members through Boosted Trees and Oversampling: An Application Using the HCCI Database

<?xml version="1.0" encoding="UTF-8"?><modsCollection xmlns="http://www.loc.gov/mods/v3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-8.xsd">
<mods version="3.8">
<titleInfo>
<title>Predicting High-Cost health insurance members through boosted trees and oversampling</title>
<subTitle>an application using the HCCI database</subTitle>
</titleInfo>
<name type="personal" usage="primary" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20130016856">
<namePart>Hartman, Brian M.</namePart>
<nameIdentifier>MAPA20130016856</nameIdentifier>
</name>
<name type="personal" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20210005367">
<namePart>Owen, Rebecca</namePart>
<nameIdentifier>MAPA20210005367</nameIdentifier>
</name>
<name type="personal" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20210005374">
<namePart>Gibbs, Zoe</namePart>
<nameIdentifier>MAPA20210005374</nameIdentifier>
</name>
<typeOfResource>text</typeOfResource>
<genre authority="marcgt">periodical</genre>
<originInfo>
<place>
<placeTerm type="code" authority="marccountry">esp</placeTerm>
</place>
<dateIssued encoding="marc">2021</dateIssued>
<issuance>serial</issuance>
</originInfo>
<language>
<languageTerm type="code" authority="iso639-2b">spa</languageTerm>
</language>
<physicalDescription>
<form authority="marcform">print</form>
</physicalDescription>
<abstract displayLabel="Summary">Using the Health Care Cost Institute data (approximately 47 million members over seven years), we examine how to best predict which members will be high-cost next year. We find that cost history, age, and prescription drug coverage all predict high costs, with cost history being by far the most predictive. We also compare the predictive accuracy of logistic regression to extreme gradient boosting (XGBoost) and find that the added flexibility of the extreme gradient boosting improves the predictive power. Finally, we show that with extremely unbalanced classes (because high-cost members are so rare), oversampling the minority class provides a better XGBoost predictive model than undersampling the majority class or using the training data as is. Logistic regression performance seems unaffected by the method of sampling.</abstract>
<note type="statement of responsibility">Brian Hartman, Rebecca Owen, Zoe Gibbs</note>
<subject xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20080602437">
<topic>Insurance mathematics</topic>
</subject>
<subject xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20130012056">
<topic>Medical expenses</topic>
</subject>
<subject xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20120011137">
<topic>Statistical predictions</topic>
</subject>
<subject authority="lcshac" xmlns:xlink="http://www.w3.org/1999/xlink" xlink:href="MAPA20080638337">
<geographic>United States</geographic>
</subject>
<classification authority="">6</classification>
<relatedItem type="host">
<titleInfo>
<title>North American actuarial journal</title>
</titleInfo>
<originInfo>
<publisher>Schaumburg : Society of Actuaries, 1997-</publisher>
</originInfo>
<identifier type="issn">1092-0277</identifier>
<identifier type="local">MAP20077000239</identifier>
<part>
<text>01/03/2021 Volume 25 Number 1 - 2021, p. 53-61</text>
</part>
</relatedItem>
<recordInfo>
<recordContentSource authority="marcorg">MAP</recordContentSource>
<recordCreationDate encoding="marc">210331</recordCreationDate>
<recordChangeDate encoding="iso8601">20210405201151.0</recordChangeDate>
<recordIdentifier source="MAP">MAP20210010781</recordIdentifier>
<languageOfCataloging>
<languageTerm type="code" authority="iso639-2b">spa</languageTerm>
</languageOfCataloging>
</recordInfo>
</mods>
</modsCollection>
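
The abstract above outlines the paper's approach: compare logistic regression with extreme gradient boosting (XGBoost) for predicting next-year high-cost members, and handle the extreme class imbalance by oversampling the rare high-cost class rather than undersampling the majority. The sketch below illustrates that workflow in Python with scikit-learn and xgboost. It is not the authors' code; the file name, feature columns (prior cost, age, prescription drug coverage), and hyperparameters are assumptions for illustration only.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.utils import resample
from xgboost import XGBClassifier

# One row per member-year; "high_cost_next_year" is a 0/1 indicator.
# File and column names are hypothetical.
df = pd.read_csv("members.csv")
X = df[["prior_cost", "age", "rx_coverage"]]
y = df["high_cost_next_year"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Oversample the minority (high-cost) class in the training split only,
# so the held-out evaluation keeps the true class balance.
train = pd.concat([X_train, y_train], axis=1)
majority = train[train["high_cost_next_year"] == 0]
minority = train[train["high_cost_next_year"] == 1]
minority_up = resample(minority, replace=True, n_samples=len(majority), random_state=0)
balanced = pd.concat([majority, minority_up])
X_bal = balanced.drop(columns="high_cost_next_year")
y_bal = balanced["high_cost_next_year"]

# Fit both models on the oversampled training data.
xgb = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
xgb.fit(X_bal, y_bal)

logit = LogisticRegression(max_iter=1000)
logit.fit(X_bal, y_bal)

# Compare predictive power on the untouched test split (AUC as one metric).
for name, model in [("xgboost", xgb), ("logistic regression", logit)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")

Consistent with the abstract's finding, one would expect the boosted-tree model to gain more from oversampling than the logistic regression, whose performance is reported as largely insensitive to the sampling scheme.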