Each article summary must list the citation(APA format) for the article being summarized and provide 800-1000 words i which the student summarizes the main questions and conclusions of the paper and explains how the conclusions of the paper are useful to someone working in financial management of a corporation in the students current or proposed career path. The text of each summary must be 12-point Times New
Roman font, with one-inch top, bottom, left, and right margins, and each summary must begin on a new page.
Vikram Maheshri1 & Clifford Winston2
Published online: 28 July 2016
# Springer Science+Business Media New York 2016
Abstract Motorists’ fatalities and the fatality rate (roadway deaths per vehicle-mile
traveled (VMT)) tend to decrease during recessions. Using a novel data set of individ-
ual drivers, we establish that recessions have differential impacts on driving behavior
by decreasing the VMT of observably risky drivers, such as those over age 60, and by
increasing the VMT of observably safer drivers. The compositional shift toward safer
drivers associated with a one percentage point increase in unemployment would save
nearly 5000 lives per year nationwide. This finding suggests that policymakers could
generate large benefits by targeting new driver-assistance technology at vulnerable
groups.
Keywords Automobile safety. Motorists’ fatalities . Risky drivers . Vehicle mile
s
traveled . Autonomous vehicles
JEL Classifications I1 . R4
Highway safety has steadily improved during the past several decades, but traffic
fatalities, exceeding more than 30,000 annually, are still one of the leading causes of
non-disease deaths in the United States. The United States also has the highest traffic
accident fatality rate in developed countries among people of age 24 and younger,
despite laws that ban drinking until the age of 21. In addition to those direct costs,
traffic accidents account for a large share of highway congestion and delays (Winston
and Mannering 2014) and increase insurance premiums for all motorists (Edlin and
Karaca-Mandic 2006).
It is not an exaggeration to suggest that reducing traffic accidents and their associ-
ated costs should be among the nation’s most important policy goals. The top line in
Fig. 1 shows that automobile fatalities have followed a downward trend since the
J Risk Uncertain (2016) 52:255–280
DOI 10.1007/s11166-016-9239-6
* Clifford Winston
CWinston@brookings.edu
1 Department of Economics, University of Houston, Houston, TX 77204, USA
2 The Brookings Institution, 1775 Massachusetts Ave., NW, Washington, DC 20036, USA
http://crossmark.crossref.org/dialog/?doi=10.1007/s11166-016-9239-6&domain=pdf
1970s, and have fallen especially rapidly during recessions, which are shaded in the
figure.
A natural explanation is that those declines are simply a consequence of the decrease
in vehicle miles travelled (VMT) that typically accompanies a recession. With a smaller
labor force commuting to work, fewer goods being shipped along overland routes, and
less overall economic activity, a decline in traffic facilities is no surprise. But the
heavier line in the figure shows that the fatality rate (fatalities per VMT) has also
decreased more sharply during recessions than during other parts of the business cycle.1
This implies that factors other than declining VMT contribute to the reduction in
fatalities that tends to occur when real economic activity contracts. The purpose of this
paper is to document those factors with an eye toward informing public policies tha
t
could reduce fatalities during all parts of the business cycle.
Researchers have shown that cyclical fluctuations in economic conditions affect
most major sources of accidental deaths, including motor vehicle accidents, and they
have concluded that fatalities resulting from most of those sources decline approxi-
mately in proportion to the severity of cyclical contractions in economic activity (Ruhm
2000; Evans and Moore 2012).2 Huff Stevens et al. (2011) found that overall death rates
rose when unemployment fell, and argued that this relationship could be explained by
labor shortages that resulted in elderly people receiving worse health care in nursing
Fig. 1 National monthly automobile fatalities over the business cycle. Notes: Recessions as determined by the
National Bureau of Economic Research are denoted by shaded areas. Fatality and VMT data are from the
Fatality Analysis Reporting System of the National Highway Traffic Safety Administration
1 In a simple time series regression of the change in the fatality rate on a time trend and a recession dummy
with seasonal controls, the coefficient on the recession dummy is negative and statistically significant at the
99% level.
2 Ruhm (2013) finds that although total mortality from all causes has shifted over time from being strongly
procyclical to being essentially unrelated to macroeconomic conditions, deaths due to transportation accidents
continue to be procyclical. Stewart and Cutler (2014) characterize driving as a behavioral risk factor. But in
contrast to other risk factors such as obesity and drug overdoses, motorists’ safer driving and safer vehicles
have led to health improvements over the time period from 1960 to 2010.
256 J Risk Uncertain (2016) 52:255–280
homes as the economy expanded. But, as noted by Peterman (2013), this line of
research does not explain why the percentage decline in automobile fatalities during
recessions has been so much greater than the percentage decline in VMT. In 2009, for
example, VMT declined less than 1%, but fatalities declined fully 9%.
There are several leading hypotheses that have been proposed to explain the sharper
decline in the automobile fatality rate during recessions, including:
& motorists drive more safely because they are under less time pressure to get to
various destinations (especially if they are unemployed and have a lower value of
travel time than they have when they were working);
& households try to save money by engaging in less discretionary or recreational
driving, such as Sunday drives into the country on less-safe roads;
& motorists become risk averse because of the economic strain during a recession and
drive more carefully to avoid a financially devastating accident3;
& recessions may cause a change in the mix or composition of drivers on the road that
results in less risk and greater safety because the most dangerous drivers account for
a smaller share of total VMT.
To the best of our knowledge, researchers have not tested those hypotheses empir-
ically because the data on individual drivers’ VMT, socioeconomic and vehicle char-
acteristics, and safety records during recessionary and non-recessionary periods that
would be required to do so are not publicly available. Publicly available data on VMT
are generally aggregated to at least the metropolitan area or state level and suffer from
potentially serious measurement problems. For example, nationwide VMT statistics
that are released by the federal government are not based on surveys of individual
drivers’ VMT; instead, they are estimated from data on gasoline tax revenues to
determine the amount of gasoline consumed which is then multiplied by an estima
te
of the average fuel efficiency of the nation’s vehicle fleet.4
This paper avoids the problems associated with the publicly available data and
instead analyzes motorists’ safety over the business cycle using a novel, disaggregated
data set of drivers who allowed a private firm to use a new generation of information
technologies, referred to as telematics, to remotely record their vehicles’ exact VMT
from odometer readings and to store information about them and their safety records.
The private firm supplied the data to State Farm Mutual Automobile Insurance
3 As a related point, Coates (2008) conducted experiments and found that people became more risk averse as
economic volatility became greater. This may occur during a recession.
4 The government also collects VMT data from the Highway Performance Monitoring System (HPMS) and
from Traffic Volume Trends (TVT) data. HPMS data count vehicles on a highway under the assumption that
those vehicles traverse a certain length of highway. So, if 10,000 cars are counted per day on a midpoint of a
segment of road that is 10 miles long, the daily VMT on that segment is estimated as 100,000. Those data
suffer from a number of problems including (1) they are aggregated across drivers, so the best that can be done
is to distinguish between cars and large trucks; (2) the vehicle counts are recorded at a single point and
assumed to remain constant over the entire road segment, ignoring the entry and exit of other vehicles; (3)
daily and seasonal variation in traffic counts is unaccounted for; and (4) the traffic counts are infrequently
updated. The final problem causes the HPMS data to be especially inaccurate during unstable economic
periods like recessions when VMT could decrease significantly. The TVT data are VMT estimates that are
derived from a network of about 4000 permanent traffic counting stations that do not move and that operate
continuously. Unfortunately, the locations were explicitly determined by their convenience to the state
Departments of Transportation instead of by a more representative sampling strategy.
J Risk Uncertain (2016) 52:255–280 257
Company (hereafter State Farm®) and State Farm provided the data to us. This unique
data set enables us to identify heterogeneous adjustments in vehicle use to changes in
local economic conditions across a large number of motorists and to assess how
differences in their adjustments may affect overall road safety.
Capturing motorists’ heterogeneous responses turns out to be important because we
find that changes in local unemployment do not affect the average VMT per driver
across all drivers, but they do affect the composition of drivers on the road. In
particular, we find that the drivers in our sample who are likely to pose the greatest
risks to safety, as indicated by several observable characteristics such as the driver’s age
and accident history, reduce their VMT in response to increasing unemployment. Thus,
rational individual choices involving the risk of driving during a recession lead to safer
drivers accounting for a larger share of VMT during periods when aggregate unem-
ployment is high. This finding reconciles the two key safety-related phenomena
observed during recessions: the large (and typically permanent) decline in aggregate
automobile fatalities and the modest (and usually transient) decline in aggregate VMT,
which we note may be spuriously correlated because the decline in aggregate VMT
could be due to measurement error in the publicly available aggregate VMT data,
reduced commercial driving activity, or other unobserved determinants of recessions
that are correlated with driving. In contrast, we argue that our finding that increases in
unemployment do not affect average VMT per driver is causal.
More importantly, the quantitative effect of the change in driver composition on
automobile safety is economically significant: our estimates suggest that the change in
the composition of VMT that results from an increase in the nationwide unemployment
rate of one percentage point could save nearly 5000 lives nationwide per year or a
reduction of 14% of the 34,000 deaths nationwide attributed to automobile accidents
during our period of analysis. Thus, our finding identifies an economic benefit associ-
ated with a recession that should be noted in government spending programs that are
guided by changes in the unemployment rate.
Our findings also illustrate the opportunity for policymakers to significantly reduce
the aggregate costs of automobile accidents by implementing policies that induce the
most dangerous drivers to curtail their VMT. However, we point out the difficulty of
identifying specific policies that could target such a broad segment of the motoring
public. At the same time, the significant technological advance in the automobile
itself—such as the development of driverless or autonomous cars—suggests that
prioritizing a push of the most dangerous drivers towards vehicles with greater auton-
omy could generate substantial social benefits as we transition to a fully autonomously
driven fleet.
1 Data and empirical strategy
Previous analyses of automobile safety, such as Crandall et al. (1986) and Edlin and
Karaca-Mandic (2006), have taken an aggregate approach to estimate the relationship
between accident fatalities and VMT by including state-level controls for motorists’
socioeconomic characteristics (e.g., average age and income), riskiness (e.g., alcohol
consumption), vehicle characteristics (e.g., average vehicle age), and the driving
environment (e.g., the share of rural highways). Taking advantage of our novel data
258 J Risk Uncertain (2016) 52:255–280
set, our disaggregated approach focuses on individual drivers to estimate the effect of
changes in the macroeconomic environment on automobile fatalities, which could be
transmitted through three channels:
& individual drivers might respond to the economic changes by altering their
behavior;
& the composition of drivers or vehicles in use might respond to the economic
changes;
& the driving environment itself might be affected in ways that influence automobile
safety (e.g., the public sector might increase spending on road maintenance as fiscal
stimulus).
Our empirical analysis is based on data provided to us by State Farm (hereafter
referred to as the BState Farm data^).5 State Farm obtained a large, monthly sample of
drivers in the state of Ohio containing exact odometer readings transmitted wirelessly (a
non-zero figure was always reported) from August 2009, in the midst of the Great
Recession, to September 2013, which was well into the economic recovery. 6 The
number of distinct household observations in the sample steadily increased from 1907
in August 2009 to 9955 in May 2011 and then stabilized with very little attrition
thereafter. 7 The sample also contains information about each driver’s county of
residence, which is where their travel originates and tends to be concentrated, safety
record based on accident claims during the sample period, socioeconomic characteris-
tics, and vehicle characteristics. For each of the 88 counties in Ohio, we measured the
fluctuations in economic activity and the effects of the recession by its unemployment
rate.8 We use the unemployment rate because it is easy to interpret and because other
standard measures of economic activity, such as gross output, are not well measured at
the county-month level. Using the size of the labor force residing in each county instead
of the unemployment rate did not lead to any changes in our findings.9
The sample is well-suited for our purposes because drivers’ average daily VMT and
Ohio’s county unemployment rates exhibit considerable longitudinal and cross-
sectional variation. Figure 2 shows that drivers’ average daily VMT over the period
we examine ranges from a few miles to more than 100 miles and Fig. 3 shows that
county unemployment rates range from less than 5% to more than 15%. Finally, we
show in Fig. 4 that for our sample average daily VMT and the unemployment rate are
5 We are grateful to Jeff Myers of State Farm for his valuable assistance with and explanation of the data. We
stress that no personal identifiable information was utilized in our analysis and that the interpretations and
recommendations in this paper do not necessarily reflect those of State Farm.
6 Although the National Bureau of Economic Research determined that the Great Recession officially ended in
the United States in June 2009, Ohio was one of the slowest states in the nation to recover and its economy
was undoubtedly still in a recession when our sample began.
7 Less than 2% of households left the sample on average in each month. This attrition was not statistically
significantly correlated with observed socioeconomic or vehicle characteristics.
8 Monthly data on county unemployment were obtained from the U.S. Department of Labor, Bureau of Labor
Statistics.
9 We considered allowing the county unemployment rate to vary by age classifications, but such data were not
available, perhaps because employment in certain age classifications may have been too sparse in lightly
populated counties.
J Risk Uncertain (2016) 52:255–280 259
negatively correlated (the measured correlation is -0.40), which provides a starting
point for explaining why automobile fatalities are procyclical.
We summarize the county, household, and vehicle characteristics in the sample that
we use for our empirical analysis in Table 1. Although we do not observe any time-
varying characteristics of individual drivers such as their employment status, we do
observe monthly odometer readings from drivers’ vehicles that allow us to compute
time-varying measures of their average daily VMT. The drivers included in our sample
are generally State Farm policyholders who are also generally the heads of their
respective households. The data set included information on one vehicle per household,
which did not appear to be affected by seasonal or employment-related patterns that
would lead to vehicle substitution among household members because less than 2% of
the vehicles in the sample were idled in a given month.
The sample does suffer from potential biases because individual drivers self-select to
subscribe to telematics services that allow their driving and accident information to be
monitored in return for a discount from State Farm. Differences between the drivers in
our sample and drivers who do not wish their driving to be monitored suggest that the
Ohio drivers in our sample are safer compared with a random sample of Ohio drivers.
This is confirmed to a certain extent in Table 1 because our sample, as compared with a
random sample, tends to contain fewer younger drivers, with the average age of the
household head nearly 60. The table also suggests our sample is likely to have safer
drivers, as compared with a random sample, because it has a somewhat higher share of
new cars and of trucks and SUVs.
To assess the potential bias on our findings, we obtained county-month level data
from State Farm containing household and vehicle characteristics of all drivers in the
(Ohio) population, and we used that data to construct sampling weights on each
observed characteristic. But because we expect that unweighted regressions using our
Source: State Farm Data.
0
.0
0
5
.0
1
.0
1
5
.0
2
.0
2
5
F
re
q
u
e
n
cy
0 50 100 150 200
Miles/Day
Fig. 2 Distribution of daily VMT in Ohio, 2009–2013
260 J Risk Uncertain (2016) 52:255–280
sample of disproportionately safe drivers, as we have hypothesized, should yield
conservative estimates of the effect of the Great Recession on automobile safety, we
initially report the results from those regressions. As a sensitivity check, we then re-
estimate and report our main findings weighting by the age of drivers in the population
in each county, which corrects for the most important potential source of sample bias.
Of course, drivers who select not to be in our sample may have unobserved
characteristics that we cannot measure that contribute to their overall riskiness.
Source: US Bureau of Labor Statistics.
0
5
1
0
1
5
2
0
F
re
q
u
e
n
cy
.05 .1 .15 .2
County Level Unemployment Rate
Fig. 3 Distribution of unemployment rates by county in Ohio, 2009–2013
Source: Average individual VMT from State Farm; seasonally unadjusted unemployment data
from the US Bureau of Labor Statistics.
.0
6
.0
7
.0
8
.0
9
.1
.1
1
P
e
rc
e
n
t
2
2
2
4
2
6
2
8
3
0
3
2
M
ile
s
2010 2011 2012 2013
Daily VMT Unemployment Rate
Fig. 4 Ohio average individual daily VMT and unemployment rate
J Risk Uncertain (2016) 52:255–280 261
Nonetheless, a weighted sample that somehow accurately represented those unobserved
characteristics in the population would likely still be composed of a population of
drivers who are less safe than the drivers in our unweighted sample, which again
suggests that the unweighted sample yields conservative estimates of the effect of the
Great Recession on automobile safety.
Another consideration regarding our sample—and generally any disaggregated
sample of drivers’ behavior—is that although it consists of a large number of observa-
tions (291,834 driver-months, covering 15,228 drivers, and 17,766 vehicles, none of
which was strictly used for commercial purposes), only a very small share of drivers
ever experiences a fatal automobile accident. Thus our sample would have to be
considerably larger than 300,000 driver-months to: (1) assess whether the change in
fatalities during a business cycle can be explained by more than a change in VMT
alone; and (2) identify the specific causal mechanism at work by jointly estimating how
individual drivers’ employment status affects their VMT, and how any resulting change
in their VMT affects their likelihood of being involved in a fatal automobile accident.
Accordingly, our empirical strategy proceeds as follows:
1. We identify the causal effect of changes in the local economic environment over
our sample period, as measured by the local unemployment rate, on the driving
behavior of individual drivers, as measured by the variation in their monthly VMT.
We first carry out this estimation at the aggregate (county) level, which appears to
show that the primary channel through which increasing unemployment reduces
fatalities is by reducing VMT. Estimating identical model specifications using
Table 1 Summary statistics
Variable Mean Std. dev.
Daily Vehicle Miles Traveled (VMT) 28.85 20.51
County unemployment rate (Percent) 9.35 2.49
Age of household head 59.71 15.34
Share of households that filed an accident claim during our sample period 0.16 0.36
Share of household heads aged between 30 and 50 0.25 0.43
Share of households with one or two members 0.29 0.46
Share of new cars (less than or equal to 2 years old) 0.77 0.42
Share of old cars (over 4 years) 0.08 0.27
Share of compact or subcompact cars 0.05 0.21
Share of trucks or SUVs 0.18 0.38
Number of observations 291,834
Number of months 49
Number of vehicles 17,766
Notes: The monthly sample spans August 2009 to September 2013. The county unemployment rate is reported
by the US Bureau of Labor Statistics. All other variables are obtained from State Farm. The variables
expressed as shares are defined as dummy variables in our empirical analysis; standard deviations in the table
are computed accordingly
262 J Risk Uncertain (2016) 52:255–280
disaggregated measures of average monthly VMT for individual drivers, however,
reveals that rising local unemployment has no apparent effect on individual drivers’
VMT.
2. In order to ascribe a causal interpretation to these estimates and address concerns
about endogeneity bias, we replicate the disaggregate estimations using an instru-
mental variables approach that relies on plausibly exogenous spatial variation in
economic conditions. The results reinforce our previous finding that the variation
in local unemployment has no apparent effect on individual drivers’ VMT.
3. We enrich our analysis by estimating heterogeneous effects of local economic condi-
tions on VMT by individual driver and vehicle characteristics. We find that plausibly
riskier drivers disproportionately reduce their driving in response to adverse economic
conditions. Although we cannot separate the contribution of changes in drivers’
behavior and in their composition, both responses suggest that an important reason that
highway safety improves during a recession is that a larger share of VMT is accounted
for by safer drivers during periods of greater unemployment.
4. Finally, we identify the effect of the local unemployment rate on the local auto-
mobile fatality rate (as measured by fatalities per VMT), and we find that rising
unemployment within a county has a statistically and economically significant
effect in reducing that county’s fatality rate.
Our analysis controls for a variety of factors related to the driving environment in
order to explore the extent to which this effect is mediated solely through safer driving
by some individuals (including switching to safer vehicles) or by changes in the
representation of a greater share of less risky drivers on the road. Although we cannot
control for all of the unobserved factors that characterize the driving environment, our
results strongly suggest that the notable improvement in safety during the Great
Recession has occurred largely because risky drivers’share of total VMT has decreased.
2 Economic conditions and VMT
Based on aggregate statistics, it is widely believed that an economic downturn causes VMT
to decline, which is central to understanding why automobile safety improves during
recessions. We first investigate the relationship between economic conditions and aggregate
VMT by constructing aggregate VMT in a given county as the simple average of the daily
VMT of all the drivers in a given county in our sample. We then estimate a regression of
aggregate county VMT on the county unemployment rate. We stress that those estimates
should not be interpreted as causal, because as noted below they may suffer from
endogeneity bias; nevertheless, they offer a useful comparison with other findings in the
literature. In order to allow for multicollinearity across drivers within treatment groups, we
estimate robust standard errors clustered at the county-month level in all regressions.
The estimation results presented in the first column of Table 2 indicate that reces-
sions are associated with declines in VMT, and that this effect is statistically significant.
As shown in the second column, the estimated coefficient increases somewhat when we
control for county and year-month fixed effects, although its statistical significance
declines from the 99% to the 90% level. Thus our use of the State Farm data to measure
aggregated VMT and to estimate the relationship between it and unemployment yields
J Risk Uncertain (2016) 52:255–280 263
results that are consistent with the conventional wisdom. This provides some reassur-
ance about the accuracy of the VMT information obtained from the State Farm data and
that it is not a potential source of bias that could affect our findings.
In order to take advantage of the unique panel of drivers that we observe, we re-
estimate the two aggregate specifications at the driver level by computing each driver’s
average daily VMT from the monthly odometer readings on his or her vehicle. As
noted, we do not observe individuals’ employment status over time, but the specifica-
tions can shed light on whether unobserved driver characteristics are correlated with
county level unemployment rates, which could bias the aggregate results. The estimates
in the third column of Table 2 show that the effect of the county unemployment rate on
individual VMT is considerably weaker as its estimated coefficient (-0.03) is barely
non-zero but it is precisely estimated.
As shown in columns 4 and 5, the effect of local unemployment on individual drivers’
VMT clearly remains both statistically and economically insignificant when we include
county, year-month, and individual driver fixed effects. Most important, the estimates from
the disaggregated analysis differ statistically significantly from the estimates obtained using
aggregate data. This result provides evidence that individual drivers’ responses to local
economic conditions vary considerably, and casts serious doubt on the conventional wisdom
that aggregate relationships between VMT and economic conditions identified in previous
research can be interpreted as evidence that recessions reduce fatalities simply by reducing
the level of automobile use. We suggest that our findings of a strong aggregate relationship
between VMT and unemployment (in specification (1)) and virtually no disaggregate
relationship between VMT and unemployment (in specification (3)) can be reconciled by
the idea that the unobserved characteristics of drivers in county-months with lower unem-
ployment lead them to drive more.10
Table 2 Unemployment and vehicle miles traveled: OLS estimation
Dependent variable:
(1) (2) (3) (4) (5)
County VMTa County VMTa Indiv. VMT Indiv. VMT Indiv. VMT
County unemployment rate –0.21*** (0.06) –0.27* (0.16) –0.03** (0.01) 0.01 (0.11) 0.09 (0.08)
County fixed effects? N Y N Y N
Year-month fixed effects? N Y N Y Y
Driver fixed effects?
N N N N Y
R2 0.003 0.43 0.001 0.03 0.58
Num. observations 4312 4312 291,834 291,834 291,834
a The dependent variable is measured as a daily average over the month for each county
Robust standard errors clustered by county are reported in parentheses
*** 99% significance level, ** 95% significance level, * 90% significance level
10 Formally, we can express the difference in the coefficients on VMT from the aggregate and disaggregate
regressions as βA−βD
� �
¼ 1uct
1
nct
∑
ct
λi þ 1nct ∑ct
ϵDict−ϵ
A
ct
� �� �
; where uct is the unemployment rate in county c in
month t, nct is the number of drivers in the sample in county c in month t, λi is the driver i fixed effect, and ϵict
D
and ϵct
A. refer to error terms from the disaggregate and aggregate regressions respectively. The large difference
in the estimated coefficients from the aggregate and disaggregate regressions is not particularly surprising
given several terms contribute to this difference, including individual driver heterogeneity, differences in the
number of drivers across counties, and the potential bias in the estimated aggregate error.
264 J Risk Uncertain (2016) 52:255–280
We address the issue of causality more carefully by using instrumental variables to
verify that we have reliably identified the causal relationship between local economic
conditions and VMT. Specifically, we use the unemployment rate in neighboring
counties as an instrument for the unemployment rate in a given county and estimate
the relationship between individual VMT of drivers in each county and that county’s
unemployment rate using two-stage least squares.
Our identification strategy rests on the assumption that changes in economic condi-
tions in surrounding counties are not related to unobserved determinants of changes in
driving behavior in a given county. This is likely to be the case because according to the
2006 to 2010 American Community Surveys from the U.S. Census, nearly three
quarters of Ohio workers were employed in the county where they resided, and
according to the most recent National Household Travel Survey (NHTS) taken in
2009, roughly half of all vehicle trips were less than 5 miles.11 At the same time,
economic linkages are likely to make the economic conditions in neighboring counties
a good predictor for the economic conditions in a given county. Because unemploy-
ment in a neighboring county might be correlated with unobserved determinants of
cross-county trips for purposes other than commuting to work, however, we explore the
robustness of our instrument by also examining more distant counties.
Figure 5 presents a map of Ohio that demarcates its 88 counties and their spatial
relationships; note that the variation of county borders is likely to generate additional
variation in unemployment rates between neighboring counties. Following the argu-
ment above, we constructed instruments for the unemployment rate of each county: (1)
the unemployment rates of neighboring counties (for example, the unemployment rates
of Ross, Pike, Adams, Brown, Clinton, and Fayette counties were used as instruments
for the unemployment rate of Highland county); and (2) the unemployment rates of
neighbors of neighboring counties (for example, the unemployment rates of Clermont,
Warren, Greene, Madison, Pickaway, Hocking, Vinton, Jackson, and Scioto counties
were used as instruments for the unemployment rate of Highland county). Our first
instrument is likely to give a superior prediction of the unemployment rate of a given
county, while the second instrument is more likely to provide plausibly exogenous
variation in the unemployment rate of a given county.
We showed previously that Ohio county unemployment rates exhibited considerable
variation.12 The scatterplot in Fig. 6 indicates that a given Ohio county’s unemployment
rate bears a strong positive relationship to its neighboring counties’ unemployment
rates.
The persistence of unemployment suggests that lagged values of the instruments are
also likely to be correlated with the county unemployment rates. We exploited this fact
by specifying lagged values of neighboring county unemployment rates as additional
instruments. Figure 7 in the appendix shows the strength of as many as six monthly lags
and indicates that all of them have some explanatory power in a first-stage regression of
county unemployment rates.13 These additional instruments improve the strength of the
11 The NHTS is available at http://nhts.ornl.gov.
12 The wide distribution of unemployment rates in neighboring counties during any single month is also
similar to the wide distribution of county unemployment rates, ranging from below 5% to more than 15%.
13 Our findings did not change when we used fewer lags.
J Risk Uncertain (2016) 52:255–280 265
http://nhts.ornl.gov/
first-stage regression and also crucially provide the means to conduct over-
identification tests of our instruments’ validity.
Table 3 reports instrumental variables estimates of the relationship between VMT and
county unemployment rates using six monthly lags for the neighboring and neighbors of
neighboring county instruments. The specification in the first column obtains our previous
finding based on OLS estimation that the county unemployment rate has a negative
statistically significant effect on aggregate (county) VMT. The remaining specifications
show that the county unemployment rate has a statistically insignificant effect on an
individual driver’s VMT regardless of which of the two instruments we use and of whether
we include the various fixed effects in the specification. Moreover, we cannot reject our
exclusion restriction for any of the specifications as indicated by the p-values associated with
the over-identification tests (Hansen 1982). Taken together, we interpret those results as
Source: Ohio Department of Transportation
Fig. 5 County map of Ohio
266 J Risk Uncertain (2016) 52:255–280
strong evidence in support of our identification strategy. The estimated effects are also highly
economically insignificant; for example, the fifth specification in Table 3 allows us to
conclude with 95% confidence that a one percentage point increase in the unemployment
rate causes drivers to decrease their daily VMT by no more than 0.09 miles.14
However, this very small aggregate effect may mask heterogeneous and large effects for
different subpopulations. We exploit our disaggregated data to identify heterogeneous effects
of economic conditions on drivers’ VMT by including in our main regressions interactions of
the unemployment rate with both driver and vehicle characteristics. We interacted those
characteristics with the local unemployment rates instead of including those variables sepa-
rately to capture the idea that changes in the unemployment rate are likely to affect the mix of
drivers on the road by simultaneously affecting all motorists’ travel behavior. The driver’s
characteristics indicated whether the driver had filed an accident claim at any time during the
sample period, whether the driver is between the ages of 30 and 50 years old, whether the
driver is over 60 years old, and whether the driver lives alone or with only one other person.
The driver’s vehicle characteristics indicated whether it is at least five years old and whether it
is an SUVor a truck. In all of these specifications, we specified the county unemployment rate
by itself and its interaction with driver and vehicle characteristics, again using the neighboring
unemployment rates as instruments for the county unemployment rate.15
14 Given that we obtained very similar results with both sets of instruments, which are constructed using
different spatial information, it is likely that our empirical strategy and specification avoid potential spatial
autocorrelation of the error terms.
15 We obtained similar results when we used the unemployment rates of the neighbors of neighboring counties
as instruments.
Source: US Bureau of Labor Statistics
.0
5
.1
.1
5
.2
N
e
ig
h
b
o
ri
n
g
C
o
u
n
ty
U
n
e
m
p
lo
ym
e
n
t
R
a
te
.05 .1 .15 .2
County Unemployment Rate
Fig. 6 Relationship between a Ohio county’s and its neighbors’ unemployment rates
J Risk Uncertain (2016) 52:255–280 267
Empirical evidence obtained by other researchers suggests that in addition to a
driver’s accident history these demographic categories could be important and
distinct. Tefft (2012) provides evidence from 1995 to 2010 that mileage-based crash
rates were highest for the youngest drivers, who are under-represented in the State Farm
data, and decreased until age 60, after which they increased slightly. It is reasonable to
characterize drivers in small (1 or 2 person households) as less likely to drive as safely
as drivers in larger households, because automobile insurance companies consider
married people to be safer drivers than their unmarried counterparts, as evidenced by
the significant discounts on their automobile insurance rates that they offer. In addition,
people in households with children tend to see themselves as role models in road safety
for their children (Muir et al. 2010).
NHTSA (2013) provides evidence that drivers’ safety declines with the age of their
vehicles. Recent safety improvements, in particular electronic stability control systems
that make vehicles less likely to flip, are responsible for at least part of the drop in
deaths associated with the latest model year vehicles.16 Various other studies indicate
16 As a rough attempt to control for the type of people who drive new cars, Andrea Fuller and Christina
Rogers, BSafety Gear Helps Reduce U.S. Traffic Deaths,^ Wall Street Journal, December 19, 2014 report that
new models from 2013 had a noticeably lower fatality rate than comparable brand-new cars had five years
earlier.
Table 3 Unemployment rates and vehicle miles traveled: instrumental variable (IV) estimates
Dependent
variable:
(1) (2) (3) (4) (5)
County VMTa Indiv. VMT Indiv. VMT Indiv. VMT Indiv. VMT
County
unemploy-
ment rate
–1.08*** (0.30) –0.18 (0.22) 0.22 (0.58) 0.01 (0.20) 0.03 (0.06)
County fixed
effects?
Y Y Y N N
Year-month
fixed effects?
Y Y Y Y Y
Driver fixed
effects?
N N Y Y
IVs from?
Neighbors of
neighboring
counties
Neighboring
counties
Neighbors of
neighboring
counties
Neighboring
counties
Neighbors of
neighboring
counties
J-statistic
(p-value)
1.28 (0.97) 2.82 (0.83) 3.70 (0.72) 2.32 (0.89) 0.91 (0.99)
R2 0.42 0.03 0.03 0.58 0.61
Num.
observations
4312 291,834 291,834 291,834 291,834
a Dependent variable is measured as a daily average
Specifications (2) and (4) are estimated using six lags of monthly unemployment rates in neighboring counties
as instruments. Specifications (1), (3), and (5) are estimated using six lags of monthly unemployment rates in
neighbors of neighboring counties as instruments. J-statistics are reported from Hansen’s (1982) over-
identification test. Robust standard errors clustered by county are reported in parentheses
*** 99% significance level, ** 95% significance level, * 90% significance level
268 J Risk Uncertain (2016) 52:255–280
that while drivers’ safety increases when they travel in vehicles in larger size classifi-
cations such as SUVs and trucks (for example, Jacobsen 2013), drivers of those
vehicles tend to be safer than other drivers regardless of the vehicles they drive. Train
and Winston (2007), for example, found that drivers in households with children are
more likely to own SUVs and vans than are other drivers. A counterargument is that
such drivers may engage in risky offsetting behavior by driving recklessly in their
larger and safer vehicles (Peltzman 1975), but there are other factors that lead drivers to
select those vehicles that apparently enable them to be classified by automobile
insurance companies (including State Farm) as safer drivers when compared with
drivers of other vehicle size classifications.17
Two potential sources of endogeneity are present in the regressions. First, unob-
served determinants of driving behavior could be correlated with local unemployment
rates, but our instrumental variables should control for those unobservables following
the earlier argument and empirical support for our instruments. Second, unobserved
determinants of driving behavior could be correlated with drivers’ demographic char-
acteristics, or with attributes of the vehicles they drive. This source of endogeneity,
however, should not affect our causal interpretation that the coefficients simply capture
heterogeneous effects of local unemployment on individual driving behavior. For
example, if we find that drivers over the age of 60 decrease their VMT in response
to local unemployment, it does not matter whether the reduction is attributable to age
itself or attributable to an unobserved factor—like retirement—that is correlated with
age. This would not undermine our central finding that different drivers respond
differently to changes in local economic conditions.18
The parameter estimates in the first column of Table 4 show that even after
controlling for other factors, the county unemployment rate’s average effect on an
individual driver’s VMT remains statistically insignificant. 19 But the statistically
significant coefficient estimates on the various interaction terms show that drivers
who experienced an accident during the sample period, who were over the age of 60,
and who lived either by themselves or with only one other person did significantly
reduce their VMT as the county unemployment rate increased. In contrast, drivers who
17 Since 2009, total U.S. vehicle traffic and pedestrian deaths have been declining and pedestrian deaths as a
share of total vehicle-related deaths have been increasing. This could indicate that some offsetting behavior has
been occurring or it may indicate that recent safety improvements protect vehicle occupants more than they
protect pedestrians or that growing urbanization has increased pedestrian traffic.
18 Although we do not invoke an exogeneity assumption about the effect of socioeconomic characteristics on
utilization, it is worth noting that a long line of empirical research on consumers’ utilization of durable goods
(for example, Dubin and McFadden 1984) has argued that it is reasonable to treat socioeconomic character-
istics, such as drivers’ ages and household size, as exogenous influences on VMT, and that Winston et al.
(2006) did not find that VMT had an independent effect on the probability of a driver being involved in an
accident. It is possible that unobserved variables that influence VMT are correlated with a driver’s age and
household size, but our primary interest in estimating the VMT regression is to explore whether the effect of
unemployment on driving is different for different groups of people. As noted, the drivers in the State Farm
data may not be representative of the population of drivers in Ohio, but our central goal is to document the
selected effects of unemployment on those drivers’ VMT. Finally, Mannering and Winston (1985) showed that
although, in theory, VMT is jointly determined with vehicle type choice (i.e., make, model, and vintage), and
thus with vehicle characteristics, Mannering (1983) has argued that vehicle characteristics could be treated as
exogenous in VMT equations if VMT were being analyzed over a short time period as we do here.
19 Our basic findings did not change for any of the specifications in the table when we specified VMT in
logarithms to control for the possibility that different demographic groups had substantially different VMT
baselines.
J Risk Uncertain (2016) 52:255–280 269
were between the age of 30 and 50 increased their VMT as the county unemployment
rate increased.
The parameter estimates in the second column indicate that, all else constant, the
county unemployment rate’s effect on an individual driver’s VMT was statistically
insignificant, but that drivers of vehicles that were at least five years old reduced their
VMT as the county unemployment rate increased and that drivers of SUVs and light
trucks increased their VMT as the county unemployment rate rose. Finally, as shown in
the third column, any possible bias in the parameters of any of the socioeconomic
characteristics does not appear to affect the estimates of the vehicle characteristics (and
vice-versa), because the estimated parameters of both sets of characteristics change
little when they were included in the same specification.
Because unemployment and driver behavior in Ohio are quite seasonal, we included
year-month dummies in all regressions of interest. It is possible that those seasonal
effects could influence different drivers differently. For example, drivers over the age of
60 may have different seasonal driving patterns than younger drivers (e.g., they may
drive less when it is dark, and therefore drive less during the winter than other groups
drive). We took two approaches to explore those possible patterns in our data. First, we
interacted monthly dummy variables with a given demographic characteristic, but we
did not find any changes in the results.20 Second, we estimated all of the coefficients
separately for months with inclement weather, including all the winter and some spring
months (December–May), and for other months. However, we were unable to obtain
statistically significant differences between the two seasonal models, which we ac-
knowledge may be due to a lack of statistical power.
As we summarize in the following chart, the general thrust of our estimation results
is that economic fluctuations, as indicated by changes in the unemployment rate, affect
the VMTof individual drivers, as characterized by various characteristics, differentially.
Characteristic High risk or low risk? Impact of recession on VMT
Accident claim filed High Negative
Age 30–50 Low Positive
Age 60+ High Negative
Lives in 1–2 person household High Negative
Car 5+ years old High Negative
SUV or truck Low Positive
Moreover, it appears that these heterogeneous effects cause riskier drivers to reduce
their VMT, while at the same time causing safer drivers to increase their VMT. It is
reasonable to interpret the estimated change in the overall mix of drivers as conservative,
because it is likely that safer drivers are already overrepresented in the State Farm data.
Why, compared with other drivers, do riskier drivers appear to reduce their VMT during a
recession even if their employment situation remains unchanged? One possibility is that a
correlation between less safe drivers and risk aversion is reinforced by economic downturns.
20 Indeed, less than 10% of the variation in daily VMT interacted with the driver characteristics in Table 4
across months in our sample, which substantially limited the scope for different seasonal driving patterns
across demographic groups to explain our findings.
270 J Risk Uncertain (2016) 52:255–280
For example, Dohmen et al. (2011) conducted a study of attitudes toward risk in different
domains of life and found that older people were much less willing than younger people to
take risks when driving, which could lead to them taking fewer risky trips, such as driving in
bad weather, late at night, on less-safe roads, or after they had been drinking. Individuals in
our sample who were previously involved in an accident may have also developed some new
aversion to driving and may thus have taken fewer risky trips during the recession. The
financial stress caused by a recession may even result in drivers who were not initially risk
averse to take fewer risky trips. Cotti and Tefft (2011) found that alcohol-related accidents
declined during 2007–2008 and Frank (2012) reported that accidents and VMT declined
between 2005 and 2010 during the times of day (generally late at night) that are considered to
be the most dangerous times to drive. Both changes in driving behavior could reflect less risk
taking by older drivers, drivers living in small households, and other drivers whose charac-
teristics were associated with more risky behavior during normal economic conditions.21
At the same time, the recession could also induce some people to offset a potential loss in
income by increasing their work effort, which could include taking jobs that involve longer
commutes to work by automobile, taking an additional job that requires more on-the-job
driving, and so on. Those responses could explain why we find that drivers of prime
working ages and drivers of utility vehicles like SUVs and trucks increased their VMT.22
21 Bhatti et al. (2008) reported that individuals in France were less likely to drive while they were sleepy soon
after they retired from the workforce. The changes in driving behavior may also reflect less risk taking by the
youngest drivers, who are generally included among the most risky drivers. However, as noted, the State Farm
data tended to include very few of those drivers, so we could not identify how they adjusted their VMT in
response to the recession.
22 Note we are suggesting that those drivers increased their VMTon vehicles that were used for work-trips and
non-work trips, not on vehicles that were used for commercial purposes only.
Table 4 The effect of unemployment on VMT accounting for driver and vehicle characteristics: instrumental
variable estimates
Dependent variable: (1) (2) (3)
Indiv. average daily
VMT
Indiv. average daily
VMT
Indiv. average daily
VMT
County unemployment rate 0.32 (0.24) –0.33 (0.23) 0.32 (0.24)
×1 (Driver filed accident claim during sample
period?)
–0.31*** (0.04) –0.31*** (0.04)
×1 (Driver between 30 and 50?) 0.59*** (0.07) 0.52*** (0.07)
×1 (Driver over 60?) –0.90*** (0.06) –0.85*** (0.06)
×1 (Driver in a 1 or 2 person household?) –0.30*** (0.03) –0.28*** (0.03)
×1 (Vehicle is at least five years old?) –0.73*** (0.04) –0.57*** (0.04)
×1 (Vehicle is an SUV or Truck?) 0.76*** (0.06) 0.54*** (0.05)
County fixed effects? Y Y Y
Year-month fixed effects? Y Y Y
R2 0.10 0.05 0.11
Num. Observations 291,834 291,834 291,834
Notes: The dependent variable is individual average daily VMT. The county unemployment rate is interacted
with dummy variables as listed in each specification. All specifications are estimated using six lags of monthly
unemployment rates in neighboring counties as instruments. Robust standard errors clustered by county are
reported in parentheses
*** 99% significance level, ** 95% significance level, * 90% significance level
J Risk Uncertain (2016) 52:255–280 271
This behavior implies that the decline in VMT that is generally observed during recessions is
likely to be primarily explained by a decrease in commercial and on-the-clock driving,
including for-hire trucking, other delivery services, and certain business-related driving
during the workday. Indeed, data provided to us by the Traffic Monitoring Section of the
Ohio Department of Transportation indicated that as of 2013, vehicle-miles-traveled by
trucks on the Ohio state system of roads, including interstates, U.S. Routes, and state routes,
had declined notably during the recession and continued to do so even thereafter (roughly
10% during our sample period).
3 Implications for automobile safety
The final step in our analysis is to link the change in different drivers’ VMT to potential
improvements in automobile safety. As noted, we lack the statistical power to analyze
individual drivers’ fatalities, so we analyze the determinants of total monthly automo-
bile fatalities in each of Ohio’s 88 counties.23 For each county in each month, we
compute the average daily VMT of drivers in the State Farm sample, and we obtain the
number of motor vehicle occupant fatalities from the National Highway Traffic Safety
Administration’s Fatality Analysis Reporting System (FARS) database.
We include monthly fixed effects to capture statewide trends such as changes in gasoline
prices and alcohol consumption.24 In addition, effective August 31, 2012, a new Ohio law
prohibited persons who were less than 18 years of age from texting and from using an
electronic wireless communications device in any manner while driving. Abouk and Adams
(2013) found that texting bans had an initial effect that reduced accident fatalities but that this
effect could not be sustained. In any case, our monthly fixed effects capture the introduction
of this ban. It is possible that monthly fixed effects may not capture a trend like traffic
congestion if congestion affects traffic fatalities and changes significantly across counties
over time. However, Ohio does not have many highly-congested urban areas and the six that
are included in the Texas Transportation Institute’s Urban Mobility Report, Dayton, Cin-
cinnati, Cleveland, Toledo, Akron, and Columbus, experienced little change in congestion
delays during our sample period.
We also specify county fixed effects, which capture the effect of variation in police
enforcement of maximum speed limits and other traffic laws, differences in roadway
topography and conditions, and other influences that vary geographically, on highway
fatalities.25
The first column of parameter estimates in Table 5 shows that VMT and the county
unemployment rate on their own do not affect fatalities, but their interaction does have a
statistically significant (at the 90% level) negative effect on vehicle fatalities. Based on our
23 Although our data from State Farm include individual drivers’ claims related to predominantly non-fatal
accidents, we found that even those claims were too infrequent to analyze empirically.
24 We obtained average monthly gasoline prices from GasBuddy.com that varied by county, but when we
included them in the model they had a statistically insignificant effect on fatalities and had no effect on the
other parameter estimates. This is not surprising given that we include the year-month fixed effects.
25 DeAngelo and Hansen (2014) found that budget cuts in Oregon that resulted in large layoffs of roadway
troopers were associated with a significant increase in traffic fatalities and The National Economic Council
(2014) concluded that poor road conditions were associated with a large share of traffic fatalities.
272 J Risk Uncertain (2016) 52:255–280
previous estimation results, we hypothesize that increases in the unemployment rate reduce
automobile fatalities by increasing the share of total VMT accounted for by safer drivers.
While this specification cannot capture any effect of the changing composition of drivers, we
can capture that effect by estimating the determinants of fatality rates. Because we found that
unemployment does not significantly affect the VMT of the average driver in our sample,
any reduction in the average county fatality rate due to unemployment must be attributable
to a reduction in the average fatality rate of all drivers. Such a reduction could occur only if
there was a change in the composition of VMT for the drivers in our sample, or if some
motorists drove more safely as unemployment rose.
The second column of Table 5 presents the results of OLS estimates showing that an
increase in the county unemployment rate does appear to reduce the average county
fatality rate. The magnitude of the effect of unemployment on the fatality rate is
potentially underestimated because a decline in VMT due to increasing unemployment,
which we reported in our OLS estimates in Table 2, columns 1 and 2, would by itself
mechanically increase the fatality rate. In column 3, we address this potential bias by re-
estimating the model using six lags of the unemployment rate in neighboring counties as
instruments for the local unemployment rate.26 As expected, the resulting estimates
show that the effect of the county unemployment rate on the fatality rate increases—in
fact, nearly doubles—and that this effect is statistically significant. The average daily
fatality rate in our sample is 0.03; thus, our estimated coefficient implies that a one
percentage point increase in unemployment reduces the fatality rate by roughly 16%.27
Of course, other influences on the driving environment within counties may vary over
time and thus help to explain why observed automobile fatalities declined during our sample
period. In the fourth and fifth columns of Table 5, we present estimates that include per-
capita transfers from the state of Ohio to each county, including both intergovernmental
transfers from the state to counties and direct capital spending by state government within
each county. Those variables control for financial conditions that may be correlated with the
driving environment that motorists encounter in different counties, and that affect highway
safety. We also include a measure of cold weather conditions—the number of days with
minimum temperatures less than or equal to 32 degrees Fahrenheit—which may adversely
affect highway safety.28
The parameter estimates reported in columns (4) and (5) indicate that the capital transfers,
which are primarily used to improve transportation and infrastructure, reduce fatalities per
26 The instrumental variable parameter estimates presented in this column and in the other columns of the table
were statistically indistinguishable from those that were obtained when we used the unemployment rate in
neighbors of neighboring counties as an instrument.
27 We obtain this figure by multiplying a hypothetical one percentage point increase in the unemployment rate
by the coefficient capturing its effect on the fatality rate and expressing it as a percentage of the average fatality
rate in the sample (i.e., −0:49�1%
0:03
≈16%.)
28 Annual county level financial data (expressed in 2013 dollars) are from the Ohio Legislative Service
Commission. The majority of capital spending is allocated to transportation and infrastructure, while the
majority of subsidies are allocated to Revenue Distribution, Justice and Corrections, and Education and Health
and Human Services and some is also allocated to local governments for infrastructure. Monthly weather data
are from the National Climatic Data Center of the National Oceanographic and Atmospheric Administration.
We used readings from local weather stations in 76 Ohio counties. For the 12 counties without fully
operational stations, we used data from the neighboring county with the longest shared border. We also
explored using a precipitation measure of weather, but a number of weather stations did not report that
information.
J Risk Uncertain (2016) 52:255–280 273
VMT, and their effect has some statistical reliability. However, the intergovernmental
transfer and weather measures are statistically insignificant, perhaps because they vary
insufficiently across Ohio counties to allow us to identify their effects. In any case, the
effect of the unemployment rate is only slightly reduced by including those variables, and
remains statistically significant. As a further robustness check, we control for any time-
varying effects on fatalities that may differ between urban and rural counties, which could
include changes in commercial driving and congestion, by specifying separate, fully flexible
time trends for those county classifications.29 The traffic fatality rate in 2012 on Ohio’s non-
interstate rural roads was 2.15 per 100 million miles of travel compared with a traffic fatality
rate of 0.63 on all its other roads (TRIP 2014). The estimates presented in the fifth column
show that including those time trends increases the regression’s overall goodness of fit, but
again has no effect on the estimated parameter for the county unemployment rate, which
increases the confidence we have in the validity of our instrumental variables.
Based on the specification in the last column of Table 5, a one percentage
point increase in unemployment reduces the fatality rate 14% on average.30
29 Urban counties are defined as those in which more than 50% of the population lives in an urban setting as
defined by the 2010 U.S. Census.
30 As before, this figure is obtained by multiplying a hypothetical one percentage point increase in the
unemployment rate by the coefficient capturing its effect on the fatality rate and expressing it as a percentage
of the average fatality rate in the sample, so −0:43%
0:03
≈−14%. The decline in Ohio’s unemployment rate during
2009 to 2012 was associated with a modest increase in its fatality rate, but that association does not hold any
other effects constant, such as alcohol consumption, which would affect the fatality rate.
Table 5 Automotive fatalities, unemployment, and VMT
Dependent variable (1) (2) (3) (4) (5)
Fatalities
Fatalities/
Daily
VMT
Fatalities/
Daily
VMT
Fatalities/
Daily
VMT
Fatalities/
Daily
VMT
Average daily VMT 0.01 (0.01)
County unemployment
rate
0.19 (0.36) –0.26*** (0.09) –0.49** (0.19) –0.43** (0.20) –0.43** (0.21)
County unemployment
rate × average daily VMT
–0.12* (0.07)
Per capita state-to-county
transfers, subsidy (Millions)
1.53 (6.65) –2.00 (6.57)
Per capita state-to-county
transfers, capital (Millions)
–7.84 (4.99) –7.16 (4.91)
Number of days with minimum
temperature less than or
equal to 32 F × 100
–0.03 (0.04) –0.01 (0.04)
County fixed effects? Y Y Y Y Y
Year-month fixed effects? Y Y Y Y N
Year-month-urban county
fixed effects?
N N N N Y
Estimation method OLS OLS 2SLS 2SLS 2SLS
R2 0.46 0.42 0.42 0.42 0.44
Number of obs. 4312 4312 4312 4312 4312
Robust standard errors clustered by county are reported in parentheses
*** 99% significance level, ** 95% significance level, * 90% significance level
274 J Risk Uncertain (2016) 52:255–280
Instrumental variables estimate local average treatment effects; thus, extrapola-
tion of this estimate to the entire United States should be done with caution.
That said, it is plausible to use our estimate to simulate the safer driver
composition of VMT that results from a one percentage point increase in
unemployment throughout the country, which implies that we could reduce
the roughly 34,000 annual fatalities by as many as 4800 lives per year. 31
Extrapolating our results to estimate the effects of more dramatic economic
shocks, such as the 4 to 8 percentage point increases in unemployment expe-
rienced by some parts of the country during the Great Recession, would be
inappropriate and quite likely to be misleading.32
In addition to the benefits from fewer fatal accidents, changing the mix of
VMT to reflect a larger share of safer drivers would reduce injuries in non-fatal
accidents, vehicle and other property damage, and congestion. Accounting for
the reductions in all of those social costs by assuming plausible values for life
and limb, time spent in congested traffic, and the cost of repairs yields an
estimate of total annual benefits in the tens of billions of dollars with some
favorable distributional effects.33
Taking a broader perspective, our estimate may be conservative if a national
recession reduces average VMT even if an increase in local unemployment does
not. Thus, a recession may have a direct (linear) effect and a compositional
effect that reduces VMT, and it is possible that both effects may be mediated
through macroeconomic variables other than the unemployment rate.
31 Ohio’s 2012 fatality rate per 100,000 people of 9.73 is reasonably close to the average fatality rate of all
states and the District of Columbia of 10.69 (Sivak 2014). Thus our extrapolation based on Ohio drivers’
behavior and safety environment should not be a poor prediction of the likely nationwide improvement in
automobile safety. Because we have tried to hold commercial driving constant in this specification, which
generally declines when unemployment increases thereby reducing fatal accidents, we have probably
overestimated the precise number of lives that would be saved. However, our estimate of annual lives saved
in the thousands is of the right order of magnitude.
32 Although we observe within county variation of unemployment of as much as 4 to 8 percentage points
during our sample period, it is important to note that we can use only the component of that observed variation
that is induced by changes in our instruments, the neighboring counties’ unemployment rate and the neighbors
of neighboring counties’ unemployment rate, to identify the effects of unemployment on driving behavior.
Because the amount of variation in our instruments is not equivalent to the amount of variation in a county’s
unemployment rate, instead their variation is roughly three-quarters of the amount of variation in a county’s
unemployment rate (see, for example, Fig. 6), and because the largest of the first stage coefficients for our
instruments is roughly 0.66, we caution readers not to extrapolate the impact of a change in unemployment
that exceeds 2 percentage points. This caution is especially warranted because the effect of unemployment on
fatalities may be non-linear, with the first percentage point drop in unemployment, for example, inducing a
decrease in driving by the most dangerous drivers and subsequent percentage point decreases in unemploy-
ment not having nearly the same effect on the composition of VMT and fatalities. We tried to estimate non-
linear effects in a more flexible specification, but we were unable to obtain statistically precise coefficient
estimates. We suspect that this may be due to insufficient statistical power; hence, we maintain that the non-
linear relationship between unemployment and traffic fatalities is a valid topic of interest that merits further
research.
33 Per capita pedestrian death rates from automobile accidents are greater in lower income census tracts than in
higher income census tracts. Data provided to us by Governing magazine, published in Washington, D.C.,
shows that this difference has widened as the U.S. economy has come out of the recession and unemployment
has decreased. Changing the mix of VMT to reflect a larger share of safer drivers would reduce the difference
in pedestrian death rates across census tracts with different levels of income.
J Risk Uncertain (2016) 52:255–280 275
Finally, we previously hypothesized that our estimate of benefits may be
conservative because the share of risky drivers in our sample is likely to be
less than the share of risky drivers in the population. To test this possibility, we
estimated weighted regressions based on weights we constructed from data
provided by State Farm on the household characteristics of drivers in Ohio’s
population. Specifically, we re-estimated the specification in Table 5 weighting
the regression based on the age of drivers in Ohio’s driving population, the
most important potential source of sampling bias, and we found that the effect
of a one percentage point increase in unemployment increased the reduction in
the fatality rate to 22%, on average, which confirms that our estimates based on
the unweighted regressions are conservative.34
We did not estimate a weighted regression that simultaneously accounted for
all the variables that may reflect sampling bias, including household and
vehicle characteristics, because that estimation requires us to observe the joint
distribution of all those characteristics in the population to accurately construct
the sampling weights, which we were unable to do. In any case, reweighting
our initial regression to reflect the fact that our sample of drivers is safer than
the drivers in the population would likely show that we are underestimating the
effect of unemployment on fatality rates.
4 Qualifications and policy implications
We have addressed the long-standing puzzle in automobile safety of why
fatalities per vehicle-mile decline during recessions by showing that a downturn
in the economy causes the mix of drivers’ VMT to change so that the share of
riskier drivers’ VMT decreases while the share of safer drivers’ VMT increases.
This combination results in a large reduction in automobile fatalities. It is also
possible that this result arises partly because riskier drivers actually drive more
safely—rather than simply driving less—during recessions. To the extent that
this contributes to the result we observe, however, it reinforces our argument
that a key to improving highway safety is to reduce the safety differential
between drivers with varying degrees of riskiness.
We were able to perform our empirical analysis by obtaining a unique,
disaggregated sample of Ohio drivers. The sample’s main drawback is that
middle-aged (and thus arguably safer) drivers are over-represented, while youn-
ger and arguably more dangerous drivers are under-represented. Nonetheless,
34 We constructed driving age sampling weights for the regressions using the following procedure. We
obtained year-month-county level data on the number of drivers in eight distinct age bins (under 25, 25–34,
35–44,…, 65–74, over 75) from State Farm. We used these data to estimate the age distribution of the
population of drivers in each year, month, and county. We then weighted each observation in our data set by
the relative probability that it was sampled (where that weight is given by: Weight ¼ Pr in Pop:ð ÞPr in Sampleð Þ) and re-
estimated the regressions by weighted least squares. We also constructed household size and vehicle type
sampling weights by an analogous procedure using data from State Farm on the household size and vehicle
type distributions of the population of drivers in each year, month, and county. We again found that our
weighted regressions tended to yield estimates of the effect of unemployment on the fatality rate that were
greater than the estimates obtained by the unweighted regressions.
276 J Risk Uncertain (2016) 52:255–280
we were still able to observe sufficient heterogeneity among drivers to docu-
ment our explanation of the automobile safety puzzle and to show that aggre-
gate data, which continues to be used to analyze highway safety, is potentially
misleading because it obscures differences among drivers and their responses to
varying conditions that affect vehicle use and road safety. Indeed, the extent of
aggregation bias may be even greater in a representative sample of Ohio drivers
because such a sample would include a greater share of younger drivers and
would capture more heterogeneity than was captured in our sample. In addition,
our sensitivity tests using regressions that were weighted to more accurately
reflect the characteristics of drivers in the population indicated that our findings
based on the unweighted regressions were conservative.
While we control for the effects of many aspects of the driving environment,
our findings certainly do not rule out other possible explanations of the safety
puzzle. We hope to have motivated other scholars to build on our work and
findings by assembling and analyzing a more extensive and representative
disaggregated sample of drivers and their behavior.35
Because we have documented an instance of a natural economic force that
impels riskier drivers to drive less while not discouraging safer drivers, it
should give us hope that we could suggest a public policy that has the same
effect. However, our characterization of riskier drivers applies to an amorphous
group that includes drivers with a broad range of socioeconomic characteristics;
thus, it is difficult to apply our findings to develop a new well-targeted public
policy that could affect those drivers’ behavior and produce a substantial
improvement in highway safety.
Turning to conventional policies, Morris (2011) points out that from 1995 to 2009
annual traffic fatalities declined considerably less in the United States than in other
high-income countries and that officials in those countries attribute their improvement
in highway safety to more stringent regulations and penalties for driving offenses such
as speeding, drunk driving, and drug use, and to more aggressive and extensive police
enforcement of traffic safety laws. Although those measures might reduce motorists’
fatalities in the United States, it is not clear that they would do so by disproportionately
reducing the most dangerous drivers’ VMT.
In fact, an ongoing challenge to policymakers has been to improve automobile
safety efficiently by designing and implementing VMT taxes that reflect the riskiness of
different drivers (Winston 2013). Economists have pointed out that policies that have
been proposed to help finance highway infrastructure expenditures, such as raising the
gasoline tax for motorists or introducing a fee for each mile driven, could improve
highway safety by reducing VMT (Parry and Small 2005; Edlin and Karaca-Mandic
2006; and Langer et al. 2016). Anderson and Auffhammer (2014) have recently
35 Using a disaggregate data set to link VMT to the business cycle is also important to get a more precise
understanding of how much VMT will increase as the economy completes its recovery. For example, recent
aggregate estimates of VMT released by the Federal Highway Administration indicate that as of June 2015,
Americans’ driving has hit an all-time high, fueling calls for greater investment in highways that must bear
growing volumes of traffic. At the same time, some observers have claimed that younger people (specifically,
Millennials) are driving less than previous generations in their age group drove, which has implications for
forecasts of VMT growth and estimates of funds for future highway spending. In addition, the financial
success of any public-private highway partnerships will be affected by the accuracy of traffic growth estimates.
J Risk Uncertain (2016) 52:255–280 277
proposed a mileage tax that increases with vehicle weight to account for the fact that
heavier vehicles increase the likelihood that multi-vehicle accidents will result in
fatalities. However, our analysis suggests that those pricing policies do not fully satisfy
the challenge facing policymakers because they do not take account of the different
risks posed by different drivers and thus are not focused on reducing the most
dangerous drivers’ VMT.
Finally, automobile insurance companies have a strong interest in reducing accidents
and they offer discounts to drivers who drive safely; but, to the best of our knowledge,
they have not implemented a detailed VMT-based policy for rates that encourages the
most dangerous drivers to drive less.
Fortunately, it appears that recent technological advances in the automobile itself
may be able to accomplish what public policies cannot by effectively recreating in
expansionary periods the safer pool of drivers who are found on the road during
recessions. Specifically, exciting developments in autonomous automobile technologies
are currently being tested in actual driving environments throughout the nation and the
world. The transition to their eventual adoption on the nation’s roads is increasingly
likely to happen in the near future.
Driverless cars are operated by computers that obtain information from an array of
sensors on the surrounding road conditions, including the location, speed, and trajec-
tories of other cars. The onboard computers gather and process information many times
faster than the human mind can do so. By gathering and reacting immediately to real-
time information and by eliminating concerns about risky human behavior, such as
distracted and impaired driving, the technology has the potential to prevent collisions
and greatly reduce highway fatalities, injuries, vehicle damage, and costly insurance.
Additional benefits include significantly reducing delays and improving travel time
reliability by creating smoother traffic flows and by routing—and when necessary
rerouting—drivers who have programmed their destinations.
Driverless cars could affect the mix of VMT in two ways. First, during the transition
from human drivers to driverless cars, policymakers could allow the most dangerous
drivers, who ordinarily might have their driver’s licenses suspended or even revoked
following a serious driving violation or who have reached an age where their ability to
operate a vehicle safely has been seriously impaired, to continue to have access to an
automobile provided it is driverless or at the very least has more autonomy than current
vehicles. This would expedite the transition to driverless cars and help educate the
public and build trust in the new technology (Reimer 2014). At the same time, it would
immediately improve the safety of the most dangerous drivers on the road by giving
them legal and safe access to automobile travel when they might otherwise drive
illegally and—given their driving records or physical condition—dangerously.
Second, with the transition to driverless cars eventually complete, the risk among
drivers would be eliminated. To be sure, automobile accidents, even fatal ones, might
still occur. But that would pose a technological instead of a human problem, which our
society has historically found much easier to solve.
Acknowledgments We received valuable comments from Robert Crandall, Parry Frank, Ted Gayer,
Amanda Kowalski, Ashley Langer, Fred Mannering, Robert Noland, Don Pickrell, Chad Shirley, Kenneth
Small, Jia Yan, a referee, and the editor and financial support and useful suggestions from the AAA
Foundation.
278 J Risk Uncertain (2016) 52:255–280
Appendix
References
Abouk, R., & Adams, S. (2013). Texting bans and fatal accidents on roadways: Do they work? Or do drivers
just react to announcements of bans? American Economic Journal: Applied Economics, 5, 179–199.
Anderson, M. L., & Auffhammer, M. (2014). Pounds that kill: The external costs of vehicle weight. Review of
Economic Studies, 81, 535–571.
Bhatti, J. A., Constant, A., Salmi, L. R., Chiron, M., Lafont, S., Zins, M., & Lagarde, E. (2008). Impact of
retirement on risky driving behavior and attitudes toward road safety among a large cohort of French
drivers. Scandinavian Journal of Work, Environment & Health, 34, 307–315.
Coates, J. M. (2008). Endogenous steroids and financial risk taking on a London trading floor. Proceedings of
the National Academy of Sciences, 105, 6167–6172.
Cotti, C., & Tefft, N. (2011). Decomposing the relationship between macroeconomic conditions and fatal car
crashes during the Great Recession: Alcohol and non-alcohol-related accidents. The B.E. Journal of
Economic Analysis & Policy, 11, 1–22.
Crandall, R. W., Gruenspecht, H. K., Keeler, T. E., & Lave, L. B. (1986). Regulating the automobile.
Washington, DC: Brookings Institution.
DeAngelo, G., & Hansen, B. (2014). Life and death in the fast lane: Police enforcement and traffic fatalities.
American Economic Journal: Economic Policy, 6, 231–257.
Dohmen, T., Falk, A., Huffman, D., Sunde, U., Schupp, J., & Wagner, G. G. (2011). Individual risk attitudes:
Measurement, determinants, and behavioral consequences. Journal of the European Economic
Association, 9, 522–550.
Dubin, J. A., & McFadden, D. L. (1984). An econometric analysis of residential electric appliance holdings
and consumption. Econometrica, 52, 345–362.
Edlin, A. S., & Karaca-Mandic, P. (2006). The accident externality from driving. Journal of Political
Economy, 114, 931–955.
0
.2
.4
.6
.8
C
o
e
ff
ic
ie
n
t
o
n
L
a
st
I
n
st
ru
m
e
n
t
0 1 2 3 4 5 6
Number of Lags
Fig. 7 Strength of instruments. Note: Each bar corresponds to a single first stage regression with year-month
fixed effects, county fixed effects and the corresponding number of lagged instruments. For each regression,
we report the estimated coefficient on the most lagged instrument and its 95% confidence interval
J Risk Uncertain (2016) 52:255–280 279
Evans, W. N., & Moore, T. J. (2012). Liquidity, economic activity, and mortality. Review of Economics and
Statistics, 94, 400–418.
Frank, P. (2012). Hour-of-the-week crash trends between the years 2005–2010 for the Chicago, Illinois region.
Chicago Metropolitan Agency for Planning Working Paper.
Hansen, L. P. (1982). Large sample properties of generalized method of moments estimators. Econometrica,
50, 1029–1054.
Huff Stevens, A., Miller, D. L., Page, M. E., & Filipski, M. (2011). The best of times, the worst of times:
Understanding pro-cyclical mortality. NBER working paper 17657.
Jacobsen, M. R. (2013). Fuel economy and safety: The influences of vehicle class and driver behavior.
American Economic Journal: Applied Economics, 5, 1–26.
Langer, A., Maheshri, V., & Winston, C. (2016). From gallons to miles: A short-run disaggregate analysis of
automobile travel and taxation policies. University of Arizona working paper.
Mannering, F. L. (1983). An econometric analysis of vehicle use in multivehicle households. Transportation
Research A, 17A, 183–189.
Mannering, F., & Winston, C. (1985). A dynamic empirical analysis of household vehicle ownership and
utilization. Rand Journal of Economics, 16, 215–236.
Morris, J. R. (2011). Achieving traffic safety goals in the United States: Lessons from other nations.
Transportation Research News 272: New TRB Special Report, 30–33.
Muir, C., Devlin, A., Oxley, J., Kopinathan, C., Charlton, J., & Koppel, S. (2010). Parents as role models in
road safety. Monash University Accident Research Centre, Report No. 302.
National Economic Council. (2014). An economic analysis of transportation infrastructure investment.
Washington, DC: The White House.
National Highway Traffic Safety Administration. (2013). How vehicle age and model year relate to driver
injury severity in fatal crashes. Washington, DC: Traffic Safety Facts, NHTSA, U.S. Department of
Transportation.
Parry, I. W. H., & Small, K. A. (2005). Does Britain or the United States have the right gasoline tax? American
Economic Review, 95, 1276–1289.
Peltzman, S. (1975). The effects of automobile safety regulations. Journal of Political Economy, 83, 677–726.
Peterman, D. R. (2013). Federal traffic safety programs: An overview. Washington, DC: Congressional
Research Service Report for Congress.
Reimer, B. (2014). Driver assistance systems and the transition to automated vehicles: A path to increase older
adult safety and mobility? Public Policy and Aging Report, 24, 27–31.
Ruhm, C. J. (2000). Are recessions good for your health? Quarterly Journal of Economics, 115, 617–650.
Ruhm, C. J. (2013). Recessions, healthy no more? NBER working paper 19287.
Sivak, M. (2014). Road safety in the individual U.S. States: Current status and recent changes. University of
Michigan Transportation Research Institute report no. UMTRI-2014–20.
Stewart, S. T., & Cutler, D. M. (2014). The contribution of behavior change and public health to improved
U.S. population health. NBER Working Paper 20631.
Tefft, B. C. (2012). Motor vehicle crashes, injuries, and deaths in relation to driver age: United States, 1995–
2010. Washington, DC: AAA Foundation for Traffic Safety.
Train, K. E., & Winston, C. (2007). Vehicle choice behavior and the declining market share of U.S.
automakers. International Economic Review, 48, 1469–1496.
TRIP. (2014). Rural connections: Challenges and opportunities in America’s heartland. Washington, DC,
www.tripnet.org.
Winston, C. (2013). On the performance of the U.S. transportation system: Caution ahead. Journal of
Economic Literature, 51, 773–824.
Winston, C., & Mannering, F. (2014). Implementing technology to improve public highway performance: A
leapfrog technology from the private sector is going to be necessary. Economics of Transportation, 3,
158–165.
Winston, C., Maheshri, V., & Mannering, F. (2006). An exploration of the offset hypothesis using disaggregate
data: The case of airbags and antilock brakes. Journal of Rick and Uncertainty, 32, 83–99.
280 J Risk Uncertain (2016) 52:255–280
Journal of Risk & Uncertainty is a copyright of Springer, 2016. All Rights Reserved.
The effect of ambiguity on risk management choices:
An experimental study
Vickie Bajtelsmit1 & Jennifer C. Coats1 & Paul Thistle2
Published online: 24 July 2015
# Springer Science+Business Media New York 2015
Abstract We introduce a model of the decision between precaution and insurance
under an ambiguous probability of loss and employ a novel experimental design to test
its predictions. Our experimental results show that the likelihood of insurance purchase
increases with ambiguous increases in the probability of loss. When insurance is
unavailable, individuals invest more in precaution when the probability of loss is
known than when it is ambiguous. Our results suggest that sources of ambiguity
surrounding liability losses may explain the documented tendency to overinsure against
liability rather than meet a standard of care through precaution. The results provide
support for our theoretical predictions related to risk management decisions under
alternative probabilities of loss and information conditions, and have implications for
liability, environmental, and catastrophe insurance markets.
Keywords Liability. Imperfect information . Design of experiments . Laboratory
experiments
JEL Classifications K130 . D81 . C9 . C92
0
Two apparently conflicting puzzles consistently arise out of the empirical observation
of insurance markets. Both involve a tendency to make suboptimal insurance decisions
and have important implications for environmental risk mitigation, consumer decision
making, public finance, and firm profit maximization. First, there is substantial evi-
dence that individuals and businesses underinsure catastrophe risk (Kunreuther and
J Risk Uncertain (2015) 50:249–2
80
DOI 10.1007/s11166-015-9218-3
* Jennifer C. Coats
Jennifer.Coats@colostate.edu
Vickie Bajtelsmit
Vickie.bajtelsmit@colostate.edu
Paul Thistle
Paul.thistle@unlv.edu
1 Department of Finance and Real Estate, Colorado State University, Fort Collins, CO 80523, USA
2 Department of Finance, University of Nevada Las Vegas, Las Vegas, NV 89154, USA
http://crossmark.crossref.org/dialog/?doi=10.1007/s11166-015-9218-3&domain=pdf
Pauly 2004; 2005). The devastating cost of a failure to insure against catastrophe is
highlighted repeatedly with each natural disaster. Second, individuals and firms pur-
chase liability insurance even when neither law nor contract requires they do so. Given
that injurers are held liable under U.S. law only if they have failed to meet a reasonable
standard of care, expenditure on care could be a less expensive alternative to purchasing
actuarially unfair liability insurance. In the absence of the ability to take precaution
against accident, theory suggests that risk-averse individuals will fully insure when
actuarially fair insurance is available. In situations where insurance is not fairly priced
or where precaution is an alternative, the optimal choice depends on risk aversion,
insurer profit and risk loading, and the cost of precaution.
Although negligence liability can be avoided by exercising an appropriate level of
care, there are many sources of uncertainty that could explain the existence of the
thriving liability insurance market in the U.S. The theoretical literature suggests that
insurance demand may be explained by uncertainty regarding one’s own risk type
(Bajtelsmit and Thistle 2008; 2015), the mechanics of the pooling mechanism
(DeDonder and Hindriks 2009), the cost of taking precaution (Bajtelsmit and Thistle
2009), potential for errors by the courts (Sarath 1991), and the risk of momentary lapses
in judgment by oneself or others (Bajtelsmit and Thistle 2013). Uncertainty may be
especially profound in the face of environmental risks. Riddel (2012) notes that
environmental gambles involve greater uncertainty surrounding the probability,
severity, and welfare loss effects of outcomes. In a comprehensive overview of
environmental risk management, Anderson (2002) highlights the extensive degree of
ambiguity surrounding potential environmental losses, even from the standpoint of
risk-neutral corporations. In addition to the usual risks related to property, liability, life
and health, environmental risks may include ethical, cultural, business, reputational,
and regulatory uncertainty. Anderson also notes that the interpretation of preventive
measures under environmental liability is particularly vague compared to other liability
standards. Therefore, the degree of ambiguity that surrounds the court’s judgment of
whether a defendant has met the standard of care is likely to be higher in environmental
liability cases than under other liability cases. We view a greater understanding, in
general, of precaution and insurance decisions under ambiguity as a crucial step
towards understanding these tradeoffs under particular types of ambiguity, such as that
created by environmental risks.
In this paper, we show theoretically that, when the probability of loss is more
ambiguous, the demand for insurance increases. However, the ambiguity may increase
or decrease expenditure on precaution, depending on assumptions related to the cost
and benefit of precautionary spending. We test these results empirically in a laboratory
experiment in which participants make decisions about insurance and precaution under
different ambiguity conditions.
We extend the literature on the market for insurance in several dimensions.
First, we develop a model which includes mistakes as a source of ambiguity
underlying the decision between precaution and insurance and shows that
ambiguity aversion increases insurance demand. Second, we employ a novel
experimental design to test the predictions of the model. To our knowledge,
ours is the first study to model the effect of ambiguity on precaution and
insurance in this way and to use the experimental method to investigate the
choice between precaution and insurance. Third, the experimental design also
250 J Risk Uncertain (2015) 50:249–280
allows us to test previous theoretical findings related to the choice between
precaution and insurance by individuals with heterogeneous probabilities of
loss. In particular, Bajtelsmit and Thistle (2008) show that the optimal insur-
ance contract leads individuals with high probability of loss to meet the
standard of care and thereby avoid liability, whereas individuals with low
probability of loss prefer to purchase insurance and take less precaution. Their
results imply that individuals who have a preference for taking full precaution
when insurance is unavailable will switch to insurance if it becomes available
at a comparable cost. Finally, our design, parameters, and framing allow us to
contribute additional evidence to existing mixed results related to the decision
to insure against low-probability, high-severity losses.
Our primary motivation is to test whether ambiguity surrounding the prob-
ability of a loss impacts the demand for precaution and insurance, as suggested
by our theoretical model. To our knowledge, ours is the first laboratory study to
allow a choice between buying insurance and exercising a level of precaution
to achieve a desired level of risk of a loss. 1 The experimental design requires
participants to make precaution and insurance decisions under different condi-
tions, some of which involve risks with known probability distributions and
others in which the probability of loss is unknown or ambiguous to both the
experimenter and the participant. Participants make decisions under conditions
of low and high probability of loss. In some treatments, participants can pay for
a desired level of precaution and, in others, they can choose to buy insurance
or alternative levels of precaution. To determine whether ambiguity of the loss
distribution affects participants’ precaution and insurance decisions, in some
treatments the participants are subject to an additional unknown risk of loss.
By using a similar experimental design, as well as similar parameters and
framing, we confirm the experimental results of Laury et al. (2009) that
individuals are more likely to purchase insurance in the low probability treat-
ments, after controlling for other factors such insurance pricing and loss
severity. Empirical analysis of participant decisions under conditions of known
versus ambiguous loss probabilities shows that the likelihood of insurance
purchase increases with ambiguous increases in the probability of loss and that,
when insurance is unavailable, individuals invest more in precaution when
probability of loss is known than when it is unknown. Our results also provide
support for theoretical findings in Bajtelsmit and Thistle (2008): in the absence
of ambiguity, participants are more likely to purchase insurance in the low
probability treatments and those who prefer full precaution when insurance is
unavailable switch to insurance when it is available.
The next section reviews the theoretical and experimental literature related to the
purchase of insurance against liability and catastrophe losses and presents a theoretical
model to analyze the impact of ambiguity on insurance and precaution decisions. The
laboratory experiment, which closely follows the theory setup, is described in Section 2.
We formalize our hypotheses in Section 3, summarize the empirical analysis and results
in Section 4 and provide conclusions in Section 5.
1 However, several papers do examine risk mitigation or endogenous risk, without considering the role of
insurance—such as Fiore et al. (2009) and Harrison et al. (2010).
J Risk Uncertain (2015) 50:249–280 251
1 Background and theory
1.1 Background
The extensive theoretical literature on insurance demand provides several explanations
for the purchase of liability insurance. Under the standard model of expected utility
theory, these include risk aversion of agents, uncertainty/ambiguity related to proba-
bility of loss, cost of care, and operation of the legal system. This literature has
generally distinguished individual insurance decisions from corporate insurance deci-
sions. Theoretically, risk neutral corporations should not be willing to buy insurance at
actuarially unfair prices. However, agency theory suggests that risk-averse managers
might be motivated to do so on behalf of the firm, in order to protect their own
employment and/or reputations (see, for example, Greenwald and Stiglitz 1990; Han
1996; Mayers and Smith 1982).
A second strand of the insurance literature, also based on standard expected
utility theory, focuses on individual decision-making under ambiguity (when the
probability of loss is not objectively known). Although the risk of negligence
liability can be avoided by exercising an appropriate level of care, there are many
sources of ambiguity related to understanding the risk, satisfying the negligence
standard, and judicial enforcement of the standard. For example, potential injurers
may face uncertainty about their own risk type (Bajtelsmit and Thistle 2008), the
mechanics of the pooling mechanism (DeDonder and Hindriks 2009), or the cost of
taking precaution to avoid risks (Bajtelsmit and Thistle 2009). Shavell (2000)
illustrates that uncertainty regarding negligence standards results in a level of care
that exceeds a socially optimal level. The potential for errors by the courts (Sarath
1991) and the possibility of injuries caused by momentary lapses in judgment, either
one’s own mistakes or another agent’s (Bajtelsmit and Thistle 2013), theoretically
have been shown to justify a market for insurance.
A more generalized stream of research investigates decision-making under risk and
uncertainty according to both standard and non-standard risk preferences. While there
are many potential sources of ambiguity in a liability case, as discussed above, our
experimental design and analysis adopts Camerer and Weber’s (1992) definition of
ambiguity: Buncertainty about probability created by missing information that is rele-
vant and could be known^ (p. 330). They note further that Bif ambiguity is caused by
missing information, then the number of possible distributions . . . might vary as the
amount or nature of missing information varies^ (p. 331). In several treatments in our
experiment, participants make decisions that depend on outcomes whose probabilities
they have estimated with varying degrees of missing information, but are unknown at
the time either to themselves or the experimenters.
A vast literature related specifically to risk preferences suggests that Bnonstandard^
features, not included in expected utility theory, drive behavior. Non-expected utility
theories include alternative decision-weighted probability models, prospect theory by
Kahneman and Tversky (1979), and Tversky and Kahneman’s cumulative prospect
theory (1992), which combine probability-weighting with different risk preferences
over gains and losses. 2 Prospect theory suggests that individuals underestimate or
2 See Starmer (2000) for a review.
252 J Risk Uncertain (2015) 50:249–280
ignore very low probability events and the primary explanation in the literature given
for underinsurance of catastrophic loss is that individuals may ignore probabilities
below a certain threshold.3
Laboratory experiments on insurance purchase decisions under different risk and
ambiguity conditions have been conducted under a wide variety of designs and
protocols and the results are highly inconclusive. 4 A few experimental studies
(Ganderton et al. 2000; Laury et al. 2009; McClelland et al. 1993; Slovic et al. 1977)
test the tendency to underinsure against low-probability high-severity losses. However,
the differences in designs, procedures, and parameters employed across the studies limit
the ability to generalize conclusions from their results. The Laury et al. experimental
design, discussed in detail below, implements a choice task to investigate the phenom-
enon of underinsurance for low-probability, high-severity losses, and produces results
that are counter to the notion that individuals ignore very low probabilities.5
1.2 The theoretical effect of ambiguity on precaution and insurance decisions
The underlying theory is based on the standard model of accidents in the law and
economics literature. In the absence of the ability to take precaution against accident,
theory suggests that risk-averse expected utility maximizers will fully insure when
actuarially fair insurance is available. In general, the assumption of risk aversion
implies that individuals will be willing to pay some level of load or risk premium to
avoid risk. Thus, when insurance is not fairly priced, the optimal choice depends on
the level of risk aversion and the insurance loading factor.
We assume that individuals are expected utility maximizers with increasing
concave von Neumann-Morgenstern (vNM) utility u. Individuals have exogenous
initial wealth w and face a potential loss d
π′L(c) > π′H(c). We assume each person knows whether they face high or low risk
and understands how the level of precaution affects the probability of loss. An
insurance policy consists of a premium, pi, paid whether or not loss occurs, and an
indemnity, qi, paid in the event that the loss occurs. The first best levels of
precaution are ci* = argmin ci + πi(ci)d, i = H, L.
3 The behavioral literature also suggests that certain behavioral biases, such as overconfidence or optimism, as
well as the tendency to overreact to recent events, may explain under- and overinsurance for certain types of
losses. See, for example, Kunreuther et al. (2001).
4 See Jaspersen (2014) for a comprehensive review.
5 Many studies attempt to explain insurance markets by designing the experiments as auctions rather than
choice tasks. See, for example, Camerer and Kunreuther (1989) and Hogarth and Kunreuther (1989).
Although this design may work well as a mechanism for eliciting willingness to pay for insurance, and under
a double auction, studying both sides of the insurance markets, the results are not necessarily generalizable to
the insurance marketplace in which consumers face choice tasks rather than pricing tasks, as explained in
Laury et al. (2009).
J Risk Uncertain (2015) 50:249–280 253
If insurance is not available, then expected utility is
Ui cið Þ ¼ 1−πi cið Þð Þu w−cið Þ þ πi cið Þu w−ci−dð Þ ð1Þ
The individual chooses the level of precaution, ci
0, that maximizes expected utility.
Because the individual is risk averse, she is willing to pay some amount PiU to avoid
the risk of loss. The results in Bajtelsmit and Thistle (2008) imply that the willingness
to pay to avoid the risk is given by u(w − PiU)=Ui(ci
0). Willingness to pay can be
written as PiU=ci
0+πi(ci
0)d+ρiU, where ρiU is a risk premium.
Now assume that insurance is available, that insurers can determine risk type ex
ante, and that the expenditure on precaution is observable. In general, the insurance
premium can be written as pi=λπi(ci)qi, where λ is the loading factor; the insurance
premium is actuarially fair if λ=1 and unfair if λ>1. The individual who buys the
insurance policy (pi, qi) and spends ci on care has expected utility given by
Ui pi; qi; cið Þ ¼ 1−πi cið Þð Þu w−pi−cið Þ þ πi cið Þu w−pi−ci−d þ qið Þ ð2Þ
for i=H, L.
The risk of negligence liability presents a special case. If liability is determined In most analyses of liability, as in the analysis described above, the probability of an a
worn valve. Despite effort and expenditure on compliance, managers cannot predict 254 J Risk Uncertain (2015) 50:249–280 deal of uncertainty. Therefore, we model the case in which individuals and firms know Thus, denote ~m as the probability of a mistake, independent of expenditure on care Ui cim ¼ 1−m 1−πi cð Þð Þu w−cið Þ þ πi cið Þu w− ci−dð Þ½ � þ m u w− ci−dð Þ ð3Þ
for i=H,L. The optimal expenditure on care decreases with increasing expected prob- If the individual is ambiguity averse, then decisions are made according to the Vi cið Þ ¼ E Φ Ui ci; ~m ¼ E Φ 1−~m 1−πi cð Þð Þu w−cið Þ þ πi cið Þu w− ci−dð ÞÞ½ � þ ~m u w−ci−dð Þ where the expectation is over the distribution of mistakes (Klibanoff et al. 2005; PiV ≥PiU ; ð5Þ The effect of ambiguity aversion on the optimal level of precaution is theoretically 6 Alary et al. (2010) and Snow (2011) show that ambiguity aversion increases the willingness to pay to avoid J Risk Uncertain (2015) 50:249–280 255 reductions in risk seems at odds with an increased willingness to pay to avoid the risk Now consider the same case when insurance is available. If an individual’s proba- Ui pi; qi; ci; m ¼ 1−m 1−πi cið ÞÞ½ u w−pi−cið Þ þ πi cið Þu w−pi−ci−d þ qið Þ ð6Þ
For an individual who is ambiguity averse, the second order expected utility is V(pi, qi, In the following section we discuss our use of the experimental method to investi- 2 Experimental design and procedures
In this section we present the experimental design and briefly discuss the procedures The risk of loss was implemented as a computer-generated random number— 256 J Risk Uncertain (2015) 50:249–280 a higher initial risk of loss, all else equal. To introduce ambiguity and determine Table 1 Experimental treatments and corresponding initial probabilities of loss prior to risk mitigation, by Panel A: Main Treatments
Level of ambiguity Loss Risk mitigation Initial probability of loss
Treatment Low Risk/ Treatment High Risk/ No ambiguity-known 45.00 Precaution only #1 0.10 #2 0.32
No ambiguity-known 45.00 Precaution or #3 0.10 #4 0.32
Ambiguity due to unknown 45.00 Precaution only #5 ≥0.10 #6 ≥0.32
Ambiguity due to unknown #7 ≥0.10 #8 ≥0.32
Ambiguity due to 45.00 Precaution only #9 ≥0.10 #10 ≥0.32
Ambiguity due to #11 ≥0.10 #12 ≥0.32
Panel B: Replication treatments b
Level of ambiguity High Load Treatment Low Load
No ambiguity- 45.00 Insurance only #13 0.01 #17 0.01
No ambiguity- 4.50 Insurance only #14 0.10 #18 0. 10
No ambiguity- 60.00 Insurance only #15 0.01 #19 0.01
No ambiguity- 6.00 Insurance only #16 0.10 #20 0.10
a In the no ambiguity treatments, prior to making the risk mitigation decision, participants are given the initial J Risk Uncertain (2015) 50:249–280 257 insurance, in some treatments the draw of a white ball could still result in a loss, 2.1 Earnings task
Similar to Laury et al. (2009), participants received earnings in several installments. We 2.2 Risk management treatments
Table 1 summarizes the 20 treatments in the experiments. The baseline treatments, The manipulations that comprise the main precaution and insurance treatments 7 A driving quiz was chosen for the earnings task to ensure that all participants would be familiar with the 258 J Risk Uncertain (2015) 50:249–280 unknown errors on a participant’s own driving quiz, or ambiguity resulting from Table 2 Losses, probabilities and expected losses under precaution only/no ambiguity treatmentsa
No loss Loss Low risk treatments High risk treatments Decision Total $ cost Total $ cost Probability of loss Cost of risk Probability of loss A 0.00 45.00 10% 4.50 0.32 14. 40
B 1.50 46.50 9% 5.55 0.28 14.10
C 3.00 48.00 8% 6.60 0.24 13.80
D 4.50 49.50 7% 7.65 0.2 13. 50
E 6.00 51.00 6% 8.70 0.16 13. 20
F 7.50 52.50 5% 9.75 0.12 12.90
G 9.00 54.00 4% 10.80 0.08 12. 60
H 10.50 55.50 3% 11.85 0.04 12. 30
I 12.00 57.00 2% 12.90 0 12.00
J 13.50 58.50 1% 13.95 NA NA
K 15.00 NA 0% 15.00 NA NA
a The table summarizes the costs and benefits of risk mitigation in alternatives in Treatments #1 and #2 in J Risk Uncertain (2015) 50:249–280 259 design includes a high loss severity ($45) relative to quiz earnings ($60) in order to In all of the main treatments summarized in Table 1, participants were offered 2.3 Procedures
Participants were recruited for pay from business classes at a large university. The In the induction, we paid participants as described above, summarized procedures in 8 Participants were presented with decisions, probabilities of loss, and total cost of precaution for each 260 J Risk Uncertain (2015) 50:249–280 performance and to encourage participants to develop a subjective probability estimate Studies eliciting subjective probabilities must provide salient incentives for Next, we read the risk management decision task instructions aloud, and the partic- Participants then entered their driving quiz answers into the computer, including 9 For example, a risk-neutral subject with an initial 10% probability of loss who estimates a score of 90% on Induction
paid $15 Earnings Task
$60 endowment estimate quiz Risk Management decisions for 20 Payment
select Scenario •Subjects are •Subjects earn •Subjects make •Random draw to
•Subjects
Fig. 1 Sequence of experimental procedures
J Risk Uncertain (2015) 50:249–280 261 management decision task in which they were required to make insurance and In the final stage of the experiment, we randomly selected the scenario that would The sessions, including payment, lasted approximately 135 minutes and participants 3 Hypotheses
Based on the previous literature and the theoretical model in the previous section, we Hypothesis 1 Individuals who prefer zero risk will choose the more efficient risk Discussion Bajtelsmit and Thistle (2008) show that heterogeneity of potential injurers, 11 The order of presentation of the seven treatment types in Tables 1 and 2 was varied randomly for each 262 J Risk Uncertain (2015) 50:249–280 equivalent in cost. Therefore, we expect that participants who prefer zero risk will buy Hypothesis 2 Risk mitigation decisions will be consistent in otherwise similar treat- Discussion Similar to the discussion regarding Hypothesis 1, we expect that partici- Hypothesis 3 The likelihood of insurance purchase is higher under ambiguous increases Discussion The model in Section 2 shows that ambiguity will increase the Hypothesis 4 Individuals will exercise more precaution when the probability of loss is Discussion The effect of ambiguity on the amount of precaution is sensitive to the 12 The average probability of mistakes on the driving quiz was 25%, resulting in an expected probability of J Risk Uncertain (2015) 50:249–280 263 participant-specific estimates of the risk of mistakes. For example, the higher the Hypothesis 5 The likelihood of insurance purchase will increase with the degree Discussion The unknown risk of mistakes in our experiment design results in an 4 Results
We begin with an overview of the risk mitigation choices made by participants in our 4.1 Overview and nonparametric results
Figure 2 summarizes the experiment participants’ precaution and insurance choices 13 In the other categories, full precaution is no longer a perfect substitute for insurance because of the risk of 264 J Risk Uncertain (2015) 50:249–280 that the difference in risk mitigation approach between the initial loss probabilities We can compare within the figures to examine the impact of mistakes on 14 Analysis of subject-level data reveals only 12 reversals between the partial and full risk mitigation decisions 0 p=.10 p=.32 p=.10+own p=.32+own p=.10+others’ p=.32+others’ None Some Full
0 70
80 None Some Full Insurance
a Fig. 2 Precaution and insurance choices. Panel A Percentage choosing no, some, or full precaution, J Risk Uncertain (2015) 50:249–280 265 also of Hypothesis 5, that insurance uptake is higher when ambiguity is higher, i.e., Hypothesis 3 predicts that ambiguous increases in the probability of loss will have a 4.2 Tests of baseline treatment predictions
As discussed above, a common explanation given for the underinsurance of low- Prediction 1: Participants are equally likely to purchase insurance for low proba- Table 3 presents the data from the baseline replication treatments (insurance 266 J Risk Uncertain (2015) 50:249–280 under high and low probabilities of loss. The results suggest participants do not 15 These results are comparable to those found in the Laury et al. 2009 study. Table 3 Baseline replication treatments (insurance only)a and test statistics for differences based on Treatment Insurance Loss Loss E(Loss) % Buying McNemar 13 3 1% $45.00 $0.45 78% 10.71*** 15 3 1% $60.00 $0.60 82% 12.25*** 17 1 1% $45.00 $0.45 83% 0.4 19 1 1% $60.00 $0.60 90% 1.28 a The baseline treatments (#13–20) replicate treatments used in Laury et al. (2009) in which the participants J Risk Uncertain (2015) 50:249–280 267 4.3 Risk attitudes
We now turn our attention to what the results suggest about participants’ risk attitudes In the aggregate, the results appear to suggest that participants are risk averse for the Table 4 Participant risk management decisions for treatments with known probabilities of loss (No Low risk treatments High risk treatments Participants’ choices (%) Participants’ choices (%)
Decision Total up- p(loss) Treatment 1: Treatment 3: p(loss) Treatment 2: Treatment 4: A 0.00 10% 23.3% 18.3% 0.32 0.0% 0.0%
B 1.50 9% 0.0% 0.0% 0.28 1.7% 3.3%
C 3.00 8% 3.3% 1.7% 0.24 3.3% 0.0%
D 4.50 7% 1.7% 3.3% 0.2 8.3% 3.3%
E 6.00 6% 10.0% 6.7% 0.16 3.3% 6.7%
F 7.50 5% 8.3% 11.7% 0.12 3.3% 5.0%
G 9.00 4% 3.3% 11.7% 0.08 18.3% 10.0%
H 10.50 3% 3.3% 0.0% 0.04 6.7% 6.7%
I 12.00 2% 3.3% 1.7% 0 55.0% 48.3%
J 13.50 1% 0.0% 1.7% NA NA NA
K 15.00 0% 43.3% 1.7% NA NA NA
Insure 14.50 0% NA 43.3% NA NA 16.7%
a The table summarizes the experimental outcomes for Treatments 1, 2, 3, and 4 in which the 60 participants 268 J Risk Uncertain (2015) 50:249–280 choices under the low probability treatment by paying for some risk mitigation. The Table 5 combines the decisions made by each participant across Treatments 1 and 2 In Table 6, we take a closer look at the 17 participants who appear to change their Table 5 Participant choices and risk attitudesa in precaution only/no ambiguity treatments
High initial probability (32%)
Risk-seeking Not risk-seeking
Low initial probability (10%) Risk averse 28% (n=17) 48% (n=29)
Not risk averse 17% (n=10) 7% (n=4)
a This table summarizes the combined decisions made by 60 participants in the two precaution-only / no 17 We note that the 17 participants are distributed across all six sessions, with 1–4 instances in each session. J Risk Uncertain (2015) 50:249–280 269 entertainment value in preserving a small chance of loss.19 The reduction of probability 4.4 Precaution and insurance decisions
We next consider the consistency of risk management decisions and whether the 19 For example, participants may anticipate that choosing some risk-mitigation will lessen regret if a loss Table 6 Expected payoffs and foregone earnings for participants who made risk averse decisions in the low Low risk High risk Average Median Average Median
Probability of loss after risk mitigation 5% 5% 9% 8%
Expected payoff ($) 50.50 50.25 47.31 47.40
Foregone earnings = expected payoff with no precaution – expected 5.00 5.25 −0.69 −0.60
a This table summarizes results for the 17 participants who chose to pay for any precaution in Treatment #1 270 J Risk Uncertain (2015) 50:249–280 We perform a logit regression in which the dependent variable is the decision to In the first column of Table 7, we limit the analysis to the no-ambiguity treatments in In the second and third columns of Table 7, we report regression results including all Because the insurance load is much higher in the low risk treatments, there These empirical results support our theoretical predictions as formalized in J Risk Uncertain (2015) 50:249–280 271 4.5 Subjective probabilities
Another potentially confounding factor is that participants have different subjective Table 7 Logit regression results: Determinants of the decision to buy insurancea
Coefficient Estimates Independent Variables Unambiguous All precaution/insurance treatments
Constant −19.203*** −0.491 −0.435 Full precaution when 21.609*** 4.311*** 4.592*** Part precaution when 17.086*** 0.088 0.181 Initial probability of loss 1.225* 0.079 −0.232 No Mistakes treatment=1 −1.160*** −2.064*** Others’ Mistakes treatment=1 0.603** 0.726*** High Risk X No Mistakes 1.674** High Risk X Others’ −0.315 (0.429)
Subject-treatments n=120 n=360 n=360
Mean dependent variable 0.55 0.617 0.617
Log-likelihood −34.468 −156.289 −152.59 Adjusted R-squared 0.534 0.323 0.329
a This table reports results of logit regressions in which the dependent variable is a dummy variable equal to 1 *** significant at the .01 level ** significant at the .05 level * significant at the .1 level
272 J Risk Uncertain (2015) 50:249–280 experiment, we can estimate a unique subjective probability of loss for each participant Subjective Probability of Loss ¼ p þ 1−pð Þm ð7Þ p ¼ initial probability of drawing an orange ball; given the precaution=insurance choice ¼ 1 − Estimated Own Quiz Score � � ¼ 1 − Estimated Average Quiz Score � � We expect that those who have a higher subjective probability of loss due to higher To investigate this issue empirically, we next estimate logit regressions in which the Table 8 Subjective probability of loss (SPL)a, by treatment type (n=60 participants)
Treatment Type Mean Standard Deviation Minimum Maximum
Initial Probability Ambiguity Type
Low Risk No Mistakes 0.100 0.000 0.100 0.100
Own Mistakes 0.263 0.093 0.145 0.505
Others’ Mistakes 0.306 0.087 0.190 0.550
High Risk No Mistakes 0.320 0.000 0.320 0.320
Own Mistakes 0.443 0.070 0.354 0.626
Others’ Mistakes 0.476 0.066 0.388 0.660
a SPL is the subjective probability of loss prior to any spending on risk mitigation. For the No Mistakes J Risk Uncertain (2015) 50:249–280 273 one if the treatment type is either of the mistakes treatment types; reference category: Table 9 Logit regression results: Determinants of the decision to buy insurance,a controlling for subjective Independent Variables Coefficient Estimate Constant −2.084*** −1.386** Participant paid for full precaution when 4.038*** 4.093*** Participant paid for some precaution when −0.249 −0.191 Any Mistakes Treatment=1 0.900** No Mistakes Treatment=1 −0.695* Others’ Mistakes Treatment=1 0.496** Subjective Probability of Loss (SPL) 3.543** 3.296* Mean dependent variable 0.617 0.617
Log-likelihood −154.245 −153.073 Adjusted R-squared 0.335 0.336
a This table reports results of logit regressions in which the dependent variable is a dummy variable equal to 1 *** significant at the .01 level ** significant at the .05 level * significant at the .1 level 274 J Risk Uncertain (2015) 50:249–280 4.6 The effect of ambiguity on level of precaution
The theoretical model suggests that ambiguity should decrease the incentive to take Table 10 Tobit regression results: The effect of ambiguity on precaution (precaution-only treatments)
Independent Variables Coefficient Estimates Constant 6.812*** 3.554** Subjective Probability of 11.011*** 11.418*** Any Mistakes Treatment=1 −3.548*** No Mistakes Treatment=1 3.173*** Others’ Mistakes Treatment=1 −0.886 Probability > Chi Square 0.000 0.000
Log-likelihood −1012.88 −1012.19
a This table reports results of tobit regressions in which the dependent variable is the amount spent on J Risk Uncertain (2015) 50:249–280 275 5 Conclusions
We develop a theoretical model of the decision between precaution and insurance under an We test whether experiment participants prefer insurance in cases when taking full Our results contribute to better understanding of risk management decision-making Observed underinsurance against catastrophic losses has often been explained as There are important policy implications for cases in which individuals and firms may While we find evidence in favor of ambiguity aversion, our experiment is not 276 J Risk Uncertain (2015) 50:249–280 menus of risk mitigation alternatives, we find that nearly half of the participants make Acknowledgments The authors would like to thank the anonymous referee, the Editor Kip Viscusi, Glenn Appendices
Appendix 1: Ambiguity aversion increases willingness to pay
In this Appendix we show that ambiguity aversion increases the willingness to pay to The probability of a loss is π(c, ε) where ε is a random variable with distribution F. Ui c; εð Þ ¼ 1−π c; εð Þð Þu w−cð Þ þ π c; εð Þu w−c−dð Þ; ðA:1Þ V cð Þ ¼ E F Φ U c; εð Þð Þf g ¼ E F Φ � � n o Let c* denote the optimal value of care. The willingness to pay to avoid the risk, P, is
Φ u w−Pð Þð Þ ¼ E F Φ U c*; εð Þð Þf g ¼ Φ E F U c*; εð Þf g−Að Þ ðA:3Þ u w−Pð Þ ¼ E F U c*; εð Þf g−A: ðA:4Þ u w−P0 ¼ E F U c0; ε : ðA:5Þ
For an ambiguity averse individual the ambiguity premium is positive and the optimal u w−P1 ¼ E F U c1; ε −A ðA:6Þ
Since EF{U(c 1, ε)} and A>0, we have P1>P0, ambiguity aversion J Risk Uncertain (2015) 50:249–280 277 Now suppose that π is free of c, so that c0=c1=0. Then EF{U(c 1, ε)}. Appendix 2: Examples of precaution-only and precaution and insurance Menu of choices in a precaution-only treatment: Choose ONE of the following options below.
Decision Up-front Cost to Replace New # of Orange Balls New # of White Balls Probability Orange A $0.00 10 90 10%
B $1.50 9 91 9%
C $3.00 8 92 8%
D $4.50 7 93 7%
E $6.00 6 94 6%
F $7.50 5 95 5%
G $9.00 4 96 4%
H $10.50 3 97 3%
I $12.00 2 98 2%
J $13.50 1 99 1%
K $15.00 0 100 0
Your decision in Scenario 1 Choose ONE of the following options below. New # of Orange Balls New # of White Balls Probability orange A $0.00 10 90 10% L (Insurance) $14.50 10 90 N/A
Your decision in Scenario 7
278 J Risk Uncertain (2015) 50:249–280 References
Alary, D., Gollier, C., & Treich, N. (2010). The effect of ambiguity aversion on insurance demand. Working Andersen, S., Harrison, G. W., Lau, M. I., & Ruström, E. (2006). Elicitation using multiple price list formats. Andersen, S., Fountain, J., Harrison, G. W., & Ruström, E. (2013). Estimating subjective probabilities. Anderson, D. (2002). Environmental risk management: a critical part of corporate strategy. The Geneva Bajtelsmit, V., & Thistle, P. (2008). The reasonable person negligence standard and liability insurance. Journal Bajtelsmit, V., & Thistle, P. (2009). Negligence, ignorance and the demand for liability insurance. Geneva Risk Bajtelsmit, V., & Thistle, P. (2013). Mistakes, negligence, and liability. Working Paper. Geneva Risk and Insurance Review, forthcoming. Working paper. Harvard University. (Eds.), Choices, Values, and Frames. Cambridge University Press. 2, 265–300. Journal of Risk and Uncertainty, 5, 325–370. frictions. The Geneva Papers on Risk and Insurance, 35, 391–415. Risk and Uncertainty, 38, 73–86. mental policy. Journal of Environmental Economics and Management, 57, 65–86. Economics, 10(2), 171–178. type risks: experimental evidence. Journal of Risk and Uncertainty, 20, 271–289. constraints and risk behavior. American Economic Review, 80(2), 160–165. Insurance, 63, 381–404. tasks. The Economic Journal, 120, 595–611. experiment. Scandinavian Journal of Economics, 109(2), 341–368. experiments: a case study of risk aversion. Econometrica, 75(2), 433–458. Working Paper. Georgia State University Center for the Economic Analysis of Risk. probability distributions. Working paper. Georgia State University. 35. University of Munich. 47(2), 263–291. Econometrica, 73(6), 1849–1892.
J Risk Uncertain (2015) 50:249–280 279 Kunreuther, H., & Pauly, M. (2004). Neglecting disaster: why don’t people insure against large losses? Kunreuther, H., & Pauly, M. (2005). Terrorism losses and all perils insurance. Journal of Insurance Kunreuther, H., Novemsky, N., & Kahneman, D. (2001). Making low probabilities useful. Journal of Risk and Laury, S., McInnes, M., & Swarthout, J. (2009). Insurance decisions for low-probability losses. Journal of Mayers, D., & Smith Jr., C. (1982). On the corporate demand for insurance. The Journal of Business, 55(2), McClelland, G., Schulze, W., & Coursey, D. (1993). Insurance for low-probability hazards: a bimodal Neilson, W. S. (2010). A simplified axiomatic approach to ambiguity aversion. Journal of Risk and Quiggin, J. (1982). A theory of anticipated utility. Journal of Economic Behavior and Organization, 3, 323– Riddel, M. (2012). Comparing risk preferences over financial and environmental lotteries. Journal of Risk and Sarath, B. (1991). Uncertain litigation and liability insurance. Rand Journal of Economics, 2, 218–231. Insurance Issues and Practice, 25. probable small losses: insurance implications. Journal of Risk and Insurance, 44, 237–258. Risk and Uncertainty, 42, 27–43. under risk. Journal of Economic Literature, 38, 332–382. Journal of Risk and Uncertainty, 5, 297–323.
280 J Risk Uncertain (2015) 50:249–280 Copyright of Journal of Risk & Uncertainty is the property of Springer Science & Business Abstract C© Risk Management and Insurance Review, 2005, Vol. 8, No. 1, 141 -150
THE COLUMBIA SPACE SHUTTLE TRAGEDY: ABSTRACT
Space flights are no longer rare events, but the commonplace is not necessarily INTRODUCTION Piotr Manikowski is with the Poznań University of Economics, Insurance Department, 141 142 RISK MANAGEMENT AND INSURANCE REVIEW
at a speed of 21,000 kilometers an hour in the upper layers of the atmosphere above Debris from the space shuttle fell to the ground, but did not cause serious damage. GENESIS OF SPACE (SATELLITE) INSURANCE Insurance for space activities has evolved over many years through the collaboration In the formative years of the space age, projects were uninsurable: launch vehicles were In time, and with increasing experience of insurers and the insured, the insurance market 1. Property insurance: (pre-launch, launch, in-orbit insurance);
2. Third-party liability insurance;
3. Warranty insurance (loss of revenue, launch re-flight (risk) guarantee, incentive The third group is supplementary to property cover. In this study only third-party li- RISK OF THIRD-PARTY LIABILITY FOR LOSSES MADE BY SPACE OBJECTS THE COLUMBIA SPACE SHUT TLE TRAGEDY 143
of the explosion of a rocket only a few meters above the ground, the potential loss could In connection with the specificity of space activity and its “over-territorial” character, it � The Treaty on Principles Governing the Activities of States in the Exploration and � The Agreement on the Rescue of Astronauts, the Return of Astronauts and the � The Convention on International Liability for Damage Caused by Space Objects � The Convention on Registration of Objects Launched into Outer Space (the “Reg- � The Agreement Governing the Activities of States on the Moon and Other Celestial These acts constitute the bulk of what is referred to as “space law,” intended as that branch The first of these acts (“Outer Space Treaty”) already includes article VII, which concerns 144 RISK MANAGEMENT AND INSURANCE REVIEW
That basic rule was even enlarged upon in the “Liability Convention,” according to Moreover, this distinction in space law also requires a definition of where “outer The compensation provided for in the “Liability Convention,” depends on the identifica- Damages inflicted on third parties occur more often on the earth. During take-off, there THE COLUMBIA SPACE SHUT TLE TRAGEDY 145
1. the failure of a Long March 3B in 1996, which pitched over before clearing the launch 2. the second stage of a Thor Able Star rocket fell to the ground in Cuba and killed a 3. the failure of a Proton launcher on July 7, 1999, which resulted in an 80-ton 4. another failure of a Proton rocket on October 27, 1999, 3 minutes 40 seconds into 5. at least 21 people were killed in August 2003 in Alcantara (Brazil) after the explosion It is also possible during the operation of spacecraft for harm to be inflicted on third A spacecraft could suffer damage (both partial and total loss) as a result of collision with � with another operating satellite; The chance of a collision between two operating spacecrafts is small. These objects are Human activity in outer space has resulted in the appearance of many objects orbiting 146 RISK MANAGEMENT AND INSURANCE REVIEW
alerts its space shuttles of a possible collision when any other object comes within 50 Article II of the “Registration Convention” imposes on launch operations the obligation Currently, the possibility of an operational satellite being damaged or destroyed by For large close-to-earth orbiting spacecraft and for space debris there is a risk of a fall to 1. the spent stage of a Saturn V rocket, weighing about 22 tons, which fell into the 2. the American Skylab, weighing approximately 80 tons, crashed over the western However, in reality, despite the large size of these objects, the risk of damage to the earth What causes more concern is the environmental damage that can be caused by space- THE COLUMBIA SPACE SHUT TLE TRAGEDY 147
The service and/or repair of spacecrafts in orbit could cause liability of the owner of the SPACE THIRD-PARTY LIABILITY INSURANCE IN THE WORLD INSURANCE MARKET In general, liability insurance covers the insured against potential claims and ensures It covers the legal liability arising from damage to a third party during the preparations The launch service providers typically purchase third-party liability insurance for the Exclusions that are typically applied to a third-party liability policy, include (Margo, 148 RISK MANAGEMENT AND INSURANCE REVIEW
� war risks; insured or any carrier as his insurer may be liable to his own employees, under any � any damages to the property of the insured; whatever cause thereof; The limits recently purchased vary from around US$60 million to US$500 million. For Rates differ considerably. They are affected by trends in the overall liability market and CONCLUSIONS So again it should be emphasized—with the development of space transportation—both THE COLUMBIA SPACE SHUT TLE TRAGEDY 149
participants and the enormity of damages that may occur. In addition to the risk involved REFERENCES Technology, 150(22): 30-31. PWN). on Risk and Insurance, 10(35): 51-86. Space Markets, Winter: 211-14. 68-72. merciale de l’Espace (Paris: LITEC). OPRES). Industrial Activities in Space—Insurance Implications (Trieste: Generali), pp. 41-49. cyjne, 3: 3-13. Hovercraft and Spacecraft Insurance, 3rd edition (London, Edinburgh, Dublin: Butter- Meredith, P., and G. Robinson, 1992, Space Law: A Case Study for the Practitioner: Imple- Pagnanelli, B., 2001, Space Insurance Towards the Next Decade. In: Commercial and In- Pino, R., 1997, With the Continued Development of Space, the Satellite Industry will En- Schmid, T., and D. B. Downie, 2000, Assessing Third Party Liability Claims, In: The 9th Space Flight and Insurance, 1993, 2nd edition (Munich Re).
Space Insurance Briefing, 2001, (London: Marsh Space Projects Ltd.). 1-4. 150 RISK MANAGEMENT AND INSURANCE REVIEW
Zocher, H., 1988, Neuere Internationale Entwicklungen in der Raumfahrt und ihrer Zocher, H., 1988, Neuere Internationale Entwicklungen in der Raumfahrt und ihrer 336 The Journal of Risk and Insn^rance
TEACHERS, COMPUTERS, James A. Wickman
An increasingly familiar sight along the Computer technology is an unsettling COMPUTER USER: “I wrote this pro- LISTENER: “Cee whiz!” twelve runs to de-bug this.. .” can do two plus two three thousand times LISTENER: “CEE WHIZ!”
On the other hand, worship of peri- One can raise psychological defenses puter can be instructed to do various com- Becoming a Computer User
Happily, it is not necessary to become Information About Programs
One of the more useful “families” of An eflBcient index to many existing com- ^ These programs are described in BMD— Communications 337
title, resulting in an ability to scan the Many campus computer installations A special-purpose index of “canned” “Canned^’ Programs and Teaching
“Canned” programs offer many oppor- Even if a “canned program” is not read- and the desired format of results to a Additional Computer Features
Beyond the saving in computational The computer can be told what pro- ^ “Qualified programmer,” in a pragmatic ^ Several imiversides are adopting remote con- 338 The Journal of Risk and Insurance
speed is virtually undiscernible in the Even without these “Gee Whiz” addi- Risk and Insurance Courses the instructor must refer frequently to sta- “Capital Investment” instructor in developing illustrations which “Statistical Block” or willing to utilize their prior training in statistics is clouded with a “statistical A risk and insurance teacher can avoid Illustrative Teaching Problem tegrated with risk and insurance problems The formal reasoning lying behind this A mortality table displaying number of statistics is not a prerequisite to courses in risk Communications 339
tion is the average age at death for each To express sucb a line of reasoning ^ This program, written by the author, derives can use reproductions of this tabular and Appendix A presents an abbreviated Summary
Rapid evolution of computer technol- Appendix A
LFXP is relatively simple to use. Four * Perhaps to be published, ultimately, as “Ex- 340 The Journal of Risk and Insurance
gram;” others may provide the data for The first calculation performed by the Next, the standard deviation around the If graphic output is requested by the ‘These are: 1941 CSO; 1958 CSO; 1937 tabular summary. At this point the main LFXP is written in the FORTRAN IV This brief discussion deals with the ma- 0
cards) of the source program can be ob- 8 SHARE, Distribution No. 1085. Communications 341
Chart 1 ( 95.000 0/0 CONFIDENCE LIMITS)
1 00 .0 + U U- + —.i^.^-..-..-.-.- … .( … U–^-
A 83.a
66.3
49.5
I I 32.7 L-
I I I I I
I u uI U U 1 I * # • I I I L
L
. J, U U I » * t I I L I
L I U [ * L KEY TO PLOTTING CHARACTERS
# = AVERAGE AGE AT DEATH LOWER CONFIDENCE LIMIT I
25 50
– PRESENT AGE –
75 100
SOURCE — LFXP 342 The Journal of Risk and Insurarwe
Table 1
AVERAGE AGE AT DEATH FOR PERSONS NOW AGE X f. .– [ I 0-
: 5
: 10
: 15
[ 20
25
30
35
40
45
50 55
60
65
70
75
80 ]
85 I
90 :
95 :
100 ] I NUMBER ALIVE I 10000000
: 9868375
: 9805870
: 9743175
9664994
9575636
9480358
9373807-
9241359
9048999
8762306
8331317
7698698
6800531
5592012
4129906
2626372
1311348
,468174
97165
0 I I I I NUMBER DYING D(X)
70800
13322
1 1865
14225
17300
18481
20193
23528
32622
48412
72902
108,307
156592
215917
278426
303011
288648
211311
106809
34128
0 I I H 68.3
69.2
69.6
7 0.0
70.4
70.8
71.3
71.7
72.2
72.8
73.6
74.7 I
76.1 I
77.9 :
80.1 I
82.8 I
.8 5.9
89.3 I
93.1 I
96.8 ]
0.0 I
), -„.
COEF. OF V(X)
0.266
0.239
0.228
0.218
0.207
0.196
0.186
0.176
0.167
0.1 5fe
0.144
0.130
0.114
0.098
0.081
0.065
0.050
0.037
0.026
0.014
0.000
+ •-••
1 —. + E 68.3 64.2
59.6
55.0
50.4
45.8
41.3
36.7
32.2
2 7.8
23.6
19.7
16.1
12.9
10.1
7.8 4.3
3.1
1 •B
0.0 i I SOURCE — LFXP 132 The Journal of Risk and Insurance
way of financing care and also to expand The final consensus of the conference This is a most useful book for any INFLATION, TECHNOLOGY AND Reviewer: J. D. Hammond, Professor of The general title of this new book sug- ‘Page 259.
and Neumarm approach is a serious at- The volume was written as a part of A statement by a University executive The book contains 319 pages of text Publications 133
to find only one graph. Labor economists The entire findings of the research rest Mehr and Neumann have adhered containing 73 questions about various as- Second round responses were then cir- The 58 finishers represented a cross- Two of die first three chapters of the 134 The Journal of Risk and Insurance
capt auto. A summary is presented in the The general tone of most of the ques- All responses are given in terms of a So much for the content and the ap- While the Delphi Technique is gen- A second problem concerns any fore- cluding insurance—is so high that any The investment in time by panel mem- It would be very helpful to know th« Publications 135
if all of the areas are equally represented The book is interesting to read and in- Professors Mehr and Neumann have ^DAMENTALS OF RISK AND IN- ^viewer: William M. Howard, Professor fundamentals of Risk and Insurance is course in risk and insurance. The stated The section on life and health insur- The authors have recognized the prob- What knowledge may the authors of
by a negligence rule, individuals who exercise a Breasonable^ level of care will
have a zero probability of loss. More specifically, under a negligence rule where
the negligence standard of care is z, an individual is liable for damages if their
level of precaution is less than the negligence standard, ci
accident is a function of care or precaution and is deterministic. Now suppose that it is
possible to make a mistake that, despite expenditure on care, can result in an accident.
We can think of this as a momentary lapse in judgment, such as a driver glancing away
from the road just before a dog crosses the street or an oil rig worker failing to notice
precisely how the courts will assess liability and damages from environmental losses.
As discussed at length in Anderson (2002), these types of losses expose firms to a great
that there is a random chance of a mistake, but they do not know exactly how it will
impact the probability of loss.
or precaution, which results in loss d, and assume that the probability of a mistake is
unknown. We deliberately do not distinguish the sources of this mistake. It could be
one’s own mistake, the mistake of another agent, or an error by the courts. The fact that
the probability of a mistake is unknown introduces ambiguity. Letting m = E ~mf g be
the expected probability of a mistake, expected utility is given by:
� �
� �
ability of mistake. As m approaches 1, expected utility is optimized with zero expen-
diture on care. For a very small expected probability of a mistake, the problem reduces
to Eq. (1) and the individual will select the level of care that minimizes total cost of loss
and precaution.
second order expected utility function
� �� �n o
� ��
n o ð4Þ
Neilson 2010). The vNM utility function u captures the attitude toward risk while Φ
captures the attitude toward ambiguity. If the individual is ambiguity neutral then Φ is
linear and if the individual is ambiguity averse then Φ is concave. An ambiguity-averse
individual is willing to pay to eliminate the risk; the willingness to pay to avoid the risk
is given by Φ(u(w − PiV)=max E{Φ(Ui(ci, ~m). We show that ambiguity aversion
increases the willingness to pay to avoid the risk,
the proof is given in Appendix 1.6 In sum, ambiguity aversion is shown to increase the
demand for insurance.
indeterminant and depends on the fine detail of the theoretical model. Snow (2011)
shows that if individuals have unbiased beliefs (i.e., E{π(c, ~m)} equals the objective
loss probability), then the loss probability must be either multiplicatively separable
(π(c, ~m)=α(c)π(~m)) or additively separable (π(c, ~m)=π(~m)+β(c)). Snow further shows
that multiplicative separability implies ambiguity aversion increases the expenditure on
care. Snow (2011) and Alary et al. (2010) show that additive separability decreases the
expenditure on care. The effect of ambiguity aversion on the expenditure on care is
therefore an empirical question. However, decreased willingness to pay for small
the risk when the distribution of the risk is fixed. Their result does not apply directly here because individuals
can shift the distribution of risk by exercising care.
and implies a discontinuity in behavior between small risk reductions and risk elimi-
nation. This suggests that ambiguity will lead to lower expenditures on care.
bility of loss depends both on risk type and the chance of mistake, then the expected
utility for a person who buys the insurance policy (pi, qi) and spends ci on care is given
by:
� �
� �
þ m u w−pi−ci−d þ qið Þ
ci)=E{Φ(Ui(pi, qi, ci, ~m)}. Given the risk of mistakes, the actuarially fair premium is
pi=(πi + m (1 − πi))d. If the premium is actuarially fair, then the individual will fully
insure (q=d), and receive utility u(w − ci* − pi).
gate the theoretical predictions developed above and formally present a set of testable
hypotheses in the context of the experimental design. To summarize, under a setting of
a clearly-defined negligence standard with no risk of mistakes, we test the predictions
that individuals will not insure if it is more efficient to simply meet the standard of care,
and that individuals are less likely to insure as the size of the insurance loading factor
increases. We introduce mistakes into the design, and investigate the impact of ambig-
uous increases in the probability of loss on insurance and precaution decisions.
that were used to implement the design in the laboratory. Where applicable, the design
and procedures follow those used in the Laury et al. (2009) experiments. In our within-
subject design, each participant made independent decisions in twenty treatments. A
random draw of one treatment at the end of the experiment determined actual payoffs.
explained with the analogy of a random draw from 100 white and orange ping pong
balls, where a draw of an orange ball resulted in a loss of a specific dollar amount from
their experiment earnings. Participants were told the probability of loss through a
description of the number of orange and white balls respectively in each treatment as
well as the numerical probability of loss. In some treatments they could reduce their
probability of loss by paying for units of precaution, described as the option to pay to
replace orange balls with white balls. In other treatments, participants could choose
between precaution, insurance, and no risk mitigation. An actuarially fair premium in a
competitive insurance market is based on the expected loss in a population of
policyholders in which some face higher risks of loss than others. Therefore, the
insurance load associated with a single premium will vary across individuals. In our
main treatments, we hold constant the loss severity, insurance premium, and cost per
unit of precaution, which implies the insurance (or equivalent precaution) load will
necessarily be higher in treatments with a lower initial risk of loss than treatments with
whether the chance of mistakes changes participants’ choices over precaution and
ambiguitya and risk type
in treatment
amo-
unt
($)
alternatives
available
#
High
Load
#
Low
Load
probability
probability
insurance
probability of own
mistake
probability of own
mistake
45.00 Precaution or
insurance
unknown probability
of other’s mistake
unknown probability
of other’s mistake
45.00 Precaution or
insurance
in treatment
Loss
amo-
unt
($)
Risk mitigation
alternatives
available
Initial probability of loss
Treatment
#
#
known probability
known probability
known probability
known probability
probabilities and the effect that their risk mitigation decision will have on the probability of loss. In the Own
Mistake treatments, participants know the initial probability of loss, but are subject to an additional unknown
risk of loss that depends on their own performance on the driving quiz. In the Others’ Mistake treatments,
participants know the initial probability of loss, but are subject to an additional unknown risk of loss that
depends on the performance of another participant on the driving quiz. Because the secondary risk is
participant-specific, the probability of loss for the ambiguity treatments is not known for certain but is
greater than or equal to the initial probability of loss that is given in the treatment
b The replication treatments use the loss amounts and probabilities given in Laury et al. (2009). These
treatments were included in the experiment for purposes of validation of the experimental design, but are not
used in any of the main empirical models in this paper
depending on mistakes made during the earnings task. These elements of the experi-
ment are described more fully in this section.
paid a $15 participation payment in cash at the start of the experiment, and collected a
signed receipt from each participant. We encouraged them to put this money away and
emphasized that the $15 was payment for their participation and would not be at risk in
the experiment. We also clearly framed the risky environment to require decisions over
losses of their earnings, rather than gambles over gains. This design feature was
intended to more closely resemble decision-making in the actual insurance market.
Prior to receiving any instructions or information about the risk management and
insurance task, participants earned their endowment by successfully completing an
earnings task, which required taking a written quiz covering basic knowledge about
state driving rules. Upon completion of the driving quiz, they were asked to estimate
their own score and the average score for the group. 7 Following the earnings task, they
received instructions and completed an assessment to ensure that they fully understood
the decisions they would be asked to make in the experiment. After the assessment,
they reviewed their earnings task answers and estimated scores and entered them into
computers. Lastly, they participated in the precaution and insurance decision-making
task which, together with chance, determined whether they experienced a loss from the
money they earned in the earnings task.
based on replication of Laury et al. (2009), are given in Panel B of Table 1. In the
baseline treatments, insurance is the only available form of risk mitigation and the
manipulations include probability of loss, loss amount, and insurance load. The
combinations of treatment manipulations result in eight baseline treatments. As in
Laury et al., the expected loss is set to $0.45 and $0.60 and insurance loads are set to
1 (actuarially fair insurance) and approximately 3 (3.22 and 3.25 to facilitate stating
premiums in increments of $.05).
are summarized in Panel A of Table 1. These treatments include: type of risk
mitigation (precaution only or a choice between precaution and insurance), initial
probability of loss and corresponding insurance load (either high probability-low
load or low probability-high load), and ambiguity (none, ambiguity resulting from
subject matter and could successfully complete the quiz but still have some risk of making mistakes. The
median quiz score was 75%. The median estimates for own score and others’ scores were 85 and 78%
respectively. We required that participants had a driver’s license. The questions on the quiz were similar to
those that would appear on a written state driving test. Although the risk of errors was therefore clearly related
to auto accident risk, all instructions and the loss scenarios were framed in neutral language and not in the
context of decisions over auto insurance per se.
unknown errors on a different, unknown participant’s driving quiz). In the treat-
ments without ambiguity, the probability of loss, both before and after any precau-
tionary spending, is known by the participants. In the treatments with ambiguity,
the participants do not have full information about the probability of loss. In
particular, participants’ own scores, other participants’ scores, and the distribution
of quiz scores are all unknown to participants and also unknown to the experi-
menter. For the ambiguity treatments, it was explained to participants that if an
orange ball was drawn, they would experience a loss for certain. However if a
white ball was drawn, then the outcome would depend on an additional random
draw from an unknown distribution (driving quiz questions). In the Own-Mistakes
treatments, a quiz question was randomly selected and each participant incurred a
loss if their own answer to the selected question was incorrect. In the Others’-
Mistakes treatments, a quiz question was randomly selected, and another participant
was randomly selected, and a loss occurred if the other participant’s quiz question
was answered incorrectly. Therefore, the quality of information across the ambiguity
treatments varies. The combinations of type of risk mitigation, initial probability of
loss, and ambiguity manipulation result in 12 main treatments. The experiment
(Initial probability of
loss = 10%)
(Initial probability of
loss = 32%)
(risk
mitigation)
(risk
mitigation +
actual loss)
after risk
mitigation
mitigation +
E(loss)
after risk
mitigation
Cost of risk
mitigation +
E(loss)
which the 60 participants were exposed to a known probability of loss (no ambiguity) and were given a menu
of 11 risk mitigation alternatives (Decisions A-K). They could do nothing (Decision A) or they could reduce
the probability of the bad outcome in Decisions B-K) by paying $1.50 to replace orange balls with white balls
($1.50 for one ball in the low risk treatments and $1.50 for 4 balls in the high risk treatments). Their total costs
were therefore either the cost for the level of risk mitigation they selected (Decision A-K) or, in the event that
an orange ball ended up being drawn, the cost of the risk mitigation plus the cost of the loss itself
simulate catastrophic loss.
a menu of precaution options, from which they could select to incrementally
reduce the probability of loss at a cost of $1.50 per unit of precaution. In the
treatments with both precaution and insurance, the option to purchase insurance
for $14.50 was added as an alternative to the precaution choices. Table 2
summarizes the costs, probabilities, and expected losses under the menu of
alternatives available in the No Ambiguity treatments.8 In the treatments with
ambiguity, the menu of precaution options was the same, but probabilities and
expected losses were unknown. Consistent with our theoretical model and with
intuition, the Bmarginal product of care^ was higher for the high probability
treatments. In the low probability treatments, each $1.50 resulted in a one
percentage point reduction in probability of loss, and in the high probability
treatments, it resulted in a 4 percentage point reduction in probability of loss.
The reduction in probability was presented to the participants numerically and
also with the analogy of Bremoving orange balls and replacing them with white
balls.^ The initial 32% and 10% probabilities of loss for the High Risk and Low
Risk manipulations correspond to expected losses of $14.40 and $4.50 respec-
tively in the No Ambiguity treatments. Since participants could eliminate risk
through buying full precaution in these treatments, the equivalent insurance loads
under full precaution are 3.33 under the low risk of loss, and 0.83 under the high
risk of loss. In the precaution/insurance treatments, the insurance premium was
uniformly set at $14.50. This implies an insurance load of 3.22 in the low-
probability treatment and a load of approximately 1 in the high-probability
treatment. These insurance loads also facilitate comparison with Laury et al.
(2009) who investigated behavior under a loading factor of 1 compared to a
loading factor of 3.
experiment was programmed and implemented with a Z-tree application (Fischbacher
2007) and all sessions were conducted in a networked computer lab with partitioned
stations. Six sessions, each with ten participants, were conducted between June and
October 2013. As in Laury et al. (2009), we conducted the experiment in a four-phase
sequence: induction, earnings task, risk management decision task, and payment, as
summarized in Fig. 1.
a Power Point presentation at the front of the room, and read the instructions aloud. In
the earnings task, participants earned $60 for correctly answering 8 or more out of 20
questions on the driving quiz described above. To assess confidence in their
decision, separately for each scenario (presentation of a treatment). They were not presented with the expected
loss. We did not use terms such as ‘precaution’ or ‘risk mitigation’, just the phrase ‘reduce your probability of
loss.’
for their chance of mistake, they were asked after every question to indicate whether
they were certain they had answered it correctly. At the end of the quiz, they were also
asked to estimate their total correct score and to estimate the average score for the other
participants in the session.
participants to form their best estimates and, in our design, a more accurate
estimate allowed for higher expected payoffs.9 We explained to participants that
they needed to try to answer as many quiz questions as possible correctly because
later in the experiment, answering more questions correctly would improve their
chances of earning more money. We note that participants only recorded their
estimated scores after they received all the instructions for the experiment and
completed an instructions assessment which covered how driving quiz scores affect
earnings. Therefore, participants appeared to comprehend that they were rewarded
for accuracy in their estimates, and we view their reported estimates as the beliefs
generating subjective probabilities in our analysis.10
ipants took an instructions-assessment to ensure that they fully understood the different
treatment types (referred to as Bscenarios^ in the instructions) in which they would be
making decisions. The next stage of the experiment did not begin until all participants
completed the assessment correctly and indicated they had no further questions.
their confidence assessment for each answer, estimated their own total score and
the average score for the group. The computer calculated their actual score, and
reported their $60 earnings to them on the screen. Although participants did not
know their scores, by earning $60, they necessarily knew that they had answered
at least 40% of the quiz questions correctly. The participants then began the risk
the driving quiz is better off not purchasing insurance. But if the participant’s actual score on the driving quiz
is 70% then the expected payoff is higher with insurance and such a subject is, therefore, penalized for error.
Scoring rules are often applied in experiments to reward accurate reports of subjective probabilities, typically
in the form of a fixed reward for the estimate plus a penalty for error. Yet in these cases subjects’ risk
preferences can affect their reports. See, for example, Andersen et al. (2013) and Harrison et al. (2013).
10 Technically, because we don’t reward and penalize reported scores directly, participants could estimate one
score, but report a different score. However, given the 20 different treatments and careful attention to detail
required throughout the experiment we note this would be very cognitively costly. Combined with the lack of
financial incentive to record a particular score different from a true estimate, we view this as highly unlikely.
participation fee
that is not at risk
in the
experiment
for successfully
completing a
Driving Quiz
scores for self
and others.
Decision Task
Scenarios which
place their
earnings at risk.
that will
determine their
net earnings
from the
experiment
precaution decisions for the twenty treatments described in Tables 1 and 2 in the
previous subsection. The treatments were randomized and participants were
allowed to make revisions after completion of all twenty scenarios. This minimized
the risk of order effects and data entry errors.11
determine experiment earnings with a public draw by a participant from a basket of
twenty numbered ping pong balls. All participants entered the scenario number into the
program and the computer simulated the draw from the individual distributions that
would determine their earnings, given their own expenditure on precaution or insurance
for that scenario. Although all participants’ outcomes were determined by the same
treatment, their individual decisions related to precaution and insurance resulted in
participant-specific net earnings. The participants were then given an on-screen sum-
mary of the outcome of the draw and their personal earnings. Finally, they completed a
demographic survey and were then privately paid their net earnings in addition to the
participation fee received in the induction, by the experimenters.
earned an average of $67 each, including their $15 participation fee. Although the
sessions were relatively long, per hour compensation was fairly high and many partic-
ipants indicated that, independent of earning the money, they enjoyed the experience.
develop several hypotheses that are tested in the experiment. Hypotheses 1 and 2 are
tests of theoretical results from Bajtelsmit and Thistle (2008). Hypotheses 3, 4 and 5
relate to the effect of ambiguity on insurance and precaution decisions.
management method to accomplish this goal.
either in probability of loss or cost of taking precaution to reduce the risk of loss, can
create a market for liability insurance. They find that for some individuals and firms—
those with high cost of care and/or low probability of loss—it may be more efficient to
buy insurance rather than to take optimal care. We hypothesize that rational expected
utility maximizers will select the risk management choice that most efficiently achieves
their desired outcome. In our experimental design, participants can reduce their risk to
zero in the unambiguous precaution treatments by paying for full precaution and, in the
precaution/insurance treatments, by purchasing insurance. We therefore expect that
participants who prefer full precaution when insurance is unavailable will be more
likely to buy insurance when it is available. Although full precaution and insurance in
the treatments without ambiguity can accomplish the goal of zero risk, they are not
subject. Within treatment type, the order of the treatments was also varied randomly.
insurance to achieve their goal of risk reduction only if it is the lowest cost alternative
for achieving that outcome i.e., in the low-probability treatments.
ments with and without insurance.
pants will exhibit consistent risk preferences across the different insurance treatments.
Therefore, those who prefer less than full precaution in the treatments in which
insurance is unavailable will be less likely to purchase insurance when it is available.
in the probability of loss than the likelihood of insurance purchase (or the equivalent of
full precaution) under objectively known increases in the probability of loss.
demand for insurance. In our experimental design, the risk of mistakes introduces
ambiguity, but simultaneously increases the expected probability of loss. Thus, for
the same initial probability (10% or 32%), the demand for insurance is both a
function of ambiguity and the increase in probability that results from the ambig-
uous risk of mistakes by self or others. A more direct test of this hypothesis is
possible because the ambiguous precaution/insurance treatments with 10% initial
probability have approximately the same expected probability as the unambiguous
32% probability of loss treatments. 12 The lower insurance load for the 32%
probability treatments could make insurance more attractive as compared to the
10% probability treatments. The net effect is unknown, a priori, but a finding that
an ambiguous increase in the probability of loss from 10% has a greater impact
on the demand for insurance than the known increase from 10% to 32% would be
a stronger result because the insurance load is three times larger under the initial
probability of 10% than 32%.
known than when it is ambiguous.
nuances of the theoretical model design. Similar to Hypothesis 2 above, participants are
expected to respond to the price of precaution in the sense that a given amount spent on
precaution does not reduce the probability of loss by as much in the ambiguity
treatments as it does in the known-probability treatments. Even after paying for the
maximum precaution, reducing the initial probability from 10% or 32% to zero, they
are still subject to a positive, but ambiguous risk of loss. Therefore, as compared with
treatments without ambiguity, the amount spent on precaution is expected to be lower
in the ambiguity treatments. The degree of this difference should be related to
loss of 32.5% before risk mitigation.
estimated driving quiz score, the lower the expected probability of loss.
of ambiguity.
ambiguous increase in the probability of loss. Although this is true for all the mistakes
treatments, participants generally will have more information about their own risk of
errors on the driving test than they do about the risk of errors by others. Therefore, the
treatments in which losses depend on the risk of mistakes by others introduce greater
ambiguity than those in which losses depend on the participant’s own mistakes.
main treatments and report corresponding nonparametric tests of the hypotheses. Next,
we address our baseline treatments and discuss risk attitudes suggested by the data.
Finally, we present formal statistical tests of our main hypotheses.
in our main treatments (#1–12) and offers strong evidence in favor of Hypotheses
1 and 2, that participants will make efficient and consistent decisions, given the
risk management techniques available to them. Panel A shows the proportion
choosing various levels of precaution when insurance is not available. Panel B
shows the proportions for treatments in which insurance was also an option.
Comparison of the p=.10 and p=.32 categories across the two figures suggests
that under an initial 10% probability of loss, all participants who choose full
precaution switch to the more efficient alternative of insurance when it becomes
available. 13 Those who choose full precaution to reduce an initial 32% probability
of loss to zero continue to do so after insurance becomes available because
precaution remains the more efficient means to reduce the probability of loss to
zero, although 10 participants (17%) purchase insurance. Comparison of the same
categories reveals almost no change in the portion of participants choosing zero or
partial precaution when insurance becomes available under an initial loss probabil-
ity of 10%, though under an initial probability of 32%, six participants change
their level of precaution from partial to either full or insurance. On net, these
results suggest evidence in favor of Hypothesis 1, which predicts that participants
will choose the more efficient risk mitigation approach; and also in favor of
Hypothesis 2, which predicts that participants’ level of risk mitigation will be
consistent across treatments with and without insurance. McNemar tests confirm
mistakes.
of 32% versus 10% (no mistakes) treatments is significant (p=.002), and that there
is no significant difference in proportions choosing full risk mitigation when
insurance is available versus when it is not (p=0.7789). 14
precaution and insurance. We see evidence in favor of Hypothesis 4, that partici-
pants are less likely to take precaution in the ambiguous mistakes treatments; and
across 120 decisions in the four no-mistakes treatments. There are four switches when the initial probability is
10% and eight under an initial probability of 32%. Separate McNemar tests by initial probability also show no
significant difference in proportions.
10
20
30
40
50
60
mistakes
mistakes
mistakes
mistakes
10
20
30
40
50
60
p=.10 p=.32 p=.10+own
mistakes
p=.32+own
mistakes
p=.10+others’
mistakes
p=.32+others’
mistakes
b
precaution-only treatments (#1, 2, 5, 6, 9, and 10). Panel B Percentage choosing insurance or no, some, or
full precaution, precaution and insurance treatments (#3, 4, 7, 8, 11, and 12)
when the risk of loss depends on others’ mistakes as compared to one’s own
mistakes. In Panel A, for each initial loss probability, the proportion of participants
taking less than full precaution increases, and the proportion taking full precaution
decreases, under both mistakes treatments. Chi-square tests for differences in pro-
portions of precaution/insurance levels are significant at p=0.019 for precaution only
treatments and p<0.001 for the precaution/insurance treatments. Panel B reveals
that, given an initial probability of loss, insurance uptake is considerably higher
under the mistakes treatments (compared to the no-mistakes treatments) and the
increase is higher under the others’ mistakes treatment. When a loss depends on
another participant’s quiz, 67% purchase insurance, compared to 56% when a loss
depends on a participant’s own quiz results. These proportions are significantly
different from each other under the McNemar test (p=0.0193).
larger positive effect on the likelihood of insurance purchase than objectively known
increases. To consider this hypothesis, we compare an initial 10% objective probability
of loss to three different increases in the loss probability: the increase in objective
probability to 32%, the ambiguous increase due to own mistakes, and the ambiguous
increase due to others’ mistakes. Under the objective probability of 32%, insurance and
full precaution are perfect substitutes, and 65% of participants reduce the probability of
loss to zero through one of these approaches. When the probability of loss increases
above 10% due to the own mistakes treatment, 55% of participants fully risk mitigate,
but when the chance of loss depends on others’ mistakes, 67% choose full insurance.
On the surface, there does not appear to be much support for Hypothesis 3, but we
discuss estimates of subjective probabilities of loss and their impact on insurance
purchase in greater detail below.
probability high-severity losses is that individuals ignore or underweight extremely
low probabilities. In contrast to previous studies, under a given expected loss and
insurance load, Laury et al. (2009) find no support for this explanation. We use
nearly identical design elements and parameters in our baseline treatments (13–20,
described in Table 1 Panel B) as in their study to evaluate evidence of this type of
probability weighting by participants in our experiment. In particular, we test the
following two predictions.
bility losses as they are for high probability losses, holding constant insurance load
and expected loss.
Prediction 2: For a given probability and size of loss, participants are less likely to
purchase insurance under a higher load than a lower load. That is, participants
respond to the price of insurance.
only) and McNemar tests of Prediction 1 for differences in insurance purchases
appear to ignore the very low probability of 1% compared to the higher probability
of 10% and furthermore, that they respond in the predictable direction of purchas-
ing less insurance when the price (load) increases.15 When insurance is fairly
priced, they do not appear to overweight the worse outcome (a loss of $45.00 or
$60.00 compared to a loss of $4.50 or $6.00 respectively). Therefore, we fail to
reject Prediction 1 under actuarially fair insurance. We do reject Prediction 1 when
the insurance load increases to 3, but for the reason that insurance purchase is
higher under the low probability of a loss. 16 The percentage of participants
purchasing insurance declines under a high insurance load compared to the low
load, all else equal, but the decrease is not statistically significant in all treatments.
McNemar tests of the hypotheses that participants are equally likely to purchase
insurance under a load of 3 as they are under a load of 1 are significant for a
probability of loss of 10% and expected losses of $0.45 and $0.60 (p<0.0001).
When the probability of loss is 1%, the difference in insurance purchase is weakly
significant for an expected loss of $0.60 (p=.0625), but is not significant when the
expected loss is $0.45 (p=.3750). On net, the baseline results suggest evidence
against probability weighting behavior by participants in this treatment.
16 We interpret this simply as a substitution away from insurance—insuring a realized loss has a relatively
higher price increase under the low-loss event compared to the high-loss event. Given the loss occurs, then the
insurance costs an additional $0.22 per dollar covered under the low loss event, but only an additional $0.02
per dollar covered under the high loss event.
probability of loss
load b
probability
amount
insurance
(p-value) c
(0.0015)14 3 10% $4.50 $0.45 53%
(0.0085)16 3 10% $6.00 $0.60 58%
(0.7539)18 1 10% $4.50 $0.45 87%
(0.4531)20 1 10% $6.00 $0.60 85%
are given the probability of an orange ball being drawn (no ambiguity) and have the option to purchase
insurance against the risk of loss. These treatments alternatively vary the loss probability, the insurance load,
and the loss amount. (n=60 for each treatment.)
b Insurance load is the insurance premium divided by the expected loss (1 = fairly priced)
c The last column shows the McNemar test statistic and p-value for differences in the percent of participants
purchasing insurance in the otherwise-equivalent low and high probability treatments. *** represents signif-
icance at the .01 level
under the experiment parameters. In the precaution-only treatments without ambiguity
(Treatments 1 and 2), participants have menus of choices for reducing the objective
probability of loss from 10% in the low-probability treatment and from 32% in the
high-probability treatment. To examine whether behavior is consistent with the
Bajtelsmit and Thistle (2008) predictions for liability insurance, we compare decisions
from the treatment in which the expected payoff is higher without precaution (low
probability) with decisions from the treatment in which expected payoff is higher with
precaution (high probability). Therefore, the low probability treatment can reveal risk
averse behavior by participants who exercise precaution, while the high probability
treatment can reveal risk seeking behavior by participants who exercise less than full
precaution. Table 4 presents the percent of participants who make each risk mitigation
decision in Treatments 1 and 2.
decisions in the experiment. Over three-quarters of the participants make risk averse
Mistakes)a
(Initial probability of loss = 10%)
(Initial probability of loss = 32%)
front
cost
Precaution
only
Precaution or
insurance
Precaution
only
Precaution or
insurance
were exposed to a known probability of loss (no ambiguity) and were given a menu of risk mitigation
alternatives. In Treatments 1 and 2, they could do nothing (Decision A) or they could reduce the probability of
the bad outcome in Decisions B-K by paying $1.50 to replace orange balls with white balls ($1.50 for one ball
in the low risk treatments and $1.50 for 4 balls in the high risk treatments). In Treatments 3 and 4, they also
had the option of buying insurance for $14.50
modal response in the high and low probability treatments is to reduce the risk of loss to
zero. However, within the high probability treatments, nearly half of the participants do
appear to be risk-seeking in that they opt for less than full precaution which, in this
treatment, provides the highest expected payoff.
(precaution only-known probability) and allows us to better identify consistency with
different risk attitudes. We find that 48% made both risk mitigation decisions consistent
with risk-averse behavior, 17% made both decisions consistent with risk-seeking
behavior, and 7% appear risk neutral in these decisions. However, 28% make risk-
seeking decisions when the probability of loss is high but risk-averse decisions under
the lower probability of loss.
risk attitudes over treatments (those who make a risk-averse choice in Treatment 1 and
a risk-seeking choice in Treatment 2.) This table shows average expected payoffs, given
the actual precaution expenditures, and the average cost of that risk management
decision in terms of foregone expected payoff. On average, these participants reduce
the probability of loss to a level of 5–9%, but not to zero, suggesting a preference for a
lower than initial, but still positive, risk of loss.17 Risk mitigation is cheaper under the
initial high probability and participants consume more of it, on average reducing the
initial risk of loss by half in the initial p=10% treatment, and by about three-quarters in
the initial p=32% treatment. As a result, the expected forgone earnings from their risk
management choices (compared to the choice which maximizes the expected payoff) is
much higher in the low probability case at $5.00 than it is in the high probability case at
$0.69. These participants appear to behave consistently with the predictions of cumu-
lative prospect theory for behavior under losses, i.e., they are risk averse over low-
probability losses and risk-seeking over high-probability losses.18 However, it may also
be the case that some other behavioral effect (such as regret or illusion of control) or
framing effect is influencing their decisions, or that they may simply have found some
ambiguity treatments. In the low probability treatment (#1), expected payoff is higher without precaution, so a
participant is labeled as Brisk averse^ if they choose to pay for any precaution. In the high probability treatment
(#2), the expected payoff is highest with full precaution, so a participant is labeled as Brisk-seeking^ if they
choose to take less than full precaution
18 See, for example, Tversky and Kahneman (1992), Camerer (1998), Starmer (2000), and Harbaugh et al.
(2010).
of loss to 8% was by far the modal choice in the high initial probability treatment and
the frame may have somehow made 8% a focal point for these individuals. Furthermore
the scale of expected losses, which varies substantially between the treatments (ranging
from $4.50 to $15.00 under p=10% and $12.00 to $14.40 under p=32%), combined
with a choice-task frame, has been shown to impact decisions. Beauchamp et al.
(2012), Harrison et al. (2007a), (2007b), and Andersen et al. (2006), among others,
all find that scaling manipulations affect estimates of risk aversion. In sum, before
presenting our formal analysis of decisions involving insurance and ambiguity, we note
that participants’ behavior under risk, with objectively known loss probabilities, ap-
pears generally in line with the existing literature, and while the question of which
expected or non-expected utility specification best represents preferences is an impor-
tant one, it is beyond the scope of this paper.20
ambiguity with respect to the probability of loss impacts precaution and insurance
decisions as suggested by our theoretical model and formalized in Hypotheses 2, 3, and
5. In the design of our experiment, there are three levels of ambiguity. There is no
ambiguity in the No Mistakes treatments because the probability of loss is explicitly
stated. Ambiguity was greatest in the treatments where the loss probability depended on
the risk of mistake by another participant. Thus, we consider the level of ambiguity to
be increasing from No Mistakes to Own Mistakes to Others’ Mistakes treatments.
occurs, or may have an illusion of control resulting from taking a risk-mitigating action. See Jaspersen (2014)
for discussion of entertainment value in hypothetical settings.
20 Because the ranking of outcomes remains constant across the design, we are unable to rule out rank-
dependent expected utility (see Quiggin 1982), even if we find support for another representation.
probability treatment and risk-seeking decisions in the high probability treatmenta
treatment
(Initial
probability of
loss = 10%)
treatment
(Initial
probability of
loss = 32%)
payoff with full precaution ($)
(risk averse) and chose less than full precaution in Treatment #2 (risk-seeking)
purchase insurance in the treatments where it is available. In the No Mistakes
treatments, taking full precaution and insurance are perfect substitutes with respect
to the impact on risk, so the dependent variable is equal to 1 if the participants
buy insurance OR take full precaution in those treatments. We include an inde-
pendent categorical variable for the level of precaution selected in the parallel
precaution-only treatment (Full precaution; Part precaution; reference category =
No precaution). Based on predictions in Bajtelsmit and Thistle (2008), we expect
that those who prefer full precaution when insurance is not available will switch to
insurance when it becomes available. We control for the initial probability of loss
(High risk = 1 in treatments where the initial probability of loss p=0.32; reference
category = Low risk p=0.1) and, where applicable, the level of ambiguity (No
Mistakes; Others’ Mistakes; reference category = Own mistakes). Table 7 reports
the results of these estimations. The coefficients are estimated log-odds ratios of
the included category to the reference category.
which the participants know the probability of loss. Consistent with Hypothesis 2, we
find that participants who choose to pay to reduce their probability of loss when
insurance is not an option are significantly more likely to buy insurance when it is
available, as compared to those who took no precaution. In this regression, the
participants are significantly more likely to pay to reduce their risk to zero in the high
risk treatments.
treatment types, controlling for the degree of ambiguity. As compared to the treatments
with known probabilities of loss, the initial probability of loss is no longer a significant
factor in the decision to buy insurance. Consistent with our theoretical predictions, and
Hypotheses 3 and 5, insurance take-up is increasing in the degree of ambiguity.
Participants in the No Mistakes treatments were significantly less likely to buy insur-
ance and those in the Others’ Mistakes treatments were significantly more likely to buy
insurance, as compared to decisions in the Own Mistakes treatments.
could be an interaction between the effects of risk treatment and ambiguity
treatment. To control for this, we include interaction terms for the initial probability
loss at different ambiguity levels (High Risk X No Mistakes; High Risk X Others’
Mistakes; Reference Category: High Risk X Own Mistakes). The results of this
regression are reported in the third column of Table 7. Although the signs and
significance of the other control variables are unchanged, the interaction term for
High Risk X No Mistakes is positive and significant and we see a larger negative
coefficient on No Mistakes. This implies that the positive effect of high risk is
primarily found in the treatments without ambiguity.
Hypotheses 3, 4 and 5. First, participants who prefer full precaution when insur-
ance is unavailable are more likely to buy insurance when it is an available option
for them. Second, we find evidence consistent with Hypothesis 3: ambiguity
increases the demand for insurance. Finally we find that higher ambiguity is
associated with a larger likelihood of insurance purchase compared to lower
ambiguity, as predicted by Hypothesis 5.
probabilities of loss, given their own risk management choice and their estimate of the
risk of mistakes by themselves or others. In the previous section, we used only a
categorical measure of ambiguity based on treatment type. However, the effect of the
categorical measure of ambiguity may differ by participant due to individual differ-
ences in subjective estimates of the probability of mistakes. Because we asked the
participants to estimate their own driving quiz score and the average for others in the
(Robust standard errors clustered at the subject level)
precaution/insurance
treatments
(1.358)
(0.347)
(0.387)
insurance unavailable
(1.493)
(0.670)
(0.669)
insurance unavailable
(1.499)
(0.426)
(0.462)
(High Risk=1)
(0.725)
(0.331)
(0.379)
(0.297)
(0.448)
(0.250)
(0.240)
(0.697)
Mistakes
Probability > Chi-square 0.000 0.000 0.000
if the participant chose to buy insurance or the equivalent. The model in the first column includes the
unambiguous No Mistakes treatments only. In those, the participants face a known initial probability of loss
and can take precaution only (Treatments 1 and 2) or choose between precaution and insurance (Treatments 3
and 4). We test the prediction that participants who prefer full precaution in the treatments without insurance
will switch to insurance in the treatments where that is an option. The results in the two right-hand columns
pool the results for all the treatments, including No Mistakes, and the ambiguity treatments Own Mistakes and
Others’ Mistakes (Treatments 1–12). The dependent variable in those models is the participant’s decision to
buy insurance in Treatments 3, 4, 7, 8, 11, and 12
by treatment type. We calculate subjective probability of loss (SPL) according to:
where
m ¼ probability of mistake ¼ 0 for No Mistakes treatments;
20
for Own Mistakes Treatments ;
20
for Others’ Mistakes Treatments
expected risk of mistake will be more likely to purchase insurance. The calculated
subjective probabilities of loss for the main treatments (precaution/insurance), prior to
any spending on risk mitigation, are summarized in Table 8 below. For each participant,
the SPL is the same in treatments that differ only by the addition of an insurance option
(e.g., #1 and #3), so there are 6 different SPLs for each participant. For the unambig-
uous treatments, we assume that the SPL is equal to the actual probability of loss, as
defined in the treatment presentation. This table shows that the risk of mistakes
increases the participants’ SPL relative to the unambiguous probability of loss. We
expect that individuals who have a higher subjective probability of loss due to higher
expected risk of mistake will be more likely to purchase insurance.
dependent variable is a dummy variable equal to one if the participant purchased
insurance or paid for full precaution in the treatments where insurance was available.
The subjective probability of loss (SPL), prior to making the precaution or insurance
decision, is included as a control variable. We include two alternative specifications
here, one with a single categorical variable for ambiguity (a dummy variable equal to
p=0.1
p=0.32
treatments, it is a known probability and is therefore the same for each participant. For the Own Mistakes and
Others’ Mistakes treatments, SPL for each participant is calculated as p + (1 − p)m, where p is the initial
probability of loss and m is the participant’s subjective estimate of the probability of mistake. The probability
of mistake is calculated as one minus the participant’s estimate of their own quiz score for the Own Mistakes
treatments and one minus the participant’s estimate of others’ quiz scores for the Others’ Mistakes treatments
No Mistakes) and the other with separate dummy variables by degree of ambiguity
(reference category: Own Mistakes). The first model may be preferable because it
avoids the issue of the correlation between SPL and the degree of ambiguity. As
reported in Table 9, the results of this analysis show that subjective probability is a
significant factor influencing precaution and insurance decisions. However, even after
controlling for this factor, we find that participants are more likely to insure in the
ambiguous treatments. Comparing the controls for level of ambiguity in the last
column, we find that Others’ Mistakes treatments significantly increase the likelihood
of insuring or taking full care. In contrast, the likelihood is significantly lower in the
unambiguous, No Mistakes treatments.
probability of loss (SPL)b
(Robust standard errors
clustered at the subject level)
(0.442)
(0.538)
insurance was unavailable
(0.602)
(0.607)
insurance was unavailable
(0.387)
(0.391)
(0.387)
(0.374)
(0.243)
Before Precaution/Insurance b
(1.689)
(1.722)
Probability > Chi-square 0.000 0.000
if the participant chose to buy insurance or the equivalent. Alternative specifications include a general control
for ambiguity in the first model (Any Mistakes=1) versus separate dummy variables by degree of ambiguity in
the second model (omitted category is Own Mistakes). Both models pool all 12 precaution/insurance
treatments (n=720)
b For the No Mistakes treatments, SPL is the initial probability of loss prior to spending money to reduce the
risk or buy insurance. For the Own Mistakes and Others’ Mistakes treatments, SPL is calculated as p + (1 −
p)m, where p is the initial probability of loss and m is the participant’s subjective estimate of the probability of
mistake. The probability of mistake is calculated as one minus the participant’s estimate of their own quiz
score for the Own Mistakes treatments and one minus the participant’s estimate of others’ quiz scores for the
Others’ Mistakes treatments
care because, to the extent that the precaution has no impact on the additional unknown
risk, the marginal benefit of taking precaution is lower. In our model and experiment
design, the risk of mistakes reduces the benefit of precaution because it only affects the
initial probability of loss and has no impact on the additional risk from mistakes. We
hypothesize that the risk of mistakes will reduce the incentive to spend on precaution
(Hypothesis 4). To investigate this issue, we estimate a tobit regression in which the
dependent variable is the amount spent on precaution in the treatments where insurance
is not available. A tobit regression is selected for this estimation because the dependent
variable is truncated. The minimum amount spent on precaution is 0 and the maximum
amount of precaution is limited by the choices offered to the participants in the given
treatment. Controls are included for subjective probability of loss and mistakes treat-
ment type. The results are shown in Table 10. After controlling for SPL, we find that
participants spent significantly less on care in the more ambiguous treatments. How-
ever, the amount spent on care in the most ambiguous Others’ Mistakes treatments is
not found to be significantly different from the amount spent in the Own Mistakes
treatments. As in the previous section, SPL is a significant and positive factor.
(Robust Standard Errors
Clustered at the Subject Level)
(1.134)
(1.506)
Loss (SPL)b
(3.095)
(3.119)
(0.596)
(0.643)
(0.687)
precaution in the precaution-only treatments (Treatments 1, 2, 5, 6, 9, and 10.) Alternative specifications
include a general control for ambiguity in the first model (Any Mistakes=1) versus separate dummy variables
by degree of ambiguity in the second model (omitted category is Own Mistakes). Both models pool all the
precaution-only treatments (n=360)
b SPL is the subjective probability of loss prior to any spending on risk mitigation. For the No Mistakes
treatments, it is a known probability and is therefore the same for each participant. For the Own Mistakes and
Others’ Mistakes treatments, SPL for each participant is calculated as p+(1 − p)m, where p is the initial
probability of loss and m is the participant’s subjective estimate of the probability of mistake. The probability
of mistake is calculated as one minus the participant’s estimate of their own quiz score for the Own Mistakes
treatments and one minus the participant’s estimate of others’ quiz scores for the Others’ Mistakes treatments
ambiguous probability of loss and we employ a novel experimental design to test its
predictions. This is the first study which allows participants to choose between multiple
levels of costly risk mitigation and insurance in a controlled environment. We find that
ambiguous increases in loss probability increase insurance uptake by more than similar but
known increases in loss probability, suggesting evidence in favor of ambiguity aversion.
precaution can also result in full risk mitigation. We also test whether participants are
less responsive to lower probabilities of loss, holding constant the expected loss.
Finally, we introduce ambiguity surrounding the probability of loss and examine the
impact on insurance and precaution decisions. Therefore, participants make risk miti-
gation decisions under conditions of both known and uncertain probabilities of loss.
in the presence of ambiguity, and provide evidence that may inform two puzzling
observations regarding insurance decisions: the purchase of liability insurance and
underinsurance against catastrophic loss. Paying for risk management in our experi-
ment is similar to investing in risk mitigation to meet the standard of care and thereby
avoiding liability. We find that when the probability of loss is known, participants
choose the more efficient way to achieve their desired level of risk mitigation. When
the probability of loss is ambiguous, participants are more likely to buy insurance.
These results suggest that the tendency to overinsure against liability rather than meet a
standard of care through precaution may be partially explained, as suggested by our
model, by sources of ambiguity surrounding liability losses.
resulting from the tendency to ignore very low probabilities. Controlling for expected
loss under insurance-only treatments, we find that participants neither ignore nor
underweight (known) low probability-high severity losses. Our results also reveal that
participants do not overweight high-probability losses. The results lend further support
to the Laury et al. (2009) findings that probability misperceptions are not an adequate
explanation for observed underinsurance against catastrophe.
substitute liability insurance in place of meeting a standard of care. High transparency and
consistency regarding compliance with a standard of care, when possible, may increase
precaution and decrease the risk of loss due to accidents, whereas unclear standards and
relatively unpredictable enforcement may deter expenditure on loss prevention. This is
important, especially under environmental loss liability, where investment in precaution
may be more expensive and damages more extensive, and yet liability standards are
relatively unclear. For example, in addition to the usual risks related to property, liability,
life and health, individuals and firms facing liability from environmental risks are also
exposed to ethical, cultural, business, reputational, and regulatory uncertainty. Castellano
(2010) anticipates an increase in systematic risk of catastrophes, spreading through new
networks between people, markets, and networks, which are particularly difficult to antic-
ipate because they have never occurred in the past.
designed to test for consistency with specific preference types. In treatments with
decisions that are consistent with risk-averse preferences, while almost a third appear
risk averse under an initial lower probability of loss but slightly risk seeking under a
higher initial probability of loss. Additional research is needed to carefully examine the
effect of risk and ambiguity attitudes on the expenditure on care.
Harrison, James Sundali, Bill Rankin, and seminar participants at Colorado State University, Ludwig-
Maximilian University, University of Münster, and at a Behavioral Insurance Workshop sponsored by the
Georgia State University Center for the Economic Analysis of Risk for helpful comments on earlier drafts of
this paper. They are grateful for financial support from the Colorado State University College of Business and
the Nevada Insurance Education Foundation.
avoid risk when individuals can exercise care.
We do not restrict the dependence of π on ε, nor do we require that beliefs be unbiased.
Let
the argument is still valid if utility is separable in effort. The individual has the second
order expected utility function
�
1−π c; εð Þ
u w−cð Þ þ π c; εð Þu w−c−dð ÞÞ
ðA:2Þ
where A is an ambiguity premium. Then we have
For an ambiguity neutral individual, the ambiguity premium is zero and the optimal
level of care, c0, maximizes EF{U(c, ε)}. Then
� �
� ��
level of care, c1, maximizes EF{Φ(U(c, ε))} Willingness to pay is given by
� �
� ��
0, ε)}>EF{U(c
increases the willingness to pay to avoid risk when an individual’s ability to take
care affects the probability of a loss.
0, ε)}=EF{U(c
Then A>0 implies that P1>P0. The results in Alary et al. (2010) and Snow (2011) are
special cases of the result here.
treatments
Orange Balls
Ball is Drawn
Menu of choices in a precaution/insurance treatment:
Decision Up-front Cost to Replace
Orange Balls
ball is drawn
B $1.50 9 91 9%
C $3.00 8 92 8%
D $4.50 7 93 7%
E $6.00 6 94 6%
F $7.50 5 95 5%
G $9.00 4 96 4%
H $10.50 3 97 3%
I $12.00 2 98 2%
J $13.50 1 99 1%
K $15.00 0 100 0
paper. Toulouse School of Economics.
Experimental Economics, 9, 383–405.
Journal of Risk and Uncertainty, 48(3), 207–229.
Papers on Risk and Insurance, 27(2), 152–180.
of Risk and Insurance, 75(5), 815–823.
and Insurance Review, 34, 105–116.
Bajtelsmit, V., & Thistle, P. (2015). Liability, insurance, and the incentive to obtain information about risk.
Beauchamp, J., Benjamin, D., & Chabri, C. (2012). How malleable are risk preferences and loss aversion?
Camerer, C. (1998). Prospect theory in the wild: Evidence from the field. In D. Kahneman & A. Tversky
Camerer, C., & Kunreuther, H. (1989). Experimental markets for insurance. Journal of Risk and Uncertainty,
Camerer, C., & Weber, M. (1992). Recent developments in modeling preferences: uncertainty and ambiguity.
Castellano, G. (2010). Governing ignorance: emerging catastrophic risks—industry responses and policy
DeDonder, P., & Hindriks, J. (2009). Adverse selection, moral hazard, and propitious selection. Journal of
Fiore, S. M., Harrison, G. W., Hughes, C. E., & Rutström, E. E. (2009). Virtual experiments and environ-
Fischbacher, U. (2007). z-Tree: Zurich toolbox for readymade economic experiments. Experimental
Ganderton, P., Brookshire, D., McKee, M., Stewart, S., & Thurston, H. (2000). Buying insurance for disaster-
Greenwald, B. C., & Stiglitz, J. E. (1990). Asymmetric information and the new theory of the firm: financial
Han, L.-M. (1996). Managerial compensation and the corporate demand for insurance. Journal of Risk and
Harbaugh, W., Krause, K., & Vesterlund, L. (2010). The fourfold pattern of risk attitudes in choice and pricing
Harrison, G. W., Lau, M. I., & Ruström, E. E. (2007a). Estimating risk attitudes in Denmark: a field
Harrison, G. W., List, J. A., & Towe, C. (2007b). Naturally occurring preferences and exogenous laboratory
Harrison, G. W., Rutstrom, E. E., & Sen, S. (2010). Behavior towards endogenous risk in the laboratory.
Harrison, G. W., Martinez-Correa, J., Swarthout, J. T., & Ulm, E. R. (2013). Scoring rules for subjective
Hogarth, R., & Kunreuther, H. (1989). Risk, ambiguity, and insurance. Journal of Risk and Uncertainty, 2, 5–
Jaspersen, J. (2014). Experimental studies of insurance demand: A review. Mimeo: Ludwig Maximilians
Kahneman, D., & Tversky, A. (1979). Prospect theory: an analysis of decision under risk. Econometrica,
Klibanoff, P., Marinacci, M., & Murkerji, S. (2005). A smooth model of decision making under ambiguity.
Journal of Risk and Uncertainty, 28(1), 5–21.
Regulation, 23(4), 1–18.
Uncertainty, 23, 103–120.
Risk and Uncertainty, 39, 17–44.
281–296.
response to unlikely events. Journal of Risk and Uncertainty, 7, 95–116.
Uncertainty, 41, 113–124.
343.
Uncertainty, 45(2), 135–157.
Shavell, S. (2000). On the social function and the regulation of liability insurance. Geneva Papers on Risk and
Slovic, P., Fischhoff, B., Lichtenstein, S., Corrigan, B., & Combs, B. (1977). Peference for insuring against
Snow, A. (2011). Ambiguity aversion and the propensities for self-insurance and self-protection. Journal of
Starmer, C. (2000). Developments in non-expected utility theory: the hunt for a descriptive theory of choice
Tversky, A., & Kahneman, D. (1992). Advances in prospect theory: cumulative representation of uncertainty.
Media B.V. and its content may not be copied or emailed to multiple sites or posted to a
listserv without the copyright holder’s express written permission. However, users may print,
download, or email articles for individual use.
Background and theory
Background
The theoretical effect of ambiguity on precaution and insurance decisions
Experimental design and procedures
Earnings task
Risk management treatments
Procedures
Hypotheses
Results
Overview and nonparametric results
Tests of baseline treatment predictions
Risk attitudes
Precaution and insurance decisions
Subjective probabilities
The effect of ambiguity on level of precaution
Conclusions
Appendices
Appendix 1: Ambiguity aversion increases willingness to pay
Appendix 2: Examples of precaution-only and precaution and insurance treatments
References
THIRD-PARTY LIABILITY IMPLICATIONS
FOR THE INSURANCE OF SPACE LOSSES
Piotr Manikowski
safe. When disaster strikes, as in the Columbia Space Shuttle disaster of 2003,
third parties as well as those directly involved are financially affected. This
article considers how these issues are treated under international law. It also
analyzes what products the insurance markets offer as protection against such
third-party liabilities.
On February 1, 2003 the Columbia space shuttle, the oldest of a fleet of four, was destroyed
during reentry into the earth’s atmosphere, causing the death of all seven crew. The total
damage is estimated at about US$3 billion. During the International Space Insurance
Conference that took place in Florence (April 3–4, 2003), Paul Pastorek, General Counsel
of U.S. space agency NASA reported the latest findings of the investigations into the
loss of the Columbia space shuttle (Stahler, 2003). NASA had recovered 45,000 pieces
of wreckage from an area 100 miles long and 10 miles wide. The material recovered
comprised in terms of weight almost half the lost shuttle. The initial suspicion was that
one of the brittle ceramic tiles on the underside of the wing had been damaged during
take-off, allowing heat to enter into the wheel chamber. A video tape was recovered, but
this stopped transmitting shortly before the crew realized that there were problems with
the re-entry. NASA subsequently recovered an instrument used on the shuttle to record
a multitude of technical data during each flight. These data revealed that the build-up of
heat inside the right wing came from the leading edge of the wing, which was made of
an extremely hard and tough material. The initial ceramic-tile theory thus seemed to be
disproved. However, the official report has yet to be released. Was Columbia the victim
of a collision with space debris, of which thousands of items are now littering the earth’s
orbital paths? It may never be established with absolute certainty what really happened
al. Niepodleglosci 10, 60-967 Poznań, Poland (e-mail: piotr.manikowski@ae.poznan.pl). This ar-
ticle was subject to anonymous peer review.
The author wishes to thank Peter Birks for his language revision of the text.
Texas.
However, it remains possible that space exploration could inflict harm on third parties
on the ground. This could evoke the civil liability of the guilty party. It is possible to buy
third-party liability insurance for space losses.
Until the mid-1960s the insurance market was not interested in the space industry, since
it had been focused on the military aims of the United States and the Soviet Union.
The launching of the first artificial earth satellite on October 4, 1957 and the sending of
the first man—Yuri Gagarin—into space on April 12, 1961, accelerated the development
of the space industry—including its commercial arm. It became clear to the insurance
industry that there would soon be a commercial space market available for exploitation.
of aerospace clients, brokers, and the underwriting community worldwide. The goal of
that work was to provide flexible forms of insurance for a volatile class of exposure,
which was not yet quantified by loss data.
unreliable and most of the payloads were experimental—the risk was self-insured by
governments and space agencies that financed the flights. The first company to devote
its attention to the use of this new technology for commercial purposes and to show
an interest in obtaining insurance protection was American Communication Satellite
Corporation (ACSC), founded in 1962. On April 6, 1965 ACSC obtained the first space
insurance policy to protect the first commercial geostationary communication satellite
Early Bird (Intelsat I-F1). The policy covered only material damages to the satellite prior
to lift-off (pre-launch insurance for US$3.5 million) and third-party liability insurance
for US$ 5 million (Daouphars, 1999).
developed a wider scope of space insurance cover. There are currently three basic groups:
payments insurance).
ability insurance is taken into consideration. It should be emphasized that, since the
early days of satellite insurance, little notice has been taken of the issues connected with
liability for space damages.
Space activity and the use of spacecraft entail the possibility of inflicting damage on third
parties, for which the owner or the user of a satellite is usually responsible. In the event
be enormous.
was decided that the responsibility for damages should be regulated by international
law. From the late 1960s a series of five treaties and conventions were agreed upon that
covered the exploration of space and the legal ramifications for events on the ground:
Use of Outer Space, including the Moon and Other Celestial Bodies (the “Outer
Space Treaty,” adopted by the General Assembly in its resolution 2222 (XXI)),
opened for signature on January 27, 1967, entered into force on October 10, 1967,
98 ratifications and 27 signatures (as of January 1, 2003);
Return of Objects Launched into Outer Space (the “Rescue Agreement,” adopted by
the General Assembly in its resolution 2345 (XXII)), opened for signature on April
22, 1968, entered into force on December 3, 1968, 88 ratifications, 25 signatures, and
1 acceptance of rights and obligations (as of January 1, 2003);
(the “Liability Convention,” adopted by the General Assembly in its resolution 2777
(XXVI)), opened for signature on March 29, 1972, entered into force on September
1, 1972, 82 ratifications, 25 signatures, and 2 acceptances of rights and obligations
(as of January 1, 2003);
istration Convention,” adopted by the General Assembly in its resolution 3235
(XXIX)), opened for signature on January 14, 1975, entered into force on September
15, 1976, 44 ratifications, 4 signatures, and 2 acceptances of rights and obligations
(as of January 1, 2003);
Bodies (the “Moon Agreement,” adopted by the General Assembly in its resolution
34/68), opened for signature on December 18, 1979, entered into force on July 11,
1984, 10 ratifications and 5 signatures (as of January 1, 2003).
of public law that deals with activities which occur outside the earth’s atmosphere. From
a practical point of view, the effect of these treaties is somewhat limited. The main reasons
for their ineffectuality is that they mostly deal with issues of principle and not with the
day-to-day activities of aerospace companies (d’Angelo, 1994).
third-party liability and states that: “Each State Party to the Treaty that launches or
procures the launching of an object into outer space, including the moon and other
celestial bodies, and each State Party from whose territory or facility an object is launched,
is internationally liable for damage to another State Party to the Treaty or to its natural
or juridical persons by such object or its component parts on the earth, in air or in outer
space, including the moon and other celestial bodies.”
which the signatory states are responsible for all acts and omissions of their government
agencies and of all their natural or juridical persons. Article II of the “Liability Conven-
tion” states that: “A launching State shall be absolutely liable to pay compensation for
damage caused by its space object on the surface of the earth or to aircraft flight.” There
is no limit to the amount of indemnity, but compensation is restricted to damage caused
directly by space objects. In addition, damage on the earth is clearly distinguished from
damage in outer space. The first applies if a space object inflicts damage on the surface
of the earth or to aircraft in flight. In such a case the liability of a launching state shall be
absolute. However, liability for damage to other space objects in outer space is based on
fault (Articles III, IV, VI). In consequence such regulations of space law usually cause the
necessity of buying an insurance policy against third-party liability. Also, treating dam-
age on the earth and damage in outer space differently is very important when assessing
the liability risk, because, according to Kowalewski (2002), the intra-space liability based
on fault creates a less-intensive risk of third-party liability.
space” starts. Here there are many different opinions, and this has created both sci-
entific and legal problems. Simply speaking, outer space begins where airspace finishes
(Antonowicz, 1998). Another definition is that outer space begins at the lowest altitude
at which it is technically feasible for a satellite to orbit the earth, which is currently
about 80 kilometers above sea level (Space Flight and Insurance, 1992). According to
this definition, the true birth of space flight was in 1942 when a German A-4 (also called
V2) rocket was launched, because its altitude exceeded 80 kilometers. Another source
(Encyklopedia Geograficzna Świata, 1997) announces that space begins at about 180 kilo-
meters, which is where the density of atmosphere becomes so thin that it is possible for a
few days’ free flight around the earth. Although there is no clear-cut lower limit of outer
space, international practice assumes that outer space “begins” at the altitude of about
100 kilometers above see level (Antonowicz, 1998).
tion of the space object that is responsible for the damage. It is to assure that such identifi-
cation is possible that a “Registration Convention” demands that each state launching an
object into outer space register the said object. If it is possible to confirm who launched the
given space object, the injured party can claim its compensation on the basis of principles
given in the “Liability Convention” (Articles VIII–XX).
is a possibility that the launch vehicle or its parts (e.g., external tanks, strap-on boosters)
can cause damage to any objects on the ground, sea, or to aircraft in flight. For this reason,
satellites are usually launched in a seaward direction, sometimes indeed from a platform
on the sea (e.g., a Sea Launch rocket). Shipping lanes nearby and airspace in the region of
the launch are closed during launching time. If a launch vehicle deviates from its nominal
trajectory and threatens to cause damage, it can be blown up by a built-in self-destruction
device, thus minimizing the risk of damage. The most dangerous are those accidents that
arise on the launch pad or within a minute or thereabouts of take-off. This happened in
1986 when a Titan rocket exploded at a height of only 240 meters, destroying both the
launch pad and the launch facilities. In another case a farmer from Georgetown in Texas
had a 500-pound fuel tank from a Delta II booster rocket land nearly intact just 150 feet
from his house (Coffin, 1997). Other examples include:
tower. It crashed into a hillside 22 seconds into flight, killing at least 100 people and
destroying the attached Intelsat 708 satellite (Anselmo, 1999);
cow—the U.S. Government had to pay to Cuba US$2 million in compensation, thus
creating one of the more expensive cows in history (Bulloch, 1988);
rocket fragment plummeting to the ground, 6 miles from the town of Salamalkol
(Kazakhstan), with a further 440-pound piece falling into a yard of a home in a
nearby village—Kazakh authorities presented a claim to the Russian Government
in the amount varying between US$270,000 and US$288,000;
its flight, with the reported claim paid by Russia to Kazakhstan in the region of
US$400,000 (for these and more examples of accidents, see Schmid, 2000);
of a VLS-3 rocket on the launch pad. The rocket booster was mistakenly ignited
during tests, three days prior to the scheduled launch.
parties. Damages in outer space are usually connected with either a collision or through
electromagnetic interference in transmissions of one satellite or terrestrial radio links
caused by the system of another satellite. However, there is no doubt that a guilty party
is obligated to compensate for that damage.
another object. A crash is possible with three kinds of objects:
� with space debris;
� with a heavenly body such as a meteor, in which case there would be no liability.
under the constant control of ground stations that track their orbits. It has been rec-
ommended for several years that satellites that have reached the end of their working
life-span be moved away from their geostationary orbit. Satellites from low orbits are
usually de-orbited. They partly or completely burn up in the atmosphere, with any debris
theoretically falling into oceans. One example of a space object being treated in this way
was the Space Station MIR, taken out of commission in 2001. Other satellites are shifted
to higher orbits. In the second case the altitude increase should be at least 150 kilometers.
The fuel required for that operation is equivalent to the amount needed for six weeks
active station-keeping (Blassel, 1985).
the earth. The majority no longer serve any useful purpose—old satellites, fragments of
rockets—but are a danger to functioning spacecrafts. One example occurred in August
1997, when a 500-pound discarded rocket motor floating in earth’s orbit passed within
2.5 kilometers of an ozone-measuring satellite worth tens of millions of dollars. NASA
kilometers of the orbiters (Coffin, 1997).
to catalogue all objects sent into space. Since 1957 about 9,000 objects have been logged
that are still being tracked. More than 100,000 bits of debris are still in space that are too
small to follow. Such debris includes pieces of aluminum chuffed from satellite boost
stages, blobs of liquid metal coolant that leaks from discarded space reactors, debris
resulting from satellite explosions, and lens covers and other hardware discarded during
normal satellite operations. Some of this material will remain in earth orbit for hundreds
or even thousands of years (Ailor, 2000). However, only 7 percent of the registered
objects are still functioning—the rest are nonfunctional satellites (20 percent), rockets’
upper stages (16 percent), remains after missions (12 percent), and different fragments
(45 percent). This means that over 90 percent of objects sent into outer space are now
nonfunctional debris. Space (orbital) debris is technically defined as any man-made
earth-orbiting object, which is nonfunctional with no reasonable expectation of assuming
or resuming its intended function or any other function for which it is or can be expected
to be authorized, including fragments and parts thereof (Flury, 1999).
space debris is small (estimated by actuaries at about 0.01 percent), but as the amount
of debris in space increases, the possibility of an operational satellite being hit is rising.
This process is irreversible, since the cleaning-up of space is economically (and also
technically) unfeasible. Most space debris is located in orbital regions that are frequently
used for a multitude of applications (low orbits: 800 to 1,600 kilometers and geostationary
orbit of about 36,000 kilometers above the earth’s surface).
earth. The lower the orbit and the greater the mass, the greater the chance of a reentry.
A satellite falling to the earth has the same effect as a natural meteor. When it passes
through the atmosphere, huge heat and pressure develops and the object is broken up
into numerous pieces, most of which are completely burnt up. Only a very few large
pieces survive to reach the ground. Some examples of reentries from outer space:
Atlantic Ocean east of the Azores in January 1978;
coast of Australia in July 1979 (Space Flight and Insurance, 1992).
is quite low—over two-thirds of the earth’s surface is sea and much of the land is sparsely
populated.
craft with nuclear power generators on board. On January 24, 1978 the Russian satellite
Cosmos 954 crashed in Northwest Canada, contaminating large areas with radioactivity.
Based on the provisions of the “Liability Convention” and general principles of inter-
national law, a claim in the total amount Can$6.04 million was submitted, although the
matter was settled some time later following negotiation, in the amount of Can$3 million.
There are still spacecraft that use nuclear materials for power supplies. This constitutes
a serious risk.
device for potential damage. It is unclear what would happen if, during replacement of a
broken part, the astronaut-mechanic destroyed the repaired module. How can companies
that have spent huge sums of money in the manufacturing of such equipment protect
themselves against the risk of sharing multipurpose platforms or space stations? How
can the “earth” (national) law be applied to these situations? International space law has
not solved this problem yet. This matter should engage not only lawyers, but also other
interested parties, including the insurance community.
The need to procure third-party liability insurance is based on protection against fi-
nancial claims resulting from certain fundamental principles of international space law
(mainly the “Outer Space Treaty” and the “Liability Convention”) as well as national leg-
islation, executive orders, administrative regulations, and judicial decisions that control
or otherwise influence the conduct of activities in space (Meredith, 1992). The require-
ment for and scope of liability cover is dependent on the Launch Services Contract with
the launching agency. In some cases the satellite owner is responsible for the purchase
of insurance, but the majority of launch suppliers now include the arrangement of the
appropriate coverage as part of the launch services supplied by them.
compensation for the victim. Therefore, liability insurances fulfill a double protection
function. Space third-party liability insurance has the same purpose.
for launch, the lift-off itself, in-orbit operations of a satellite program, and finally the
reentry. This type of insurance will provide compensation in the event of personal injury
and property damage to third parties, both on the ground and in space, caused by the
launch vehicle sections or the satellite. So the space third-party liability insurance applies
to damages to a third party in connection with such events as: falling of a satellite or
a rocket or elements thereof on the ground, fire during ignition, explosion of a satellite
in orbit, collision with another spacecraft, etc. (Zocher II, 1988; Zocher IV, 1988). The
launch pad is usually not covered. Neither is damage to payloads, since there is often a
clause in the underlying contracts in which all parties agree to a cross-waiver of liability.
According to Pino (1997) this applies also even in the case of gross negligence. Therefore,
insurance covers the period from the delivery of a spacecraft to a launch pad till the day
of expiration of that policy or the destruction of the satellite, whichever comes first.
Contracts are extended to the end of a spacecraft’s life.
launch of a satellite and for a set period thereafter. They will add the satellite operator to
the liability insurance they hold as an additional named insured. The satellite operator
will also occasionally purchase in-orbit third-party cover, which comes into operation
when the launch coverage expires. This insurance is taken out either to comply with leg-
islation in certain countries, or for the satellite operator’s own peace of mind. Sometimes
producers, launching states, or other related organizations could be coinsured.
2000):
� claims caused by radioactive contamination of any nature whatsoever;
� noise, pollution, and related risks;
� any obligation of the insured to his employees or any obligation for which the
workers’ compensation, death, or disability benefits law, equal opportunity laws,
or under any similar law;
� claims resulting from an interruption in telecommunications service to satellites,
� liability of any insured as a manufacturer;
� claims made for the failure of the spacecraft to provide communications service.
example, in the United States, the government has renewed legislation that limits com-
mercial operations liability for damage caused by a launch failure to US$200 million,
with the U.S. government responsible for the balance of up to US$1.5 billion in liability
specified by international treaties (Pagnanelli, 2001).
the capacity required as well as specific liability issues. In the context of the launch (14
percent to 18 percent of the sum insured) and in-orbit (2 percent to 4.5 percent of the sum
insured) premiums, liability premiums are relatively small amounts and are typically at
a level of around 0.1 percent (per year) of the required limit of liability (Space Insurance
Briefing, 2001). However, when Russians protected themselves against the failure of
the falling of the MIR Station into the Pacific ocean (March 23, 2001), they had to pay
about US$1 million premium for US$200 million limit of responsibility. The high level of
premium required could have shown the degree of confidence of the insurance market
in the reliability of MIR.
Thus far there have been only a few cases of third-party liability for space losses. It should
also be noted that there has never been a substantial claim on a space liability insurance
policy. It remains to be seen if this type of coverage would remain available if a major
accident was to occur. The tragedy of the Columbia space shuttle shows that potential
damage could be enormous (if the catastrophe had occurred above a city). The debris
of the orbiter fell on a sparsely populated area near the Texas/Arizona border. In total,
NASA received 66 claims for property damage and loss of cattle, totaling US$500,000.
The corridor of debris passed 15 miles south of Houston and Fort Worth. However, it
also has to be said that the debris of the space shuttle Columbia did not hit or hurt a
single person. According to Mr. Pastorek, NASA self-insures what it flies (Stahler, 2003).
commercial and noncommercial (governmental, scientific, etc.)—issues of risk manage-
ment are very important in view of the considerable financial commitments of launch
in the loss or failure of spacecraft that we have frequently observed, space activities cre-
ate exposure to potentially “astronomical” (or even “out of this world”) liability to third
parties injured by the malfunctioning spaceship or rocket boosters.
Ailor, W., 2000, New Hazards for a New Age, Crosslink, 1(1): 20-23.
Anselmo, J., 1999, Cox: Companies Broke Law—and Knew It. Aviation Week & Space
Antonowicz, L., 1998, Podręcznik prawa międzynarodowego (Warsaw: Wyd. Prawnicze
Blassel, P., 1985. Space Projects and the Coverage of Associated Risks. The Geneva Papers
Bulloch, C., 1988, Commercial Space Launches. Liability Questions Resolved at Last.
Coffin, B., 1997, Lost in Space. Best’s Review/Property-Casualty Insurance Edition, 98(7):
d’Angelo, G., 1994, Aerospace Business Law (Westport: Quorum Books).
Daouphars, P., 1992, L’assurance des Risques Spatiales’, in: Kahn, P., L’exploitation Com-
Jelonek, A., ed., 1997, Encyklopedia Geograficzna Świata (Krakow: Tom VIII—Wszechświat,
Flury, W., 1999, Space Debris a Hazard to Operational Spacecraft? In: Commercial and
Kowalewski, E., 2002, Istota ubezpieczenia odpowiedzialności cywilnej, Prawo Asekura-
Margo, R., 2000, Aviation Insurance. The Law and Practice of Aviation Insurance, Including
worths).
menting a Telecommunications Satellite Business Concept (Amsterdam: Martinus Nijhoff
Publishers).
dustrial Activities in Space—Insurance Implications (Trieste: Generali), pp. 25-33.
counter new Frontiers in the Legal Claims Area. In: Commercial and Industrial Activities
in Space Insurance Implications (Trieste: Generali), pp. 189-97.
International Space Conference (London: IBC).
Stahler, W., 2003, Of New Risks, Unknown Risks and Uncertainty. Risk Management, 33:
Versicherung (II), Versicherungswirtschaft, 43(2): 147-55.
Versicherung (IV), Versicherungswirtschaft, 43(4): 284-90.
AND TEACHING
the paths of academia are a number of
hunched figures with output paper and
punch cards askew, invoking “do-loops,”
“diagnostics” and “Hollerith counts.”
innovation to many who have only re-
cently acquired creditable speed and ac-
curacy in using a desk calculator. Fur-
thermore, the reactions of colleagues and
students can often be predicted by refer-
ence to the “Cee Whiz Syndrome.” The
nature of the “Cee Whiz Syndrome” can
be approximated by imagining the follow-
ing conversation:
gram in FORTAN, rather than FAP
becau. . .”
COMPUTER USER: “. . . so it took me
LISTENER: “Cee Whiz!”
COMPUTER USER: “. . . and now I
in 37 microseconds.”
pheral input-output devices and central
processing units is not the inevitable result
of using the high speed data-manipulation
powers of data processing systems. The
relative newness of computers and the
obvious complexity of their inner mechan-
isms do seem to reduce some causal users
of computer facilities to a state of hysteria
bordering upon absolute reverence.
against these forms of idol-worship by in-
sisting and believing that the modem
computer is essentially a large, ultra-high
speed, printing calculator with logical ca-
pacity to make “yes-no” decisions. A com-
putational series, has the power to remem-
ber what it has calculated and to use these
values in later calculations. These com-
prise a fair intuitive understanding of
the basic elements of raodern computer
technology. Increasing familiarity with
computers can even breed a feeling akin
to “contempt” when the computer slav-
ishly follows illogical instructions to pro-
duce meaningless answers. To student and
professor alike, there is utility (and per-
haps sanity) in becoming acquainted with
the powers and shortcomings of data proc-
essing equipment.
a computer programmer to be a success-
ful and prolific computer user, any more
than it is necessary to become a proficient
automobile mechanic to be a capable auto-
mobile driver. One who wants to try his
hand at using the computer will often find
that an existing set of computer instruc-
tions can be utilized to solve his problem.
There are a great many such “canned pro-
grams” available which will solve general
or specialized types of problems.
“canned” programs is the BMD series of
computer programs.^ These cover a broad
range of typical statistical computations,
as well as several advanced statistical com-
putation programs.
puter proigrams is the Key-Word-In-Con-
text (KWIC) Index published by IBM.
This source lists programs in a format
which emphasizes each key word in the
Biomedical Computer Programs, W. J. Dixon,
editor. The latest edition was published January
1, 1964, by the Health Sciences Computing
Facility, Department of Preventive Medicine and
Public Health, School of Medicine, University of
California, Los Angeles.
index rapidly in search of a program or
programs which have sought-for capabil-
ities. Each program is also described in a
brief abstract in another section of this
publication, along with instructions for
ordering a copy of the program.
have acquired some of these programs as
a service for their users. Additional pro-
grams can be acquired and made availa-
ble on request. Typically, the computer
installation will also maintain a library of
lists and indexes regarding available pro-
grams.
programs dealing with insurance and risk
problems, for research or classroom dem-
onstration purposes, would be useful.
While none is known to exist at the pre-
sent time, the American Risk and Insur-
ance Association, in the author’s opinion,
should consider creating a clearinghouse
for information about existing programs.
Perhaps space in this Journal could be
devoted to brief listings so that interested
teachers could be informed of the eflForts
of others.
tunities to a teacher to develop a variety
of classroom demonstrations which would
otherwise represent a prohibitive invest-
ment of time and energy to perform the
calculations. Supplied with these demon-
strations, a teacher can concentrate his
major eflEorts on explaining the rationale
of methodology and the interpretation of
results to students. Students can also use
such programs to work problems that
would have been inappropriate if the com-
putational work had to be done by hand
or by desk calculator.
ily available, a teacher still does not have
to develop programming ability himself.
He can describe the desired computations
qualified programmer.^ The programmer
then takes over the “ritualistic” task of
preparing a formal set of computer in-
structions to solve the problem and com-
municate the results. In this fashion, a
teacher can avoid getting involved in the
mechanical aspects of computer program-
ming and reserve his time for concentrat-
ing on analytic method.
time offered by computer programs,
“canned” or otherwise, additional features
must be considered in assessing the teach-
ing usefulness of the computer. Today’s
technology will be widely available on the
campus tomorrow (three to five years) to
allow the instructor to communicate with
the computer from the classroom. He can
ask the proper questions of the central
computing facility and get an immediate
response in the form of printed output,
displays of frequency distributions on a
cathode-ray tube, etc., using pre-stored
programs and data. Or the students can
do so.
gram to use; it will ask the students for
appropriate information, do the computa-
tions, and report the results. AH of this
can occur simultaneously in many class-
rooms on the same campus. Actually, the
computer will work on the problem for
one class for a few thousandths of a sec-
ond, go to the next, and so on through
the list of problems and back to the be-
ginning of the circuit.^ The effect of this
time-switching arrangement on computer
sense, means someone who is able to “perform
the ritual” of expressing instructions in appro-
priate language for the computer. Students make
excellent “qualified” programmers.
soles and time-switching arrangements within the
next year; among these are MIT, Carnegie, and
Michigan.
classroom. Thus neither the students nor
the instructor need to know programming
(but the instructor may need to know a
programmer).
tions to computer technology, special pro-
grams can be incorporated along with
computational instructions to portray the
results of calculations in graphic form.
The calculational results and graphic out-
put can be reproduced for classroom dis-
tribution using additional features of the
normal computer installation.
In teaching risk and insurance courses,
tistical concepts and measures. The
teacher who wants to include course ma-
terials dealing with the application of
basic and advanced statistics to risk man-
agement and insurance concepts faces
two major difficulties, here referred to as
the “capital investment” and “statistical
block” problems.
First of all, “capital investment” by the
show the application of statistics will be
great. Developing any one illustration will
involve a lot of calculational time. Even
slight variations in the assumptions under-
lying the illustration will usually require
complete recalculation. At this rate, it will
take a long time for an instructor to de-
velop a reasonably complete kit of illustra-
tions to cover even one course. “Canned”
programs, such as the one described be-
low, can be used to reduce the “capital
investment” required of any single in-
structor.
Secondly, many students are not able
statistics to investigate risk and insurance
principles because their prior training in
block.” Their first training in statistics did
not “take” as well as might be hoped, giv-
ing these students great difficulty in ap-
plying a statistical frame of reference to
the principles and problems of a different
subject matter area.*
confronting this awkwardness by eliminat-
ing all but the mildest of statistical refer-
ences in his course materials. In doing so,
the instructor may weaken significantly
the vigor of the course. A more satisfac-
tory way of dealing with both of these
problems lies in using the computational
power of computer programs, “canned” or
otherwise, to alleviate tedious calcula-
tions and allow greater emphasis on inter-
preting the results.
For example, basic statistics can be in-
by exploring the common observation that
“the mortality table portrays a risk con-
verging on a certainty over time.” This ob-
servation is intuitively correct, as will be
explained, but how does a teacher effec-
tively communicate this understanding to
a non-intuitive student? The phrase can
be repeated again and again, using differ-
ent words, but this pedagogical device
may not be too helpful.
observation could be explored and ex-
plained verbally:
deaths by age is a specialized portrayal of
a frequency distribution. As with many
other frequency distributions, it is possible
and logical to compute the mean. The mean
in this instance represents the average age
at death for those at the initial age of the
mortality table. For each greater age the
frequency distribution is obtained by trun-
cating to eliminate earlier ages from con-
sideration. The mean of e;ach such distribu-
^ Editor’s note: At some universities, of course,
management and insurance.
new initial age.
The average age at death is a useful meas-
ure for many purposes, but it does not
adequently demonstrate that some people
die well before attaining the average age
and others live considerably longer than
the average age at death for persons in
their group. There is, therefore, risk in such
a situation since actual ages at death are
dispersed around the most likely result, the
average age at death. To understand the
statement that ‘the mortality table por-
trays a risk converging on certainty over
time,’ the dispersion of actual ages at death
should be examined to see if this dispersion
does in fact narrow or converge, over time,
upon the average age at death.
The standard deviation is a common meas-
ure of dispersion. The standard deviation
can be used to measure and express the
concentration or scatter of data around
its mean value. By calculating, for each
age, the standard deviation as well as the
average age at death, absolute dispersion
can be expressed. Confidence intervals can
be estimated.
Another way of looking at variability in a
set of data uses the coeiBcient of variation
as an indicator of relative dispersion or
scatter. The standard deviation is divided
by the mean to calculate the coefficient of
variation. A decreasing coefficient of varia-
tion signifies that the relative dispersion is
lessening.
Computing the standard deviation and the
coefficient of variation should show that as
age increases actual deaths occur more and
more closely to the average age at death.
The coefficient of variation approaches
zero as a limit. Thus, ‘mortality is a risk
converging upon a certainty over time.’
verbally in a classroom without specific
measures of tbe mean, standard deviation,
and coefficient of variation would be fool-
hardy. On the other band, the calcula-
tional work will be extensive and tedious.
Table 1 and Chart 1 are exact reproduc-
tions of the output of a computer pro-
gram, LFXP, written to perform this
multitude of calculations.’ An instructor
its code name from LiFe EXPectation. Purists
graphic output to demonstrate the results
of the calculation process as well as the
logic of the argument. By using the same
computer program but different mortality
tables, certain of the differences between
mortality tables can be demonstrated and
examined.
description of the computer program used
to calculate and produce the information
contained in Table 1 and Chart 1. Addi-
tional computer programs are being pre-
pared to investigate and demonstrate
other applications of mortality tables.*
ogy, although often bewildering, need
not be terrifying. Teachers and students
both will benefit from a thorough exploita-
tion of the high speed data manipulating
capacity of modern computers. Teaching
many of the statistical aspects of risk and
insurance can be highlighted and assisted
through the use of prepared computer
programs with tabular and graphic pre-
sentation of output. The use of such pro-
grams does not require programming abil-
ity. By avoiding the monumental task of
hand calculation, the instructor can con-
centrate on demonstrating the relevance
of statistical measures to risk and insur-
ance problems with less effort and greater
probable success.
mortality tables are “built in” the pro-
may object to the use of upper-case letters in
place of the customary lower-case form of actu-
arial notation. This is defended pragmatically on
grounds of second-best. Computer-related print-
ers only print in upper-case; the choice is to
have no symbols, or to have symbols in uncon-
ventional form.
ploring Mortality Tables with Punch Card and
Computer.”
calculations at the instructor’s option. A
single card is prepared to instruct the
program what to do; this problem card
selects the mortality table, specifies the
confidence limits desired for graphic out-
put, and specifies the age-interval for tab-
ular output. This problem card is included
with the program deck and submitted to
the campus computer installation for proc-
essing.
program computes the complete expecta-
tion of life, beginning with initial age
equal to birth and then increasing initial
age by one until the limiting age of the
mortality table is reached. The complete
expectation of life for each initial age is
added to the initial age to estimate the
average age at death.
average age at death is calculated for each
initial age. This is used to compute the
coefficient of variation and to estimate the
confidence limits.
user, the program next calls upon the plot-
ting subroutine to prepare and print out
the requested graph. Following this, the
program instructs the computer to print a
Standard Annuity, set back five years; and 1959-
61 U.S. Life Table for the Total Population.
work of the program is completed. The
computer is instructed to check for an-
other problem to be run, performing the
same sequence of operations on a differ-
ent set of data. When no further problems
are requested, the computer turns its at-
tention to other jobs waiting for process-
ing.
language. Version 13, for the IBM 7094-
7040 DCS system at the Research Com-
puter Laboratory of the University of
Washington. The program uses several
standard systems routines in performing
the calculations. The graphic output is
obtained by calling on the UM PLOT sub-
routine.^ as modified for the University of
Washington system. The graph of output
is optional with the user.
jor aspects of the program. More extensive
documentation may be obtained by writ-
ing to the author. Progiram listings and
punched-card decks (approximately 50
tained for the cost of materials and mail-
ing charges. Within limits, the author will
attempt to assist interested instructors in
adapting the program to be compatible
with their campus computer requirements.
AVERAGE AGE AT DEATH FOR PERSONS NOW AGE X
dASED UPON THE 1958 CSO MORTALITY TABLE
V
E
R
A
G
E
I
I
I
I
I
1
I
I
I
I
I
I
I
I
I
I
I
I *
*
I
I
I
I
I
I
I
I
1
I
I
I
I
1
I
I
I
I
I
I
* • *
I
I
I
I
I
I
I
I
I
I L
IJ 1
I U U U U I
I
I
I
I
I
I *
I *
I »
I *
* 1
I L
I L
I
I L
I
I L
I
L
I
I
[ U *
[ U L
J »
I L
1 *
U = UPPER CONFIDENCE LIMIT
BASED UPON THE 1958 CSO MORTALITY TABLE
I AGE
I
I AT AGE X
t L(X)
.1
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
II
I
I
I
I
I
I
I
I
I
I
I
I
I
I
WHILE AGE X
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
1
AVERAGE AGE
AT DEATH
VARIATION
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
1
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
YEARS OF LIFE’ I
REMAINING
5.9
]
—t
I
I
I
I
I
I
i
I
I
I
.1
I
I
I
I
I
I
I
I
1
I
I
I
I
I
•I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
I
the amount of medical care received by
some parts of the population.
may be stated in the words of one of the
participants, “When I came into the con-
ference the other day I said We are going
to come out of here with a recommenda-
tion that the situation be further stud-
ied.'”^ With the unresolved questions
concerning this type of program still be-
fore us, it is hoped many of these studies
will be completed before the politicians
make their decision.
person interested in the implications of
a national health insurance program.
Many changes have taken place since
November 1970, but the conference pro-
ceedings provide a most helpful source
of information.
GROWTH: POSSIBLE LONG RANGE
IMPLICATIONS FOR INSURANCE. By
Robert I. Mehr and Seev Neumann. Grad-
uate School of Business, Bloomington,
Indiana: Division of Research, Indiana
University, 1972, $15.00.
Business Administration, The Pennsyl-
vania State University.
gests a rather traditional macro level re-
view of the insurance industry as it is
beset by economic and technological
forces. Such is not the case. Professor
Mehr, the senior author of the book, and
Professor Neumann have employed the
Delphi technique in an attempt to iden-
tify various characteristics of the insur-
ance industry in the year 2000. Although
the cynic may suggest this to be an easy
task for the insurance industry, the Mehr
tempt to apply a relatively new forecast-
ing device (the Delphi Technique) to a
particular set of questions about the in-
surance industry. As such, it deserves seri-
ous attention.
the 1970 Sesquicentennial celebration of
Indiana University. The Mehr-Neumann
volume is one of four companion pieces
representing the School of Business con-
tribution to the celebration. The three
other works are not identified. Financial
assistance for the series came from sev-
eral grants from insurance companies. The
stated purpose of the book “is to make
some cautious, documented speculations
about the long-range effects of infiation,
technology, and growth on private insur-
ance in the United States.” Its objective,
we are told, “is to identify both the pres-
ent characteristics that are hkely to pre-
vail until the end of the century and any
new characteristics that are Hkely to
emerge sometime between now and then.”
in the foreword gives added scope. Mr.
George Pinnell, Vice President and Treas-
urer of Indiana University states: “I fully
anticipate that in the years to come these
volumes will be increasingly useful to
planners and will clearly demonstrate the
insight and vision of the authors. Whether
time will corroborate their projections and
prophesies is a matter that we will watch
with fascination.” Thus, there is the hope
by at least one person that the Mehr-
Neumann book and its companion vol-
umes will be of use to planners in the in-
surance world. It is a fair assessment of
the most likely use of the book.
with an additional 184 pages of support-
ing material in several appendixes. The
authors have assembled 111 tables, 88 of
which contain data generated by the
study. Graph lovers will be disappointed
will be pleased, however. It has a Phillips
curve.
upon the use of the Delphi method. So
far as the reviewer knows, this is the first
application of the Delphi method in any-
thing which might be called the insurance
literature. Basically, the technique pro-
vides for a systematic method of eliciting
expert opinion. It was developed by the
Rand Corporation as a device to be used
for long-range forecasting, a situation
where extrapolation of statistieal series is
of doubtful value. The procedure calls for
a group of experts to be polled repetitively
concerning their opinions on a particular
forecast. For example, such a group might
be asked their opinion about various ef-
fects of say, women’s liberation, preemp-
tive nuclear strikes, or the ecumenical re-
ligious movement. In general, past use of
the Delphi Teehnique has centered upon
those questions where the use of statistical
data is not possible or inappropriate. In
any event, the opinions are compiled and
are fed back to the panel for another
round of opinion response. The feed-back
procedure is then repeated until consen-
sus is apparent. The technique is thus
characterized by the need to develop con-
sensus through a series of iterative exer-
cises and by the use of experts.
strictly to the Delphi procedure. Invita-
tions were sent to a group of 70 experts
to participate in the study. Of this num-
oer, 64 accepted and 58 eventually com-
pleted the project. It is not unreasonable
to think that the six drop-outs r^ulted
from exhaustion. After receiving detailed
inputs of background information on the
American economy and possible techno-
logical developments (panelists were also
‘fee to develop additional background in-
formation in these areas), each panel
‘ ‘ b received a 25-page questionnaire
pects of the insurance business. A sum-
mary of these first round responses was
then compiled and sent to eaeh panel
member. Each panelist had the chance to
reconsider and revise his first round re-
sponse and was asked to explain why his
judgments deviated from the norm of the
round one responses.
culated again to eaeh panel member, to-
gether with a summary of the reasons
underlying the deviating opinions. Mem-
bers were asked to reconsider their sec-
ond round opinions in light of the new
information and again to revise their re-
sponse to the question if that was felt
necessary. For the atypical third round
responses, members were asked to explain
why they were unimpressed with the
stated reasons underlying such responses.
These responses were again summarized
and returned to the members where each
had a final opportunity to modify his re-
sponse. At this point, the median of the
fourth round response was taken to be
the consensus of the panel.
section of expert opinion. The oracles rep-
resented universities, government bodies,
corporate insurance buyers, journalists,
and executives from both property-liabil-
ity and life insurance.
book are devoted to the presentation of
background material on technology and
the economy. The first chapter discusses
the difficulties of long-range prediction
and a discussion of the Delphi method.
The remaining eight chapters are devoted
to the presentation of the research re-
sults. Here, we are able to learn the panel
responses to sets of questions dealing with
the entire industry, life insurance, health
insurance, the property and liability in-
surance industry, automobile insurance,
property and liability insurance lines ex-
final chapter.
tions asked of the panel can be seen from
a sample of the responses. We learn that
the panel consensus sees social insurance
to be the dominant insurance form in the
year 2000; that the purchase of life in-
surance policies characterized by high and
moderate savings wQ! decline; that the
percentage share of health insurance pre-
miums written by private insurers wHI in-
crease (from 53.7 to 60 percent); that the
premiums to policyholder surplus ratio
for property and liability insurers will in-
crease only slightly; that the percentage
of total auto premiums written by the top
ten insurers wiU increase; and that direct-
writing insurers will further increase their
share of the market.
point estimate but the authors have also
provided a statement of the response vari-
ance about the estimate. For example,
panel members were asked to forecast the
premiums to policyholder surplus ratio
and the 1966 value of that ratio was taken
as the starting point—about 1.4. The con-
sensus forecast value was 1.7. The 95 per-
cent confidence interval presented in the
results is 1.65 to 1.91.
proach of the book. Though the approach
is innovative for the insurance literature,
it is not without some Hmitations.
erally recognized as a useful forecasting
device, the value of using experts has
been subject to question. Stated differ-
ently, if one were to use any reasonably
intelligent group of people, the consensus
answers finally arrived at may be little
different than those generated by the ex-
perts. It is an interesting possibiHty and
one which has some support in the Delphi
Hterature.
cast for the year 2000. The rate of change
in all things affecting any institution—in-
forecast by any method must be suspect.
Most Delphi research has dealt with ques-
tions not amenable to traditional statis-
tical analysis and where long-range pre-
dictions deal more with shifts in values
rather than time-series projections. For
example, Delphi studies have dealt with
anticipated changes in American values
in the year 2000 and with changes in the
goals of educational institutions. The
Mehr-Neumann work does not deal with
those or similar phenomena directly. In-
stead, panelists were asked to forecast a
particular point value for several eco-
nomic projections deaHng with insurance.
Although considerations of value changes
and similar shifts within the economy
were considered in arriving at forecast
values, the consideration was not syste-
matic. The resulting forecast for various
time series for a point 30 years in the fu-
ture is an exercise requiring more faith in
judgment than even actuarial science.
bers precludes the asking of questions to
satisfy every reader. Still, some areas were
omitted from consideration. For example,
there is no direct consideration of lapse
rates nor of the distribution costs in life
insurance. The related major problem of
turnover among life insurance agents was
not included. If one is interested in panel
consensus, on such problems, he must in-
fer them from questions dealing with gen-
eral operating efficiency or the prospec-
tive growth in group coverage. Such
questions were more directly considered
for property and Hability insurance than
for life insurance. Still, it is diBBcult to
fault a 73 item questionnaire for a Delphi
process for errors of omission.
identity of the panelists. We are assured
they are experts but nonetheless one would
like to make his own assessment of such
quaHfications. Further, the number of eJi’
perts from each of the categories repiC’
sented is not given. Thus, we do not know
or whether one group might have a dis-
proportionate impact on the process. Since
the Mehr-Neumann questionnaire is so
comprehensive, one wonders whether
each of the experts is expert in all of the
aspects of the insurance covered in the
investigation. One suspects not.
tellectual curiosity is stimulated by the
large number of questions and the re-
sponses of the panel. The reader cannot
help but project his own responses and
compare them with those of the panel.
Herein lies the chief value of the book.
While the panel projections for a point
nearly 30 years distant are simply too
speculative for use by executives or regu-
lators, one would hope that sueh groups
would study the research. They may dis-
agree with the projections or feel insulted
at not being consulted, but a serious read-
ing of the book where one role-plays the
panel may be for insurance executives,
policy-makers,—and educators too—a
unique thinking experience.
provided us with a thorough application
of a relatively new research tool which
has not previously appeared in the insur-
ance literature. The research methodology
is detailed and sound and its presentation
clear and concise. The projected values
of the research will not likely serve as
diiect inputs to corporate planning models
is insurance (there may be none) but it
cannot help but make planners better
thinkers.
SURANCE. By the late Curtis M. Elliott
ind Emmett J. Vaughn, John Wiley and
5ons, Inc., 1972, x and 703 pages.
f Finance and Insurance, University of
i for use in a college-level survey
intent of the authors has been to create a
text that is consumer oriented. The types
of consumer the authors apparently have
in mind are individuals and families. For
example, there is an entire chapter of 24
pages on general liability insurance for
the individual. Chapters on property and
liabiHty insurance for business firms,
surety bonds and credit insurance are
largely independent of other chapters and
may be omitted.
ance, 7 chapters, seems to be aimed al-
most exclusively at individuals and fam-
ilies. Only two and a half pages are allotted
to forms of group life insurance and group
annuities. Group health insurance is men-
tioned casually in a paragraph on meth-
ods of marketing health insurance.
lem of handling the subject of risk
management in an elementary text and
have chosen to avoid extensive treatment
of statistical techniques and utility theory.
A 15-page chapter entitled “Risk Manage-
ment” describes the nature and function
of risk management. It appears to be ade-
quate for individuals and families; it pro-
vides an introduction of the subject to
those who may pursue it more deeply,
and is consistent with the stated purpose
of the book.
insurance texts reasonably assume their
readers bring to the subject? Can they
assume a knowledge of elementary prob-
ability, principles of statistics and busi-
ness law? EUiott and Vaughn assume no
knowledge of probability and statistics.
They include just enough on these sub-
jects to allow the reader to understand
the nature of insurance. A chapter on
“Negligence and Legal Liability” makes
one wonders again why teachers of insur-
ance (including this reviewer) seem to
feel that students must understand the
causes of liability losses but not neces-
sarily of property losses. Most of us—in-
We provide professional writing services to help you score straight A’s by submitting custom written assignments that mirror your guidelines.
Get result-oriented writing and never worry about grades anymore. We follow the highest quality standards to make sure that you get perfect assignments.
Our writers have experience in dealing with papers of every educational level. You can surely rely on the expertise of our qualified professionals.
Your deadline is our threshold for success and we take it very seriously. We make sure you receive your papers before your predefined time.
Someone from our customer support team is always here to respond to your questions. So, hit us up if you have got any ambiguity or concern.
Sit back and relax while we help you out with writing your papers. We have an ultimate policy for keeping your personal and order-related details a secret.
We assure you that your document will be thoroughly checked for plagiarism and grammatical errors as we use highly authentic and licit sources.
Still reluctant about placing an order? Our 100% Moneyback Guarantee backs you up on rare occasions where you aren’t satisfied with the writing.
You don’t have to wait for an update for hours; you can track the progress of your order any time you want. We share the status after each step.
Although you can leverage our expertise for any writing task, we have a knack for creating flawless papers for the following document types.
Although you can leverage our expertise for any writing task, we have a knack for creating flawless papers for the following document types.
From brainstorming your paper's outline to perfecting its grammar, we perform every step carefully to make your paper worthy of A grade.
Hire your preferred writer anytime. Simply specify if you want your preferred expert to write your paper and we’ll make that happen.
Get an elaborate and authentic grammar check report with your work to have the grammar goodness sealed in your document.
You can purchase this feature if you want our writers to sum up your paper in the form of a concise and well-articulated summary.
You don’t have to worry about plagiarism anymore. Get a plagiarism report to certify the uniqueness of your work.
Join us for the best experience while seeking writing assistance in your college life. A good grade is all you need to boost up your academic excellence and we are all about it.
We create perfect papers according to the guidelines.
We seamlessly edit out errors from your papers.
We thoroughly read your final draft to identify errors.
Work with ultimate peace of mind because we ensure that your academic work is our responsibility and your grades are a top concern for us!
Dedication. Quality. Commitment. Punctuality
Here is what we have achieved so far. These numbers are evidence that we go the extra mile to make your college journey successful.
We have the most intuitive and minimalistic process so that you can easily place an order. Just follow a few steps to unlock success.
We understand your guidelines first before delivering any writing service. You can discuss your writing needs and we will have them evaluated by our dedicated team.
We write your papers in a standardized way. We complete your work in such a way that it turns out to be a perfect description of your guidelines.
We promise you excellent grades and academic excellence that you always longed for. Our writers stay in touch with you via email.