“There are three kinds of lies: lies, damned lies, and statistics.” (Mark Twain, attributing it to Benjamin Disraeli)
Here is how it went:
In 1975, Isaac Ehrlich published in The American Economic Review a paper on “The Deterrent Effect of Capital Punishment: A Question of Life and Death.” He was the first to employ econometric tools in a study of this contentious issue. All previous studies had shown no correlation between capital punishment and deterrence of murders. In fact, Ehrlich himself admitted that the raw data showed no deterrent effect at all. This, however, is where econometrics came in to help: Ehrlich created a model.
The Ehrlich model is based on the assumption that
murderers are rational people who respond to incentives. In other words, they
kill because they think they will derive a benefit or utility (be it material
or emotional). Society can alter this criminal behavior by offering
countervailing incentives (say, a victim could bribe the murderer) or outright
disincentives (the classical case in point would seem to be capital
punishment). Ehrlich employed all the august tools of economic prediction: a
consumption function based on the probabilities of various outcomes of the
consequences of a murder, partial elasticities of the expected utility from
crime, a social loss function, marginal cost and revenue from execution, a murder
supply function, etc. The model used a range of variables that may seem fairly random (why, for example, choose the population at risk of becoming murderers to be aged 14-24?), especially since not all the data needed was, in fact, available; the author simply filled in some values by ‘estimating,’ interpolating, or substituting them. Pages upon pages of complicated (although
still arbitrary) formulae and data manipulations later, the reader is presented
with tables of data that now magically purport to show a deterrent effect. Not
only that: the author even quantifies this deterrent effect, claiming that a
single execution is worth ‘eight saved lives.’
Predictably, this novel approach raised some objections,
such as those promptly published by Peter Passell and John B. Taylor in the American Economic Review, “The Deterrent Effect of Capital Punishment: Another View.” This rather short paper resoundingly discredited Ehrlich’s approach for using arbitrary data and variables that could hand the researcher just about any result he desired. More specifically, Passell and
Taylor criticized Ehrlich for not using an established theory-based approach (after
all, the use of data and of variables needed to be justified theoretically and
models needed to reflect behavioral expectations) and instead plugging in
whatever made his particular formulae yield the numerical result he happened to
be looking for – here, a positive correlation between the number of executions
and the number of ‘prevented’ murders. Not only did Ehrlich’s model prove precious little, but his paper was published at a time when legislatures and courts were re-examining their death penalty policies, suggesting that the results might have been produced ‘on demand’ to lend support to one policy choice over another.
Fast forward to 2003, when Hashem Dezhbakhsh, Paul H.
Rubin, and Joanna M. Shepherd published in the American Law and Economics Review
Vol. 5 No. 2 “Does Capital Punishment Have a Deterrent Effect? New Evidence from Postmoratorium Panel Data.” That paper is a continuation of the thread started by
Ehrlich, ignoring all scholarship that dismissed the methodology used in “The Deterrent Effect of Capital Punishment: A Question of Life and Death.”
Having at their disposal 28 years of developments in both
econometrics and law and economics writing, the authors take the Ehrlich model
and improve on it – to the tune of now eighteen saved lives (plus or minus ten; what’s a rounding difference, after all?) for each additional execution. Like Ehrlich before them, Dezhbakhsh et al. also
concede that an analysis of raw data comparing the number of murders in
executing and non-executing states does not show a deterrent effect, hence they
recognize a need to use “more sophisticated empirical techniques” (349) to
determine a possible deterrent effect of capital punishment. The authors stress the superiority of their approach by providing a stated rationale for many of their choices of particular functional forms and variables (as opposed to “studies [that] often choose the functional form of murder supply rather haphazardly” (353)). A careful reader will still be puzzled by the authors’
(wholly unsubstantiated) presumptions of what exactly constitutes risk factors
for murders: “violent TV programming or movies” (354), National Rifle
Association membership rate, population density, per capita income, and
demographic variables, such as the percentage of males, of African Americans,
and the age of the sample (the population under consideration in their research is aged not 14-24, as in Ehrlich’s model, but 10-29; apparently, by the authors’ implicit logic, ten-year-olds are much more likely to become murderers – or to respond rationally to the deterrent effect of capital punishment – than thirty-year-olds are). Population density is, rather oddly,
“included to capture any relationship between drug activities in inner cities
and murder rate” (358). The higher crime rate in cities is explained as a
function of, among other things, “the presence of more female-headed
households” (367), and the inclusion of per capita income is explained by “the
role of illegal drugs in homicides during this time period. Drug consumption is
expensive and may increase with income.” (366) Equally biased are some of the
criteria deemed responsible for lowering the incidence of murders: Republican
votes and non-African American minorities.
Further speaking to plausibility, and considering that the authors examine the population of 10-29-year-olds, it is more than a little surprising to find them including data on retirement payments, along with income,
unemployment, and income maintenance. The authors also aggregate other crimes committed
along with murders (even though the ostensible purpose of the article is to
show the deterrent effect of executions on murders), and “to address the
problem of underreporting” they decide to “use the logarithms of crime rates,
which are usually proportional to true crime rates” (emphasis added)
(360). Moreover, Dezhbakhsh et al.
use “forward-looking and backward-looking expectations” to reflect the
conditional execution probability apparently considered by the murderers, and
“given the absence of an arrest lag, no lag displacement is used to measure the
arrest probability” (361). And apparently, in that model, all murder cases are
solved at once.
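The rationale quoted above for using logarithms, that reported rates are “usually proportional to true crime rates,” reduces to a simple algebraic point: if reporting is a constant fraction of the truth, the log transform turns that fraction into an additive constant. The sketch below uses invented numbers purely for illustration:

```python
import math

# Illustration only (all numbers invented): if reported crime counts are a
# constant fraction c of true counts, taking logarithms turns the
# underreporting factor into an additive constant, which a regression
# intercept or fixed effect can absorb.
true_rates = [4.0, 8.0, 16.0]   # hypothetical true murder rates per 100,000
c = 0.7                          # assumed constant reporting fraction

reported = [c * t for t in true_rates]

# log(c * t) = log(c) + log(t): changes in log reported rates equal
# changes in log true rates, so the proportional bias drops out.
true_diffs = [math.log(true_rates[i + 1]) - math.log(true_rates[i]) for i in range(2)]
rep_diffs = [math.log(reported[i + 1]) - math.log(reported[i]) for i in range(2)]

print(true_diffs)
print(rep_diffs)  # equal up to floating-point rounding
```

Note that the argument holds only if the reporting fraction really is constant across counties and years, which is itself an untested assumption.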
Obvious contradictions in their obtained results do not
deter the authors from stating blithely on page 367 that “expenditure on the
judicial-legal system has a positive and significant effect on the conditional
probability of receiving a death penalty sentence in all six models of equation
(5),” only to appear to reverse themselves just a page later by concluding that
“The expenditure on the judicial-legal system has a negative and significant
effect on the conditional probability of execution in all six models (equation
[6]). This result implies that more spending on appeals and public defenders
results in fewer executions.”
The substitution of data used here is also rather
peculiar: “In the absence of conviction data, sentencing is a viable
alternative that covers the intervening stage between arrest and execution.”
Also, “The estimated coefficients for year and county dummies are not shown.”
(362). A problem arises when there happen to be no murders or no death
sentences in particular (actually, in several) years in individual counties
examined, and Dezhbakhsh et al. deal with it in one of two ways:
“Estimates in Table 3 are obtained excluding these observations,” or by
substituting “the relevant probability from the most recent year when the
probability was not undefined.” In other words, the model excludes the
possibility of zero murders and zero death sentences in certain counties, which
has, of course, rather dramatic effects on the estimations of the deterrent
effects of capital punishment produced by that model. This is how Dezhbakhsh et al. justify it: “The assumption
underlying such substitution is that criminals will use the most recent
information available in forming their expectations.” (364) One is left to wonder whether the authors ever tried to imagine, much less to verify empirically, the notion of a murderer planning his crime by researching recent arrest, conviction, and execution statistics for his county, and actually calculating the probability of his execution following conviction.
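The two remedies for undefined probabilities described above can be made concrete with a toy example (all numbers invented); both erase exactly the county-years in which the perceived risk of execution was lowest:

```python
# Hypothetical county-year pairs of (executions, death sentences); all
# numbers are invented. When there are no sentences, the execution
# probability e/s is undefined.
pairs = [(1, 4), (0, 0), (2, 5), (0, 0), (0, 0), (1, 2)]

# Remedy (a): exclude the undefined observations entirely.
excluded = [e / s for e, s in pairs if s > 0]

# Remedy (b): carry forward the most recent defined probability.
carried, last = [], None
for e, s in pairs:
    if s > 0:
        last = e / s
    if last is not None:
        carried.append(last)

print(excluded)  # [0.25, 0.4, 0.5]
print(carried)   # [0.25, 0.25, 0.4, 0.4, 0.4, 0.5]
```

Either way, the years in which no one was sentenced to death, arguably the years of lowest perceived risk, either vanish from the sample or are assigned a positive probability borrowed from an earlier year.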
The entire model is based on specific presumptions of its
authors: “Strictly speaking, these measures are not true probabilities.
However, they are closer to the probabilities as viewed by potential murderers
than would be the “correct” measures. Our formulation is consistent with Sah’s
(1991) argument that criminals form perceptions based on observations of
friends and acquaintances.” (364) Let us reiterate: the model is based not on
facts, but on what the authors think the murderers consider facts based on the
experiences of their friends and acquaintances. In other words: the authors try
to model the mindset of a murderer, and conclude from it that one additional
exercise of capital punishment will dissuade other murderers from killing
eighteen (or eight, or twenty-eight, or any number in between) innocent people,
and all that happens because the prospective murderer of these eighteen victims
is presumed to be a friend or acquaintance of an executed person. That is a
rather bold statement for scholars who are neither criminologists, nor forensic
psychologists, but rather economists.
The purpose of the study is clearly expressed by the
authors in their concluding remarks: “our study offers results that are
relevant for analyzing current crime levels and useful for policy purposes. Our
study is timely because several states are currently considering either a
moratorium on executions or new laws allowing execution of criminals.” Given
the social divisiveness of capital punishment, the latter would appear to be
true at almost any given moment, rendering any such studies ‘timely’ by
default. Starting from the assumption that the Ehrlich study was indeed correct
in its approach, and then more than doubling Ehrlich’s prediction, the authors
clearly took sides in the death penalty debate. In the end, Dezhbakhsh et al.’s specific methodologies are not
what matters. Even if they are later dismissed by other scholars as not rigorous
enough, as happened with the Ehrlich paper, the mere fact of publication of the
research of Dezhbakhsh et al. in a scholarly journal gives its finding,
namely, the magical number of eighteen ‘saved lives,’ enough gravitas to be
quoted as a “scientific fact” not only, and indeed not so much, by other
scholars, but, most importantly, by politicians and death penalty advocates all
over. And that is precisely what we see happen in the tendentious paper of
Cass R. Sunstein and Adrian Vermeule, “Is Capital Punishment Morally Required? The Relevance of Life-Life Tradeoffs.”