
2013-01-11

Aspirin, cancer, and economic forces in research


An NYT article on aspirin research points to promising studies suggesting that aspirin reduces cancer risk. It certainly sounds like good news to the thousands of researchers worldwide dedicating their work to curing, or at least preventing, cancer. Or does it?

The most striking line in this NYT article can be found at the end: “Some cancer doctors commended the new research, saying that despite the limitations of the analyses, no other long-term clinical trials of aspirin and cancer are likely to be done because of the enormous expense involved and the fact that aspirin is a cheap generic drug.” This also seems to sum up what truly drives medical research: if a treatment does not present a significant investment opportunity, it simply will not be investigated.

There are various conspiracy theories about actually effective cancer treatments not being introduced to the market because the pharmaceutical industry would forgo staggering profits (the monthly cost of chemotherapy can reach as much as $20,000, for drugs with negligible production costs). Much as one should take such claims with a grain of salt, the conclusions of the Oxford study on aspirin are yet another example of how medical research is driven solely by the commercialization potential of its results – simply put, by money. This is especially striking in both of the quoted US studies, where research on unreasonably cheap aspirin was set up in such a way as to preclude any positive finding on the drug’s effectiveness against cancer: the researchers simply manipulated the dosage so as to render the “treatment” predictably ineffective (although the recommended dosage for cardiac patients is one baby aspirin a day, the researchers arbitrarily decided to give their subjects one baby aspirin only every other day). One can only imagine the utter tragedy for pharmaceutical industry profits if half of their future cancer patients suddenly did not need extortionately expensive treatments thanks to daily consumption of one very cheap generic pill without any known significant side effects. Understandably, such revolutionary prevention would be fought tooth and nail, and with any number of ‘studies,’ to prevent it from becoming part of the official recommendations of various government agencies. That is, after all, the raison d’être of lobbying and industry funding of the regulatory process.

Another question is why exactly the U.S. standard dosage for so-called baby aspirin used in the prevention of cardiac disease is 81 mg, just under 20% lower than that in the EU (100 mg). Is it another preventive measure of the FDA to make sure that aspirin does not become too effective in the U.S. market, or are manufacturers simply aiming to increase profits by selling a drug with reduced potency?

Gang killings and 19th century European culture


Food for thought in the crime deterrence and capital punishment debate: an article in The Economist on gang killing statistics, “Gang violence: Turf wars. Gang killings have less to do with drugs and crime than expected.”

Apparently, the presumption that gang killings are strongly linked to drug wars and organized criminal activity has no empirical basis aside from one anomalous case, that of Newark, where gangs do, in fact, control the drug trade. According to data cited by The Economist, and quite surprisingly, young gang members mostly kill one another over exaggerated notions of honor and respect.

This finding suggests an unexpected analogy for those familiar with European history – gang “honor” killings are not unlike the plague of duels that wiped out many of the finest scions of the European nobility and middle class during the 19th century. One most regrettable example is the premature demise of Évariste Galois, the founder of group theory, killed in a thoroughly senseless duel at the age of 20. At least Galois had notice of his impending doom and was able to spend the night leading up to the duel writing down many, but not all, of his groundbreaking mathematical findings. Unfortunately, the night was far too short to allow him to reduce all of his realizations and discoveries to paper.

To play devil’s advocate, one possible answer to the argument that “guns and youth do not mix” is to point to the existence of duels before firearms became easily available. Still, killing somebody in a mêlée or with a ranged weapon requires considerably more effort and skill than simply pointing in your opponent’s general direction at point-blank range and pulling a trigger. Consequently, the prospects of surviving a confrontation in the more chivalrous and sporting age of Scaramouche were presumably higher.

Three kinds of lies: application


The article by Cass R. Sunstein and Adrian Vermeule, “Is Capital Punishment Morally Required? The Relevance of Life-Life Tradeoffs,” is a rare example of outright preposterous sophistry. The whole argument is based on what Sunstein and Vermeule refer to as “facts,” “evidence,” and “empirical findings.” Well, no – those statements most assuredly are not facts. Taking an arbitrary number out of an arbitrarily chosen article on a very contentious issue does not render it a “fact.” For an overview of the article by Dezhbakhsh et al., “Does Capital Punishment Have a Deterrent Effect? New Evidence from Postmoratorium Panel Data,” which is the basis of Sunstein and Vermeule’s claims, see my post “Three kinds of lies.”

Sunstein and Vermeule do not attack death penalty opponents directly – instead, they start by targeting moderate, non-radical defenders of capital punishment whom they call consequentialists, and try to prove that the concept of mere moral permissibility of capital punishment is a deficiency of “cognitive processes” (aka stupidity) and “a serious moral error” (aka lack of moral logic, or outright immorality). To accomplish this, Sunstein and Vermeule use a rare combination of just about every type of fallacy known to man, supported staunchly by the “fact” of a life-for-life tradeoff of eighteen innocent murder victims saved for each and every executed murderer. To Sunstein and Vermeule, the death penalty is not optional – rather, it is obligatory, and its imposition imperative to save very tangible innocent lives that would otherwise undoubtedly be lost. In fact, Sunstein and Vermeule repeatedly accuse de facto non-executing states and jurisdictions that do not provide for capital punishment of “ensuring the deaths of a large number of innocent people,” and their alarmist rhetoric does not shy away from even more drastic terms and comparisons.

I believe that Cass R. Sunstein and Adrian Vermeule could benefit from a closer look at some readily available data that has not been manipulated by regression analysis, especially in the international context. According to their theory, the European Union, along with the majority of countries globally that have banned capital punishment, should be drowning in rivers of blood spilled by undeterred, un-‘disincentivized’ murderers unswayed by a mere life sentence or other non-lethal penalties. Conversely, China, with about 5,000 executions per year, should be a virtually crimeless model society where homicide should be extinct by now, or where at least 90,000 potential victims (5,000 executions times eighteen lives apiece) would be saved each year under the Sunstein-Vermeule equation. By the same logic, preemptively executing the roughly 5% of the US population that Dezhbakhsh et al. deem to be at risk of becoming murderers (mostly young urban African American Democrats with National Rifle Association memberships) would ensure that the remainder of the population experiences virtually no fatalities from homicide at all, since their potential murderers would be executed even before they could harm any of the rest.

Sunstein and Vermeule claim that it is the government’s moral obligation to execute every single murderer, and to do it as quickly as possible, since every 2.75 years of delay costs one life. Miscarriages of justice (i.e., executions of innocent people) are simply to be considered part of the cost of saving other lives. Also dangerous, according to Sunstein and Vermeule, is the execution of too few convicts, since this actually increases the national murder rate through a “brutalization effect.” Significantly, Sunstein and Vermeule do not mention what should happen when there are too few offenders to execute to begin with – is that a sign of reliable immunity from homicide? Instead, they tout the social benefits of putting 500 convicts to death next year. To put this in perspective: in 2010, the US executed 48 individuals, placing it in the august company of such progressive democracies as China (up to 5,000 executions), Iran (252+), North Korea (60+), and Yemen (53+), and well ahead of such squeamish jurisdictions as Saudi Arabia (27+), Libya (18+), and Syria (17+). The United States was one of the few countries (a group including the People’s Republic of China, India, and Indonesia) that repeatedly voted against United Nations General Assembly resolutions to abolish the death penalty. (Source: Wikipedia, quoting Amnesty International data.)

Sunstein and Vermeule’s fervent activism for capital punishment seems incomprehensible until the reader reaches pages 28-29 of their article. Here, the true reasons behind their impassioned appeal for increased application of the death penalty are revealed: the authors think that it is insupportably costly – and, frankly, unnecessary – to introduce social welfare programs such as job training and education to prevent violent crime from occurring in the first place. Why bother eliminating inequality of opportunity and income if the government can simply terrorize its citizens by increasing executions? And that is what is happening right now: the United States reports 2.3 million prisoners, compared to 1.6 million in China – nearly half again the Chinese number, although the US population is only a quarter of China’s. While the US accounts for a mere 5% of the world’s population, 25% of the world’s inmates sit in US prisons. Perhaps Sunstein and Vermeule should give us their thoughts on that?

For more information on US prison statistics, see an NYT article, “U.S. prison population dwarfs that of other nations.”

New book on capital punishment


The Economist published a review of a new book on justice gone awry in a murder case, Anatomy of Injustice: A Murder Case Gone Wrong by Raymond Bonner.

Three kinds of lies


“There are three kinds of lies: lies, damned lies, and statistics.” (Mark Twain, attributing it to Benjamin Disraeli)

Here is how it went:

In 1975, Isaac Ehrlich published in The American Economic Review a paper, “The Deterrent Effect of Capital Punishment: A Question of Life and Death.” He was the first to employ econometric tools in a study of this contentious issue. All previous studies had shown no correlation between capital punishment and the deterrence of murders. In fact, Ehrlich himself admitted that the raw data showed no deterrent effect at all. This is, however, where econometrics came in to help: Ehrlich created a model.

The Ehrlich model is based on the assumption that murderers are rational people who respond to incentives. In other words, they kill because they think they will derive a benefit or utility (be it material or emotional). Society can alter this criminal behavior by offering countervailing incentives (say, a victim could bribe the murderer) or outright disincentives (the classical case in point would seem to be capital punishment). Ehrlich employed all the august tools of economic prediction: a consumption function based on the probabilities of various outcomes of the consequences of a murder, partial elasticities of the expected utility from crime, a social loss function, marginal cost and revenue from execution, a murder supply function, etc. The model used a range of variables that may seem fairly random (why, for example, choose the population at risk of becoming murderers to be aged 14-24?), especially as not all the data needed was, in fact, available – so the author simply made up some values by ‘estimating,’ interpolating, or substituting them. Pages upon pages of complicated (though still arbitrary) formulae and data manipulations later, the reader is presented with tables of data that now magically purport to show a deterrent effect. Not only that: the author even quantifies this deterrent effect, claiming that a single execution is worth ‘eight saved lives.’
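To make the underlying logic concrete, here is a minimal sketch of the rational-offender calculus that Ehrlich-style models posit – my own illustration, not Ehrlich’s actual specification; all probabilities and utility values below are hypothetical:

```python
# A hypothetical sketch of the "rational murderer" calculus posited by
# Ehrlich-style deterrence models (illustrative only, not Ehrlich's model).

def expected_utility(gain, p_arrest, p_conviction, p_execution,
                     u_prison, u_execution):
    """Expected utility of committing a murder, given the offender's
    perceived conditional probabilities of arrest, conviction given
    arrest, and execution given conviction."""
    p_caught = p_arrest * p_conviction
    p_free = 1 - p_caught
    p_exec = p_caught * p_execution
    p_prison = p_caught * (1 - p_execution)
    return p_free * gain + p_prison * u_prison + p_exec * u_execution

# Deterrence, in this framework, means raising p_execution until the
# expected utility of the crime drops below that of not offending (zero).
print(expected_utility(gain=10, p_arrest=0.7, p_conviction=0.8,
                       p_execution=0.02, u_prison=-20, u_execution=-100))
```

The empirical task is then to estimate how the murder rate responds to changes in those perceived probabilities – which is exactly where the arbitrary choices of variables and data come in.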

Predictably, this novel approach raised some objections, such as those promptly published by Peter Passell and John B. Taylor in the American Economic Review, “The Deterrent Effect of Capital Punishment: Another View.” This rather short paper resoundingly discredited Ehrlich’s approach for using arbitrary data and variables that could hand the researcher just about any result he desired. More specifically, Passell and Taylor criticized Ehrlich for not using an established theory-based approach (after all, the use of data and of variables needs to be justified theoretically, and models need to reflect behavioral expectations) and instead plugging in whatever made his particular formulae yield the numerical result he happened to be looking for – here, a positive correlation between the number of executions and the number of ‘prevented’ murders. Not only did Ehrlich’s model prove precious little, but his paper was published at a time when legislatures and courts were re-examining their death penalty policies, suggesting that the results might have been produced ‘on demand’ to give support to one policy choice over another.
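Passell and Taylor’s point about specification sensitivity is easy to demonstrate with toy data (entirely synthetic, assuming nothing about the real numbers): the sign of a regression coefficient can flip depending on which controls the researcher happens to include.

```python
# Synthetic illustration of specification sensitivity: the estimated
# "effect" of executions on murders flips sign depending on whether a
# confounder is controlled for. The data are invented for this demo.
import numpy as np

rng = np.random.default_rng(0)
n = 500
crime_pressure = rng.normal(size=n)                  # unobserved confounder
executions = 2.0 * crime_pressure + rng.normal(size=n)
murders = 3.0 * crime_pressure - 0.5 * executions + rng.normal(size=n)

# Specification 1: murders ~ executions (confounder omitted)
naive_slope = np.polyfit(executions, murders, 1)[0]

# Specification 2: murders ~ executions + crime_pressure
X = np.column_stack([executions, crime_pressure, np.ones(n)])
controlled_slope = np.linalg.lstsq(X, murders, rcond=None)[0][0]

print(f"naive: {naive_slope:+.2f}, controlled: {controlled_slope:+.2f}")
# One specification "shows" brutalization, the other deterrence --
# the researcher's choices, not the data, decide the headline result.
```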

Fast forward to 2003, when Hashem Dezhbakhsh, Paul H. Rubin, and Joanna M. Shepherd published in the American Law and Economics Review (Vol. 5, No. 2) “Does Capital Punishment Have a Deterrent Effect? New Evidence from Postmoratorium Panel Data.” That paper is a continuation of the thread started by Ehrlich, ignoring all the scholarship that dismissed the methodology used in “The Deterrent Effect of Capital Punishment: A Question of Life and Death.”

Having at their disposal 28 years of developments in both econometrics and law and economics writing, the authors take the Ehrlich model and improve on it – to the tune of now eighteen saved lives (plus or minus ten; what’s a rounding difference, after all) for each additional execution. As Ehrlich did before them, Dezhbakhsh et al. concede that an analysis of raw data comparing the number of murders in executing and non-executing states does not show a deterrent effect, hence they recognize a need to use “more sophisticated empirical techniques” (349) to determine a possible deterrent effect of capital punishment. The superiority of their approach is stressed by providing a stated rationale for many of their choices of particular functional forms and variables (as opposed to “studies [that] often choose the functional form of murder supply rather haphazardly.” (353)) A careful reader will still be puzzled by the authors’ (wholly unsubstantiated) presumptions about what exactly constitutes risk factors for murder: “violent TV programming or movies” (354), the National Rifle Association membership rate, population density, per capita income, and demographic variables such as the percentage of males, the percentage of African Americans, and the age of the sample (the population under consideration in their research is aged not 14-24, as in Ehrlich’s model, but 10-29; apparently, according to the authors’ implicit logic, ten-year-olds are much more likely to become murderers – or to respond rationally to the deterrent effect of capital punishment – than are thirty-year-olds). Population density is, rather oddly, “included to capture any relationship between drug activities in inner cities and murder rate” (358). The higher crime rate in cities is explained as a function of, among other things, “the presence of more female-headed households” (367), and the inclusion of per capita income is explained by “the role of illegal drugs in homicides during this time period. Drug consumption is expensive and may increase with income.” (366) Equally biased are some of the criteria deemed responsible for lowering the incidence of murders: Republican votes and non-African American minorities.

Further speaking to plausibility, and considering that the authors examine a population of 10-29 year olds, it is only slightly surprising that they include data on retirement payments, along with income, unemployment, and income maintenance. The authors also aggregate other crimes committed along with murders (even though the ostensible purpose of the article is to show the deterrent effect of executions on murders), and “to address the problem of underreporting” they decide to “use the logarithms of crime rates, which are usually proportional to true crime rates” (emphasis added) (360). Moreover, Dezhbakhsh et al. use “forward-looking and backward-looking expectations” to reflect the conditional execution probability apparently considered by murderers, and “given the absence of an arrest lag, no lag displacement is used to measure the arrest probability” (361) – apparently, in that model, all murder cases are solved at once.
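For what it is worth, the logarithm rationale quoted above presumably rests on an assumption of multiplicative underreporting: if a constant fraction of true crimes is recorded, taking logs pushes the underreporting factor into the intercept. A two-line check, with made-up numbers:

```python
# If observed = r * true with a constant underreporting ratio r, then
# log(observed) = log(r) + log(true): r shifts only the intercept of a
# log-log regression, not its slopes. The values below are hypothetical.
import math

true_rate, r = 40.0, 0.6
observed = r * true_rate
assert math.isclose(math.log(observed), math.log(r) + math.log(true_rate))
```

Of course, this only holds if the underreporting ratio really is constant across counties and years – precisely the kind of assumption the authors do not substantiate.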

Obvious contradictions in the results they obtain do not deter the authors from stating blithely on page 367 that “expenditure on the judicial-legal system has a positive and significant effect on the conditional probability of receiving a death penalty sentence in all six models of equation (5),” only to appear to reverse themselves just a page later by concluding that “The expenditure on the judicial-legal system has a negative and significant effect on the conditional probability of execution in all six models (equation [6]). This result implies that more spending on appeals and public defenders results in fewer executions.”

The substitution of data used here is also rather peculiar: “In the absence of conviction data, sentencing is a viable alternative that covers the intervening stage between arrest and execution.” Also, “The estimated coefficients for year and county dummies are not shown.” (362) A problem arises when there happen to be no murders or no death sentences in particular (actually, in several) years in individual counties examined, and Dezhbakhsh et al. deal with it in one of two ways: “Estimates in Table 3 are obtained excluding these observations,” or by substituting “the relevant probability from the most recent year when the probability was not undefined.” In other words, the model excludes the possibility of zero murders and zero death sentences in certain counties, which has, of course, rather dramatic effects on the estimates of the deterrent effect of capital punishment produced by that model. This is how Dezhbakhsh et al. justify it: “The assumption underlying such substitution is that criminals will use the most recent information available in forming their expectations.” (364) This raises the question whether the authors ever tried to imagine, much less verify empirically, the notion of a murderer planning his crime by researching recent arrest, conviction, and execution statistics for his county, and actually calculating the probability of his execution following conviction.
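As best I can reconstruct it from their description (the authors publish no code, so the following is my reading, not theirs), the substitution rule works something like this:

```python
# My reconstruction of the substitution rule described above, not the
# authors' actual procedure: when a county-year has no death sentences,
# the conditional execution probability executions/sentences is
# undefined, and the most recent defined value is carried forward.

def execution_probabilities(executions, sentences):
    """Per-year conditional execution probabilities for one county."""
    probs, last = [], None
    for e, s in zip(executions, sentences):
        if s > 0:
            last = e / s
        probs.append(last)     # stays None until a defined value appears
    return probs

print(execution_probabilities([1, 0, 2, 0], [3, 0, 4, 0]))
# -> [0.33..., 0.33..., 0.5, 0.5]: the zero years simply vanish from view
```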

The entire model is based on specific presumptions of its authors: “Strictly speaking, these measures are not true probabilities. However, they are closer to the probabilities as viewed by potential murderers than would be the “correct” measures. Our formulation is consistent with Sah’s (1991) argument that criminals form perceptions based on observations of friends and acquaintances.” (364) Let us reiterate: the model is based not on facts, but on what the authors think murderers consider facts based on the experiences of their friends and acquaintances. In other words, the authors try to model the mindset of a murderer, and conclude from it that one additional exercise of capital punishment will dissuade other murderers from killing eighteen (or eight, or twenty-eight, or any number in between) innocent people – and all that happens because the prospective murderer of these eighteen victims is presumed to be a friend or acquaintance of an executed person. That is a rather bold claim for scholars who are neither criminologists nor forensic psychologists, but economists.

The purpose of the study is clearly expressed by the authors in their concluding remarks: “our study offers results that are relevant for analyzing current crime levels and useful for policy purposes. Our study is timely because several states are currently considering either a moratorium on executions or new laws allowing execution of criminals.” Given the social divisiveness of capital punishment, the latter would appear to be true at almost any given moment, rendering any such study ‘timely’ by default. Starting from the assumption that the Ehrlich study was correct in its approach, and then more than doubling Ehrlich’s prediction, the authors clearly took sides in the death penalty debate. In the end, Dezhbakhsh et al.’s specific methodologies are not what matters. Even if they are later dismissed by other scholars as not rigorous enough, as happened with the Ehrlich paper, the mere fact of publication of the research of Dezhbakhsh et al. in a scholarly journal gives its finding – namely, the magical number of eighteen ‘saved lives’ – enough gravitas to be quoted as “scientific fact” not only, and indeed not so much, by other scholars, but, most importantly, by politicians and death penalty advocates everywhere. And that is precisely what we see happen in the tendentious paper of Cass R. Sunstein and Adrian Vermeule, “Is Capital Punishment Morally Required? The Relevance of Life-Life Tradeoffs.”

Jencks on inequality


The writings of scholars engaged in (mostly) political or (less commonly) social advocacy provide a fascinating opportunity to examine the usage of rhetorical techniques – techniques that, to a large extent, rely on fallacies. It so happens that many such writings provide a convenient scholarly fig leaf for lawmakers trying to introduce or block various policies. Inequality: A Reassessment of the Effect of Family and Schooling in America by Christopher Jencks is just one such example. I have lasted all of the first chapter.

Jencks starts with a strikingly arbitrary definition of wealth and poverty – to him, wealth is not measured by luxuries (such as ownership of a yacht), but by the ability to buy other people’s time. If one follows this line of reasoning, an up-and-coming hip-hop star living in hotels and hiring a large entourage would be considered richer than a solitary majority shareholder satisfied with one housekeeper – the latter’s wealth being tied up in equity not directly translatable into work-hours under his direct custody and control.

In one broad sweep, Jencks generalizes from thus relativized wealth and poverty to cognitive skills taught at schools, and ties them together only to ‘prove’ that inequality or poverty cannot be eliminated by improved education. What a relief for county officials looking for ways to cut public school funding.

In arguing his point of view, Jencks conveniently ignores any fact that would falsify his claims. To him, inequality of income is the outcome of some predetermined Calvinist combination of luck and competence, and those earning more are luckier and more competent (not to mention more productive) than those earning less. He even goes as far as calling lower-income earners “unlucky and incompetent.” That sounds like bad news for academics and public servants thus compared with investment bankers, not to mention for notoriously underpaid ER doctors and engineers compared with oil-field workers and even New Jersey cops.

Additionally, for Jencks, “There is no evidence that school reform can substantially reduce the extent of cognitive inequality…” as measured by standardized tests or even by educational attainment. (8) This raises the question whether Jencks ever bothered to verify where U.S. students place in worldwide rankings of reading, science, and mathematics skills (according to the PISA test, U.S. 15-year-olds ranked 25th among the 34 OECD countries in math, and 17th in science and reading). Perhaps it might be time to look at what other countries are doing right. But even in Jencks’ purely domestic scope of analysis, the over-performing schools whose results might serve as examples to be emulated by others are conveniently excluded as outliers. Only large public schools are considered by Jencks, and his findings, unsurprisingly, show no significant differences in their students’ achievement. Apparently, elite private or public high schools that send their alumni to Ivy League universities, and whose graduates then become high earners in society, do not deserve much comment: “We cannot blame economic inequality on differences between schools, since differences between schools seem to have very little effect on any measurable attribute of those who attend them.” (8) Take that, Bronx Science.

Jencks then goes on to muse about such revolutionary ideas as services provided by the state and the reduction of inequality of earnings through regulation (for example, the unthinkable concept of a minimum wage law that would actually result in a living wage). Of course, as he points out, such legislation could not possibly be passed by Congress. Too bad – it appears that Europe invented these things well before Jencks, and actually managed to introduce them without the popular revolt he seems to fear. In fact, inequality has been significantly reduced there by comparison. Fortunately for Jencks, neither the U.S. public nor politicians nor all that many scholars care about the world beyond the water’s edge.

After deciding that nonmonetary incentives encouraging contribution to the common good, such as social and moral incentives, are “inflexible and very coercive,” Jencks goes on to argue that equality of education is not just and equitable either, because “the natural demand for both cognitive skills and schooling is very unequal.” (11) Of course – why try to force classes on a seven-year-old who would rather ride a bike, or demand completed homework from a teenager so much more interested in Facebook? Even worse, encouraging people to get an education is outright unjust: “This puts egalitarians in the awkward position of trying to impose equality on people…,” especially given that, as Jencks never tires of repeating, “we have found rather modest relationship between cognitive skill and schooling on the one hand and status and income on the other.” (11) Um, just to be sure, could we check again how exactly Jencks and his colleagues ended up at Harvard?

Whose history?


“History is written by the victors.” (attributed to Napoleon Bonaparte)

R.W. Fogel’s Time on the Cross must have created quite a stir with its iconoclastic approach to conventional wisdom about slavery. The ideological burden of the topic seems to weigh heavily on readers even today – witness the many emotional comments in my well-used library copy of the book. To this day, readers studying American slavery cannot accept any summary of the peculiar institution other than utter horror at all its aspects.

Nobody, least of all the author, claims, of course, that the idea of slavery itself is anything but abhorrent. The moral and philosophical values of liberty and self-determination are, if anything, even dearer to us today than they generally were when 19th century abolitionists undertook their struggle. What is worth noting, however, is that these well-meaning abolitionists employed as much misinformation as their opponents did, if not more. The end may well have justified the means, and society usually accepts such methodology as valid for attaining purposes of a higher good (the fact that every ideology, including totalitarian ones, has used this approach lies well beyond the scope of the present commentary). But, in the social sciences, one should not confuse ideological and political campaigning with valid methods of inquiry aimed at yielding verifiable, reproducible, and above all “objectively sustainable” data. And yet this is precisely what has happened with the topic of slavery in the United States.

It did not help that the abolitionists did not really know or understand the South, so their accounts were necessarily based on secondary sources. Nor did it help that the rare scientific studies that did exist either molded the available data to support their theses or were not rigorous enough to be objective and authoritative. Hence a de novo analysis of primary sources such as the one conducted by Fogel shows that the reality of a slave’s life may well have been quite different from what has been imprinted on mass consciousness both before and after the Emancipation Proclamation.

All this is by no means to say that a slave’s life was anything but miserable. The question is, however, how much more miserable it really was than the lives of contemporary poor freedmen, or poor white laborers, and how prevalent gratuitous cruelty and abuse were throughout antebellum society in general.

Leaving further elaboration on these questions to researchers like Fogel, since it is rather difficult to meaningfully quantify suffering, we may turn to the main arguments set forth by the more educated among the abolitionists: those of the pervasive exploitation of slaves and of the inherent inefficiency of the South’s slavery-based economic system.

To think in economic terms, we need to leave aside momentarily, for the sake of a dispassionate argument, the emotional impact of considering human beings as mere capital. Considering that slaves were, in fact, an investment, and indeed worth considerable sums on the market (owners in the United States – though not in the Caribbean colonies – would typically break even only after 29 years of rearing a slave child, and the going rate for a prime-age adult was anywhere between $1,000 and $2,000 at a time when the annual maintenance of the same slave cost approximately $48), it would not have made good business sense to abuse and neglect a rather costly investment arbitrarily, much less to indulge mere ego trips of white supremacy. Doubtless there existed a number of sadistic and otherwise mean slave owners and overseers. But to claim that slaves were systematically abused by beatings, torture, and starvation – all practices known to adversely affect life expectancy and thus financial amortization – would testify to very bad economic sense on the part of their owners.
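A quick back-of-envelope check with the figures quoted above (the figures are the book’s; the arithmetic is mine):

```python
# Back-of-envelope arithmetic using the figures quoted above: how many
# years of upkeep a replacement slave would have cost, in antebellum $.
price_low, price_high = 1_000, 2_000   # going rate, prime-age adult
maintenance = 48                        # approximate annual upkeep

print(price_low / maintenance, price_high / maintenance)
# ~21 to ~42 years of upkeep per replacement: working a "costly
# investment" to death would have been ruinously expensive.
```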

And that would, in fact, support precisely the second major rational claim of the abolitionists – that slavery was an inherently inefficient system. Let us stop and think about it for a moment. How can UNPAID labor (i.e., a workforce whose costs are limited to feeding, clothing, and housing disbursements) be more expensive than PAID labor (such as that provided by white workers and freedmen)? To be sure, slavery was most certainly never abolished anywhere in the history of mankind on the grounds that it was too expensive. Humanitarian, not economic, reasons prompted the repeal of bondage laws. In fact, slave labor was resorted to whenever a breakdown of social institutions allowed for it – witness the far from rare incidents of forced labor in times of war and occupation even in modern and quite recent history. If forced labor survived well into the 21st century (the UN still fights to resolve the problem of human trafficking, and customary public international law as recognized by applicable treaties and conventions expressly sanctions forced labor under conditions of “military necessity” and other select circumstances), why would it suddenly have outlived its economic viability and utility in the United States before the Civil War?

There is, however, a clear difference between the disposable nature of slaves in modern history (e.g., in forced labor camps and concentration camps as instituted under Nazi occupation) and the economic system of North American antebellum slavery. The former relied on an almost unlimited supply of free workforce (besides Jews and Gypsies, it mostly drew on captured Slavs – the Eastern Europeans whose very ethnonym echoes in the word “slave” – and even some Germanic dissidents sentenced to “re-education” in forced labor camps). Thus, under Nazi ideology, it made some, however limited, economic sense to cut maintenance costs below subsistence levels and to rely on a high turnover of workers. In the U.S., however, slaves were expensive, their supply was rather limited, and, in the later stages of the institution, the economy relied for supply mostly on domestic breeding rather than on African imports. To kill off the work base – and its reproductive base along with it – through excessive negligence would have run counter to any rational optimization of life-long ownership rights to “human chattel.”

But these perceptions are perhaps excessively based on current-day accounts of slavery. Modern human trafficking is a short-term business model aimed at squeezing maximal return out of people whose useful lifespan as slaves is rather short. In the case of trafficking in children as sex slaves, a few years of “work” bring on not only puberty, but also communicable diseases that significantly reduce their value, as happens routinely with child prostitutes in South and South-East Asia. Those children are usually paid for (to relatives or traffickers), and need to work off the “debt” their “owners” incurred to finance their “acquisition.” A similar system of indentured servitude (without the sexual component) was prevalent in 19th century Europe, where children were commonly sold (“rented out”) as servants and laborers for a given period of time. Another model of modern-day slavery is that of women forced into prostitution. The case of Bosnian sex slaves imported from Russia and Ukraine to service international observers of the civil war in former Yugoslavia under U.N. command involved a similar pressure to “work off the debt” to human traffickers who had brought the women into the country under false promises of legitimate work in hotels and bars. (For comparison, the compensation a client owed for the loss of such a woman, through death or escape, was calculated at approximately $2,000-$3,000 – quite a different valuation from the going rate of $1,000-$2,000 for a prime-age slave almost two centuries earlier in the American South, at a very different purchasing-power equivalent. The comparatively low price reflects the increasingly disposable nature of the modern-day slave.) Similarly, illegal immigrants around the world (not only in the United States, but also in Europe) unable to pay for their illegal passage into the country often need to work off their debt to the traffickers through forced labor in areas such as farming and manufacturing. All such cases involve limited-term servitude geared toward deriving maximum profit for the beneficiary of slave labor, regardless of the life-long consequences for the well-being of the slaves themselves.

Another aspect of the abolitionist argument as to the inefficiency of the Southern economy was the strikingly blatant racism of its claims. Slaves were, in abolitionist parlance and opinion, child-like beings incapable not only of learning commercially meaningful skills, but even of doing menial work well. The fact that many of the most vocal abolitionists were Northerners without much contact with black people in the first place, who thus based their opinions on assorted prejudices and literary fantasies, may explain such an approach. But, again assuming for argument’s sake the accuracy of their assessment, how would it help the country to free such “lesser” workers and make them “independent,” and thus responsible for their own economic viability, if they were clearly unable to survive on their own in this society for lack of skills and maturity? If the abolitionist claims of the far superior efficiency and utility of white workers for Southern plantations held true, and white labor were indeed the secret of the plantations’ prosperity, did they plan on starving the three million soon-to-be freedmen who were, by their account, entirely uncompetitive in free market circumstances and thus inevitably headed toward becoming a public charge? History showed just how wrong all those “economic” predictions of the abolitionist camp turned out to be: the Southern economy was, in fact, deeply affected by Emancipation, and slaves were, after all, not economically replaceable by better-skilled white paid laborers. But then, can we blame activists for using fallacies to obtain a politically desirable and morally commendable result? Their ends did justify the means, after all.

Approaching the issue from a European background foreign to the national trauma of U.S. race relations, I am bound to step on somebody’s toes in matters of political sensitivity. And yet such an outsider’s view can be beneficial to people locked into a politically correct indoctrination so complete that they reject any views – or, indeed, facts – incompatible with the popular “awareness” of this horrendous ghost of the past. The remarks left by previous readers in my library copy of Time on the Cross are a case in point.

In a way, this fervent belief in an official version of history, a version whose denial invites slurs and accusations of a lack of patriotism, reminds me of experiences with the published histories of certain Central and East European nations. There, a comparative reading of the national historical accounts of neighboring countries is quite instructive in showing how the idea of national victimhood is incompatible with the competing claims another nation derives from the same events. What for one nation was a heroic recovery of a God-given right to certain lands was for the other an unwarranted attack on the similarly God-given ownership of the same tract by the sovereign that happened to hold it at that moment in the ever-shifting tides of history. What one side experienced as the painful breaking away of an ethnic group to establish a separate nation state was celebrated by the other as the triumph of long-overdue, long-denied independence.

History is written by the victors. More specifically, it is actually written by the more vocal supporters of the idea that won. Relying on anecdotal and necessarily subjective accounts, often by the proponents of one political view that has for whatever random reasons prevailed over another, is bound to result in misinterpretations of history. That is why the underappreciated return to boring technical sources – cliometrics as propagated by Fogel – while not as attractive for writers and readers who understandably prefer more colorful narratives, may shed much more light on what actually happened. Or it may not: as one can see at almost every turn in economics, the interpretation of data is vulnerable to manipulation stemming from the researcher’s own a priori bias and assumptions, and it is difficult, not to mention laborious, to disentangle misinterpretation from bona fide analysis without taking recourse to the entire sample of original data. The vulnerabilities of quantitative analysis in history are impressively exemplified by the scandal surrounding Michael Bellesiles’s Arming America. Still, accounting, church, and census records constitute considerably more reliable “facts” than diaries and the written accounts of people simply relating narratives. Unfortunately, a good story sells much better, and more easily, than sound statistical analysis, even one rendered into accessible prose. Maybe readers should reconsider what they want history to be – an art of the pen and a discipline of the humanities, or a social science. In any case, it is of the utmost importance not to confuse the two, lest we slide inadvertently into the traps of ideological indoctrination, such as revisionism that bends history to fit a particular view of events – be that the projection of the image of a racial struggle, of class warfare, or of any other revolutionary or counter-revolutionary effort.

Emergence


In Everything Is Obvious, Duncan J. Watts briefly mentions ‘emergence’ in the context of sociology’s micro-macro problem: explaining ‘macro’ phenomena involving large numbers of people that are driven by the ‘micro’ actions of individuals, each making rational choices. Emergent complexity was first observed in the natural sciences, where the laws governing a phenomenon at a higher scale cannot be derived from the laws that apply at a lower scale. Watts gives the example of particle physics being pretty much useless for explaining the chemistry of synapses. Social phenomena, however, are characterized by an extremely high level of complexity, making them perhaps the hardest to study in the context of emergence.

For an introductory overview of emergent behavior, see this PBS documentary on emergence and a brief demonstration of emergent behavior in birds (flocking).
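The flocking example is easy to reproduce in code. Below is a minimal boids-style sketch – my own toy, with arbitrary parameters, not a model from Watts’s book: each bird follows three purely local rules (cohesion, separation, alignment), and a coherent flock emerges at the macro level with no global controller anywhere in the program.

```python
# Minimal boids-style flocking: three local rules, no global controller.
# All parameters are arbitrary; this is an illustration only.
import numpy as np

rng = np.random.default_rng(1)
N = 50
pos = rng.uniform(0.0, 100.0, size=(N, 2))   # positions
vel = rng.normal(size=(N, 2))                # velocities

def step(pos, vel, radius=15.0, speed=2.0):
    new_vel = vel.copy()
    for i in range(N):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        near = (dist > 0) & (dist < radius)
        if not near.any():
            continue
        cohesion = pos[near].mean(axis=0) - pos[i]      # steer toward neighbors
        separation = (pos[i] - pos[near]).sum(axis=0)   # avoid crowding
        alignment = vel[near].mean(axis=0) - vel[i]     # match heading
        new_vel[i] += 0.01 * cohesion + 0.05 * separation + 0.05 * alignment
    norms = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = new_vel / np.clip(norms, 1e-9, None) * speed  # constant speed
    return pos + new_vel, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)
```

Nothing in the code mentions a ‘flock,’ yet one appears – which is the whole point of emergence.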

Kahneman’s prospect theory


To those familiar with the internal contradictions of classical economic theory, Daniel Kahneman’s prospect theory, discussed in Thinking, Fast and Slow, comes as something of a relief: by accepting the apparent irrationality of agents, economics becomes, well, more rational as a whole. And yet applying the results of psychological research to economics, as Kahneman does, raises several questions.
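For readers who have not seen it spelled out, the core of prospect theory fits in a few lines. The sketch below is a textbook illustration, not anything specific to this discussion; the parameter values are the oft-cited Tversky-Kahneman (1992) estimates: outcomes are valued relative to a reference point, losses loom larger than gains, and sensitivity diminishes in both directions.

```python
# The prospect-theory value function, a textbook sketch. The parameters
# alpha = 0.88 and lam = 2.25 are the commonly cited Tversky-Kahneman
# (1992) estimates; they are illustrative here, not results of this post.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of a gain or loss x relative to the reference point."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

print(value(100.0), value(-100.0))
# ~57.5 and ~-129.4: a loss of $100 weighs about 2.25 times as much as
# an equal gain -- the asymmetry behind loss aversion.
```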

Due to the very nature of economics, its analysis does not rely to any significant extent on experiments (notwithstanding the fact that some of the more controversial policies or management decisions are indignantly termed “experiments” by their opponents). Instead, economics uses econometric methods and models for its studies, comparing collected data with certain assumptions and predictions. Psychology, on the other hand, relies mostly on designed studies to reach its conclusions. The majority of these studies are small and conducted in a laboratory setting.

And herein lies the rub: a lab experiment does not necessarily realistically reflect the behavior of real people in real life. The “helping experiment” described by Kahneman is just one example (whether the experiment was real or merely hypothetical). Kahneman elaborates extensively on the importance of an appropriate sample size – and then, in another section, he comments on an instructive study that had all of 15 (that is: fifteen!) participants. That means that each individual participant brought a weight of 6.7% to bear on the result of the study.

Another question is just who the participants in psychological studies typically are. It is all too well known that most such research is conducted in an academic environment using easily and readily available subjects: undergraduate students. Numerous flyers around U.S. campuses offer $5, $10, or even $40 for 15-120 minutes of participation in a psychological study. The amount of remuneration by itself limits the pool of likely participants, since even well-endowed academic institutions with superior grant-writing skills do not have unlimited resources, and the funds available are often spoken for by multiple research projects. Then there are psychology students, who are required both to participate in psychology labs and to conduct their own field studies. As anyone who has ever tried to get a bit of rest in an undergraduate lounge knows, students routinely attempt to poll their peers to complete some psychology assignment. Unless a researcher or a student has the tenacity and the financial resources for serious incentives to venture outside campus, the majority of such psychology studies will inevitably be done on undergraduates – hardly a group representative of society as a whole, especially when it comes to rational and responsible economic decision-making.

Of course, having undergraduates as study subjects does not begin to account for the entirety of sample bias – those students are also mostly a self-selected sample. One group will be psychology majors who find themselves compelled to conduct and participate in studies as a quid pro quo with their friends (I poll you, you poll me). Another will be students trying to make some extra money without having to leave campus by becoming, in a manner of speaking, regular lab rats. Psychology students are expected to know the basic principles of study design, which means they will most likely know that they are being tested on something entirely different from the purported questions. Even non-majors will, with sufficient experience, quickly figure out what the studies are likely to really be about. As in all or most standardized tests, bright college students learn how to beat the system and to anticipate the “correct” answer, if only to avoid the embarrassment of being fooled by the study itself.

And then there is the difference between real life and research – many students will simply not care which of the two or three options on the screen they choose. And yet studies understandably try to compel their subjects to make a decision by not providing a “whatever” option. Assigning far-reaching conclusions to haphazard choices forced on students who are trying to fulfill some other requirement (lab credits or $10) constitutes very shaky ground on which to base behavioral science. Additionally, a choice between an assured $100 and a 10% probability of $1,000,000+ (an expected value of at least $100,000) is purely hypothetical in a laboratory setting, hence not really subject to empirical verification the way tradeoffs between small “prizes” such as chocolates and coffee mugs would be. Similarly, counting on students to come out of their study booths to react to simulated choking sounds, after they have presumably been through several other unpleasant lab-orchestrated situations, sounds like a bit too much optimism.

Obviously, polling faculty members who actually work in psychology instead will not make a study any less biased than using undergraduates. What might happen, though, is that they would effectively be tested on whether they apply to their own decision-making the theories they profess to use in their scholarly work. As Daniel Kahneman showed with the example of statistical judgments about sample size, even here there is an uncomfortable and quite readily apparent gap between theory and practice.

We may accept, however, that for purely statistical reasons, psychological findings regarding economic decision-making are still fairly sound: the same results are confirmed over and over in numerous studies, all of which would appear to suffer from the same inherent flaws to a very similar degree. After all, in real life, people overwhelmed with countless daily decisions often simply rely on the intuitive responses of System 1 to “simplify their lives,” without engaging in too many tiresome analyses that involve the actual use of the cognitive faculties of System 2.

Should we, then, side with Sunstein’s or with Slovic’s opinion on policymaking and risk? Due to the nature of democracy, public opinion, however irrationally flawed and swayed by rumors and unfathomable influences, does have an irresistible effect on public officials who, the judiciary aside, are in a constant state of running either for office or for reelection. But who says policy consultants are really impartial, either, and not themselves susceptible to fallacies in assessing risk? Still, Kahneman’s postulate to accommodate public fears regardless of their merits leads down a very slippery slope, as history has shown on countless occasions, ranging from witch hunts to the Red Scare to plays on xenophobic fears capable, under a confluence of unfortunate circumstances, of culminating in something like the Holocaust.

“After-birth abortion”?


The philosophical debate about the attributes of “potential” persons versus “actual” persons has been mostly limited to moral and ethical considerations concerning the permissibility of abortion. In recent history, the point in time at which “potential” persons turn into “actual” persons has been continually pushed back in sync with medical progress, until, in certain circles and jurisdictions, the “potentiality” of a person for all practical purposes ceased to exist and was entirely replaced by his “actuality,” with many of the social, moral, and legal consequences inherent in such a construct.

But this argument can just as well be taken in the other direction: the lack of attributes of an “actual” person may well extend not only until birth, but also a bit beyond. This is precisely what two scholars recently did, opening Pandora’s box a mile wide in the process:

“Australian philosopher and medical ethicist Dr Francesca Minerva and Dr. Alberto Giubilini, a bioethicist from the University of Milan, wrote “After-birth abortion: Why should the baby live?” which claims killing babies is as ethically permissible as abortion.” (The Telegraph, March 2, 2012)

Every civilization makes a pretense of protecting human life. The reality of every civilization is, however, that life is protected only with some ifs and buts. In other words, its protection is contingent upon what is, by any other name, a balancing of interests.

Some religious doctrines, including Christianity and Buddhism, attempted absolute protection of human life – a stance ultimately grounded in the impossibility of ensuring equitable procedures, or a rational justification, for any exception.

The ancient Roman question “quis custodiet ipsos custodes?” (“Who shall watch over the guardians themselves?” – Juvenal, Satire VI, lines 347-8) has ever since remained unanswered – and may fairly be presumed unanswerable within our known framework of moral philosophy – for all extrajudicial killings.

Other religious doctrines, including Judaism, Islam, and Hinduism, perhaps more honestly from a pragmatic viewpoint, accepted that the protection of life in its social context is inevitably conditional. As a matter of practical necessity, such a stance has to give a certain latitude to arbitrary choices. And therein lies the true significance of the debate triggered by this article on “after-birth abortion.”

In the cultural context of the philosophical debate on the subject framed by Minerva and Giubilini as “after-birth abortion,” it seems, of course, inevitable that the ghost of Dr. Mengele be conjured up to end all further argument. But let us, for a change, take a rational and unsentimental approach to this analysis:

It may be granted to the “party of the outraged” that the historic precedent of almost universally legalized infanticide, such as the Spartan and Roman recognition of a parental right to expose unwanted newborns on a mountainside, should provide very little guidance in our times of higher, more compassionate aspirations. So let us disregard the almost ubiquitous similar usages documented in Ancient Egypt, Carthage, Judaism, the pagan European tribes, and Arabian, Russian, Georgian, Chinese, Japanese, Inuit, Native American, African, and almost any other societies with evidence on the subject.

But what about the “sanctity of life” in 21st century Western civilization, particularly when it is considered through the lens of economic value judgments? Is there a double standard for Law and Economics? Nothing is, after all, more assured than the material bankruptcy of a society that assumed the collective burden of maintaining life “equitably” to its ever-expanding medico-technical limits, as the cost of prolonging life increases geometrically with advancing age while modern technology makes it possible to support life even in cases where the definition of “life” itself enters a disputed zone. Is an infant any more deserving of protection than the elderly, who have cast their dice and paid their debt, not to mention made their investment in society?

What likelihood may we attribute to the realization of the potential of a newborn whose parents do not want to raise him? Society’s seemingly moral and benevolent interference in this most basic and elementary of human relationships is a striking parallel to what has led to the ruin of state finances in American federalism: the “unfunded mandate,” where the federal legislature, based on the Supremacy Clause (Article VI, clause 2, U.S. Const.), mandates expenditures by the states for which it effectively declines to provide ways and means, thus leaving the states to square their own budgetary circles – with known results.

As we balk at taking public charge of all unwanted children in any meaningful and morally preferable way, we continue to shirk the tragic choice between turning a blind eye to abortion and infanticide and “mandating” to parents the continued existence and marginal support of their unwanted procreation. That dilemma holds true however justifiable or arbitrary the decisions may be of those individuals on whom all infants’ lives entirely depend in reality: their parents.

But this would also mean accepting in political discourse the conclusion that life does, in fact, carry an economic sticker price, be it measured in monetary resources, time, or opportunity cost – as every wrongful-death verdict is inescapably called upon to determine. It also means that this logical reality, in all its cogency under whatever socioeconomic model or system, must be accepted as a legitimate topic of inquiry beyond religious and/or ideological taboos, and that the general avoidance of the subject merely stifles a constructive debate. The alternative is to let the chips fall where they may, turning every prematurely ended life into some “individual tragedy” rather than a statistical question weighed by utilitarian and other realistic, if not always noble, considerations that we, at the current stage of evolution of the body politic, simply refuse to deliberate and answer collectively.

“Repressed memories,” Daubert, and scientific evidence

The Myth of Repressed Memory by Dr. Elizabeth Loftus and Katherine Ketcham makes not only for fascinating, but in fact for outright startling reading. On the one hand, one has to wonder how justice could possibly be served in an exceedingly liberal judicial environment where witness testimony alone is capable of securing a guilty verdict absent any other corroborating evidence whatsoever. True, the “he said, she said” situation is prevalent in contemporary sexual abuse cases, which are often tried years, and sometimes many years, after the events took place. In a case discussed in The Myth of Repressed Memory, however, Eileen Franklin’s testimony was not even that of a repressed victim – it was the testimony of an adult woman suddenly recalling (under rather unclear circumstances) in minute detail what she supposedly witnessed as an eight-year-old: the rape and murder of her best friend, Susan Nason, by Eileen’s father. It is astounding that Eileen’s credibility alone was deemed sufficient to convince a jury beyond a reasonable doubt of the guilt of the defendant, her own father, and that all her testimony was supplemented with was a characterization of her father as an evil man, possibly a pedophile – accusations that were never required to be substantiated.

It is even more shocking to see how, in other cases, clearly manipulated children and/or their mothers tell fantastic stories of satanic cults and sadistic sexual abuse, and older family members actually get convicted on that basis alone, once again absent any physical evidence whatsoever. Prosecutors probably argued that the grotesque stories were a result of the trauma the victims survived – but it is beyond my comprehension how such cases might be considered different from the Salem witch trials. Is the U.S. judicial system of the 21st century at risk of reverting to the methods and practices of the Dark Ages?

Another issue touched upon, but not really discussed, by Loftus and Ketcham concerns the professional responsibility of “therapists” who use objectionable methods first to convince their patients that all their problems do, in fact, stem from something horrible that happened to them in the past, and then push them into mental distress and drug abuse by pressuring them to dwell extensively, and often exclusively, on those issues. Some of the most frequently quoted “therapeutic” methods consist of trying to “recall” or at least “imagine” the horrors the patients are supposed to have survived. Even if the testimony of such a misguided patient – with “memories” often arrived at under hypnosis or suggestion – is subsequently dismissed by the court or suppressed as unconvincing and inherently inadmissible, are therapists ever subjected to disciplinary proceedings for breach of any standard of professional ethics? What about those who, as the book describes, take on a patient with an eating disorder (a woman who is basically trying to lose weight, like Lynn) and drive her by such “therapy” into a mental institution and into multiple suicide attempts – does anything ever happen to such “therapists”? It does not seem to be the case, even if other therapists and psychiatrists may recognize the damage done and spend years trying to bring a victim of such “healing” back on her feet. The truly scary part is that some of the most destructive therapists discussed in The Myth of Repressed Memory are not only professionally licensed, but even hold advanced degrees, invoking their elevated professional qualifications as a kind of ultimate authority over their initially doubting patients. It is fascinating to observe how much the techniques of alienation, pressure, and group loyalty, not to mention brainwashing and rampant drug abuse, resemble the methods used by some of the most notorious religious cults for reeling in and holding on to their followers. Power is just one of the benefits accruing to both cult leaders and some therapists; another is money. Turning a patient into a “lifer” in psychotherapy secures, after all, a constant stream of income – and a patient whose life has been destroyed by harmful “therapy” is, of course, unceremoniously dumped as soon as she turns out to be unable to continue paying her bills (as the story of Lynn exemplifies).

To answer the rhetorical question of Ed Frischholz, quoted by Dr. Elizabeth Loftus – “What do you suppose is going on out there?” – one needs to ask whether psychotherapy is not, in fact, turning into a pseudo-scientific cult based on mere leaps of faith and mass hysteria.

At first glance, The Myth of Repressed Memory has precious little in common with Daubert et ux. v. Merrell Dow Pharmaceuticals, Inc. That case, decided by the U.S. Supreme Court, deals with personal injury (the influence of prescription medication on birth defects) and with technicalities under the Federal Rules of Evidence. The opinion of the Court was that “The Federal Rules of Evidence, not Frye, provide the standard for admitting expert scientific testimony in a federal trial.” That part was uncontested. Subsequently, though, the Court launches into “general observations” of a rather philosophical nature, attempting to present its views on the nature of scientific method and knowledge. As far as is publicly known, none of the Justices deciding the case was a trained scientist – hence the partial dissent of Chief Justice Rehnquist is all the more understandable, especially given that the “general observations” have no direct bearing on the case at hand, the ruling itself having rendered such considerations moot.

The bulk of the Supreme Court decision in Daubert seems to focus on the nature of fact and knowledge. And yet, unlikely as it may sound, the cases described by Loftus exemplify what happens when courts take at face value the rather outlandish assertions of the majority opinion: first, that “We are confident that federal judges possess the capacity to undertake this review” (of “whether the reasoning or methodology underlying the testimony is scientifically valid and of whether that reasoning or methodology properly can be applied to the facts in issue”) (592–593), and, second, that jurors are capable of distinguishing science from pseudo-science (596). For a variety of reasons rooted in the jury selection process, many if not most jurors do not come from the most educated strata of society; their judgment in matters scientific will therefore be limited to their impression of the expert witness’s credentials and of how persuasively (read: categorically) his claims are presented. Jurors’ opinions will also be swayed by personal misconceptions, prejudices, empathy, and the like. As for the judge, his ability to assess what is valid scientific method and what is not will in most cases be equally limited to weighing the credentials of the expert witness, because even experts in the field differ substantially in their opinions about what constitutes acceptable methodology and how far conclusions drawn from it may reach. The story of Eileen Franklin described by Loftus is a clear case in point: the two experts in the matter held very different conceptions of memory processes, and each considered the research of the other either not rigorous or not relevant. If Dr. Elizabeth Loftus, a specialist in her field, considers the argument of Dr. Lenore Terr, another specialist, to be based on leaps of faith rather than on rigorous science, how can a judge with virtually no knowledge of the field rely on any authority when deciding what is and what is not appropriate scientific method? The question of memory may thus be decided according to the prior beliefs of both the jurors and the court – regardless of expert opinions, they will believe what they already believed, or wanted to believe, and use the experts’ arguments primarily if not exclusively to corroborate those very same opinions.

But what about a case that turns, as Daubert did, on medical and pharmaceutical evidence? What will jurors understand from expert mumbo-jumbo describing biochemical molecular structure or advanced regression analysis and reanalysis of previously published and assessed data? In personal injury and medical malpractice cases, each side regularly presents its own expert witnesses – does that fact alone not suggest that, for the purposes of civil as well as criminal justice, scientific truth can be pretty much whatever we want it to be, since there will always be some published or unpublished research supporting either point of view? And how are jurors to weigh the credibility of the evidence on either side? That question, however, touches on an altogether different subject: the comparative merits of a common law system that assigns the role of finder of fact to a jury of laymen assembled pro hac vice.

The case of Eileen Franklin demonstrates how a non-scientist judge, vested with the authority to decide what is and what is not scientific method, does in fact admit evidence that has no basis whatsoever in scientific fact – evidence resulting in a criminal conviction based solely upon possibly imaginary “recovered memories” obtained through hypnosis, suggestive therapy, and a psychotherapist’s manipulations of the patient’s mind. How much farther from verifiable science can “admissible scientific evidence” possibly get?

Why would you believe Wikipedia?

This question has the curious effect of calling into question a presumption we all somehow harbor: that Wikipedia is a source we may, by and large, safely believe and rely upon “for most intents and purposes.” But why do we actually believe Wikipedia?

An anecdote illustrates the issue: when I recently asked a graduating doctoral student in mathematics about a rather obscure concept in his narrow area of specialty, he told me to look it up on Wikipedia. Then he added, by way of explanation: “If it is on Wikipedia, it is probably right.” And: “Just consider the kind of person that is likely to publish on Wikipedia: it must be a nerd who really cares deeply about the subject to write an extensive article, and so he probably does know a lot about it.”

This is one way of looking at it: a volunteer putting in several hours to selflessly write an anonymous article must really care about the subject matter, and must be passionate enough about it to want to educate the general public. Dedicated enthusiasts do tend to know their area of interest rather well.

Another basis for our trust in Wikipedia is the consistency of the knowledge this free encyclopedia provides: we virtually never hear someone claim that a Wikipedia article contained an outright falsehood. Sins of omission, fuzziness around the fringes, too little attention given to minority views – sure, none of that can be ruled out, and typically it is not. All the limitations ultimately inevitable in any encyclopedia would predictably attach to Wikipedia as well – but it is right here at our fingertips, at the beck and call of the digital age. If it was approximately right in all our previous uses of it as a reference tool, it will probably be equally accurate in future cases. Considering a source’s past track record in quality control is one of the ways we come to trust it.

But there is also the matter of quality control and peer review: Wikipedia articles can be edited, questioned, footnoted, cited, cross-referenced, and amended. People seem much more compelled to object to the errors of others and point them out than to contribute more accurate substantive text themselves. This invisible network of reviewers – the public at large or, more precisely, other enthusiasts who care about the same subject to a similar degree and are knowledgeable enough to contribute to its accurate description – is an empirically meritorious way of ensuring that the community of specialists keeps satisfactory control over the quality of Wikipedia articles. True, we cannot rule out with any certainty that a recently posted article which has yet to undergo multiple rounds of this grinding peer review may still contain spin or reflect some special interest, but sunlight – and democracy – are generally fairly reliable disinfectants.

So, why do we believe Wikipedia, other than just by default? I think we may safely invoke, for most purposes, the standard of “general acceptance” – the one extensively discussed in Frye v. United States, 293 F. 1013 (D.C. Cir. 1923), and in Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993).