2018-03-01

Regulating Mathematics

Let’s not delude ourselves – wide swaths of mathematics have long been regulated, at least in their applied incarnations, ever since cryptography, a branch of number theory, came to be considered an “armament” requiring an export license.

But as “weapons of math destruction” have become commonplace, and as the algorithms that rule our working lives and consumer existence are used for anything and everything – from predictive advertising to policing to a virtually unlimited number of other uses that include the internet of things as much as smart agreements and the internet of contracts – it has become increasingly obvious that delegating outcomes to machine analysis, evaluation, learning and assessment will require regulation. Cathy O’Neil, the “mathbabe” with a comet-tail-long track record in finance, has been arguing for a considerable time that algorithmic models project the past onto the future (so the best we can hope for is to perpetuate the past) and are thus rife with bias: they serve as a means of social control through pernicious feedback loops, such as value-added models penalizing seemingly excellent educators, and models perpetuating racial, class-based and other discrimination in “predictive policing,” political polling, prison sentencing, car insurance premiums and employment tests.
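
Her feedback-loop argument is easy to see in miniature. The toy simulation below (a deliberately simplified Python sketch with invented numbers, not a model of any real police department or of O’Neil’s own work) allocates patrols in proportion to past recorded arrests; because arrests can only be recorded where patrols are sent, an initial disparity between two otherwise identical districts simply reproduces itself year after year – the past projected onto the future.

# Toy illustration of a pernicious feedback loop: patrols follow past recorded
# arrests, and new arrests are recorded only where patrols are present.
# All numbers are invented for illustration.
import random

random.seed(0)

# Two districts with identical underlying offense rates; district A merely
# starts out with more recorded arrests because it was patrolled more heavily.
true_offense_rate = {"A": 0.10, "B": 0.10}
recorded_arrests = {"A": 120, "B": 80}
total_patrols = 100

for year in range(5):
    total = sum(recorded_arrests.values())
    # "Predictive" allocation: patrols proportional to past recorded arrests.
    patrols = {d: round(total_patrols * recorded_arrests[d] / total)
               for d in recorded_arrests}
    # Arrests are recorded only where officers are present to record them.
    for d in recorded_arrests:
        recorded_arrests[d] += sum(
            1 for _ in range(patrols[d])
            if random.random() < true_offense_rate[d])
    print(f"year {year}: patrols={patrols}, recorded={recorded_arrests}")

# Although the true offense rates are identical, district A keeps receiving
# the lion's share of patrols, and its recorded-arrest lead never closes.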

When Lufthansa, following the bankruptcy of Air Berlin, substantially and unjustifiably hiked its airfares (to no one’s surprise, since a serious low-cost competitor had just vanished) and was chided by the German Federal Cartel Office, it mounted the “algorithmic defense”: no human was to blame, it was all “the algorithm’s fault.” To which federal regulators remarked that algorithms are not written by God in heaven. We can expect algorithmic defenses to spring up all over the place, almost at the speed of light.

In this context, O’Neil postulates a “Hippocratic oath” for modeling and data science: first, do no harm. That would require purging mathematical models of characteristics that allow them to serve as proxies for race and class, and making them respond to ethical responsibilities – which is tricky because there are different stakeholders. Meaningful regulation at this meta level thus requires auditing algorithms – which, in practice, would mean creating and continually improving algorithms that audit algorithms. That is necessary because mathematics is inherently “trusted” yet, given the undisclosed assumptions and model correlations in most algorithms, anything but trustworthy: its formulae are secret, and, although they almost operate like laws in some instances, their disclosure is currently mandated by no law, not even by the Freedom of Information Act, and has proved difficult for civil litigants to enforce. Furthermore, no one reviews the constitutionality of outcomes dictated by algorithmic output. For example, we have fair hiring laws – they are simply not applied to Big Data algorithms, and there is no sign of an emerging nationwide conversation about that, just as there is none about so much of data science.
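
What an “algorithm that audits algorithms” could look like, in its most rudimentary form, is sketched below: the audited model is treated as a black box, and its selection rates across groups are checked against the four-fifths (80%) rule familiar from US fair-hiring practice. The function names, data and threshold are illustrative assumptions, not a prescribed standard or an existing library.

# Minimal sketch of an automated audit: compare a black-box model's selection
# rates per group against the four-fifths (80%) rule. Names, data and the
# threshold are illustrative assumptions.
from collections import defaultdict

def disparate_impact_audit(decisions, groups, threshold=0.8):
    """decisions: 0/1 model outcomes; groups: parallel group labels.
    Returns (passes, selection_rates_by_group)."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for outcome, group in zip(decisions, groups):
        total[group] += 1
        selected[group] += outcome
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    passes = all(rate >= threshold * best for rate in rates.values())
    return passes, rates

# Hypothetical output of a hiring model: 60% of group A selected, 20% of group B.
decisions = [1, 1, 1, 0, 0] * 20 + [1, 0, 0, 0, 0] * 20
groups = ["A"] * 100 + ["B"] * 100
print(disparate_impact_audit(decisions, groups))
# (False, {'A': 0.6, 'B': 0.2}) – B's rate falls below 80% of A's, so the audit flags it.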

In the face of increasingly overwhelming evidence that AI, analogy-based and precedent-oriented in its very genesis, does in fact “learn” from data that carry prejudicial patterns on race, class and gender (surely among others), indicating the “group membership” or value allegiances of algorithms – and of the robots steered by them – may become the next frontier of disclosure, perhaps through brand names, though it will likely require greater and deeper efforts. But mere disclosure of potential or predictable biases reinforced by autonomous learning may not be enough in the absence of proactive and affirmative tools for eradicating them. Which raises another bizarre specter: the infiltration of AI by algorithms meant to secure political correctness. So long as algorithms are written by humans, draw their value data from humans and perform for a human target audience, it is difficult to see how phenomena that have long existed in human valuation, rating and triage processes could fail to leave mirroring marks on the mathematical models ultimately traced back to them.
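
The “mirroring marks” point can be made concrete: deleting the protected attribute from the data does not help so long as a correlated proxy remains. The self-contained sketch below (invented numbers, no real dataset, and a deliberately crude “model”) never shows the learner the group label, only a neighborhood code, yet the historical disparity in human hiring decisions reappears in its predictions.

# Sketch: a "model" trained without the group label still reproduces the
# historical bias, because a proxy (neighborhood) carries the group signal.
# All rates and labels are invented for illustration.
import random

random.seed(1)

def make_person():
    group = random.choice(["blue", "green"])
    # The neighborhood code is strongly, though not perfectly, correlated with group.
    if random.random() < 0.9:
        neighborhood = 1 if group == "blue" else 0
    else:
        neighborhood = 0 if group == "blue" else 1
    # Historical human decisions favored group "blue".
    hired = random.random() < (0.7 if group == "blue" else 0.3)
    return group, neighborhood, hired

people = [make_person() for _ in range(10000)]

# "Training": the model only sees the hiring rate per neighborhood code.
rate = {}
for n in (0, 1):
    outcomes = [hired for _, nbhd, hired in people if nbhd == n]
    rate[n] = sum(outcomes) / len(outcomes)

# "Prediction": hire whenever the applicant's neighborhood rate exceeds 50%.
predicted = {}
for g in ("blue", "green"):
    hires = [rate[nbhd] > 0.5 for grp, nbhd, _ in people if grp == g]
    predicted[g] = sum(hires) / len(hires)

print(rate)       # hiring rates by neighborhood code
print(predicted)  # the group disparity reappears, though the group label was never used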
