As we commemorate the centenary of the outbreak
of World War I – arguably the first engagement that could with some
justification be characterized as “automated response” – it behooves us to take
a look at the development of this phenomenon in years since, but, even more
importantly, at its anticipated future.
The automated response that led to WWI was purely
legal in nature: the successive reactions following the general mobilization of
the Austro-Hungarian Empire were rooted in a network of treaties of alliance.
The system contained a fair number of “circuit breakers” at almost every turn,
even if they would have amounted, in contemporary view, to breaches of a treaty
obligation. This situation found a direct successor in Article 5 of the North Atlantic Treaty of April 4, 1949 (the Washington Treaty establishing NATO),
which to date has been invoked only once, following 9/11.
But it is not so much automatism based on honor, credibility and other forms of social compulsion on an international scale that will likely determine automated responses in the future. It is much more a technical and systemic automated response that will increasingly, for a variety of reasons, take reactions out of human hands.
For one, the response time of modern weapon systems is shrinking at an increasing pace. As in computer trading, the share of situations – regardless of their significance – in which a human response would under almost any imaginable circumstance be too slow, and hence come too late, will only grow.
From
a warfighter’s perspective, therefore, automated response is a good thing: if a
threat is identified and an incoming munition destroyed before it becomes a
manifest threat, it matters little whether that happens by human intervention
or fully under the control of technology. Of course, a number of concerns are
evident:
- Dependence on technology will grow, with catastrophic consequences once confidence in the protection of defensive systems proves misplaced;
- Friendly fire incidents become unstoppable once a target's similarity to a hostile pattern falls within a defined threshold;
- The responsibility of software and system design is likely to spiral out of all reasonable control.
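The friendly-fire concern can be illustrated with a minimal sketch (all signatures, names and the threshold here are hypothetical, invented purely for illustration): a system that commits to engagement once a contact's similarity to a stored hostile signature crosses a fixed threshold will, by construction, engage any friendly asset whose signature happens to fall within that threshold as well.

```python
# Hypothetical sketch of threshold-based hostile-pattern matching.
# The signature vectors and the threshold are invented for illustration.

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

HOSTILE_SIGNATURE = [0.9, 0.1, 0.4]   # stored reference pattern (hypothetical)
ENGAGE_THRESHOLD = 0.95               # commit to engagement at or above this

def auto_engage(contact_signature):
    """Return True once the contact resembles the hostile pattern closely enough."""
    return cosine_similarity(contact_signature, HOSTILE_SIGNATURE) >= ENGAGE_THRESHOLD

# A friendly platform whose sensor signature happens to resemble the
# hostile reference is engaged just the same: the comparison carries
# no notion of intent, only of similarity.
friendly_lookalike = [0.88, 0.12, 0.41]
print(auto_engage(friendly_lookalike))  # → True
```

The point of the sketch is that the failure mode is not a bug but the decision rule itself: nothing in the threshold comparison distinguishes a hostile contact from a friendly one with a similar signature.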
Automation comes in many shades and hues in
military hardware. Not all of them relate to hostile action. For example, naval
surface vessels can use sonar to steer the ship on autopilot, while the same is
not yet feasible for submarines because, amazingly enough, processing speed of
on-board computer systems is still too slow to rely on information returned by
a sonar system.
In the sense that gives rise to most concern,
drones, for example, are not “autonomous weapons,” because a trained human
remains in the loop, controlling the action however remotely. The philosophical
and ethical dimension of the debate was perhaps best described by Tom Malinowski of Human Rights
Watch, proposing to ban autonomous robotic weapons preemptively:
Could a machine do something that human soldiers throughout the
centuries have rarely done, but sometimes do to very important effect -- to
refuse to follow orders? I'm convinced that, if these weapons are developed,
they're not just going to be deployed by the United States and Sweden, they're
going to be deployed by dictatorships. They're going to be deployed by
countries that primarily see them as a way of controlling domestic unrest and
domestic opposition. I imagine a future Bashar Assad with an army of fully
autonomous weapons thirty years from now, fifty years from now. We've seen in
history that one limit on the ability of unscrupulous leaders to do terrible
things to their people and to others is that human soldiers, their human
enforcers, have certain limits. There are moments when they say no. And those
are moments when those regimes fall. Robotic soldiers would never say no. And
I'd like us not to go there.
Action on autopilot by automated response is arguably most dangerous in the area of cyber warfare, because the threat of repercussions on vital, if not necessarily strategic, infrastructure (not the least of which is commercial air travel) remains uncontrollable – and it would be naïve to assume that this threat assessment will change soon on the strength of technology that can somehow be kept from coming into “other” hands as well. Minimization of civilian casualties, although the purported primary objective of automated response, is not assured – even if the Pentagon says cyber weapons are unlikely to be used on their own but rather to support conventional attacks.
It is in the very nature of the deniability of stealth attacks in cyberspace (and Stuxnet comes to mind) that they are perfectly separable from overt action. But of course, that is not to say that they cannot be part of an integrated strategy.
The problem with automated retaliation to cyber-attacks is attribution. In a great many cases, cyber-attacks cannot reliably be pinned on an “evildoer” with the degree of technical certainty and proof that international law, for good reason, requires in order to recognize self-defense or a justifiable retort.
In the age of asymmetrical warfare with its many facets, a great many accurate and fascinating arguments have also been made against large high-end systems and in favor of the small, many and smart over the few and exquisite – not least in keeping with the threat profile. The more limited the scope of application, the more tolerable, arguably, automated response may become – with time. As with all technology, its Overton window is not constant over time. One should be mindful of the
precedent of The Hague Convention of 1899 and its Declaration on the Prohibition of the Discharge of Projectiles and Explosives from Balloons or by Other New Analogous Methods. By 1914, the prohibition had become moot, and air war had become a ubiquitous reality in every theater of operations. We are not good
at technology assessment, nor at its prediction. In fact, human intellectual shortcomings,
influenced by emotional and cultural factors, are nowhere more evident and rarely
more consequential than in this area.