The Tortoise & the Gun

by Elizabeth A. Watkins, Laura B. Greig

For a Cyborg, a Dragon (see our last post) is especially important. For a temple, too, a Dragon is important: protecting a physical space requires a physical body, and protecting an online space requires a data body. But these faces of security are not entirely separate. We used to think of cybersecurity as a problem for computers, but it’s much closer to the front door than that. Cybersecurity is a human gesture: it has human weaknesses, and attackers can use ingenuity to exploit human weakness as it’s sewn into computational structures, into walls and gates. The wall, the gate, and the garden all do their part to protect, to surround, to insulate using blind resistance, but the Dragon does what those cannot: adapt. The Dragon can move, can communicate, and can anticipate.

In the past two years, we have been given a new glimpse into everything cybersecurity can mean. It’s not just hackers opening our bank accounts, it’s trolls twisting our information. It’s not just a thief breaking into an email account, it’s propaganda factories producing manipulated video of heads of state. Security is not a machine’s perimeter but a human value, of safety, of vulnerability, of exploitation. The sovereignty of our presidential election was attacked on every front: voting machines, campaign servers, social media platforms. Some of these attacks are human-vs-human, some are human-vs-machine, some are machine-vs-machine.

Machines, as it happens, have the problem of selective attention. If they’re not told that attention is a nimble thing, capable of being re-allocated and re-focused, then a machine knows not when to shift its gaze, or when its gaze is being fooled. Facebook thought security meant preventing criminals from breaking into individual accounts. They didn’t know it meant an army of trolls flooding an information ecosystem with messages meant to exploit human terrors, fears, and gnawing anxieties. 

Some have cried out for machines to fix this human problem. Engineers have responded in kind, saying they’ve trained a new sort of dragon -- Artificial Intelligence that can detect artificial bots spreading disinformation, watch for spam, and even detect harassment in online forums. Some AI can be connected to a camera, and can watch out for guns. 

This new dragon protecting the temple, however, has inevitably not been left alone to carry out its tasks. Another, newer dragon has been built to counteract its protections. This new animal is called Adversarial Machine Learning (AML), and it works by subverting the Dragon’s carefully engineered attention. 

An especially enigmatic thread of this research works by generating “optical illusions for machines.”

Last year, researchers described a turtle they had 3D printed. Human eyes would tell us that the object looks like a turtle or tortoise, head and tail and everything in between. An artificial intelligence algorithm, however, saw it differently: Most of the time, the AI thought the turtle looked like a rifle.

Our AI security dragon, carefully trained to see and to interpret the world, was fooled into thinking it saw a gun where in fact there was only a tortoise, thrown off by a pattern invisible to human eyes. Or, in a flipped “real,” humans are fooled into thinking there is a tortoise, when a computer dragon can see the hidden truth of the gun.
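The mechanics of this fooling can be sketched in miniature. The snippet below is a toy illustration, not any real classifier: a hand-built linear model with invented weights scores an “image,” and a fast-gradient-sign-style nudge (stepping every pixel slightly against the model’s weights) flips its verdict from “turtle” to “rifle” while changing no pixel by more than 0.06.

```python
import numpy as np

# Toy stand-in for an image classifier: a frozen linear model whose
# score is positive for "turtle" and negative for "rifle".
# All weights and pixel values here are invented for illustration.
d = 100
w = np.ones(d)
w[::2] = -1.0                      # alternating +1/-1 "weights"
x = 0.5 + 0.05 * w                 # an "image" the model calls a turtle

def classify(image):
    return "turtle" if w @ image > 0 else "rifle"

# FGSM-style attack: push every pixel a small step eps in whichever
# direction most lowers the "turtle" score -- against the weight signs.
eps = 0.06
x_adv = x - eps * np.sign(w)

print(classify(x))       # turtle  (score = 5.0)
print(classify(x_adv))   # rifle   (score = -1.0)
```

The same gradient-sign idea, scaled up to deep networks, is how adversarial textures like the tortoise’s are crafted: each pixel moves only slightly, but all the moves conspire.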

The shaded trickery used against our own AI dragons extends well past the tortoise and the gun: reading like so many maps telling of “here be dragons,” AI can be coaxed into seeing what isn’t there. In short, AI can be induced to hallucinate. In one researcher’s tests of an AI that would guide self-driving cars, Hello Kitty loomed in the machine’s view of street scenes. Other cars disappeared.

Dragons of protection and of conquest are spirited into the world, with human weakness, ingenuity, values, and stories twisting and weaving in a new arms race in the design of systems and machines.