We’ve all heard of the potential doom AI will bring us, whether it’s in the form of AI taking over our jobs or the existential risk of superintelligence.

While key players in the technology space are having a very public go at each other’s grasp of AI and its risks, the real risks – and make no mistake, there are real risks – of AI today lie away from the extremes.

The thing is, we’re not at imminent risk of being overtaken by superintelligent AI – but neither should we be laissez-faire when implementing machine learning solutions.

Bias

It may sound strange to call algorithms biased, but that’s exactly what they are. In most cases the bias is not explicitly programmed in; it is learned from training data. More often than not, the vast amounts of training data that machine learning algorithms require to work well have roots in human behavior, and the very human biases inevitably present in that data are then automatically modelled in, unless extreme care is taken to counter them.

To give just one example of this type of near-invisible bias, take a look at Google Translate. It was recently improved with the addition of the Google Neural Machine Translation system, which learns from millions of translation examples – examples with baked-in bias. Turkish personal pronouns are gender-neutral, yet try translating “O bir doktor. O bir hemşire.” into English and you will see this type of bias in action.
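To see how this happens mechanically, here is a deliberately simplified sketch in Python. The co-occurrence counts are made up for illustration, and this is of course not how Google Translate is actually implemented – but it shows how a system that resolves a gender-neutral pronoun from corpus statistics reproduces bias nobody explicitly programmed in.

```python
# Toy sketch: resolving a gender-neutral pronoun from (made-up) corpus statistics.
# Real NMT systems learn the same kind of association from millions of sentence pairs.
from collections import Counter

# Hypothetical co-occurrence counts of professions with gendered pronouns,
# as they might appear in a skewed training corpus.
corpus_counts = {
    "doctor": Counter({"he": 940, "she": 60}),
    "nurse":  Counter({"he": 50, "she": 950}),
}

def resolve_pronoun(profession: str) -> str:
    """Pick whichever pronoun the 'training data' pairs most often with this profession."""
    return corpus_counts[profession].most_common(1)[0][0]

for source, profession in [("O bir doktor.", "doctor"), ("O bir hemşire.", "nurse")]:
    pronoun = resolve_pronoun(profession)
    print(f"{source} -> {pronoun.capitalize()} is a {profession}.")
# Prints "He is a doctor." and "She is a nurse." – the bias is learned, not programmed.
```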

What’s worse, the bias inherent in algorithms can exceed that of humans. Humans, for the most part, try not to behave in an overtly biased or racist manner, partly because there are real-world consequences to doing so. With algorithms, however, there is usually no way to call them out on the bias they display, nor any way to sanction them.

The result is that AI can work to amplify and legitimise biases, or even racist behavior.

In Search of Common Sense

Another important risk is giving the machines – or algorithms, if you want to sound less scary – the power to autonomously make decisions in situations that would require human guidance.

Why? Isn’t this – autonomous operation – exactly what we want the machines to do so they can ease our load or take over activities like driving a car?

Not necessarily. Particularly in long-tail scenarios where there is a near-infinite list of individually low probability but collectively likely-to-happen-at-some-point events, giving algorithms autonomous decision powers can backfire badly without humans in the loop.

Machine learning systems handle situations that are very dissimilar to their training data poorly, whereas common sense usually allows humans to navigate such situations successfully. If you expect common-sense behavior from such a system, you’re asking for trouble.

To illustrate, over years of driving a human driver will face any number of unexpected events they were never explicitly trained for – what to do when the headlights fail at night? When the traffic lights stay red forever? When a passenger tries to climb out of the window? When the car is surrounded by carjackers? When a sandstorm or a snowstorm causes visibility to drop to zero?

And if that “natural long tail” isn’t difficult enough to manage, consider the unnatural long tail – mischievous or downright malicious events that autonomous cars will not be prepared for can be imagined ad infinitum. How does a car react if a hacked V2X system reports a light as green when the signal it can see shows red? What happens when someone throws a human-shaped cardboard figure onto the road? Or sprinkles lidar-confusing metal confetti?
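Systems can at least be built to recognise when they are outside familiar territory and hand control back rather than guess. The sketch below is a minimal, illustrative out-of-distribution check – a nearest-neighbour distance against the training set with an arbitrary threshold, not a production-grade detector – that defers to a human instead of acting on inputs unlike anything it has seen.

```python
# Minimal sketch: defer to a human when an input looks nothing like the training data.
# The synthetic data, distance metric and threshold are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))  # "normal" situations seen in training

def decide(x: np.ndarray, threshold: float = 3.0) -> str:
    """Act autonomously only when the input resembles something seen before."""
    nearest = np.min(np.linalg.norm(X_train - x, axis=1))
    if nearest > threshold:
        return "DEFER_TO_HUMAN"    # far from anything in the training data
    return "ACT_AUTONOMOUSLY"      # in-distribution: the learned policy applies

print(decide(np.array([0.1, -0.3, 0.2, 0.5])))     # familiar input -> ACT_AUTONOMOUSLY
print(decide(np.array([40.0, 40.0, 40.0, 40.0])))  # "sandstorm at night" -> DEFER_TO_HUMAN
```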

“I don’t like to break rules, but I do like them all bendy”

On the positive side, think of the last time you received exceptional customer service.

Chances are it involved the customer service person bending, or possibly even breaking, some of the guidelines and rules imposed on them – doing something they “were not supposed to do”. Encoding that kind of empathetic discretion, or judging the morality of an action, is something AI cannot do today.

Despite the hype, the machine learning systems of today are not intelligent the way people are, and we are nowhere near developing so-called GOFAI (Good Old-Fashioned AI), a “mind” capable of human-style reasoning and understanding.

The current crop of machine learning systems can be taught to behave in a mostly predictable, well-performing manner under normal circumstances. But how will they behave when their training fails them?

If any rule-bending is required for a good experience, chances are that the process cannot be modelled and automated in a satisfactory manner.

As lucrative as the efficiencies of fully automating such decisions are, we must understand the failure behavior, and never leave AI systems unsupervised to make autonomous decisions that could have serious real-world consequences.
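In practice that supervision can be as unglamorous as a routing rule. Below is a hedged sketch, with made-up confidence thresholds and severity labels, of gating automation so that low-confidence or high-stakes decisions always reach a human.

```python
# Hedged sketch of a human-in-the-loop gate: automate only low-stakes, high-confidence
# decisions; everything else is routed to a person. Thresholds and labels are invented
# for illustration, not taken from any real system.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    model_confidence: float  # e.g. a calibrated probability from the model
    severity: str            # "low" (easily reversible) or "high" (serious consequences)

def route(decision: Decision) -> str:
    if decision.severity == "high":
        return "human_review"   # never fully autonomous when the stakes are high
    if decision.model_confidence < 0.95:
        return "human_review"   # not confident enough: escalate
    return "automated"          # low stakes and high confidence only

print(route(Decision("refund a $5 shipping fee", 0.99, "low")))   # -> automated
print(route(Decision("deny an insurance claim", 0.99, "high")))   # -> human_review
print(route(Decision("refund a $5 shipping fee", 0.60, "low")))   # -> human_review
```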
