
The real dangers of AI

Tech and Innovation

Posted on August 22, 2017

5 min read

We’ve all heard about the doom AI will supposedly bring us, whether it’s in the form of AI taking over jobs or the existential risk of superintelligence.

While key players in the technology space are having a very public go at each other’s grasp of AI and its risks, the real risks – and make no mistake, there are real risks – of AI today lie away from the extremes.

The thing is, we’re not at imminent risk of being overtaken by superintelligent AI – but neither should we be laissez-faire when implementing machine learning solutions.

Bias

It may sound strange to call algorithms biased, but that’s exactly what they are. In most cases the bias is not explicitly programmed in, but learned from training data. More often than not, the vast amounts of training data that machine learning algorithms require to work well have roots in human behavior, and the very human bias inevitably present in that data is then automatically modelled in, unless extreme care is taken to counter it.

To give just one example of this type of near-invisible bias, take a look at Google Translate. It was recently improved with the addition of the Google Neural Machine Translation system, learning from millions of translation examples – examples with baked-in bias. In Turkish, the third-person pronoun “o” is gender neutral; however, try translating “O bir doktor. O bir hemşire.” and you will see this type of bias in action.
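To see how bias can be reproduced purely from statistics, consider this toy sketch (with made-up co-occurrence counts, not Google’s actual model): a “translator” that picks the English pronoun for the gender-neutral Turkish “o” solely from how often each pronoun appeared next to the target noun in its training corpus.

```python
# Hypothetical co-occurrence counts standing in for a skewed training corpus.
corpus_counts = {
    ("doctor", "he"): 900, ("doctor", "she"): 100,
    ("nurse",  "he"): 100, ("nurse",  "she"): 900,
}

def translate_pronoun(noun: str) -> str:
    """Pick the pronoun most frequently seen with this noun -- no understanding,
    just statistics, so any bias in the corpus is reproduced verbatim."""
    return max(("he", "she"), key=lambda p: corpus_counts.get((noun, p), 0))

print(translate_pronoun("doctor"))  # skewed counts yield "he"
print(translate_pronoun("nurse"))   # and "she"
```

Nothing here is malicious; the bias falls straight out of the frequencies in the data.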

What’s worse, the inherent bias in algorithms can be worse than in humans. Humans, for the most part, try not to behave in an overtly biased or racist manner partly because there are real-world consequences to doing so. With algorithms, however, there is usually no way to call them out on the bias they display, nor any way to sanction them.

The result is that AI can work to amplify and legitimise biases, or even racist behavior.

In search of Common Sense

Another important risk is giving the machines – or algorithms, if you want to sound less scary – the power to autonomously make decisions in situations that would require human guidance.

Why? Isn’t this – autonomous operation – exactly what we want the machines to do so they can ease our load or take over activities like driving a car?

Not necessarily. Particularly in long-tail scenarios where there is a near-infinite list of individually low probability but collectively likely-to-happen-at-some-point events, giving algorithms autonomous decision powers can backfire badly without humans in the loop.

Machine learning systems do not handle well situations that are very dissimilar from what they have encountered in their training data, whereas common sense usually allows humans to navigate such situations successfully. If you expect common-sense behavior from such a system, you’re asking for trouble.
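This failure mode can be sketched in a few lines (with hypothetical data and labels): a 1-nearest-neighbour “policy” trained only on everyday inputs. Asked about something far outside anything it has seen, it still returns a confident answer – it has no notion of “I don’t know”, which is exactly where human common sense would kick in.

```python
# Hypothetical training data: (sensed speed of vehicle ahead in km/h, action).
training = [
    (0, "stop"), (5, "slow"), (30, "follow"), (60, "follow"),
]

def policy(speed: float) -> str:
    # Return the action of the single closest training example.
    return min(training, key=lambda ex: abs(ex[0] - speed))[1]

print(policy(40))   # in-distribution: "follow", sensible
print(policy(900))  # absurd sensor reading: still confidently "follow"
```

A real driving stack is vastly more sophisticated, but the underlying issue – confident interpolation with no flag for out-of-distribution inputs – is the same.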

To illustrate, over years of driving a human driver will face a number of unexpected events they were never explicitly trained for – what to do when the headlights fail at night? When the traffic lights stay red forever? When a passenger tries to climb out of the window? When the car is surrounded by carjackers? When a sandstorm or a snowstorm causes visibility to drop to zero?

And if that “natural long tail” isn’t difficult enough to manage, consider the unnatural long tail – mischievous or downright malicious events that autonomous cars will not be prepared for can easily be imagined, ad infinitum. How do they react if a hacked V2X system reports the traffic light as green when the physical signal shows red? What happens when someone throws a human-shaped cardboard figure onto the road? Or sprinkles lidar-confusing metal confetti?

“I don’t like to break rules, but I do like them all bendy”

On the positive side, think of the last time you received exceptional customer service.

Chances are it might have involved the customer service person bending, or possibly even breaking, some of the guidelines and rules imposed on them, doing something they “were not supposed to do”. Encoding empathetic discretion or determining the morality of an action is something AI cannot do today.

Despite the hype, today’s machine learning systems are not intelligent the way people are, and we are nowhere near developing so-called GOFAI (Good Old-Fashioned AI), a “mind” capable of human-style intelligent reasoning and understanding.

The current crop of machine learning systems can be taught to behave in a mostly predictable and well-performing manner under normal circumstances. But how will the systems behave when the training fails them?

If any rule bending is required for a good experience, chances are that process cannot be modelled well and automated in a satisfactory manner.

As lucrative as the efficiencies of fully automating a decision-requiring system are, we must understand the failure behavior, and never leave AI systems unsupervised to autonomously make decisions that could have serious real-world consequences.


When milliseconds are a matter of (virtual) life and death

Tech and Innovation

Posted on June 6, 2017

4 min read

There are industries, like the infamous high-frequency trading, where shaving milliseconds off the latency of your network connection is worth tens or even hundreds of millions of dollars. In the coming era of remote-controlled haptic robots, latencies will also be critical.

But even today, every day, latency is a matter of life and death for thousands of Australians.

A matter of virtual life and death, in this case.

What is latency anyway? In networks, latency is the time it takes for information to be sent between two points. The usual measurement for this is the round-trip time (RTT): the time it takes for a signal to be sent plus the time it takes for an acknowledgement of that signal to be received.
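A rough sketch of measuring RTT, assuming a local TCP echo server so the example is self-contained; against a real game server you would connect to its address instead, and the number would then include all the network delays discussed below.

```python
import socket
import threading
import time

def echo_server(sock: socket.socket) -> None:
    # Accept one connection and echo one small payload back.
    conn, _ = sock.accept()
    conn.sendall(conn.recv(16))
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))  # any free port
server.listen(1)
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

client = socket.create_connection(server.getsockname())
start = time.perf_counter()
client.sendall(b"ping")        # signal out...
client.recv(16)                # ...acknowledgement back
rtt_ms = (time.perf_counter() - start) * 1000
client.close()
print(f"RTT: {rtt_ms:.2f}ms")  # loopback, so typically well under a millisecond
```

Tools like `ping` measure the same quantity at the ICMP level rather than over TCP.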

And in the world of online gaming, RTT is one of the network performance measures that matters a great deal.

Whether you’re into the free-roaming adventures of World of Warcraft or learning the ropes of interstellar economy in Eve Online, there’s now a massively multiplayer online game for everyone.

Different types of games have varying network performance requirements; strategy games tend to be much more forgiving, whereas for obvious reasons FPS (“first-person shooter”) games are more demanding.

And herein lies the catch: to a degree, depending on how the games are architected, a player with a worse RTT performance on his or her connection can be at a significant disadvantage when playing these games.

How low is low enough then? It depends, but the following can be used as a guideline for RTT times as measured to the gaming server[1]:

  • Green (good gaming experience for all genres): < 80ms
  • Yellow (mostly ok but can have some issues): 80-150ms
  • Red (major issues with more demanding games): 150-200ms
  • Black (unable to play demanding games): > 200ms
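The guideline above can be written as a simple lookup. Where each band’s boundary falls is a judgment call; this sketch makes each band inclusive of its upper edge.

```python
def gaming_zone(rtt_ms: float) -> str:
    """Classify an RTT into the colour bands from the guideline above."""
    if rtt_ms < 80:
        return "green"   # good experience for all genres
    if rtt_ms <= 150:
        return "yellow"  # mostly ok, occasional issues
    if rtt_ms <= 200:
        return "red"     # major issues with demanding games
    return "black"       # demanding games unplayable

print(gaming_zone(12))   # Melbourne-Sydney territory: "green"
print(gaming_zone(210))  # trans-Pacific numbers: "black"
```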

Now, the overall RTT comprises a number of elements, many of which are outside the user’s control: factors like serialization delays, the specific routing the packets take, the switching involved, and the queuing and buffer management algorithms used in the network equipment. The most obvious factor is distance – in the case of gaming, the distance to the gaming server.

The Melbourne-Sydney latency is only slightly more than 10ms, but Sydney-Perth is already close to 50ms; double this – as can happen simply from using Wi-Fi in poor conditions – and you’re already past the “green zone”. An Australia-US link will add over 150ms[2].

As mentioned, one surprising element can add to the latencies at home, too: Wi-Fi. In good conditions Wi-Fi will only add a few milliseconds of latency, but can add as much as 30-50ms in ‘noisy’ conditions such as an apartment building where many Wi-Fi routers operate close to each other on the same frequency.
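Putting the figures from the text together as a back-of-the-envelope budget (Sydney–Perth close to 50ms, noisy Wi-Fi adding up to 50ms more) shows how quickly a connection can leave the green zone; the exact contributions will of course vary with routing, equipment and conditions.

```python
# Hypothetical latency budget built from the figures quoted in the text.
budget_ms = {
    "Sydney-Perth backbone": 50,
    "noisy Wi-Fi at home": 50,
}
total = sum(budget_ms.values())
verdict = "past the green zone" if total >= 80 else "still green"
print(f"{total}ms -> {verdict}")  # 100ms -> past the green zone
```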

The type of broadband connection you have also has an impact; a fibre optic connection is best, followed by other technologies – wireless can add significant latency, and a satellite connection adds sufficient latency that real-time gaming can be challenging at best.

What about mobile? A good-quality LTE (4G) connection should be just fine for games, with ping times roughly 30ms in good conditions, although this can vary with your location and the device you’re using. Even the more limited data quotas available on mobile are not usually an issue with many games using as little as 5MB/hr.

As 5G rolls out with much-improved latencies, its performance may even surpass many fixed connections. According to Ericsson, 5G should allow end-to-end latencies of 1ms or less; even at application-level latencies an order of magnitude greater, 5G would compare favourably against most other access types[3]. For a wireless connection, that would have been unthinkable only a few years ago.

[1] https://forum.unity3d.com/threads/question-about-acceptable-levels-of-latency-in-online-gaming.261271/

[2] http://www.verizonenterprise.com/about/network/latency/

[3] https://www.ericsson.com/assets/local/publications/white-papers/wp-5g.pdf

Tags: technology,


Future Ways of Working 2.0: The network is coming

Network

Posted on April 12, 2017

2 min read

So you can already stream music and video to your smartphone at will, along with practically any content from around the world, and – bosses willing – you can work when you are on the go. But you haven’t seen anything yet.

The next generation of mobile network will soon be here – and the building blocks leading to 5G are already being deployed.

A futureproof network must be more than speedy. It needs higher capacity to support more people doing more things at the same time. It also needs lower latency, with data travelling faster than ever from your smartphone into the network and back again – supporting the Internet of Things (IoT), which lets machines talk to cars and cars talk to your calendar, telling you to leave in five minutes to make your 9am meeting.

The future will be about more than controlling your smart home or seeing what sensors tell you about your office environment – it’ll be about powerful virtual assistants negotiating on your behalf, and informing you of the optimal outcomes they have automatically achieved for you. Imagine no longer having to suffer through the arduous process of comparing electricity plans or finding that perfect holiday – your virtual assistant will take care of that for you.

Swarms of drones flying around delivering goods are still some way off (especially in the CBD), but ground-based delivery robots or “drones on wheels” are more feasible in the short term. So one may soon be delivering your lunch to the office.

The vision of cities being completely re-imagined in the age of flying cars will also take some time, but autonomous cars will soon start to make your commute much more bearable and Vehicle-to-Everything (V2X) technologies will make it smoother and safer.

What all these emerging technologies have in common is the requirement for reliable connectivity, and levels of capacity and low latency that right now are difficult to truly conceptualise. Ultimately, what is incredibly exciting is all the things we’re yet to think of, because once the network foundation is there, we have a practically unlimited space for innovation.

Find out more about how we are building a new kind of network

Tags: Network,
