Managing Multiplicity

I’ve been asked to speak with you about what’s going on in the world – but let me start with what’s going on underwater.

Every municipal water system has hundreds of kilometres of drains and channels and pipes to maintain, in all sorts of sizes, from wide mains to narrow branch lines. That’s a lot of expensive manual labour – and a fair bit of it involves significant danger to the inspectors, who have to scuba-dive into all sorts of places, making even routine inspections more difficult and much more expensive.

But what if there were a better way?

What if, instead of sending a diver down to look at the walls of an underwater channel for leaks or other sorts of flaws, you could send a robot down to perform the inspection?

That idea isn’t entirely new. They’ve been doing pretty much the same thing on deepwater oil rigs for some years.

The robotics revolution has been more of a slow burn than an explosion – a classic example of the ‘twenty-year overnight success’. The sensing technologies get progressively better – better cameras and better sonars – while at the same time the motors and control systems have grown by leaps and bounds – both in reliability and manufacturability.

Robots aren’t exactly cheap – especially when they’re submersibles – yet they cost a fraction of what they did when they first showed up on oil rigs.

The technology is now so well understood that six years ago a community of enthusiasts designed their own submersible robot, the Open-source Remotely Operated Vehicle, or OpenROV.

You may have heard the term ‘open source’ before; it means the community behind a project shares all of its work with the world, on terms that allow anyone to take that work and use it, adapt it, improve it, or change it in any way they see fit.

Open source has changed the world. Everyone here today using a smartphone – and that would be almost everyone here – uses an open source operating system. Android smartphones use Linux, while Apple’s devices are built on another, known as Darwin. Open source powers the world these days.

It wasn’t always like that. Go back twenty years and there was very little open source anything; very little software and certainly no submersible robots.

That’s changed, and the reasons for that change tell us a lot about the world of the 21st century, where we’ve come from and where we’re going.

It’s very likely that the first piece of open source software you used was the Web. We don’t think of the Web as software – but it’s exactly that, the connective tissue linking web browsers to web servers, all around the world.

One reason we’re using the Web and not something else is because the creator of the Web, Tim Berners-Lee, decided to make it freely available.

This meant universities and libraries and museums could put their collections onto the Web without having to pay fees. It meant that businesses could put themselves onto the Web as an experiment – without having to develop a detailed business case. Open source made it possible for people to have a go.

Having a go yielded big benefits, because the nature of the Web is that it connects people. Someone might try something, and someone else would find it, then copy that idea, or change it or improve it in their own thing — which someone else would find and copy or change or improve for their own thing – which someone else would find. And on and on and on.

The Web built itself out of nothing because we were all sharing our work, and learning from everything everyone else was doing.

We still are.

In the beginning we shared basic things – how to build Web pages, mostly. Then people shared simple Web pages featuring the things they cared passionately about – passionately enough to spend time building a web page.

The early years of the Web were a charming, anarchic mess – something that new took a fair bit of playing around with before we started to get it right.

The first real sense of how right we could get it came in 2001, when Jimmy Wales decided to turn his paid encyclopedia into an open-source project that anyone could edit.

Wikipedia started off as a toy, then became a joke, then a serious threat to Encyclopaedia Britannica – and finally drove it into insignificance.

And it did all of that in under a decade.

It did all that despite Britannica’s articles being written by experts – Albert Einstein famously wrote the Britannica article on relativity.

Wikipedia has no experts. It simply has millions of contributors, each sharing, learning, and improving on one another’s work. Yet just that is enough to put a comprehensive knowledge resource within reach of anyone with a smartphone.

Just that, as it turns out, is a lot.

Wikipedia was our penny-drop moment. We realised that sharing what we know helps to make all of us smarter. (It also helps to make us dumber, but that’s a point we’ll come to a bit later.)

We’re all less stupid than we were just a few years ago. Theoretically, at least, because we don’t always take advantage of all of the knowledge we so freely share with one another.

But sometimes that knowledge comes together around a project, and that project speeds toward its goals because everyone involved is working hard to make everyone else smarter and better at what they do.

That’s the kind of capacity building that leads to the OpenROV, and tens of thousands of similar open-source projects, created by people who learned by doing and learned more because they were sharing what they were doing with others.

Each project leverages all of the intelligence of everyone involved.

Everyone involved leverages the intelligence of everyone who shared something relevant before that.

This means that everyone is bringing to the table something far greater than ever before.

It means an OpenROV can be created from scratch – just because a community wants it to exist.

But that’s not the whole story.

A submersible needs an operator – someone sitting at the end of the tether, telling it what to do next.

Or at least that used to be the case.

But there’s a huge and unexpected side effect in this change in the way we learn – we’ve been able to teach it to machines.

All of the machine learning techniques that researchers have worked on for over fifty years finally hit an inflection point.

That inflection point came around the same time OpenROV was introduced – at the start of this decade.

Just as we were having our penny drop moment about sharing and learning, we were having another penny drop moment about machine learning.

** repeated mistakes of machine learning via AlphaGo **

So there is no secret to machine learning, no mystery, just lots and lots of mistakes that are learned from.

And so it was that two years ago I got the opportunity to mentor a startup through the Sydney University INCUBATE program – it’s the only startup incubator in the world that runs through the student union – created by students for students.

And that’s when I met the founders of Abyss Solutions.

Originally they thought they had a robot that would be useful for plumbers – able to inspect piping the plumber couldn’t easily reach.

But then they realised they had a much bigger opportunity.

Using the OpenROV submersible, they could create a robot that could inspect water and drainage systems – safely.

But unlike the systems in use today, which require a human operator – and human analysis of the endless hours of video footage they collect – Abyss Solutions uses artificial intelligence to rapidly absorb and digest those hours, sharing only the features it finds most interesting with human operators.

In other words, it takes all the boring bits out of the work, leaving the human only the open questions and hard problems.
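As a toy illustration of that workflow – the frame scores and the threshold below are invented for the sake of the example, not Abyss’s actual system – the filtering step might look something like this:

```python
# Hypothetical sketch: surface only the "interesting" parts of an
# inspection survey, so humans review minutes rather than hours.
# Scores and threshold are invented for illustration.

def summarise(frames, threshold=0.8):
    """Keep only frames whose anomaly score meets the threshold."""
    return [f for f in frames if f["score"] >= threshold]

# Simulated survey: timestamp (seconds) and an anomaly score in [0, 1],
# as might be produced by a computer-vision model.
survey = [
    {"t": 0,   "score": 0.05},   # clean pipe wall
    {"t": 60,  "score": 0.12},
    {"t": 120, "score": 0.91},   # possible crack
    {"t": 180, "score": 0.07},
    {"t": 240, "score": 0.88},   # corrosion patch
]

flagged = summarise(survey)
# Only the two high-scoring frames reach the human operator.
```

The machine does the tedious scan; the human sees only the open questions.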

Most of this software – written by Abyss Solutions – has been built on top of other open source software for computer vision and machine learning.

Because all this sharing matters, and someone else will find a use for almost everything we share – uses we never imagined.

Turns out that having submersibles that are smart enough to pilot themselves, record and then analyse the results is quite a good idea.

Abyss has grown from four co-founders to a company of 22, servicing clients all over the world – including, most recently, Hoover Dam.

Why? Because Abyss Solutions has created a product that brings out the best in both humans and machines.

The machines get to do all the dangerous and boring bits – the underwater survey and top-level analysis of the survey data.

The humans get to do all the interesting and significant bits – identifying and classifying problems, elevating issues, ensuring coverage.

Machines doing what they do best. Humans doing what they do best. The sum of the parts greater than what either could do on their own – doing the job better, faster, safer and far less expensively.

Because they’re working together.

We have been hearing far too much about how the machines are going to run us all out of our jobs and into unemployment and penury.

We’ve been hearing far too little about how this new generation of machines amplifies our capacities and allows us to make the most of our gifts.

This is what the mid 21st century looks like. It’s not an either/or where it’s either people or machines, but a multiplicity, where we’re finding interesting ways to work together on an evolving and deepening basis.

That sounds quite wonderful – and it is.

But innovations also present some unexpected dilemmas.

One of the big outcomes of all of this sharing and learning is an explosion in the number and type of sensors in the world.

We have sensors for practically everything, everywhere. We are trying to measure almost everything, everywhere.

And although this has only barely begun, electronic water meters are coming; within the next decade they’ll be commonplace.

That’ll mean fantastic cost savings, as meter reading becomes electronic and instantaneous.

It does mean the meter readers will need new jobs – so here’s at least one downside of all of this automation.

This electronic flow of information will create a real-time stream of data from millions of residences and offices.

That’s going to prove invaluable – not just for billing, but for planning, for detecting leaks, and so forth.

Water utilities will buy and build software that will examine that flow of data in incredible detail, learning everything they can from that data.

And that’s where the dilemma lies.

With very little effort it should be possible to model a customer’s activities based on their water usage.

When they wake in the morning. When they shower. When they prepare or clean up after meals. When they go to sleep.

All of this will be completely exposed in the data stream, as if the customer shouted it from the rooftops.
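To show how little effort that modelling really takes, here’s a sketch – the hourly readings and the usage threshold are entirely invented, but the pattern they reveal is the point:

```python
# Hypothetical sketch: inferring a household's daily rhythm from
# nothing but hourly water-meter readings. All figures are made up.

def daily_pattern(readings, active_threshold=5):
    """Return the hours in which usage (litres) suggests someone
    is awake and active at home."""
    return [hour for hour, litres in readings if litres > active_threshold]

# One day of hourly usage (hour, litres) for an imagined household.
day = [(h, 0) for h in range(0, 6)]          # asleep overnight
day += [(6, 60), (7, 25)]                    # shower, breakfast
day += [(h, 1) for h in range(8, 18)]        # house empty all day
day += [(18, 30), (19, 40), (20, 10)]        # cooking, washing up
day += [(h, 0) for h in range(21, 24)]       # winding down, asleep

active = daily_pattern(day)
# The wake-up time, the empty house and the evening routine
# all fall straight out of the meter data.
```

A dozen lines of code, and the customer’s comings and goings are laid bare.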

Which of course they haven’t. And if they thought that you could see their comings and goings from their water usage, chances are that they’d feel uncomfortable about that.

Consider how people feel about Facebook after a few months of scandals involving supposedly private data that turned out to be anything but.

You’re going to be collecting detailed data on your customers. You will understand their habits. You will be able to see those habits as clearly as if you had cameras within their homes and offices.

And you’re going to face some serious responsibilities about how that data gets used.

Can you sell that data? Can you use it to sell new services to your customers? Can you pass along suspicious behaviour?

And what happens if that data gets copied or stolen or just lost?

A connected world is a smart world, but a connected world is also a world where attacks can have huge consequences – and not just for the business that gets hacked into.

This detailed personal data is useful to stalkers, thieves, arsonists – you name it. People who know where you are and when can use that information against you.

And you will have that data. That part is inevitable.

Your machines will be digesting it to give you better insights about your water products and your customers.

But you have a duty of care. And I don’t simply mean the new laws about data breaches – though those are important.

You have an ethical responsibility to be transparent about how you collect that data, how you analyse that data, and how you put that data to work.

Only via transparency can people monitor your behaviour with their data.

And if you don’t provide that transparency, it will be forced upon you. Someone, somewhere, will seek competitive advantage with that data – and that’s when it will be misused or sold or stolen, and suddenly every water customer in Australia will feel violated.

A bit like how we feel about the banks, right now.

Transparency is not just a good idea. Transparency is safety. Transparency is surety. Transparency is the way we must work whenever we combine human ingenuity with machine intelligence.

If we’re not very careful we’ll get that wrong. We’ll forget that the machines are simply doing what we told them to, learning what we’ve asked them to learn, and sharing all of that with us – because that’s what they do.

We’ll be faced with ethical dilemmas that will be intensely amplified by all these machines. That’s the human face of automation – it’s not someone being put out of a job, it’s a machine profiling someone so completely based just on their water usage that you can effectively map out the entire pattern of their lives.

No one is going to ask you to scrap the sensors or ignore the data they generate.

That data is too important, as the water supply becomes ever more precious.

But that data has tremendous potential both to help and to harm and unless it is framed with that understanding, it will be misused.

It’s easy to imagine machine learning systems that spot outliers, classify them as troublemakers, and deal with them inflexibly.

Machines are very good at being inflexible. But deciding what an outlier means – that’s a human task. The machines will learn it from us.

So we have to keep one thing front of mind: we build all of our biases into all of these new fusions between humans and machines. We decide what data to collect. We decide why it is important. We decide how much it matters, and for whom.

All of that is carefully coded into the design of the sensors we deploy, and the software we use to monitor the data generated by those sensors.

If you look for a certain kind of behaviour, you’ll be sure to find it – even if that’s simply a reflection of the way you designed your sensors and your algorithms.

This is something new we’re learning about machine learning – it creates ghosts, reflecting our own concerns back to us, written in data.

That’s a bit of a trap for us, because when we see our machines reflecting the biases we coded into them, we tend to believe that our biases reflect reality, when all we’ve done is build a machine so specialised the one thing it knows how to do is to confirm our biases.

This is a problem Facebook has gotten into.

**talk about the newsfeed and ML and confirmation bias**

So this is already a big problem in the world. And as you build these systems, you do not want to repeat these mistakes.

The hardest thing to do in a data-rich age is simply to let the data speak for itself. We always want to see something in the data – and that’s a problem. We want to put our own voice into something that has a voice of its own.

Sometimes we need to listen very hard for that voice, but it’s always there.

This is not a new problem. It’s plagued science for four hundred years. This is the reason we have peer review – scientists fact-check one another as the best way we have to remove human biases from the data.

But this is no longer a problem confined to science, as all of us become data scientists. As all of us get great machines that help us see whatever we want to see in that data.

For the middle years of the 21st century, we’re going to have to get very good at something we’re not terribly good at – listening.

We all seem to want to be very shouty these days, reckoning that the loudest voice is the most important.

Quite often that’s the opposite of the truth, and the truth gets drowned out in the shouting.

There’s a problem with that — because when we start listening to the loudest voice, rather than the truthful one, we start making decisions based on that voice, and those decisions frequently turn out to be the wrong ones.

That’s why I earlier mentioned that all of this sharing can make us dumber. If we focus on the wrong things. If we confirm our biases rather than seeking out truth.

The machines are our partners in this. They will help us confirm our biases at lightning speed and with incredible precision.

Or – and here we’re being a bit more speculative – they could help to remind us that we need to look further afield for the truth.

If you have a smartwatch, it can warn you every twenty minutes to get out of your chair and have a walk.

We need something similar for our minds. Something that taps on our thick foreheads to tell us that all may not necessarily be as it appears.

We’ll resent that, of course.  We like being right.

But we’ll need that – as never before.

So the future isn’t clear sailing. It’s not a harmonious marriage of human and machine capabilities. Like all relationships it’s going to have its good sides and its downsides. Its strengths and its weaknesses. Idealising the potential – something you’ll hear a lot of from organisations with a vested economic interest – serves no one. Being realistic is the best place to start, because it means we’ll build in the sort of safeguards that keep this marriage on firm ground as both we and our machines move together into a future of multiplicity.

We’ll need to work on our listening – as we do in relationships. We’re going to have to work on our sensitivity – as we do in our relationships. And we’re going to have to be willing to accept and adapt to another’s needs – as we do in relationships.

Because none of this is happening in a vacuum. The transformations happening in water – just look at the work of Abyss Solutions – are happening in every other industry, everywhere.

The voices are multiplying, and they’ll all have something important to say.

One of the best bits of mentoring I’ve received over the thirty five years of my career came from a very quiet bloke who advised me, “Say little – and listen much.”

The future belongs to those who can listen – to the human voices, to the data, to the algorithms of the machines.

Listen closely, learn and do.

Thank you.

OzWater keynote, 9 May 2018, Brisbane, Queensland.
