TOURING 2050
A billion seconds is 31 years and 251 days.
There are exactly a billion seconds between where we are today and the last day of January 2050.
That’s a whole generation.
I myself still plan to be here – I’ll have just turned 87. And I expect to still be working – we’ll come to that a bit further along.
But let’s imagine someone who’s 24 years old today, just at the start of their career.
In a billion seconds they’ll be where I am today – somewhere between 55 and 56 years old.
Let’s take a look at the world they’re living in, beginning with the Australia they’re living in.
If current population trends hold, Australia will be home to somewhere around 38 million people.
That sounds like a lot – sixty percent more than today – but it’s still less than the population of California in 2018.
So although we have a lot of fear coming from the ‘Little Australia’ crowd, who worry about overcrowding and ‘battery hen housing’ and whatnot, you can see that California – a lot smaller than Australia, and with a roughly similar proportion of both arable and desert lands – has managed this reasonably well.
There’s no reason to believe that Australia will fail the first challenge of demography.
The climate of course is warmer and drier – when it’s not colder and wetter.
Unpredictability has become the defining feature of the weather, and a lot of time and planning goes into avoiding some of the more obvious mistakes – like building in floodplains.
But it’s never perfect, so the insurance industry has become a very different beast, routinely refusing cover or rejecting claims on climate grounds – a regular feature of every policy written since the mid-2020s.
Australia – together with New Zealand – has become the port of refuge for a generation of Pacific Islanders who found their homes disappearing as sea levels rose half a metre – then kept rising.
Their numbers count as nothing next to the hundred million displaced Bangladeshis trying to find new homes in Myanmar or India or any place that will have them, but it’s definitely added an Islander flavour to Australia’s mid 21st century.
The summers are still too hot. The winters are still too short. No one is burning coal – at least, not here in Australia.
Most of the cars are electric – the few that aren’t use diesel. Petrol is rare, and petrol stations became valuable pieces of commercial real estate as they came onto the market in the early 2030s.
Although everyone expected that all vehicles would be autonomous and driverless from the early 2020s, it didn’t work out that way.
The problem was simply this: the real world is more complex than we’d reckoned.
Human beings are very good at dealing with exceptions. They leave us flustered, yes, but we’re designed by billions of years of evolution to roll with the punches.
Our machines are not nearly so flexible, and the more we worked with them to make them autonomous, the more we came to understand the limits of autonomy.
It’s too easy to overload a machine with data, yet very hard to teach it what data it must focus upon, and what data can be safely ignored.
The skies are easier to navigate than our streets – there are fewer things to worry about colliding with.
That has meant a sky filled with autonomous drones, buzzing around with packages and, occasionally, people onboard.
It does mean our cities have acquired a constant buzzing noise sounding more or less like a hive of busy bees.
It also means that occasionally packages drop from the skies – leading to another thriving area of the insurance industry.
So our roads are only slowly being colonised by autonomous vehicles.
First came the interstate trucking, followed by the slow and steady bus routes, then Australia Post delivery vans, and then things stalled. Insurance reasons, mostly.
People do still drive, but public transport has become quite a bit more accessible and more reliable as it became more autonomous, so it’s often the option of first choice. Just plug your destination into an app and it all happens automagically.
And ride-sharing is as popular as it ever was.
Which brings us to the world of work.
Previous generations had a career path, a ladder of sorts, with distinct landing pads: education, graduation, landing the big job, climbing the org chart, then retirement.
That might happen within a single organisation, or possibly within two or three over a thirty- or forty-year career.
That was the way the world worked, and the way to get ahead.
When the generation of so-called ‘Millennials’ veered away from that approach to career, most immediately assumed it had something to do with their innate inability to concentrate — too much screen time, don’t ya know — rather than reflecting a bigger and deeper shift in the culture.
Back in 2018 it was feared that automation would put everyone out of work.
It didn’t turn out that way, but it made employment something of a moving target.
As automation removed layers of clerical functions – bookkeeping and scheduling and annotating and validating – job functions began to migrate away from maintaining the organisation and into roles that kept the organisation moving along as quickly as it could while still maintaining its coherence.
Organisations had to become faster and more capable of thinking on their feet – and every individual in the organisation had to do much the same.
Professional practice grew to become inseparable from what we would today call ‘professional development’ – but far more intense.
Everyone working – no matter what their field – spends about as much time learning the next thing they want to be doing (or their employer asks them to do) as they do delivering on the skills they already have.
As a result, in 2050 there’s an enormous emphasis on mentoring. Everyone is mentoring everyone else, and everyone is being mentored by everyone else.
There’s also an enormous emphasis on machine-guided skills acquisition. More on this in a bit.
It’s not an either/or future. It’s not humans versus robots.
It’s a both-and. As is so much of the future.
That barrier – between what constitutes education and what constitutes practice – collapsed long before 2050, and isn’t coming back.
It didn’t collapse for any one reason, or because of any one decision.
It collapsed because the whole world changed.
Consider Wikipedia: we never realised, until we saw it in practice, that people have expertise – and that we feel compelled to share that expertise.
That was true long before Wikipedia. It’s part of what makes us human.
Turns out that each of us has a passionate desire to share – and a passionate desire to mentor.
That’s a human quality. It’s been part of us from year dot.
As a result, the apprenticeship system – which is all about mentoring – is going great guns in 2050.
That’s a good thing, because robots are not particularly agile. They make lousy plumbers and electricians – and although you can get them to pour the foundation of your home, heaven help you if you set them to the interior work.
Again, it’s just too complicated for them to do much other than make bumbling mistakes.
All of the highly skilled trades – whether building or plumbing or medicine or the law – still demand years of mentoring before you’re considered fit to practice.
That’s not to say that there aren’t robotic surgeons – they do an excellent job stitching patients together.
Or that there aren’t incredible AI-powered paralegals who can search two hundred years of case law for relevant precedents.
This doesn’t mean that the electrician and the plumber are working in the same way they did in 2018. To be sure, they’re making full use of some amazing tools that help them always make the best possible decision.
But it’s still them, making those decisions. Even in 2050. The economy still has a productive base of individuals using the machines to be as productive as possible.
How intensely we focus on productivity as an end in itself is one of the larger shifts in the world of 2050.
The snakes and ladders career path of the Millennials made a muddle of any focus on career as a goal.
That constant movement from opportunity to opportunity was less opportunism than exploration – a journey into an understanding of one’s capacities and the places where one might fit.
In a day and age where everyone has great tools to help them do their best, flexibility is the best approach both for today and tomorrow.
Flexibility has its costs. To be a jack-of-all-trades is to master none.
But people trust that when they need to, they can buckle down, find the resources they need – both human mentors and machine tools – to achieve a degree of mastery.
By 2050, that’s become the main goal of our educational system: migrating away from the rote acquisition of facts and toward capacity building, preparing children for lifelong learning, lifelong mentoring, and lifelong growth in capacities.
This makes teaching a much more human task than it’s been since the dawn of the 20th century – and it means the classroom looks a lot more like the rest of the world: students spend half their time peer-mentoring other students in the subjects they’ve mastered, and the other half being peer-mentored by other students in the subjects they’ve yet to master.
That all sounds simple enough, but none of it would work without the presence of a teacher who keeps a careful eye on all of it, stepping in as needed to adjudicate, remediate, and encourage – as a mentor should.
Assessments have gone out the window. NAPLAN is just a weird, bad memory.
You’re judged by what your peers think you’re capable of – because your peers are the avenue to work in 2050.
Almost all work is project-based, and almost all project teams temporary, brought together for a purpose, which, once achieved, sees them disband.
Back in 2018, this was known as ‘holacracy’, and it looked like chaos to those who believed corporate org charts should have a top and a bottom and clearly defined roles.
But those organisations failed to adapt – they simply couldn’t change their internal structure – and found themselves unable to compete with these freewheeling temporary agglomerations of talent and capacity organised around a specific goal.
People still earn money – and people still compete for a good spot within the best teams.
You get those coveted spots by having the best profile, and the best connections. So a lot of what people do (all the time) is share what they’re doing with others – so that others have an awareness of those skills, and can call upon them as needed.
The selfie culture of 2050 is much more about your unique skills and roles than about your beautiful vacation or breakfast.
It sounds like a lot more work – more than sitting at a desk all day and banging through a few KPIs. But everything is in motion now – both the nature of work and the work itself – so this continuous upskilling of everyone in the workforce is simply the way everyone always works, from the day they enter Grade 1 until the day they retire.
Now about that…
As I said, I’ll be 87 in 2050. And I hope to still be working. My model here is my uncle Amadeo, who turns 80 next month.
They forced him to retire from his university position when he was 70.
That’s when he went to work at a series of startups. At his last startup he was managing a staff of 1500 people.
He’s slowing down a bit – one time, while giving him a raise, his boss criticised him for falling asleep in meetings – but he loves to work and plans on working as long as he can.
That’s my personal goal — because it means that I will continue to find my own work exciting and rewarding in 2050.
People retire today because they’re tired, they’re bored, they’re ready to move on – or the organisation is. It’s not because they’re done. And yet, that’s our attitude.
In 2050, people don’t have to work as they grow older – superannuation remains one of the wisest decisions made by any Australian government – but there’s a demonstrated correlation between work, quality of life and longevity, so most people keep their hand in – mentoring, learning, and doing what they can, for as long as they can.
I can tell you that this is absolutely where I see myself in a billion seconds.
And I’m going to have some support.
This is probably a good moment to dispel a few myths about what we can and cannot expect from artificial intelligence in the years to 2050.
It’s a powerful tool. But – like a hammer – it’s a very specific tool.
It’s said that when all you have is a hammer, everything looks like a nail.
That’s roughly where we are with AI today: everything looks like a problem that can be solved through the application of artificial intelligence.
That’s less true than it might appear today, and the growing consensus among the experts in the field – people who have spent their entire careers working with artificial intelligence and robotics – is that there are real limits to both.
Just as with autonomous vehicles, progress will be slow – and the same will be true in the rest of the world.
The simple and straightforward tasks have already disappeared by 2050, automated away.
Fortunately there’s quite a lot of automation that’s quite useful.
For example, when I meet people – even today – I struggle to remember their names, or how I came to know them.
Within a few years we’ll have a little voice in our ears quietly whispering those details, prompting us out of forgetfulness.
It’ll be a sophisticated mix of facial recognition, personal history, and a scouring of the net for facts.
Which will matter a bit by the time I’m in my 80s – though we’ll have that long before.
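To make the idea concrete, here’s a minimal sketch of how such a whisperer might work. Everything in it – the contact names, the notes, the little ‘face embedding’ vectors – is invented for illustration; a real assistant would draw its embeddings from a proper face-recognition model and a much larger personal memory.

```python
# A toy sketch of the whispered-name assistant: match a face embedding
# against a small personal "memory" and recall who the person is.
# The names, notes, and embedding vectors below are invented.
from math import sqrt

contacts = [
    ("Alice Ng",   "met at the 2048 planning summit", [0.9, 0.1, 0.3]),
    ("Tom Rivers", "your cousin's neighbour",          [0.2, 0.8, 0.5]),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def whisper(face_embedding, threshold=0.9):
    """Return a reminder for the closest remembered face, or None if no good match."""
    name, note, score = max(
        ((n, memo, cosine(face_embedding, emb)) for n, memo, emb in contacts),
        key=lambda item: item[2],
    )
    return f"That's {name} - {note}." if score >= threshold else None

print(whisper([0.88, 0.15, 0.28]))  # -> "That's Alice Ng - met at the 2048 planning summit."
```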
The way we regard memory is already changing. In 2018 we tend to use Google and Wikipedia as a sort of external memory. When we can’t remember something we can turn to the machines that do remember.
That’s only going to become more of a feature – both as the population ages, and as there becomes more detail to remember.
Both are accelerating in the years to 2050, when well more than a quarter of Australians will be over 65, and when we’ll all be living in a world so dense with information we’ll need these very smart support tools to find our way through it.
We’re always going to face the dilemma of what to offload to machines. We already have.
Consider: no one performs long division anymore. Yes, every child still gets taught how to do it in primary school. But after we learn how to do it, we use a calculator. It’s better and faster.
You can make an argument that there’s been a bit of atrophy in our maths skills – or you can argue that the mental energy is better spent on more human problems.
That’s an argument that will be very current in 2050, as we decide which parts of our brains are best offloaded to the machines.
For an 87-year-old that question might read very differently than for a 27-year-old.
Or not.
A 27-year-old growing up in a world rich in AI tools might be much more relaxed about which parts of their ‘mind’ reside in their own heads, and which sit outside themselves.
Some will no doubt be relaxed about it, and some will be much more strict.
So in 2050 we will all have cognitive supports that help to keep us vital and active and contributing for as long as possible – and which, in turn, help us into longevity precisely because we’re active and wanted.
It’s a virtuous cycle that means there are a lot of active seniors in 2050.
But this rose has thorns.
There’s a serious side-effect associated with the use of artificial intelligence.
It’s something we hadn’t seen until recently, but which has now become so obvious it can no longer be ignored.
If you go over to Google image search – today – and type in the word ‘baby’ you’ll see lots of images of babies.
All of them cute.
Nearly all of them white.
This isn’t because most babies are white babies.
This is because Google’s image search is an artificial intelligence system that learns how to deliver the images people want to find in a search by remembering the images people select when a series of images is presented.
People overwhelmingly select the images of cute white babies.
Google’s artificial intelligence learns by watching millions and millions of these selections.
Its teachers are millions of people, very few of whom think of themselves as racist, but who – in aggregate – train Google to deliver a racially biased result.
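You can see the shape of that feedback loop in a minimal sketch. All the numbers here are invented, and real ranking systems are far more elaborate, but the compounding effect is the point: a modest preference in the clicks gets fed back into what is shown, and the gap keeps widening.

```python
# A toy sketch - with invented numbers - of how click-trained ranking can
# amplify a bias: two groups of images start equally ranked, users click one
# group slightly more often, and the ranker feeds those clicks back into
# what it shows next time.

scores = {"group_a": 0.5, "group_b": 0.5}             # initial ranking weights
click_preference = {"group_a": 0.6, "group_b": 0.4}   # average click behaviour
learning_rate = 0.3

for round_number in range(1, 6):
    total = sum(scores.values())
    for group in scores:
        shown = scores[group] / total                 # share of results shown
        clicks = shown * click_preference[group]      # clicks earned this round
        scores[group] += learning_rate * clicks       # feed clicks back into ranking
    share_a = scores["group_a"] / sum(scores.values())
    print(f"round {round_number}: group_a now gets {share_a:.1%} of the exposure")
# A modest 60/40 click preference keeps pushing group_a's share upward,
# round after round - the system learns the bias, then amplifies it.
```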
That’s a big problem right there, too obvious to ignore.
What could Google do? Could it mix up the results, throw a few babies in who aren’t white? Sure – and they actually seem to be doing this.
But Google isn’t really the problem here. Google simply makes the problem visible.
Artificial intelligence turns out to be incredibly good at amplifying human biases.
As we use more AI in more tools in more ways, more and more of our biases will be amplified.
Where we use these tools as aids to our judgement, we’ll find ourselves making decisions with amplified unconscious bias.
There’s a rather notorious case of this from the United States, where COMPAS, the Correctional Offender Management Profiling for Alternative Sanctions, was much more prone to label non-white offenders as likely recidivists, wrongly flagging them at nearly twice the rate of whites.
Can an algorithm be racist? Or does it simply embody the biases of its creators?
As more of the world becomes algorithmic, the number of areas where our unconscious biases can be amplified increases dramatically.
By 2050 much of the world will be algorithmic. The rules by which it runs – almost entirely autonomously – will have been learned from us.
Right now we’re embedding all sorts of unconscious biases into that autonomous world, systems that select for white babies and against non-white offenders.
By 2050, we live in a world where that isn’t just isolated in a few examples, but has become pervasive. Where the algorithmic amplification of our biases becomes so prevalent we can never see beyond them.
We can already see what this looks like today, in the darker corners of Facebook, where racists congregate and amplify their own hatred.
Facebook makes that problem worse with its own artificial intelligence, designed to watch which stories users respond to – then give them more of that. So Facebook tends to make racists more racist.
This is the uglier side of the future, the part of the next billion seconds that both requires careful attention, and demands that we make changes in the present based on what we already know.
We need to lean into our impartiality. We need to build tools to amplify our impartiality.
We already have human tools to aid us in this – ethical frameworks, practices and philosophies.
In the years to 2050 we will need to embed these tools within the design of our algorithms – or the algorithms of bias will overwhelm any ability to make an unbiased decision.
A lot of human labour in 2050 will consist of inspecting these algorithms, testing them for bias, and amending them.
It’s almost a gardening approach to AI – pruning the plants, and doing some weeding.
(Which seems a fit occupation for someone in their late 80s.)
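To give a flavour of that gardening work, here’s a minimal sketch of one weed-spotting check – comparing false-positive rates across two groups – using invented records rather than any real system’s data.

```python
# A toy sketch of one check an algorithm "gardener" might run when weeding
# for bias: compare false-positive rates across two groups of cases.
# The records below are invented for illustration.

def false_positive_rate(records):
    """Share of truly-negative cases that the model wrongly flagged as positive."""
    negatives = [r for r in records if not r["actual"]]
    if not negatives:
        return 0.0
    return sum(r["predicted"] for r in negatives) / len(negatives)

cases = [
    {"group": "A", "predicted": True,  "actual": False},
    {"group": "A", "predicted": False, "actual": False},
    {"group": "A", "predicted": True,  "actual": True},
    {"group": "B", "predicted": False, "actual": False},
    {"group": "B", "predicted": False, "actual": False},
    {"group": "B", "predicted": True,  "actual": True},
]

for group in ("A", "B"):
    rate = false_positive_rate([c for c in cases if c["group"] == group])
    print(f"group {group}: false-positive rate {rate:.0%}")
# If one group is wrongly flagged far more often than the other - as with
# COMPAS - the model gets pruned or retrained before it goes back out.
```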
Even with these dangers, artificial intelligence will become indispensable.
A company here in Brisbane – Maxwell Plus – already uses artificial intelligence to improve diagnostic outcomes for cancer patients, and is actively working to become a fully-fledged ‘medical AI’.
At some point, that AI will advise you – all of the time – on the best decisions you should be making at every moment, with respect to your health.
At the same time, AI has over-promised and under-delivered. IBM was kicked out of an oncology program because its ‘Watson’ AI system failed to live up to its promises.
We aren’t close to where we’ll be in 2050, and we still have a long way to go. The changes will be incremental, but they will be continuous.
We can expect all of this to get better – gradually – as these systems learn more, and as we learn more about how to tune them.
The same will be true for our money.
By 2050 your bank will have disappeared into something that looks like a financial management app that keeps your assets in circulation, earning as much return as possible, until the moment you need liquidity – a moment it will have been able to predict with fair accuracy, because it has studied your spending and consumption patterns.
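Here’s a minimal sketch of that idea. The spending history, the naive forecast and the safety buffer are all invented for illustration – not how any actual app works – but they show the basic loop: predict next month’s cash need from past behaviour, hold that much liquid, keep the rest working.

```python
# A toy sketch of the 2050 "bank as an app": estimate next month's cash
# needs from past spending, hold that much liquid, invest the remainder.
# The figures and the naive forecast are purely illustrative.
from statistics import mean, stdev

monthly_spending = [3200, 3450, 2980, 3600, 3310, 3520]  # last six months, in dollars

def liquidity_target(history, safety_multiplier=1.5):
    """Predict next month's cash need: average spend plus a buffer for variability."""
    return mean(history) + safety_multiplier * stdev(history)

def rebalance(total_assets, history):
    """Split total assets into the cash to hold and the amount to keep invested."""
    target = liquidity_target(history)
    cash = min(total_assets, target)
    invested = total_assets - cash
    return round(cash, 2), round(invested, 2)

cash, invested = rebalance(total_assets=42_000, history=monthly_spending)
print(f"Keep ${cash} liquid, leave ${invested} earning a return.")
```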
Money is already becoming ‘smart’, becoming inseparable from the algorithms that make up so much of our world, and while some have hailed this as the end of commercial law – or rather, commercial lawyers – nothing could be further from the truth.
The automation of commercial law with ‘smart money’ and ‘smart contracts’ simply means that we will be automating our errors, and then repeating them at scale.
We will still need lawyers to argue those contracts, and judges to decide outcomes.
When a soon-to-be uni student asks me what they should study, I tell them their best career opportunities lie at the intersection of law and computer science.
A dual degree in these disciplines sets them up well for a career to 2050 and beyond, because in this intersection lies most of the economy of 2050, a collision of business practice, contract law, and algorithmic operations.
These skills will be vital to everyone fighting for their slice of an economy that by 2050 has become a mixture of algorithmic process and human pruning, where efficiency is always being checked by human oversight – and where algorithms constantly seek advantage over one another, tending toward a range of behaviours that might only be classified as illegal long after they’ve occurred.
Greed is a human bias, and that bias will also end up in our algorithms.
And so we come full circle to the world of 2050.
That world doesn’t see us liberated from our worst impulses.
If we’re wise, it will be a world in which we are more vigilant, and use new tools to help support that vigilance.
Yet we cannot automate our wisdom. We can only apply it where we see it’s needed.
That’s certainly our most important role in the world of 2050.
And it’s a role that we can begin to take up from today, as we mentor and teach, reminding those coming after us that we need to take great care where our decisions touch others.
If we do that wisely, then the world of 2050 can be a more human world.