Please start with the original post and then come back to the updates, so you can see how the updates dovetail with the substance of the original.
Update 13/Nov/2018: Here is another piece by Twiggle on Medium that validates the substance of this series.
Update 27/July/2018: I recently came across this book, and its substance resonates very much with this post. There is no better feeling than having your ideas validated by people smarter than you.
There has been a lot of movement around “AI” in the last couple of years, especially in print, on social media and on the bookshelves (you will see why AI is in quotes in a moment). There are plenty of predictions about the future of AI and how it will impact humanity as a whole. You have probably heard at least one of these:
- Jobs will disappear
- State-of-the-art AI is here and it can match human intelligence
But if you had your antennas tuned in, you have probably also heard the counter-arguments to each of these predictions. So they were actually:
- Jobs will disappear, machines will take over [1] vs. Humans cannot be replaced, machines will co-exist with humans [2]
- State-of-the-art AI is here and it can match human intelligence [3] vs. No, today’s AI can’t match human intelligence [4]
AI is a buzzword (at least when this was written)
The AI that we know today is not comparable to human intelligence by any stretch of the imagination. Maybe this analogy will help: if you are served a coffee, you shouldn’t be able to tell whether it has sugar or artificial sweetener in it. That’s the idea of an artificial sweetener: same taste, just different health hazards :-). Now, is that how AI works today? Say you are asked to interact with a voice-controlled assistant or a messenger app that purportedly uses AI; can’t you tell you are dealing with an otherwise-limited machine? The day you can’t tell whether you are dealing with a machine or a mind is the day of true “Artificial Intelligence”; in AI parlance, that is passing the “Turing test”.
Apparently, industry leaders got sick and tired of calling something less than AI “AI”, so for clarity they started calling the AI in circulation today “Narrow AI” and the real thing “Broad AI” or “Artificial General Intelligence” (AGI).
The term AGI was coined by Shane Legg, co-founder of Google DeepMind. Yes, the same company that built AlphaGo, which beat the South Korean Go champion a couple of years ago. AlphaGo uses a combination of cutting-edge deep learning and reinforcement learning. So you don’t have to take my word at face value: if a Google DeepMind co-founder says “we don’t have human-level AI today, it’s just machine learning, and what we need is AGI to match humans”, it probably is true.
For all practical purposes of this post, though, I will use “AI” as it is.
Going back to those AI predictions, there are two camps: one says we are doomed and machines are clearly going to take over, while the other says machines are not even close to human minds. Which one is true? Either way, throwing these predictions around loosely, without clear contextualisation, can rattle even the best minds.
Is the world of AI going to be a utopia or a dystopia? This is a hard question, and so is the answer. Quite frankly, there are no black-and-white answers here, but if we thin-slice through some 400 years of history of how we brought machines into our lives, and discern the patterns, we can get some idea. I felt obligated to untangle this and offer my humble thoughts. I am hoping to get your thoughts along the way, so we can keep a feedback loop going; hence this multi-part series. Consider it a compass rather than a map for reading and reasoning about AI predictions, and consider it my attempt to inject some badly needed optimism.
In this part, I will briefly capture our history with machines in terms of what we wanted from them (versus what actually saw the light of day) over time.
A quick recap
The wheel was invented around 3500 B.C., the earliest counting machines like the abacus were invented some 5,000 years ago, and there were occasional flashes of brilliance wherein people tried to delegate higher-order functions to machines and devices. But it was only in the 1700s, at the beginning of the Industrial Revolution, that we as a civilisation harnessed the power of machines to push the human race forward.
Division of labor
The above figure captures how we brought machines into our lives and how that trend is projected to move forward. The wager was very simple: machines would do all the heavy work efficiently and increase the productivity of the factories, and humans would take care of pretty much everything else, i.e. higher-order functions like calculation, reasoning, judgment, decision-making and human interaction.
Ever since, we have been diligently pushing the slider towards the right. Relinquishing any human function to machines is automation; relinquishing lower-order human functions that are repetitive in nature is “blue-collar” automation, and relinquishing higher-order functions tied to human intelligence, like reasoning and decision-making, is “white-collar” automation. Although AI is very much automation, we tend to associate it with magic. That association earns AI a lot of press and PR, but it is really far from magic. Part of the reason is that we have always had some delusions of grandeur about the slider reaching the far right and machines taking over. Judgment day and Skynet left a permanent impact on a generation. Haven’t there always been sci-fi movies trying to scare us about machines decades before any tangible technology existed? Technologies have always had strange gestation periods: the time between a proof of concept and a technology becoming ready for entrepreneurial prime time is at least a decade, let alone the economics and the market. But without taking any credit away from the progress we have seen in AI, and acknowledging that we have reached an interesting time, let’s ask ourselves: does it even make sense to contemplate pushing the slider all the way to the right, where there is no visible distinction between minds and machines?
Ray Kurzweil, trusted as the AI oracle by Bill Gates and Larry Page, predicted that machines will surpass humans by the year 2029 and that AGI will be mainstream by 2045. He calls it the Singularity in his book The Singularity Is Near. So can we blur the lines between machines and minds? If so, what is the rough timeline? Is Ray right about it? These are the key questions.
To answer this, let’s look at a couple of real-world scenarios:
Common sense and machines
On 16 Dec 2014, a gunman took hostages in a Sydney cafe [5]. Uber raised fares to as much as four times the normal rate when demand shot up during the siege, which left three people dead. Its “surge pricing” algorithm increased fares during the peak as people rushed to leave the area. Uber was sued and ended up paying hefty compensation for being “less sensitive” to the situation.
- Now, had there been a human in the loop, common sense would have prevailed, i.e. owing to the situation, “surge pricing” would not have been applied. But it was a trained model, and it did what it was designed to do. We cannot blame the person who built the model either, because you cannot expect a data scientist building a pricing model to make it amenable to a real-life hostage situation; that is completely out of scope for a pricing model. This is a real-life example of machines operating in a narrow domain, whereas humans can take a broad view of a situation and apply themselves accordingly. You cannot supervise or train a model to act broadly, which in this case would mean listening to all the events in the world and responding.
- Similarly, in the field of natural language processing, say we are training a chatbot to respond to customers. If I type “I am in London”, that implies I am in England; but if I type “I left London”, it doesn’t imply that I left England. How do you train a model that operates across all geographies (i.e. every single thing that qualifies as a location) with this simple common sense?
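A toy sketch of why the London example is hard (hypothetical rules and toy data, not any real chatbot): a naive lookup that maps a mentioned city to its country ignores the verb entirely, so it gets “I left London” wrong.

```python
# A naive "common sense" rule: if a known city is mentioned,
# assume the user is in that city's country. (Toy data only.)
CITY_TO_COUNTRY = {"London": "England", "Paris": "France"}

def naive_country_inference(utterance: str):
    for city, country in CITY_TO_COUNTRY.items():
        if city in utterance:
            return country  # ignores the verb entirely
    return None

print(naive_country_inference("I am in London"))  # England -- correct
print(naive_country_inference("I left London"))   # England -- wrong!
# "I left London" does not imply the speaker left England. To get this
# right for every location, the model would need to understand verbs,
# tense and world geography together -- i.e. common sense.
```

Patching the rule for “left” just moves the problem: “I left London for Manchester” still keeps the speaker in England. Each patch reveals another exception, which is the point of the example.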
That’s my point. Until machines can think like humans, i.e. until AGI becomes mainstream, having a human in the loop serves us well. If somebody tells you that by 2029 AI will have common sense, then applying our past filters, we shouldn’t buy it. If common sense is this hard to build into machines, I would say it is misleading to loosely claim that “jobs are going to disappear”. In fact, it is quite the opposite: unlike globalisation or outsourcing, AI appears to have a lot of potential to create jobs organically. Uber could seriously use a supervisor to validate surge prices; that right there is a job anyone with common sense can do, and this is just one example.
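To make the human-in-the-loop idea concrete, here is a minimal sketch (entirely hypothetical, not Uber’s actual system) of an automated pricing model wrapped in a human review step for unusual spikes:

```python
# Hypothetical human-in-the-loop guard around an automated pricing model.

def model_surge_multiplier(demand: float, supply: float) -> float:
    # Stand-in for a trained pricing model: more demand per driver
    # means a higher multiplier, capped at 4x.
    return min(4.0, max(1.0, demand / max(supply, 1.0)))

def priced_fare(base_fare, demand, supply, human_review=None):
    multiplier = model_surge_multiplier(demand, supply)
    # Route unusual spikes to a human, who can apply the broad context
    # the model lacks (e.g. "demand spiked because of an emergency,
    # don't surge").
    if multiplier > 2.0 and human_review is not None:
        multiplier = human_review(multiplier)
    return base_fare * multiplier

# Normal day: the model decides alone.
print(priced_fare(10.0, 120, 100))                              # 12.0
# During an emergency, a supervisor overrides the spike.
print(priced_fare(10.0, 400, 100, human_review=lambda m: 1.0))  # 10.0
```

The design choice here is the threshold: the model handles the routine cases on its own, and only the out-of-the-ordinary ones, where common sense matters most, reach a person.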
AI today performs far better than it did 50 years ago and delivers on a lot of the older promises, but it applies only in narrow domains. The key reason? Today’s AI is largely supervised, i.e. real-world experiences are hand-fed into an evolving learning model, which gets better at emulating human intelligence in that narrow domain. While this is important, it cannot be the future, simply because:
- Narrowly intelligent systems can miss the most basic common-sense intelligence. How good is intelligence without common sense? Today’s AI is very domain-oriented and doesn’t apply broadly. It is impractical to hand-feed a learning agent into AGI, or broad AI.
- Yann LeCun, VP & Chief AI Scientist at Facebook, beautifully summarised this predicament with an analogy: “if intelligence is a cake, then supervised learning is the icing, reinforcement learning is the cherry on top, and unsupervised learning is the sponge itself. We know how to make the icing and the cherry, but not the sponge”.
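The “narrow domain” limitation above can be shown in a few lines. This is a deliberately tiny sketch (toy data, a keyword counter standing in for a real supervised model): a sentiment scorer fit on restaurant reviews looks smart in its domain and collapses to a blind guess outside it.

```python
# A minimal sketch of "narrow" supervised learning: a keyword-based
# sentiment scorer fit to restaurant reviews only (toy data).
from collections import Counter

train = [
    ("the food was delicious", 1),
    ("delicious pasta great service", 1),
    ("cold food and terrible service", 0),
    ("terrible the soup was cold", 0),
]

# "Training": count which words appear in positive vs negative reviews.
pos, neg = Counter(), Counter()
for text, label in train:
    (pos if label else neg).update(text.split())

def predict(text: str) -> int:
    # Score each word by how often it appeared positive vs negative.
    score = sum(pos[w] - neg[w] for w in text.split())
    return 1 if score > 0 else 0

print(predict("delicious food"))       # 1 -- in-domain, looks smart
print(predict("a gripping thriller"))  # 0 -- out of domain: every word
# is unseen, so the score is zero and the model falls back to a guess.
# It has no notion that "gripping" is praise for a movie; its
# "intelligence" stops exactly at the edge of its training data.
```

Real supervised models are vastly more sophisticated, but the structural point is the same: whatever was not in the hand-fed experience is simply not there.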
When we hear “AI will make jobs disappear”, know that the context is: jobs in a few narrow domains are “at risk”. Oxford University did a study on this; the BBC expanded on it and provides a “Will a robot take your job?” search, where you can look up a specific job type and see how likely it is to be impacted by automation. A better thought experiment would be to stop and ask ourselves: “has automation taken over the world?”, “is everything automatable automated?”. Think about it: Amazon is bringing self-service checkouts to its Amazon Go stores. Is that going to take over the world? Are all checkout cashiers going to be out of jobs? What do you think? Also, looking at the grand scheme of things, the world is moving faster than ever and the landscape is changing at warp speed. Jobs are disappearing in the developed world because they can be done cheaply elsewhere, and clean-energy solutions wiped out old coal jobs; these are not technology implications, they are purely economic ones. We are expected to be nimble to survive this major digital upgrade the world is getting. [Edited to revise]
Common sense and creativity are hard for machines, and that is exactly what Daniel Pink says in his book A Whole New Mind: Why Right-Brainers Will Rule the Future. After being reprimanded by Akash Jaiswal, author of Fluid, that there is no such thing as a “left brain” or “right brain”, I am revising my position. The good news is that if you are “fluid”, i.e. interdisciplinary, it is hard for machines to emulate you.
[Edited to revise]
IMHO, choose a vocation that demands the “right-brain” skills machines are not adept at, or prepare yourself to complement and co-exist with machines, because minds and machines will complement and co-exist for a long time. More on how to de-risk and prepare yourself for this new hybrid age of minds & machines, and the challenges you will face, in the next part: Future of jobs, not so bleak.
Over to you now, what are your thoughts?
1. AI is already well on its way to making “good jobs” obsolete: many paralegals, journalists, office workers, and even computer programmers are poised to be replaced by robots and smart software. (Martin Ford, Rise of the Robots)
2. Right-brain thinking is hard for A.I. to emulate. (Daniel Pink, A Whole New Mind)
3. Deep learning has put AI in the front seat, and AI is finally delivering on some of its 50-year-old promises. (general industry view)
4. ML, as we know it, works only in narrow domains; we need AGI (Artificial General Intelligence) to match human-level intelligence, and supervised learning isn’t the way to get there. (Paraphrasing Shane Legg, co-founder of Google DeepMind, the company that built AlphaGo, which beat humans at Go)