Reading AI predictions – Part 1: Take 'em with a pinch of salt

This is an 8-minute read

Update 27/July/2018: I recently came across this book, and its substance resonates exactly with this post. Trust me, there is no better feeling than having your ideas validated by people smarter than you.

There has been a lot of movement in the “Artificial Intelligence (AI)” space over the last couple of years, especially in print, on social media and on the bookshelves (you will see why AI is in quotes in a moment). There are plenty of predictions about the future of AI and how it will impact humanity as a whole. You have probably heard at least one of these:

  • Jobs will disappear
  • State of the art AI is here and it can match Human Intelligence

But if you had your antennas tuned in, you might also have heard counterarguments to each of these predictions. So they were actually:

  • Jobs will disappear, machines will take over 1 vs. Humans cannot be replaced, machines will co-exist with humans. 2
  • State-of-the-art AI is here and it can match human intelligence 3 vs. We need AGI to match human intelligence. 4

AI is a buzzword (at least today)

The AI we know today is not truly “Artificial Intelligence”. Let me explain with an analogy: if I served you a coffee, you shouldn’t be able to tell whether it has sugar or an artificial sweetener. That is the whole idea of an artificial sweetener: same taste (just different health hazards :-)). Now, is that how AI works today? If I asked you to try an app (analogous to serving you coffee) that purportedly uses AI to solve a problem, couldn’t you tell that you are dealing with an otherwise-limited machine with some gimmicks? The day you cannot tell whether you are dealing with a machine or a mind is the day of true “Artificial Intelligence”; in AI parlance, that is passing the “Turing test”.

Apparently, industry leaders got sick and tired of calling something less than AI “AI”, so for clarity they started calling the AI in circulation today “Narrow AI” and the real thing “Broad AI” or “Artificial General Intelligence” (AGI). The term AGI was coined by Shane Legg, co-founder of Google DeepMind. Yes, the same company that built AlphaGo, which beat the South Korean Go champion a couple of years ago. AlphaGo uses a combination of cutting-edge deep learning and reinforcement learning. So you don’t have to take my word on this: if the Google DeepMind co-founder says “we don’t have human-level AI today, it’s just machine learning, and what we need is AGI to match humans”, you’d better believe him!

But for all practical purposes of this post, I will use AI the way we know it today.


Now, going back to those AI predictions, there are two camps: one says we are doomed and machines are clearly going to take over, while the other says machines are not even close to human minds. Which one is true? Either way, throwing these predictions around loosely, without clear contextualization, can rattle even the best of minds.

Is the world of AI going to be a utopia or a dystopia? This is a hard question, and so is the answer. Quite frankly, there are no black-and-white answers here, but if we thin-slice through some 400 years of history and discern patterns, we can get some idea. I felt obligated to untangle this and offer my humble thoughts, and I am hoping to get your thoughts on this subject along the way so we have a feedback loop going. Hence this multi-part series. Consider it a compass rather than a map on how to read and reason about AI predictions, and consider it my attempt to inject some badly needed optimism.

In this part, I will briefly cover humanity’s history with machines, what we wanted from machines over time (versus what actually saw the light of day), and how to contextualize these AI predictions.

A quick recap of history

The wheel was invented around 3500 B.C., and the earliest counting machines, like the abacus, were invented some 5,000 years ago. There were occasional flashes of brilliance wherein people tried to delegate higher-order functions to machines and devices, but it was only in the 1700s, at the beginning of the industrial revolution, that we as a civilisation harnessed the power of machines to push the human race forward.

Division of labor  

Machines were to do all the heavy physical work efficiently and increase the productivity of the factories, while humans would take care of pretty much everything else, i.e. higher-order functions like calculations, reasoning, judgments, decisions and human interactions.

Ever since, we have been diligently trying to push the slider towards the right. While relinquishing any human function to machines is automation, we tend to associate relinquishing lower-order functions with automation (imagine blue-collar automation). Although AI is very much automation, we tend to associate AI with magic. AI is simply relinquishing higher-order functions like decision-making to machines (with today’s technology, white-collar automation may be possible, but it is far from magic). Due to its association with magic, AI gets a lot of press and PR.

I digress; back to the slider. Interestingly, along the way we also had some delusions of grandeur about the slider reaching the far right and machines taking over (remember Judgement Day and Skynet?). Haven’t there always been sci-fi movies trying to scare us about machines doing things a few decades before any tangible technology existed? Technologies have always had strange gestation periods: the time between a proof of concept and a technology becoming ready for entrepreneurial prime time is at least a decade, let alone the economics and the market. But without taking any credit away from the progress we have seen in AI, and acknowledging that we have reached an interesting time, let’s ask ourselves: does it even make sense to contemplate pushing the slider all the way to the right, where there is no visible distinction between minds and machines?

Ray Kurzweil, trusted as the AI oracle by Bill Gates and Larry Page, predicted that machines will surpass humans by the year 2029 and that AGI will be mainstream by 2045. He calls this moment the Singularity in his book The Singularity Is Near. So, can we blur the lines between machines and minds? If so, what is the rough timeline? Is Ray right about his timelines? These are the key questions.

To answer this, let’s look at a real-world scenario.

Common sense and machines

On 16 Dec 2014, a gunman took hostages in a Sydney cafe. 5 When demand shot up during the siege, which left three people dead, Uber raised fares to as much as four times the normal rate: its “surge pricing” algorithm increased fares during the peak period as people rushed to leave the area.

  • Had there been a human in the loop, common sense would have prevailed, i.e. owing to the situation, surge pricing would not have been applied. But it was a trained model, and it did what it was designed to do. We cannot blame the person who built the model either, because you cannot expect a data scientist building a pricing model to make it aware of a real-life hostage situation; that is completely out of scope for a pricing model. This is a real-life example of machines operating in a narrow domain, whereas humans can take a broad view of a situation and apply themselves accordingly. You cannot supervise or train a model to act broadly, which in this case would mean listening to every event in the world and responding.
  • Similarly, in the field of natural language processing, say we are training a chatbot to respond to customers. If I typed “I am in London”, that implies I am in England; but if I typed “I left London”, it doesn’t imply that I left England. How do you train a model that operates across all geographies (i.e. every single thing that qualifies as a location) with this simple common sense?
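The London example is easy to demonstrate in code. Here is a minimal sketch (the lookup table, function name and rule are all hypothetical, invented just for illustration) of the kind of naive inference a narrow model might encode: it treats every mention of a city as a statement about its country, so it handles “I am in London” correctly but draws the wrong conclusion from “I left London”.

```python
# Naive rule: any statement about a city is rewritten as a statement
# about its country. This works for "I am in X" but fails for "I left X",
# because leaving a city does not mean leaving its country.

CITY_TO_COUNTRY = {"London": "England", "Paris": "France"}  # toy lookup table

def infer_country_fact(sentence: str) -> str:
    for city, country in CITY_TO_COUNTRY.items():
        if city in sentence:
            # Blindly substitute the country for the city. The "common sense"
            # that this is only valid for some verbs is exactly what's missing.
            return sentence.replace(city, country)
    return sentence

print(infer_country_fact("I am in London"))  # "I am in England" -> correct
print(infer_country_fact("I left London"))   # "I left England"  -> wrong!
```

Encoding the exception for “left” is easy for this one verb and this one city; doing it for every verb, idiom and location on Earth is the part that doesn’t scale.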

See my point? So until machines can think like humans, i.e. until AGI becomes mainstream, having a human in the loop will serve us well. If somebody tells you that by 2029 AI will have common sense, applying our past filters, we shouldn’t buy it. And if common sense is so hard to build into machines, I would say it’s misleading to loosely claim “jobs are going to disappear”. Uber could seriously use a supervisor to validate surge prices; that right there is a job a college kid can do, and it is just one example of machines actually creating jobs organically.
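To make the human-in-the-loop idea concrete, here is a minimal sketch (all names and thresholds are hypothetical, not anything Uber actually uses) of what such a supervisor gate could look like: the model proposes a surge multiplier, routine values go through automatically, and anything unusual is held back and queued for a human who can check the news the model cannot see.

```python
from dataclasses import dataclass, field

@dataclass
class SurgeGate:
    """Hypothetical human-in-the-loop gate: auto-apply small surges,
    queue large ones for a human supervisor to approve or veto."""
    auto_approve_limit: float = 1.5          # multipliers up to 1.5x go through
    pending_review: list = field(default_factory=list)

    def propose(self, area: str, multiplier: float) -> float:
        if multiplier <= self.auto_approve_limit:
            return multiplier                # routine surge, no human needed
        # Anything extreme (e.g. 4x during a crisis) waits for a human.
        self.pending_review.append((area, multiplier))
        return 1.0                           # fall back to normal pricing meanwhile

gate = SurgeGate()
print(gate.propose("suburbs", 1.3))  # 1.3 -- applied automatically
print(gate.propose("CBD", 4.0))      # 1.0 -- held back for review
print(gate.pending_review)           # [('CBD', 4.0)]
```

The point of the design is that the machine still does 99% of the work; the human only sees the rare, high-stakes cases where broad context matters.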

Takeaways

AI today performs far better than it did 50 years ago and delivers on a lot of the older promises, but only in narrow domains. The key reason? Today’s AI is largely supervised, i.e. real-world experiences are hand-fed into an evolving learning model, which can then get better at emulating human intelligence in that narrow domain. While this is important, it cannot be the future, simply because:

  • Narrowly intelligent systems can sometimes miss the most basic common-sense intelligence. How good is intelligence without common sense? Today’s AI is very domain-oriented and doesn’t apply broadly, and it is impractical to hand-feed a learning agent all the way to AGI or broad AI.
  • Yann LeCun, VP & Chief AI Scientist at Facebook, beautifully summarised this predicament with an analogy: “If intelligence is a cake, then supervised learning is the icing, reinforcement learning is the cherry on top and unsupervised learning is the cake itself. We know how to make the icing and the cherry, but not the cake.”
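To see how literal “hand-fed” and “narrow” are, here is a toy supervised learner: a hand-rolled nearest-neighbour classifier on made-up fruit measurements (the data and function names are invented for illustration). Everything it will ever know is the handful of labelled examples we feed it, and it has no notion of “I don’t know”, so it confidently answers even for inputs far outside its training domain.

```python
# Toy supervised learning: a 1-nearest-neighbour classifier.
# It only "knows" the (weight_g, diameter_cm) examples we hand-feed it.
training_data = [
    ((150, 7), "apple"),
    ((120, 6), "apple"),
    ((10, 2),  "grape"),
    ((12, 2),  "grape"),
]

def classify(sample):
    # Return the label of the closest training example (squared distance).
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_data, key=lambda ex: dist2(ex[0], sample))[1]

print(classify((130, 6)))    # "apple" -- inside its narrow domain, correct
# Hand it a watermelon (5000 g, 25 cm): it was never shown one, so it
# confidently answers with the nearest thing it has ever seen.
print(classify((5000, 25)))  # "apple" -- confidently wrong
```

Scaling this up with deep networks and millions of examples makes the narrow domain bigger and the answers better, but the basic limitation, that the model only interpolates over what it was hand-fed, stays the same.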

In summary

Systems like Skynet need AGI, so the Sarah Connors of the world can relax for now. On a serious note, when you hear “AI will make jobs disappear”, know that the context is: jobs in a few narrow domains are “at risk”. A better thought experiment is to stop and ask yourself: “Has automation taken over the world?” “Is everything automatable automated?” Think about this: Amazon is bringing self-service checkouts to its Amazon Go stores. Is that going to take over the world? Are all checkout cashiers going to be out of jobs? What do you think?

Also, looking at the grand scheme of things, the world is moving faster than ever and the landscape is changing at warp speed. Jobs are disappearing in the developed world because they can be done more cheaply elsewhere, and clean-energy solutions wiped out old coal jobs; these are not technological implications, they are purely economic ones. We are expected to be nimble to survive this major digital upgrade the world is getting. Here is some good news: common sense and creativity are hard for machines, and that is exactly what Daniel Pink says in his book A Whole New Mind: Why Right-Brainers Will Rule the Future.

IMHO, either choose a vocation that demands the right-brain skills machines are not adept at, or prepare yourself to complement and co-exist with machines, because minds and machines will complement and co-exist for a long time. More on how to de-risk and prepare yourself for this new hybrid age of minds and machines, and the challenges you will face, in the next part: Future of jobs, not so bleak.


Till then stay tuned!

Over to you now, what are your thoughts?


Footnotes

  1. AI is already well on its way to making “good jobs” obsolete: many paralegals, journalists, office workers, and even computer programmers are poised to be replaced by robots and smart software. (Martin Ford, Rise of the Robots)
  2. Right-brain thinking is hard for AI to replicate. (Daniel Pink, A Whole New Mind)
  3. Deep learning has put AI in the front seat and finally, AI is delivering on some of the 50-year-old promises (general Industry view)
  4. ML, as we know it, works only in narrow domains; we need AGI (Artificial General Intelligence) to match human-level intelligence, and supervised learning isn’t the way to get there. (Paraphrasing Shane Legg, co-founder of Google DeepMind, the company that built AlphaGo, which beat humans at Go)
  5. https://www.bbc.com/news/technology-30595406

References

  1. Whiplash
  2. Machine, Platform, Crowd
  3. The Second Machine Age