Ken’s passion is being a technology innovator who cultivates and nurtures new ways to do business. After 23 years of solid IT experience, he now leads the Innovation Foundry team. The team explores how artificial intelligence - with a specific focus on chatbots and robotic process automation - can provide scalable, developer-friendly automation solutions that help enterprises improve their operational efficiency and customer support.
Two fancy words dominate today’s tech business landscape: artificial intelligence.
The term was coined in 1955 by John McCarthy, a math professor at Dartmouth. Since then, the fledgling field has seen more than its share of fantastic claims and promises.
Advances in compute power (thanks to cloud and distributed computing), scalability and memory architecture are all improving AI’s prospects. And as successful use cases proliferate, companies are now fighting to jump on the bandwagon.
But before jumping, companies need to understand what AI is. Below are some foundational terms to keep in mind as you begin your journey.
Not all AI is created equal. Generally, when we say AI, we are loosely referring to one of three kinds: narrow AI, artificial general intelligence (AGI) and self-aware AI, also known as the singularity.
Most AI implementations in today’s world are narrow AI. IBM’s Deep Blue, which sparked the meteoric rise in AI interest when it beat Garry Kasparov in 1997, is one good example. Even the more advanced Google AlphaGo, which uses a neural network (mimicking how our brains work), is only very good at its trained task.
Narrow AI often struggles with scenarios that vary widely, like a driverless car navigating through busy traffic - a reason why it is also called weak AI. But a narrowly defined task (e.g. winning at chess) allows it to show its true potential.
Narrow AI also needs humans. For example, a chatbot, a ubiquitous case of narrow AI, can be trained to answer key questions over time. But it can never be left totally alone to handle all questions, because human behavior is unpredictable and wide-ranging. It still needs human intervention, constant training, optimization and oversight.
Narrow AI’s huge advantage is predictability. In a similar scenario, you can trust the AI to decide the best possible way to accomplish a task every time, unlike a human where emotions, motivation, and state of mind (like drunkenness) may impact decision making. Great for handling customer enquiries in a hotline; not that great when you want to engage a human in a casual conversation over a wide variety of topics.
Artificial general intelligence (AGI)
AGI goes a lot further. It comes the closest to how humans think. Essentially, it allows a machine to apply intelligence to solve any problem - not just a specific task.
AGI is still a laboratory experiment. A popular local example is Sophia, created by Hanson Robotics, which tries to engage humans on a wide variety of topics. Smart city developers see massive potential in AGI. It is especially suited to life-threatening tasks, like replacing human troops on battlefields or firefighters entering a burning building.
AGI is also taking small steps into the business environment. One example is the use of facial recognition technology to identify customers’ emotions. Just as a human can tell a genuine smile from a sarcastic one, the system can tailor its advice and recommendations accordingly. However, AGI can also shift labor trends, as it can take over human tasks permanently. Smart factories that not only assemble goods but also use advanced robotics to operate autonomously are small steps toward AGI.
It is a Hollywood superstar; it also does not exist. The singularity is sometimes described as AI self-awareness and at other times as superintelligence. It is the point when AI becomes smarter than humans.
An interesting aspect about the singularity is that it will look at problems differently from us. And because it is connected to data streams and other AI minds, it can be very efficient and fast acting.
The singularity can impact humankind immensely, but not in the way movies like Terminator, I, Robot and 2001: A Space Odyssey predict. The biggest change is that humankind will no longer be the only intelligence on earth. This scenario raises enormous ethical questions, like slavery and acceptance, and some parts of the human population - and even governments - may not feel comfortable with an intelligence that thinks differently and can be beyond their control.
How far are we from achieving the singularity? Difficult to say. According to Ray Kurzweil, a futurist and Google’s director of engineering, the singularity will happen in 2045. But when it does awaken, it will not be a cataclysmic event.
One key feature that separates AI from automation is its ability to learn. We call this machine learning. Coined by Arthur Samuel in 1959, the term describes the ability of an algorithm to improve its performance and predictions using data, without being explicitly programmed to do so.
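A toy sketch can make the distinction concrete. Instead of hard-coding the rule "multiply by two", the short program below (an illustrative example; the data points are made up) lets a single parameter adjust itself from example data:

```python
# A minimal sketch of "learning from data": fitting y = w * x
# by gradient descent, instead of programming the rule explicitly.
# Toy data roughly following y = 2x.
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]

w = 0.0    # the model parameter; starts out knowing nothing
lr = 0.01  # learning rate: how strongly each error nudges w

for _ in range(1000):
    for x, y in data:
        error = w * x - y     # how wrong the current prediction is
        w -= lr * error * x   # nudge w to reduce the squared error

print(round(w, 1))  # ≈ 2.0: the relationship was learned, not programmed
```

The rule is never written down anywhere; it emerges from the data, which is the essence of Samuel's definition.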
Machine learning allows AI to adapt to different environments, while continually improving itself. It is why autonomous cars are far more adept than automated vehicles in responding to diverse traffic situations.
Why is machine learning a big deal? Two reasons. First, humans understand more than what we are taught - this is called Polanyi’s Paradox. It is why a small child can recognize anger, identify a tree or understand green without rigorous schooling. Previously, automated machines relied on specific human programming and instructions, limited to what programmers could specify through code. With machine learning, AI can understand more than it is taught. It can adapt better, or apply the same lessons to solve other related problems.
Second, an AI is a super learner and has a photographic memory. It can accomplish critical human tasks a lot faster and more effectively. It is the reason why AI is now used to assist in diverse fields, from medicine and law to financial services.
There are various models for teaching AI. Here we look at a few of the more mature and popular types: supervised learning, unsupervised learning and deep learning.
Supervised learning is the most common model. Here an algorithm tries to map a set of inputs to a set of outcomes using data. For example, historical data can be used to train trading bots to predict future market movements, or traffic lights can use car locations and speeds to manage traffic flow. Any scenario where you have a lot of data on behaviors and are trying to predict an outcome is ready for supervised learning.
Unsupervised learning is a lot like how humans learn. We tend to gain knowledge without having a lot of data or having specific labelled data for all objects we see in life. It is why a small toddler can quickly grasp a dog is a four-legged animal that barks or the color of grass is green.
Unsupervised learning develops algorithms using unlabeled data, without prior training. It can perform more complex processes than supervised learning, but the results can also be unpredictable. When unsupervised learning matures, the possibilities are endless: AI could help us discover new patterns - great for identifying threats, financial opportunities, diseases or customer behaviors.
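A classic illustration of pattern discovery without labels is k-means clustering. In the toy sketch below (values invented for illustration), the algorithm is told nothing about the data except to look for two groups, and it finds them on its own:

```python
# Toy unsupervised learning: k-means with k=2 on unlabeled numbers.
# No labels are given; the algorithm discovers the two groups itself.
points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]

centroids = [points[0], points[3]]  # naive initialisation
for _ in range(10):
    clusters = {0: [], 1: []}
    for p in points:
        # assign each point to its nearest centroid
        nearest = min((0, 1), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    # move each centroid to the mean of its cluster
    centroids = [sum(c) / len(c) for c in clusters.values()]

print([round(c, 1) for c in centroids])  # -> [1.0, 9.1]
```

Nothing told the program that the data splits into "small" and "large" values; that structure was discovered, which is exactly what makes unsupervised learning promising for spotting unknown patterns.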
Getting deep on deep learning
Deep Learning is a fast-growing branch of machine learning and often the most talked about. It is also different.
Basic machine learning models require human guidance. When the outcome is inaccurate, the programmer will intervene by fine-tuning the algorithm.
In deep learning, the algorithms determine for themselves whether their outcomes are accurate. They still learn using methods like those described above (e.g. supervised or reinforcement learning). But they use an artificial neural network (ANN) to learn and decide - a structure similar to how our brain neurons connect. For that reason, an ANN is also sometimes called a connectionist system.
So, for example, supervised learning and deep supervised learning take the same approach to learning from available data. But the latter learns without human intervention.
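The building block of an ANN can be sketched in a few lines. The weights and inputs below are arbitrary numbers chosen for illustration; the point is the shape of the computation - weighted connections feeding an activation function, layer by layer:

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: a weighted sum of its inputs,
    squashed through a sigmoid activation function."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))

# A tiny two-layer network: two hidden neurons feed one output neuron.
# All weights are arbitrary; in practice they are learned from data.
hidden = [neuron([0.5, 0.9], [0.8, -0.4], 0.1),
          neuron([0.5, 0.9], [0.3, 0.7], -0.2)]
output = neuron(hidden, [1.2, -0.6], 0.0)
print(round(output, 2))  # a value between 0 and 1
```

"Deep" simply means many such layers stacked together, with training adjusting the weights of every connection.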
Getting deep learning right is not easy. But when you do, the results can be spectacular. It is what allowed Google’s AlphaGo to defeat multiple world champions (incidentally, AlphaGo also uses deep reinforcement learning to learn from previous AlphaGo versions).
Deep Learning is already used to vastly improve computer vision, speech recognition, natural language processing (NLP), machine translation, drug design and more.
There are many misconceptions about AI. The biggest is that AI will eradicate human jobs. Nothing could be further from the truth.
Narrow AI is designed to augment or assist humans - not replace them. Some jobs may be replaced, but many professions will welcome AI. Imagine how, in medical practice, AI can help surgeons and doctors identify cures or improve surgical procedures based on earlier lessons. Or imagine a chatbot answering most of your questions without making you wait for a human agent. In this scenario, call center agents also benefit: they can focus on advanced or complex questions.
Inevitably, some jobs, like drivers and personal assistants, will be eliminated. But new jobs will also be created, especially in managing AI. So the job market for humans will shift - just as it did when automation became ubiquitous - but it will not disappear.