
AI Doesn’t Exist… Well, Not How You Think

Artificial Intelligence always seems to be a buzzword in the business world. How many times have you heard a company say, “We have an AI platform” or “We have a bot”? But what does it mean? And does it even make sense?

The concept of Artificial Intelligence (AI) is not a new one. Back in 1997, the IBM supercomputer Deep Blue managed to defeat the legendary Garry Kasparov in a game of chess – the first time a reigning world champion had been beaten by a computer under tournament conditions. But did this really qualify as a victory for AI, at least ‘AI’ as we’ve come to know it? Deep Blue did manage to outmanoeuvre Kasparov, but it could only do so because it was programmed by a specialised team of developers to play chess. It had learnt all the rules and could compute its moves and countermoves.

Even before Deep Blue, people were trying to describe the process of human thinking as the mechanical manipulation of symbols. After Alan Turing developed the theory of computation, AI was a hot topic from the 1950s through to the 1970s. Just think of the Johns Hopkins Beast of the 1960s, an early cybernetic robot that could survive on its own by automatically seeking out power outlets and recharging itself.

Since then there have been many projects over the decades that made giant leaps in the quest to create a true AI machine. Fast-forward to the present, and we have “AI” almost everywhere…or do we?

What puts the ‘intelligence’ in Artificial Intelligence?

Renowned natural language processing and computing science expert Dr Erik Cambria once said there is no technology today that is even remotely as intelligent as the most stupid human being on Earth.

The term ‘Artificial Intelligence’ originally referred to the imitation of human intelligence: having machines or systems make decisions based on right versus wrong, logical versus illogical, moral versus immoral or even ‘worth it’ versus ‘not worth it’, in the same way a human brain would.

When you look at this definition, it’s easy to understand why people have misconceptions about what AI actually is. They see a machine or programme (seemingly) making decisions on its own and assume it is AI in its purest form. A car driving itself. A virtual assistant that can interpret and execute your requests. Targeted Facebook ads about products you mentioned in a WhatsApp conversation. It all seems so automated and naturally intelligent.

It’s important to understand that AI doesn’t, in fact, think for itself. AI runs on complex decision trees: it interprets a request and acts according to a set of rules stored in its memory (pretty much how humans do). It can consequently learn and make decisions, but only after we tell it what to think and what to look for.
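To make that concrete, here is a minimal, purely illustrative sketch of what “acting according to a set of rules in its memory” looks like. The phrases, actions and fallback message are all made up; the point is simply that if no rule matches, the bot has nothing to fall back on.

```python
# A purely illustrative rule-driven "assistant": phrases and actions are invented.
RULES = {
    "turn on the lights": "lights_on",
    "what's the weather": "fetch_weather",
    "play some music": "start_playlist",
}

def handle(request: str) -> str:
    """Match the request against hand-written rules; fall back when nothing matches."""
    for phrase, action in RULES.items():
        if phrase in request.lower():
            return f"executing: {action}"
    # No rule matched: the bot cannot 'figure it out' on its own.
    return "Sorry, I don't know how to do that."

print(handle("Please turn on the lights"))  # executing: lights_on
print(handle("Write me a sonnet"))          # Sorry, I don't know how to do that.
```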

This is why many attempts to build AI in the past have been halted when investors became disillusioned with the results. They believed AI should be able to think, create and learn on its own. If you were to pose that challenge to a development team, you could potentially get a machine that would learn and seem to think on its own. The reality, however, is that it still needs to be programmed to do so, and the results would often fall short of what a person might expect.

AI follows a set of orders. Even programmers using machine learning tell the computer what to make of data and how to learn from it. Humans, on the other hand, can create new things: we have emotion, and we can take complex ideas and combine them in new ways that produce new ideas. We’re intelligent. The AI we build operates within a certain set of boundaries, and typically does only what we program it to do, even if that function is to learn by running experiments on sets of data.
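The same applies to machine learning: a human still frames the problem. The hedged sketch below uses scikit-learn’s decision tree classifier with invented features and labels, simply to show that the features, the labels and the algorithm’s limits are all human choices.

```python
# Even machine learning is framed by humans: we choose the features, the labels,
# the algorithm and its limits. The data here is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Hand-chosen features: [hour_of_day, outdoor_temp_celsius]
X = [[7, 5], [13, 28], [20, 18], [23, 2]]
# Hand-supplied labels: 1 = switch the heating on, 0 = leave it off
y = [1, 0, 0, 1]

# Hand-chosen algorithm and boundary: a decision tree of limited depth
model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The model "learns", but only within the frame we defined for it
print(model.predict([[22, 3]]))  # e.g. [1] -> heating on
```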

So where does the ‘intelligence’ part really come into play with AI?

Artificial intelligence versus complex data processing

As I mentioned earlier, people often confuse the complex data processing of machines with the intricate cognitive behaviour of the human brain.

Let’s look at practical examples. You have a bot that controls every appliance in your home, and you instruct it to keep the indoor temperature above 20 degrees. To maintain the temperature, the bot could switch on every device in the house for a certain amount of time to see which one affected the temperature in the desired way. After some time (and lots of fiddling with the dishwasher, probably) it would have learnt that the air conditioner is the appropriate appliance for controlling the temperature, and stored that information for future reference.
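As a rough illustration, the bot’s ‘learning’ could be as simple as the trial-and-error loop below. The appliances and their effects are simulated, and every name and number is invented so the example can run on its own.

```python
import random

# A naive trial-and-error sketch; appliances and their effects are simulated.
APPLIANCES = ["dishwasher", "kettle", "television", "air_conditioner"]
EFFECT = {"dishwasher": 0.2, "kettle": 0.1, "television": 0.0, "air_conditioner": 3.5}

def read_temperature(running):
    """Simulated indoor temperature: only some appliances move it noticeably."""
    return 20.0 + sum(EFFECT[a] for a in running) + random.uniform(-0.05, 0.05)

def find_temperature_controller():
    """'Learn' which appliance affects the temperature most by switching each on in turn."""
    baseline = read_temperature(running=[])
    effects = {a: abs(read_temperature(running=[a]) - baseline) for a in APPLIANCES}
    return max(effects, key=effects.get)  # stored for future reference, as in the scenario above

print(find_temperature_controller())  # almost certainly "air_conditioner"
```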

Ask Google Assistant on your phone what your favourite song is. It won’t know, but it will ask you for the answer so that next time it does. Ask Siri to beatbox and it might not be able to respond, yet Google Assistant can. Does that make Google Assistant ‘smarter’ than Siri?

No, it simply means it was programmed more comprehensively to handle more varied requests. It means Google probably has more developers dedicated to the assistant than Apple has for Siri, or at least that Google developers found it important to have a beatboxing assistant whereas the Apple developers did not.

These machines and programs didn’t perform their acts because they ‘knew’ what to do, in the same way a human would figure things out. They used complex data processing – programmed by humans – to produce the desired result.

How does AI fit into the current business environment?

If AI isn’t as ‘AI’ as we thought, does it still have a place in the business world? Of course! Today we have a unique opportunity to build smart systems far more easily than in the past. We can use powerful speech-to-text engines and language analysis to understand text, synthesise speech much more eloquently than before, and have bots that interact with people on multiple channels, from WhatsApp to Skype and even regular phone calls.

Map these inputs and outputs to a set of business functions in your organisation, and you effectively have AI that takes care of a part of your business. Just think of a bot that handles sales queries. The bot can interact with a customer and receive a request to generate a quote.

When the customer adds a certain item onto the quote, the bot can check that item against previous orders, determine that it always accompanies other products and suggest missing items that could be added. It can even check the quantity of a particular product and then, by interrogating the sales data, suggest related quantities of other products on the same order.
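Under the hood, a suggestion like that can come from nothing more exotic than counting which products co-occur in past orders. The sketch below uses an invented order history and a deliberately simple co-occurrence count; treat it purely as an illustration of the idea, not as how any particular bot is built.

```python
from collections import Counter

# Illustrative only: a made-up order history and a simple co-occurrence count.
past_orders = [
    {"printer", "toner", "paper"},
    {"printer", "toner"},
    {"laptop", "docking_station"},
    {"printer", "paper"},
]

def suggest_additions(quote_items, history, min_support=2):
    """Suggest products that frequently appear in past orders alongside the quoted items."""
    together = Counter()
    for order in history:
        if quote_items & order:                # this order shares something with the quote
            for other in order - quote_items:  # products that came along with it
                together[other] += 1
    return [item for item, count in together.most_common() if count >= min_support]

print(suggest_additions({"printer"}, past_orders))  # e.g. ['toner', 'paper']
```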

Does this seem like a bot that thinks, acts and makes assumptions like a human? Sure. Just remember it won’t think outside the box and won’t deviate from its rules. You might think it’s not true ‘AI’ because it can only do what you’ve told it to do, but every platform we work with has, at some stage, been programmed what to do – even if that programming forces it to learn new things and adapt in often unexpected ways.

The future of AI in call centres

In the end, AI is only as smart as it’s been programmed to be, and only as good as the platform it is based on. The difference today is that we have so much more computing power, and different platforms to create more engaging and smarter solutions than ever before. There is perhaps no better industry to explore the implementation of AI than the world of telephony and call centres.

The opportunities bots bring to a call centre’s productivity and employee empowerment (yes, robots and humans can work together) are endless. Not to mention the powerful potential of business reporting when paired with AI.

The Fleek team has spent quite a bit of time exploring these avenues and we have some great initiatives coming your way in 2019, so watch this space.
