The AI fight is escalating: This is the IT giants' next move

Google, IBM, Microsoft and Amazon Web Services are all piling artificial intelligence capabilities onto their software stacks

Artificial intelligence is where the competition is in IT, with Microsoft and Google both parading powerful, always-available AI tools for the enterprise at their respective developer conferences, Build and I/O, in May. 

It's not just about work: AI software can now play chess, go, and some retro video games better than any human -- and even drive a car better than many of us. These superhuman performances, albeit in narrow fields, are all possible thanks to the application of decades of AI research -- research that is increasingly, as at Build and I/O, making it out of the lab and into the real world.

Meanwhile, the AI-powered voice technologies behind virtual assistants like Apple's Siri, Microsoft's Cortana, Amazon.com's Alexa and Samsung Electronics' Bixby may offer less-than-superhuman performance, but they also require vastly less power than a supercomputer to run.

Businesses can dabble on the edges of these, for example developing Alexa "skills" that allow Amazon Echo owners to interact with a company without having to dial its call center, or jump right in, using the various cloud-based speech recognition and text-to-speech "-as-a-service" offerings to develop full-fledged automated call centers of their own.
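To give a flavor of what "dabbling" involves, here is a minimal sketch of an Alexa skill's backend, written as an AWS Lambda handler in Python. The intent name and the spoken replies are hypothetical examples, not anything Amazon ships; only the overall request and response shape follows the Alexa Skills Kit JSON format.

```python
# Entry point AWS Lambda invokes when the skill receives a request.
def lambda_handler(event, context):
    request = event["request"]

    # "CheckOrderStatus" is a hypothetical intent a business might define.
    if request["type"] == "IntentRequest" and request["intent"]["name"] == "CheckOrderStatus":
        speech = "Your order shipped this morning and should arrive on Thursday."
    else:
        speech = "Welcome. You can ask me about the status of your order."

    # The response shape follows the Alexa Skills Kit JSON format.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```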

Some of the earliest work on AI sought to explicitly model human knowledge of the world in a form that computers could process and reason from, if not actually understand. That led to the commercialization of the first text-based "expert systems." Those early systems didn't come by their expertise the way humans do, learning from experience over the course of their careers. Instead, the experience was spoon-fed to them following a laborious process of humans interviewing other humans, and distilling their implicit knowledge into explicit rules.

The biggest advances in AI research in recent years, and the ones most applicable in the enterprise, have involved machines learning from experience to gain their knowledge and understanding. Improvements in machine learning led directly to the dramatic 4-1 defeat last year of 18-time world go champion Lee Sedol by AlphaGo, a program developed by Google's DeepMind subsidiary.

Machine learning began with the creation of neural networks -- computational models that mimic the way nerve cells, or neurons, transmit information around our bodies. Our brains contain around 100 billion neurons, each connected to about 1,000 others. An artificial neural network models a collection of these cells, each with their own inputs (incoming data) and outputs (the results of simple calculations on that data).

The neurons are organized into layers, each layer taking input from the previous one and passing its output to the next. When the network correctly solves a problem, additional weight is given to the outputs of the neurons that correctly predicted the answer, and so the network learns.
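The idea is easier to see in code. The sketch below assumes only NumPy and uses a toy dataset (XOR) that is purely illustrative: it builds a two-layer network in which each layer feeds the next, and the connection weights are nudged after every pass in the direction that reduces the error.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "experience": four examples of the XOR problem (inputs and desired outputs).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers of connections: input -> hidden (4 neurons) -> output (1 neuron).
W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))
lr = 1.0

for step in range(10000):
    # Forward pass: each layer takes input from the previous one.
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)

    # Backward pass: weights that contributed to the error are adjusted,
    # which is how the network "learns" from repeated examples.
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0, keepdims=True)

print(np.round(output, 2))  # typically converges toward [0, 1, 1, 0]
```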

Networks with many layers -- so-called deep neural networks -- can be more accurate. They are also computationally more expensive -- prohibitively so, in their early days. They were saved from being a research curiosity by the parallel processing capacity of the GPU, previously used mainly to display games, not to play them.

Transistors are doing it for themselves

Those advances are giving businesses new ways to deal with their big data problems -- but creating the necessary technology is, to some extent, a big data problem itself.

"One of our strengths is that we can learn from just a few examples," Google engineering director Ray Kurzweil told attendees at the Cebit Global Conference in March.

"If your significant other or your boss tells you something once or twice, you might actually learn from that, so that's a strength of human intelligence," he said.

But in the field of deep learning there's a saying that "life begins at a billion examples," he said.

In other words, machine learning technologies such as deep neural networks need to observe a task a billion times to learn to do it better than a human.

Finding a billion examples of anything is a problem in itself: AlphaGo's developers scoured the Internet for records of thousands of go games played by human players to provide the initial training for their 13-layer neural network, but as it became stronger, resorted to making it play against other versions of itself to generate new game data.

AlphaGo drew on two types of machine learning to win its match. The human games were analyzed using supervised learning, in which the input data is tagged with the response the neural network should learn -- in this case, that playing these moves leads to victory.
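In code, supervised learning amounts to handing a model inputs together with their tags and letting it fit. The sketch below uses scikit-learn and its bundled Iris dataset purely as an illustration; AlphaGo's training pipeline was vastly larger, but the principle is the same.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Tagged data: each flower measurement comes with the species it belongs to.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network learns the mapping from inputs to tags.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

print("accuracy on unseen examples:", model.score(X_test, y_test))
```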

When AlphaGo was left to play against itself, another technique, called reinforcement learning, was used. The goal of winning the match was still explicit, but there was no input data: AlphaGo was left to generate and evaluate that for itself, using a second neural network whose neurons began with the same weightings as those of the supervised learning network but were gradually modified as it discovered superhuman strategies.
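Reinforcement learning is easier to picture on a toy problem. The sketch below is tabular Q-learning on a five-cell corridor, a far simpler technique than AlphaGo's self-play networks but one with the same ingredients: no labeled data, just a reward for reaching the goal and repeated trial and error.

```python
import random

# A five-cell corridor: start at cell 0, the reward sits at cell 4.
N_STATES, GOAL = 5, 4
ACTIONS = [0, 1]                       # 0 = step left, 1 = step right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Mostly exploit the best-known move, occasionally explore at random.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = Q[state].index(max(Q[state]))
        nxt = state + 1 if action == 1 else max(state - 1, 0)
        reward = 1.0 if nxt == GOAL else 0.0
        # Nudge the value estimate toward the reward plus discounted future value.
        Q[state][action] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][action])
        state = nxt

# The learned policy: step right (action 1) in every non-goal cell.
print([q.index(max(q)) for q in Q[:GOAL]])
```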

A third technique, unsupervised learning, is useful in business but less useful in games. In this mode, the neural network is given no information about its goal but is left to explore a data set on its own, grouping the data into categories and identifying links between them. Machine learning used this way becomes just another analytics tool: It may identify that a game can be played or end in several ways, but it leaves the judgment of what to do about it to a human supervisor.
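A minimal example of unsupervised learning is clustering. Below, k-means (via scikit-learn, again purely for illustration) is given unlabeled points and asked to group them into three clusters; deciding what those clusters mean is left to the analyst.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Unlabeled data: 300 points with no tags attached.
X, _ = make_blobs(n_samples=300, centers=3, random_state=0)

# The algorithm is told only how many groups to look for, not what they mean.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(model.labels_[:10])       # which cluster each of the first points fell into
print(model.cluster_centers_)   # where each discovered group is centered
```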

There are plenty of companies, big and small, offering some of the building blocks of AI for use in enterprise applications and services. The smaller companies often focus on specific tasks or industries; the bigger ones on the big picture, and tools that can be used for general applications.

Thanks in large part to the barrage of publicity surrounding its Watson offering, IBM is one of the first vendors of AI to spring to mind -- although it prefers the term "cognitive computing."

The Watson range includes tools for creating chat bots, discovering patterns and structure in textual data, and extracting knowledge from unstructured text. IBM has also trained some of its Watson services with industry-specific information, tailoring the offering for use in health care, education, financial services, commerce, marketing and supply-chain operations.

IBM and its partners can help integrate these with existing business processes, or developers can dig in for themselves, as most of the tools are also available as APIs on IBM's Bluemix cloud services portal.

Cognitive is Microsoft's preferred term too. Under the Microsoft Cognitive Services brand, it offers developers access to APIs for incorporating machine learning technologies into their own applications. These include tools for converting speech to text and understanding its intent; detecting and correcting spelling mistakes in a text; translating speech and text; and exploring relationships between academic papers, their authors and the journals that publish them. There's also a service for building chat bots and connecting them to Slack, Twitter, Office 365 mail and other services, called Bot Framework. Microsoft also offers an open-source toolkit businesses can download to train their deep learning systems using their own massive datasets.
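Because the Cognitive Services are exposed as REST APIs, calling one from an application is a plain HTTP request. The sketch below posts a piece of text to a sentiment endpoint using Python's requests library; the region, path and API version shown are illustrative and change between releases, and YOUR_KEY stands in for a real subscription key.

```python
import requests

# Illustrative endpoint: the region, path and API version vary between releases.
endpoint = "https://westus.api.cognitive.microsoft.com/text/analytics/v2.1/sentiment"
headers = {
    "Ocp-Apim-Subscription-Key": "YOUR_KEY",   # placeholder for a real subscription key
    "Content-Type": "application/json",
}
body = {"documents": [
    {"id": "1", "language": "en", "text": "The new release fixed our biggest complaint."}
]}

response = requests.post(endpoint, headers=headers, json=body)
print(response.json())   # per-document sentiment scores, in the service's own schema
```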

At Build in early May, it offered production versions of services previously only available in preview, including a face-tagging API and an automated Content Moderator that can approve or block text, images and videos, forwarding difficult cases to humans for review. There's also a new custom image recognition service that businesses can train to recognize objects of interest to them, such as parts used in a factory. 

Google offers many of the machine learning technologies it uses internally as part of its Google Cloud Platform. The systems are available either already trained for particular tasks or as blank slates that can be trained on your data, and include image, text and video analysis, speech recognition and translation. There's also a natural language processing tool for extracting sentiment and meaning from text that can be used in chat bots and call centers. There's even a super-focused job search tool that attempts to match jobseekers with vacancies based on their location, seniority and skills.
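Developers typically reach these services through Google's client libraries. The sketch below scores the sentiment of a sentence with the google-cloud-language package, assuming it is installed and authenticated; the class and method names follow recent versions of that library rather than anything specified here.

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()  # relies on standard GCP credentials

document = language_v1.Document(
    content="The support bot resolved my issue in minutes.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment

print(sentiment.score, sentiment.magnitude)   # polarity and strength of the feeling
```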

As for Amazon Web Services, it allows businesses to create new "skills" or voice-controlled apps for Alexa, the digital assistant embedded in Amazon Echo devices, and offers many of the technologies behind Alexa "as a service."
