
AWS re:Invent: Intelligent services, closer clouds

Quantum computing is still way out there, but Amazon Web Services is bringing other computing tasks much closer to home.

A long, long time ago, Amazon Web Services was all about hosting computing workloads in data centers far, far away. At its AWS re:Invent 2019 event, the company acknowledged that computing power can be more useful when it’s closer to home, announcing three new services for reducing latency, including a mini-cloud you can house in your own data center.

The company also released a raft of new AI services, including ways for enterprises to leverage machine learning without prior experience and a service that aims to help customers pick the most appropriate AWS compute services for their needs.

Quantum computing on demand

Among the more intriguing announcements from re:Invent is Amazon Braket, a new service that will soon allow researchers and developers to test quantum computing algorithms on quantum hardware from D-Wave, IonQ and Rigetti in a single place.

Quantum computers may one day solve intractable optimization problems beyond the scope of today’s computers, but the cooling required to bring a quantum processor to its operating temperature of -273°C (-460°F) means the hardware will likely remain beyond the reach of most data centers. That’s why renting time on someone else’s quantum computer may be the best way for academics and enterprises to gain experience with the technology, at least for now.

AWS isn’t the first to offer quantum computing capabilities in the cloud: Microsoft launched Azure Quantum a month ago, and the IBM Q Experience has been online for more than three years. Meanwhile, Atos offers simulators that will run quantum computing algorithms on conventional hardware. Like IBM and Microsoft, AWS is offering more than just access to quantum hardware. It’s teamed up with researchers at the California Institute of Technology (Caltech) and elsewhere to research new quantum computing technologies and has opened a consulting service to help businesses develop quantum computing apps.

When Amazon Braket becomes generally available, AWS will charge by the hour for services including access to its design tools, testing algorithms on simulators, running jobs on a real quantum computer, or training machine-learning algorithms using quantum technologies. Pricing has not yet been announced.
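
To give a flavor of what testing an algorithm on Braket looks like, here is a minimal sketch assuming the Amazon Braket Python SDK: it builds a two-qubit Bell-state circuit and runs it on the local simulator rather than on D-Wave, IonQ or Rigetti hardware. The shot count is arbitrary, and swapping in real hardware would mean supplying a device ARN instead of the simulator.

```python
# A minimal sketch, assuming the Amazon Braket Python SDK (amazon-braket-sdk).
from braket.circuits import Circuit
from braket.devices import LocalSimulator

bell = Circuit().h(0).cnot(0, 1)         # superposition on qubit 0, then entangle with qubit 1
device = LocalSimulator()                # an AwsDevice ARN would target real quantum hardware
task = device.run(bell, shots=1000)
print(task.result().measurement_counts)  # expect roughly half '00' and half '11'
```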

Close-up computing

In its early days, AWS offered U.S. businesses a stark choice: Run their computing jobs in Northern Virginia or Northern California. If they were anywhere else in the world, they would have to put up with latencies of tens or even hundreds of milliseconds as network packets shuttled across or between continents.

As adoption of cloud computing has grown, AWS has added more and more data centers and now boasts 69 Availability Zones around the world. Even so, those facilities may still be too far away for some applications to run smoothly. For those situations, Amazon has introduced three new offerings, each tackling the latency problem in a different way.

AWS Outposts are the most obvious approach: These fully managed compute and storage racks are built with the same hardware Amazon uses in its data centers, run the same workloads as AWS compute and storage instances, and are managed with the same tools and the same APIs. Enterprises can install AWS Outposts on premises, wherever they need computing capacity, building a hybrid cloud by linking them to additional resources in their Amazon Virtual Private Cloud.

Next year, AWS will introduce VMware Cloud versions of the hardware that enterprises can manage through the same VMware APIs and control plane they use for their in-house infrastructure. This kind of dedicated capacity will cost from around $7,000 per month or $225,000 up front for the smallest system, or four times that for a memory-optimized unit designed for running large on-premises databases.

If you operate in an industry where you constantly need to exchange high volumes of data with other companies at latencies under 10 milliseconds — video editing and post-production, for example — then you’re probably already located close to those companies, but you may not be close to an AWS data center. That’s where AWS Local Zones come in: They’re a halfway house between AWS Outposts and regular Availability Zones. They offer a subset of the full suite of AWS services and have high-bandwidth connections to a nearby AWS Region, where workloads they don’t support can run. The first AWS Local Zone is now live in Los Angeles; pricing for basic on-demand compute and storage resources is about 20 percent higher than in the Oregon Region with which it is paired.
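
As a rough illustration of how a Local Zone surfaces to developers, the sketch below uses boto3, the AWS SDK for Python, to list the zones visible to an account and opt in to the Los Angeles zone group; the region and group names are assumptions based on the Los Angeles zone being paired with Oregon.

```python
# A hedged sketch using boto3; zone and group names are illustrative assumptions.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # Local Zones hang off a parent region

# AllAvailabilityZones=True also returns zones the account has not yet opted in to
zones = ec2.describe_availability_zones(AllAvailabilityZones=True)["AvailabilityZones"]
for zone in zones:
    if zone["ZoneType"] == "local-zone":
        print(zone["ZoneName"], zone["GroupName"], zone["OptInStatus"])

# Opt in to the Los Angeles Local Zone group (assumed group name)
ec2.modify_availability_zone_group(GroupName="us-west-2-lax-1", OptInStatus="opted-in")
```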

Whereas Outposts put computing power on your premises, and Local Zones near your partners, the third solution, AWS Wavelength, will put it close to the end user. As the name suggests, there’s a wireless element to the service, which will offer computing and storage capacity on the edge of some operators’ 5G networks. It’s initially targeting mobile applications like game streaming and augmented or virtual reality services for consumers but could also serve more exacting users such as autonomous vehicles and industrial robots. By running workloads in data centers at the edge of mobile operators’ networks, AWS will be able to offer latency of under 10 milliseconds. It expects to offer the service through Verizon’s 5G network in the U.S. some time next year, and then through KDDI in Japan, SK Telecom in South Korea and Vodafone in the U.K. and Germany by the end of 2020.

Learning machines

If you weren’t already convinced that machine learning was the trend of the moment, then you might be soon, as it’s at the heart of several other announcements made at AWS re:Invent.

SageMaker, the company’s service for developing and training machine learning models, gained six new features. SageMaker Studio is an integrated development environment (IDE) for building, debugging and monitoring new machine learning models. SageMaker Notebooks link the company’s elastic compute resources with Jupyter notebooks, the open-source tool widely used for developing and documenting machine learning workflows. SageMaker Experiments is a sort of version-control system for machine learning models, helping developers roll back a training run if it introduces unwanted artifacts or biases into a model. SageMaker Debugger keeps tabs on the learning process and, in an industry accused of producing too many “black boxes,” offers a first step toward making the output of neural networks “explainable,” says Amazon.

SageMaker Model Monitor follows machine learning systems after training, warning when they’re asked to work outside their design parameters, and SageMaker Autopilot analyzes raw data, selects the learning algorithms it considers most appropriate, then trains a set of candidate models with different trade-offs. Users can then pick the one optimized for speed, accuracy, or whatever characteristic matters most to them.
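
For a sense of how little the user has to specify, here is a hedged sketch of kicking off an Autopilot job through boto3; the bucket paths, job name, target column and IAM role are placeholders rather than anything from the article.

```python
# A hedged sketch of starting a SageMaker Autopilot job with boto3; names are placeholders.
import boto3

sm = boto3.client("sagemaker")

sm.create_auto_ml_job(
    AutoMLJobName="churn-autopilot-demo",
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {"S3DataType": "S3Prefix",
                                        "S3Uri": "s3://example-bucket/churn/train/"}},
        "TargetAttributeName": "churned",   # the column Autopilot should learn to predict
    }],
    OutputDataConfig={"S3OutputPath": "s3://example-bucket/churn/autopilot-output/"},
    RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerRole",
)

# Autopilot then explores candidate pipelines; poll the job for its leaderboard of models
print(sm.describe_auto_ml_job(AutoMLJobName="churn-autopilot-demo")["AutoMLJobStatus"])
```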

AWS also unveiled five new machine learning services that enterprises can use to improve everyday tasks without, it says, any experience with machine learning.

With Amazon Kendra, the company thinks it’s cracked enterprise search. The idea is that employees will use natural language to pose questions such as “When does the helpdesk open?” and Kendra will comb through unstructured text from multiple applications, searching documents the employee is authorized to access to find the answer.
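
A minimal sketch of that query flow, using the boto3 Kendra client, might look like the following; the index ID is a placeholder for an index the enterprise has already populated with its documents.

```python
# A minimal sketch of a natural-language Kendra query with boto3; the index ID is a placeholder.
import boto3

kendra = boto3.client("kendra")

response = kendra.query(
    IndexId="00000000-0000-0000-0000-000000000000",
    QueryText="When does the helpdesk open?",
)

# Kendra returns suggested answers alongside the documents they were extracted from
for item in response["ResultItems"]:
    print(item["Type"], item.get("DocumentTitle", {}).get("Text"))
    print(item.get("DocumentExcerpt", {}).get("Text"))
```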

Amazon CodeGuru plugs into source code repositories such as GitHub or CodeCommit, responding to pull requests by evaluating code changes for quality and recommending how to remediate any flaws it finds, just as a human code reviewer would. That may be all too true: Amazon has trained it on reviews of its own codebase, and on the top ten thousand open-source projects on GitHub, so it will have learned from the bad as well as the good. It can also profile running applications to identify bottlenecks in code.

Businesses worried about fraudulent online orders or dodgy new accounts can pass details to Amazon Fraud Detector for an instant opinion on whether they’re safe or need further review. The detector is based on models developed internally by Amazon, and businesses can customize it with lists of their customers’ email and IP addresses.
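
The sketch below shows roughly what asking Fraud Detector for that opinion looks like with boto3; the detector name, event type, entity and variables are all assumed placeholders and would correspond to a detector the business has already configured and trained.

```python
# A hedged sketch of scoring an online order with Amazon Fraud Detector via boto3;
# detector, event type, entity and variable names are illustrative assumptions.
import boto3
from datetime import datetime, timezone

fd = boto3.client("frauddetector")

prediction = fd.get_event_prediction(
    detectorId="new-order-detector",
    eventId="order-12345",
    eventTypeName="online_order",
    eventTimestamp=datetime.now(timezone.utc).isoformat(),
    entities=[{"entityType": "customer", "entityId": "cust-98765"}],
    eventVariables={"email_address": "jane@example.com", "ip_address": "203.0.113.10"},
)

# Each rule in the detector reports the outcomes it matched, e.g. approve or send to review
for rule in prediction["ruleResults"]:
    print(rule["ruleId"], rule["outcomes"])
```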

We’re probably all familiar with at least one Amazon speech-to-text application, Alexa, which copes adequately with everyday speech. While the consequences if Alexa mishears something are limited, the risks are different in the doctor’s office, so the company has developed Amazon Transcribe Medical to cope with the specialist vocabulary and integrate with voice-enabled applications.
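
For batch workloads, starting a Transcribe Medical job from boto3 is a short call; in this hedged sketch the job name, audio file and output bucket are placeholders.

```python
# A minimal sketch of a batch Amazon Transcribe Medical job with boto3; names are placeholders.
import boto3

transcribe = boto3.client("transcribe")

transcribe.start_medical_transcription_job(
    MedicalTranscriptionJobName="consultation-2019-12-05",
    LanguageCode="en-US",                # the service launched with US English support
    Specialty="PRIMARYCARE",
    Type="CONVERSATION",                 # doctor-patient dialogue rather than dictation
    Media={"MediaFileUri": "s3://example-bucket/audio/consultation.wav"},
    OutputBucketName="example-transcripts-bucket",
)

job = transcribe.get_medical_transcription_job(
    MedicalTranscriptionJobName="consultation-2019-12-05")
print(job["MedicalTranscriptionJob"]["TranscriptionJobStatus"])
```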

Like Soylent Green, the key part of the next service, Amazon Augmented Artificial Intelligence (A2I), is people. Machine learning models don’t always produce clear-cut answers to challenges such as image recognition or content moderation, which has meant developers writing their own code to route lower-confidence responses to a human for review. With A2I, that process can be automated, with enterprises choosing whether to send queries to Amazon’s Mechanical Turk service or to their own internal reviewers.
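
In practice the hand-off looks something like the hedged sketch below, using the boto3 runtime client for A2I; the flow definition ARN, confidence threshold and payload fields are illustrative assumptions.

```python
# A hedged sketch of routing a low-confidence prediction to human reviewers via Amazon A2I.
import boto3
import json

a2i = boto3.client("sagemaker-a2i-runtime")

CONFIDENCE_THRESHOLD = 0.80
model_output = {"image": "s3://example-bucket/uploads/photo-123.jpg",
                "label": "swimwear", "confidence": 0.62}

# Only predictions below the confidence threshold get sent to a human review workflow
if model_output["confidence"] < CONFIDENCE_THRESHOLD:
    a2i.start_human_loop(
        HumanLoopName="moderation-photo-123",
        FlowDefinitionArn="arn:aws:sagemaker:us-east-1:123456789012:flow-definition/content-moderation",
        HumanLoopInput={"InputContent": json.dumps(model_output)},
    )
```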

If you count selecting AWS compute resources as an everyday task, then there’s one more new machine learning service: AWS Compute Optimizer. It looks at your account’s resource usage history and recommends the mix of Elastic Compute Cloud instances it considers the best fit for your application. It’s an extension of the Cost Explorer Rightsizing Recommendations tool that Amazon released earlier this year. Where that could only suggest downsizing underutilized instances, the new tool can also encourage you to spend more for better performance, but the AI will surely have only your interests at heart.
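
Pulling those recommendations programmatically is a single boto3 call once the account has opted in to the service; a minimal sketch, with the printed fields chosen for illustration:

```python
# A minimal sketch of fetching EC2 rightsizing advice from AWS Compute Optimizer with boto3,
# assuming the account has already opted in to the service.
import boto3

co = boto3.client("compute-optimizer")

recs = co.get_ec2_instance_recommendations()
for rec in recs["instanceRecommendations"]:
    finding = rec["finding"]                               # e.g. OVER_PROVISIONED or UNDER_PROVISIONED
    suggested = rec["recommendationOptions"][0]["instanceType"]
    print(f"{rec['instanceArn']}: {finding}; "
          f"currently {rec['currentInstanceType']}, consider {suggested}")
```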