Published on July 26th, 2018 | by Emergent Enterprise
Google Unveils Tiny New AI Chips for On-device Machine Learning
The hardware is designed for enterprise applications, like automating quality control checks in a factory
Two years ago, Google unveiled its Tensor Processing Units, or TPUs: specialized chips that live in the company’s data centers and make light work of AI tasks. Now the company is moving its AI expertise down from the cloud, and has taken the wraps off its new Edge TPU, a tiny AI accelerator that will carry out machine learning jobs in IoT devices.
The Edge TPU is designed to do what’s known as “inference.” This is the part of machine learning where an algorithm actually carries out the task it was trained to do: recognizing an object in a picture, for example. Google’s server-based TPUs are optimized for the training part of this process, while these new Edge TPUs will handle the inference.
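The training/inference split can be sketched in a few lines of Python. In this toy illustration (the classes and weights are invented, standing in for parameters a real training run would produce), inference is just a forward pass through fixed, already-trained parameters — nothing is learned or updated:

```python
# Toy "trained" linear classifier. In practice these weights would come
# out of a training run on cloud hardware (e.g. Google's server-side
# TPUs); here they are hand-set for illustration.
WEIGHTS = {
    "cat": [0.9, -0.2, 0.1],
    "dog": [-0.3, 0.8, 0.2],
}

def infer(features):
    """Inference: score the input against fixed weights and return the
    best-matching label. The weights are only read, never updated."""
    scores = {
        label: sum(w * x for w, x in zip(ws, features))
        for label, ws in WEIGHTS.items()
    }
    return max(scores, key=scores.get)

print(infer([1.0, 0.0, 0.5]))  # features that line up with the "cat" weights
```

Because inference is this kind of read-only arithmetic, it can be pushed onto a small, power-efficient accelerator like the Edge TPU, while the heavyweight training stays in the data center.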
These new chips are destined for enterprise jobs, not your next smartphone. That means tasks like automating quality control checks in factories. Doing this sort of job on-device has a number of advantages over using hardware that has to send data over the internet for analysis. On-device machine learning is generally more secure, suffers less downtime, and delivers faster results. That’s the sales pitch, anyway.
The Edge TPU is the little brother of the regular Tensor Processing Unit, which Google uses to power its own AI, and which is available for other customers to use via Google Cloud.
Google isn’t the only company designing chips for this sort of on-device AI task, though. ARM, Qualcomm, MediaTek, and others all make their own AI accelerators, while GPUs made by Nvidia famously dominate the market for training algorithms.
However, what Google has that its rivals don’t is control of the whole AI stack. A customer can store their data on Google Cloud, train their algorithms using TPUs, and then carry out on-device inference using the new Edge TPUs. And, more than likely, they’ll be building their machine learning software with TensorFlow, the coding framework created and maintained by Google.
This sort of vertical integration has obvious benefits. Google can ensure that all these different parts talk to one another as efficiently and smoothly as possible, making it easier for customers to play (and stay) in the company’s ecosystem.
Google Cloud’s vice president of IoT, Injong Rhee, described the new hardware as a “purpose-built ASIC chip designed to run TensorFlow Lite ML models at the edge” in a blog post. Said Rhee: “Edge TPUs are designed to complement our Cloud TPU offering, so you can accelerate ML training in the cloud, then have lightning-fast ML inference at the edge. Your sensors become more than data collectors — they make local, real-time, intelligent decisions.”
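The "train in the cloud, infer at the edge" pattern Rhee describes can be sketched with nothing but the standard library. In a real deployment the exported artifact would be a TensorFlow Lite model compiled for the Edge TPU; here a JSON file of weights (with an invented filename) stands in for that artifact, and a tiny 1-D linear model stands in for the network:

```python
import json

# --- "Cloud" side: fit y ~ w*x + b with a few stochastic gradient
# descent steps, then export the learned parameters. In the workflow
# Rhee describes, this stage would run on Cloud TPUs and export a
# TensorFlow Lite model; the JSON file is just a stand-in.
def train_and_export(samples, path, lr=0.05, steps=500):
    w, b = 0.0, 0.0
    for _ in range(steps):
        for x, y in samples:
            err = (w * x + b) - y
            w -= lr * err * x  # gradient of squared error w.r.t. w
            b -= lr * err      # gradient of squared error w.r.t. b
    with open(path, "w") as f:
        json.dump({"w": w, "b": b}, f)

# --- "Edge" side: load the exported parameters and run inference only.
# This is the role the Edge TPU plays for real models.
def edge_infer(path, x):
    with open(path) as f:
        params = json.load(f)
    return params["w"] * x + params["b"]

train_and_export([(0, 1), (1, 3), (2, 5)], "model.json")  # data from y = 2x + 1
print(edge_infer("model.json", 10))
```

The design point is the handoff: the edge side never sees the training data or the optimizer, only a compact exported model — which is why a sensor paired with an Edge TPU can, as Rhee puts it, make local decisions rather than just collect data.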
Interestingly, Google is also making the Edge TPU available as a development kit, which will make it easier for customers to test out the hardware’s capabilities and see how it might fit into their products. This devkit includes a system-on-module (SOM) containing the Edge TPU, an NXP CPU, a Microchip secure element, and Wi-Fi functionality. It can connect to a computer or server via USB or a PCI Express expansion slot. These devkits are only available in beta, though, and potential customers will have to apply for access.
This may seem like a small part of the news, but it’s notable, as Google usually doesn’t let the public get their hands on its AI hardware. However, if the company wants customers to adopt its technology, it needs to make sure they can try it out first, rather than just asking them to take a leap of faith into the AI Googlesphere. This development board isn’t just a lure for companies; it’s a sign that Google is serious about owning the entire AI stack.