Yesterday, a team of researchers from MIT introduced a new computer chip optimized for deep learning, an approach to artificial intelligence that is gaining popularity. The chip, dubbed “Eyeriss,” could allow mobile devices to perform tasks like natural language processing and facial recognition without being connected to the internet. It’s the latest attempt to make the complex operations of machine learning more portable. That means our smartphones, wearables, robots, self-driving cars, and other IoT devices could begin performing complex deep learning processes locally — something that until now has been very difficult to do.
Deep learning has traditionally demanded large amounts of computer processing. GPUs, computer chips designed to render the graphics we see on our computer screens, are a good enough workhorse to handle the task. But GPUs come with a drawback: they suck up a ton of power. This makes them impractical for deep learning on mobile devices. The workaround has been to take raw data collected by devices, upload it over the internet, perform deep learning on powerful GPU servers, then shoot the results back over the internet to the device.
This can lead to some problems that Eyeriss promises to solve. The first is that if your mobile device can’t find an internet connection, it can’t carry out deep learning tasks. (Siri, for example, needs lots of processing power to understand speech, which is why it won’t work unless it can reach Apple’s servers over the web.) When you do manage to connect to the internet, the data that devices upload to remote servers can be personal in nature, which raises privacy issues. There’s also the pesky problem of transmission latency — the amount of time it takes for information to travel from your mobile device to a server and back. The researchers claim Eyeriss is roughly 10 times as power-efficient as a typical mobile GPU, which means it could avoid all of these problems without killing your battery.
Eyeriss is the latest in a string of recently announced chips aimed at taking deep learning out of remote servers. Qualcomm just revealed its Snapdragon 820A and 820Am processors at CES 2016, which allow cars to detect multiple lane markings and understand traffic signals using deep learning. Nvidia also flaunted its machine learning wares at CES, demonstrating how its Tegra processor could use deep learning for autonomous driving. And Google recently announced a partnership with chip maker Movidius aimed at improving facial recognition on phones.
The MIT research team, headed by Vivienne Sze of MIT’s Department of Electrical Engineering and Computer Science, introduced the chip at the International Solid-State Circuits Conference in San Francisco, where they used it to perform an image recognition task. Plans for when Eyeriss will reach devices have not been announced.