MIT’s 168-Core Chip Will Soon Bring AI to Smartphones, IoT Devices

MIT researchers unveil a 168-core chip that could bring AI to your mobile and IoT devices

Last Wednesday, at the International Solid-State Circuits Conference (ISSCC) in San Francisco, MIT researchers presented a chip designed to run neural networks and process data faster. The chip is claimed to be 10 times as efficient as a mobile graphics processor (GPU), which would enable mobile devices to run artificial-intelligence algorithms locally.

Dubbed ‘Eyeriss’, the chip implements neural networks, an approach sometimes branded ‘deep learning’. MIT researcher Vivienne Sze, whose group developed the chip, says: “Deep learning is useful for many applications, such as object recognition, speech, face detection. Right now, the networks are pretty complex and are mostly run on high-power GPUs. You can imagine that if you can bring that functionality to your phone or embedded devices, you could still operate even if you don’t have a Wi-Fi connection. You might also want to process locally for privacy reasons. Processing it on your phone also avoids any transmission latency, so that you can react much faster for certain applications.”

Eyeriss could also help usher in the “Internet of things” — the idea that appliances, vehicles, manufacturing equipment, civil-engineering structures, and even livestock would have sensors that report information directly to networked servers, helping with maintenance and task coordination. With powerful artificial-intelligence algorithms on board, networked devices could make important decisions locally, entrusting only their conclusions, rather than raw personal data, to the Internet. And, of course, onboard neural networks would be useful to battery-powered autonomous robots.

The chip implements convolutional neural networks, in which many nodes in each layer process the same data in different ways. “The networks can thus swell to enormous proportions,” said MIT. “Although they outperform more conventional algorithms on many visual-processing tasks, they require much greater computational resources.”

Data enters and is divided among the nodes in the bottom layer. Each node manipulates the data it receives and passes the results on to nodes in the next layer, which manipulate the data they receive and pass on the results, and so on. The result emerges from the final layer.
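As a rough illustration of that flow, here is a minimal sketch in Python using NumPy. The layer sizes and weights are made up for the example, not taken from the MIT work:

```python
import numpy as np

def relu(x):
    """Elementwise nonlinearity applied at each node."""
    return np.maximum(0, x)

# Illustrative weights: 4 inputs -> 3 hidden nodes -> 2 outputs.
# In a real network these values would come from training.
rng = np.random.default_rng(0)
w1 = rng.standard_normal((4, 3))
w2 = rng.standard_normal((3, 2))

def forward(x):
    """Each layer manipulates what the previous layer passed on."""
    h = relu(x @ w1)   # bottom layer: nodes manipulate the incoming data
    return h @ w2      # final layer: its output is the network's result

print(forward(np.array([0.5, -1.0, 2.0, 0.1])))
```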

What each node does is determined by a process called ‘training’, in which the network finds correlations between raw data and the labels applied to it by human annotators.
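As a toy example of what training does (a single linear node fitted with plain gradient descent; real convolutional networks apply the same idea at vastly larger scale, and the data here is synthesized for the demo):

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.standard_normal((100, 4))        # raw data: 100 samples, 4 features
true_w = np.array([1.0, -2.0, 0.5, 3.0])
labels = data @ true_w                      # stand-in for human-provided labels

w = np.zeros(4)                             # the node's weights before training
for step in range(500):
    pred = data @ w
    grad = data.T @ (pred - labels) / len(data)   # mean-squared-error gradient
    w -= 0.1 * grad                         # nudge weights toward the labels

print(np.round(w, 3))   # ends up close to true_w: the correlation was found
```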

“With a chip like the one developed by the MIT researchers, a trained network could simply be exported to a mobile device,” said MIT.
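In software terms, exporting a trained network mostly means shipping its learned weights to the device, which then only has to run the cheap forward pass. A minimal sketch (the file name and layer shapes below are hypothetical):

```python
import numpy as np

# Hypothetical trained weights (random here; in practice, the output of training).
trained = {"w1": np.random.rand(4, 3), "w2": np.random.rand(3, 2)}

# Export on the training machine...
np.savez("model.npz", **trained)

# ...then load on the device, where only the forward pass needs to run.
on_device = np.load("model.npz")
print(on_device["w1"].shape, on_device["w2"].shape)
```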

Each Eyeriss core has its own memory bank, unlike the centralized memory that the GPUs and CPUs powering today’s deep-learning systems rely on. The chip reduces redundant processing by efficiently dividing tasks among its 168 cores. Its circuitry can be reconfigured for different types of neural networks, and data compression helps preserve bandwidth.
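The chip’s exact dataflow isn’t spelled out in this article, but the basic idea of pairing each core with a local memory bank and a slice of the work can be sketched conceptually. The image size, row-wise tiling, and single-filter convolution below are illustrative assumptions, not Eyeriss’s actual scheme:

```python
import numpy as np

NUM_CORES = 168                    # Eyeriss's core count; the rest is illustrative
image = np.random.rand(338, 336)   # hypothetical input feature map
kernel = np.random.rand(3, 3)      # one 3x3 convolution filter
rows_per_core = (image.shape[0] - 2) // NUM_CORES   # 336 output rows / 168 cores

def core_task(core_id):
    """One simulated core convolves its own strip of output rows.

    It copies just the input rows it needs into a local buffer, a stand-in
    for the per-core memory bank that avoids trips to a central memory.
    """
    start = core_id * rows_per_core
    local = image[start:start + rows_per_core + 2].copy()   # local memory bank
    out = np.empty((rows_per_core, image.shape[1] - 2))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(local[i:i + 3, j:j + 3] * kernel)
    return out

# Stitch the per-core strips back into the full convolved feature map.
result = np.vstack([core_task(c) for c in range(NUM_CORES)])
print(result.shape)   # (336, 334)
```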

At CES, Nvidia demonstrated self-driving cars that retrieved data from servers to identify objects or obstructions on a street. With MIT’s chip, self-driving cars could have on-board image-recognition capabilities, which would be useful in remote areas where cellular connections aren’t available.

The researchers haven’t said when, or whether, the chip will reach commercial devices. Besides Intel and Qualcomm, chip companies such as Movidius are also working to bring AI capabilities to mobile devices.

The research was partially funded by the Defense Advanced Research Projects Agency (DARPA).
