
Researchers Figured Out How to Fit More AI Than Ever onto Internet of Things Microchips

The research focuses on how much AI you can fit onto “the smallest, cheapest, lowest-powered chip”




Right now, deep learning architectures require more space than a high schooler learning to parallel park.

But a team of researchers at MIT, National Taiwan University, and the MIT-IBM Watson AI Lab say they’ve figured out how to fit more AI than ever onto the simple chips that power connected devices, from medical wearables to coffeemakers.

Think of it this way

Microprocessors, which typically house large-scale neural networks, have a large memory capacity, lots of processing power, and a steep price tag. Compared to them, Internet of Things chips—or microcontrollers—haven’t even hit puberty yet.

Throwing AI into the mix: MCUNet, the researchers’ new system, aims to take a neural net and compress it onto a microcontroller while maintaining a baseline quality level.

  • The research focuses on “how much AI [you can fit onto] the smallest, cheapest, lowest-powered chip,” John Cohn, IBM Fellow for the MIT-IBM Watson AI Lab, told us.

It’s not the first time researchers have explored putting AI into microcontrollers, but it’s a significant advance, says Cohn: MCUNet was more than 70% accurate on ImageNet, the popular image classification database—an improvement of 16 percentage points.
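To get a feel for why compression matters on these chips, here’s a minimal sketch of one standard shrinking technique, post-training 8-bit quantization. This is an illustrative example in NumPy, not the researchers’ actual MCUNet pipeline; the weight matrix and its size are made up for the demo.

```python
import numpy as np

# Stand-in for one layer of trained float32 weights (hypothetical values).
rng = np.random.default_rng(0)
weights = rng.normal(0, 0.1, size=(128, 128)).astype(np.float32)

# Symmetric 8-bit quantization: map each float onto an int8 grid
# using a single scale factor derived from the largest weight.
scale = np.abs(weights).max() / 127.0
q_weights = np.round(weights / scale).astype(np.int8)

# At inference time, the ints are rescaled back to approximate floats.
dequantized = q_weights.astype(np.float32) * scale

print(f"float32 storage: {weights.nbytes} bytes")
print(f"int8 storage:    {q_weights.nbytes} bytes")  # 4x smaller
print(f"max abs error:   {np.abs(weights - dequantized).max():.5f}")
```

The int8 copy takes a quarter of the memory, at the cost of a small rounding error per weight, which is the basic trade-off behind squeezing a network onto a microcontroller’s few hundred kilobytes of RAM.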

In the wild

Aspects of the tech could hit the market in as little as a year, says Cohn.

Potential applications include a doorbell with a condensed facial recognition model, a security system that recognizes the sounds of a break-in, and a defibrillator that can better analyze an arrhythmia.

But, but, but: You knew we’d go there... What about the bias?

A lack of diversity in AI teams and data sets means that most large-scale neural networks, like any algorithm, carry built-in bias. The potential to put these models in even more places makes addressing that issue more urgent.

  • And if you pare down one of these complex models—especially one that could make judgment calls about danger—how does that change the potential for harm?
  • The researchers agreed that it’s important not to “introduce new biases” during the paring-down process, but they didn’t share specifics on how they’d get there.

Bottom line: The tech is a step forward for deep learning and IoT devices. But unless diverse teams and external auditors think through the potential harms of certain applications, it's at risk of being a step back overall.
