AI data processing at the edge reduces costs, data latency

Pete Bartolik, IoT World Today

November 20, 2020


A race is on to accelerate artificial intelligence (AI) at the edge of the network and reduce the need to transmit huge amounts of data to the cloud.

The edge, or edge computing, brings data processing resources closer to the data and devices that need them, reducing data latency. That matters for time-sensitive applications such as video streaming and autonomous driving.

Development of specialized silicon and enhanced machine learning (ML) models is expected to drive greater automation and autonomy at the edge for new offerings, from industrial robots to self-driving vehicles.

Vast computing resources in centralized clouds and enterprise data centers are adept at processing large volumes of data to spot patterns and train machine learning models that “teach” devices to infer what actions to take when they detect similar patterns.

But when those models detect something out of the ordinary, they are forced to seek intervention from human operators or wait for revised models from data-crunching systems. That’s not sufficient in cases where decisions must be made instantaneously, such as shutting down a machine that is about to fail.
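The cloud-train, edge-infer split described above can be sketched in a few lines of Python. This is only an illustration: the model and its weights are made up, and read_sensors() and shut_down_machine() are hypothetical placeholders for real device APIs.

import math

# Weights for a tiny logistic-regression failure model, assumed to have
# been trained in the cloud and shipped down to the device.
WEIGHTS = [0.8, 1.4]   # e.g., vibration (mm/s), bearing temperature (scaled)
BIAS = -3.0

def failure_probability(features):
    # Standard logistic-regression inference: sigmoid(w . x + b).
    z = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def control_loop(read_sensors, shut_down_machine, threshold=0.9):
    # Runs entirely on the device: no cloud round trip before acting.
    while True:
        features = read_sensors()
        if failure_probability(features) > threshold:
            shut_down_machine()   # act immediately on the local decision
            break

The point of the sketch is where the decision happens: the cloud’s only role was producing the weights, while the shutdown call is made locally, with no network hop in the critical path.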

“A self-driving car doesn’t have time to send images to the cloud for processing once it detects an object in the road, nor do medical applications that evaluate critically ill patients have leeway when interpreting brain scans after a hemorrhage,” McKinsey & Co. analysts wrote in a report on AI opportunities for semiconductors. “And that makes the edge, or in-device computing, the best choice for inference.”

That’s where AI data processing at the edge is gathering steam.

Overcoming Budget and Bandwidth Limits

As the number of edge devices grows exponentially, sending high volumes of data to the cloud could quickly overwhelm budgets and network bandwidth. That issue can be addressed with deep learning (DL), a subset of ML that uses neural networks to mimic the reasoning processes of the human brain, allowing a device to learn on its own from unstructured, unlabeled data.
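As a concrete illustration of that idea, the sketch below trains a tiny autoencoder, a basic deep learning building block, on unlabeled readings and flags inputs it reconstructs poorly. The data is synthetic and the architecture is deliberately minimal; it stands in for the far larger models a real edge device would run.

import numpy as np

rng = np.random.default_rng(0)

# Unlabeled "normal" sensor readings: 512 samples, 4 channels each.
X = rng.normal(0.0, 1.0, size=(512, 4))

# One-hidden-layer autoencoder (4 -> 2 -> 4) trained to reconstruct input.
W1 = rng.normal(0, 0.1, (4, 2)); b1 = np.zeros(2)
W2 = rng.normal(0, 0.1, (2, 4)); b2 = np.zeros(4)
lr = 0.01

for _ in range(200):
    H = np.tanh(X @ W1 + b1)               # encode
    Y = H @ W2 + b2                        # decode (linear output)
    err = Y - X                            # reconstruction error
    # Backpropagation of the mean squared reconstruction error.
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H**2)
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

def anomaly_score(x):
    # High reconstruction error means the input looks unlike the data
    # the device has seen, without any labels ever being provided.
    h = np.tanh(x @ W1 + b1)
    return float(np.sum((h @ W2 + b2 - x) ** 2))

print(anomaly_score(rng.normal(0, 1, 4)))         # typical reading: low score
print(anomaly_score(np.array([5., 5., 5., 5.])))  # outlier: higher score

Because the model learns what “normal” looks like from raw readings, the device can score new data on its own instead of streaming everything to the cloud for labeling and analysis.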

To read the complete article, visit IoT World Today.
