AI data processing at the edge reduces costs and connectivity constraints


As the number of edge devices grows exponentially, sending large volumes of data to the cloud could quickly overwhelm bandwidth budgets and capacity. Deep learning, a subset of machine learning, reduces connectivity costs and burdens.

A race is on to accelerate artificial intelligence (AI) at the edge of the network and reduce the need to send huge amounts of data to the cloud.

Edge computing brings data processing resources closer to the data and devices that need them, thereby reducing latency, which is important for many time-sensitive processes, such as video streaming or autonomous cars.

The development of specialized silicon and enhanced machine learning (ML) models is expected to drive greater automation and autonomy at the edge in new offerings, from industrial robots to autonomous vehicles.

Vast computing resources in centralized clouds and enterprise data centers are adept at processing large volumes of data to spot patterns and create machine learning training models that “teach” devices to deduce what to do next when they detect similar patterns.

But when these models detect something abnormal, they must either request intervention from human operators or wait for revised models from central processing systems. That is not fast enough when decisions must be made instantly, such as shutting down a machine on the verge of failure.
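
To make that trade-off concrete, here is a minimal sketch, not drawn from the article, of how an edge device might split the decision: routine and time-critical readings are handled locally, while patterns the model cannot classify confidently are escalated for cloud or human review. The toy model, threshold, and labels are all hypothetical.

```python
# Illustrative sketch only: a minimal edge-side decision loop. A local model
# handles routine readings instantly; anything it cannot classify confidently
# is escalated for cloud/human review. All names here are hypothetical.

CONFIDENCE_THRESHOLD = 0.80  # below this, the edge model is "unsure"

def toy_edge_model(vibration_mm_s):
    """Stand-in for a trained on-device model: returns (label, confidence)."""
    if vibration_mm_s > 9.0:
        return "imminent_failure", 0.95
    if vibration_mm_s > 6.0:
        return "unknown_pattern", 0.55
    return "normal", 0.90

def handle_reading(vibration_mm_s):
    label, confidence = toy_edge_model(vibration_mm_s)
    if label == "imminent_failure":
        # Time-critical decision made entirely at the edge: no cloud round trip.
        return "shut down machine locally"
    if confidence < CONFIDENCE_THRESHOLD:
        # Pattern the model has not learned: escalate for review / model update.
        return "escalate reading to the cloud"
    return "handled locally, nothing sent upstream"

for reading in (2.5, 7.1, 9.6):
    print(reading, "->", handle_reading(reading))
```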

“A self-driving car doesn’t have time to send images to the cloud for processing once it detects an object on the road, and medical applications that assess critically ill patients have no leeway to interpret brain scans after a hemorrhage,” McKinsey & Co. analysts wrote in a report on AI opportunities for semiconductors. “And that makes edge computing, or embedded computing, the best choice for inference.”

This is where AI data processing at the edge is gaining momentum.

Overcome budget and bandwidth limits

As the number of edge devices grows exponentially, sending large volumes of data to the cloud could quickly exceed budgets and broadband capacity. This problem can be overcome with deep learning (DL), a subset of ML that uses neural networks to mimic the human brain’s reasoning processes. This allows a device to self-learn from unstructured, unlabeled data.

With DL-enabled edge devices, businesses can reduce the amount of data that needs to be sent to data centers. Likewise, specialized ML chips can learn to discard raw data that requires no further action, for example, sending video to the cloud only when it meets certain criteria, such as capturing an image of a human, while dropping images of birds and dogs.
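
As a rough illustration of that kind of filtering, the sketch below, which assumes a hypothetical on-device classifier and upload client, forwards only frames labeled as containing a person and drops everything else before it ever touches the network.

```python
# Illustrative sketch only: filtering at the edge so that only relevant video
# frames leave the device. The classifier and upload function are hypothetical
# stand-ins for an on-device DL model and a cloud client.

RELEVANT_LABELS = {"person"}       # forward these
IGNORED_LABELS = {"bird", "dog"}   # discard these on the device

def classify_frame(frame):
    """Stand-in for an on-device DL model; returns a detected label."""
    return frame.get("label", "unknown")

def upload_to_cloud(frame):
    print(f"uploading frame {frame['id']} ({frame['label']})")

def process_frame(frame):
    label = classify_frame(frame)
    if label in RELEVANT_LABELS:
        upload_to_cloud(frame)  # only relevant data uses bandwidth
    # everything else (birds, dogs, empty frames) is dropped locally

frames = [
    {"id": 1, "label": "bird"},
    {"id": 2, "label": "person"},
    {"id": 3, "label": "dog"},
]
for f in frames:
    process_frame(f)  # only frame 2 is sent upstream
```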

“There isn’t enough bandwidth in the world to just collect data and send it to the cloud,” said Richard Wawrzyniak, senior market analyst at Semico Research Corp. “AI has advanced to the point where data processing resides on the device and then sends all relevant data points somewhere for processing.”

Decide what is near and dear

Organizations face the challenge of developing architectures that differentiate which data can be processed at the edge and which should be sent upstream.

“We see two dimensions,” said Sreenivasa Chakravarti, vice president of the manufacturing business group at Tata Consultancy Services (TCS). “Most organizations are trying to keep the data separate and discussing how to keep what’s closest to you on the edge and what to park in the cloud.” This requires having a cloud-to-edge data strategy.
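
One simple way to make such a strategy explicit, sketched here with hypothetical data categories rather than anything TCS prescribes, is a routing policy that records which kinds of data stay at the edge and which go upstream.

```python
# Illustrative sketch only: a cloud-to-edge data strategy expressed as a simple
# routing policy. Categories and destinations are hypothetical examples of the
# "keep what's closest to you at the edge, park the rest in the cloud" split.

ROUTING_POLICY = {
    "machine_safety_alert": "edge",   # act on instantly, never wait for the cloud
    "live_sensor_stream":   "edge",   # process locally, send only summaries
    "hourly_aggregates":    "cloud",  # fine to batch upstream
    "model_training_data":  "cloud",  # large, not time-critical
}

def route(record_type):
    """Return where a record should be processed, defaulting to the cloud."""
    return ROUTING_POLICY.get(record_type, "cloud")

for record_type in ("machine_safety_alert", "hourly_aggregates", "unknown_log"):
    print(record_type, "->", route(record_type))
```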

Chakravarti said he expects advanced autonomous capabilities to be used in more production lines, not just autonomous vehicles. The challenge is synchronizing autonomous activity within a larger ecosystem, he said, as manufacturers want to increase the throughput of their operations as a whole, not just of individual systems.

Likewise, many autonomous systems must incorporate some type of human interface.

“Before the automotive industry is ready to let AI take the wheel, it first wants to integrate it into cars currently produced with many driver assistance technologies,” writes ARC Advisory Group senior analyst Dick Slansky. “AI lends itself very well to powering advanced safety functions for connected vehicles. Driver assistance functions built into vehicles coming off production lines are now helping drivers familiarize themselves with AI before vehicles become fully autonomous.”

The future of AI data processing at the edge

Almost all edge devices shipped by 2025, from industrial PCs to cellphones and drones, will have some type of AI processing, predicted Aditya Kaul, research director at market research firm Omdia | Tractica.

“There are many other categories where we have yet to see activity or visibility, as original equipment manufacturers have not moved so quickly in all traditional areas and need to understand the value of AI at the edge. This will be the second wave, from 2025 to 2030,” Kaul predicted.

Chipmakers are in a heated arms race to bring AI acceleration modules for edge devices to market. Established companies such as microprocessor titan Intel and graphics processor leader NVIDIA face challenges from new competitors, including well-funded tech giants Google, Microsoft and Amazon, and emerging companies such as Blaize and Hailo Technologies. Around 30 companies were engaged in developing AI acceleration chip technology for edge applications at the start of the year, likely setting the stage for fierce competition.

“I wouldn’t want to be one of those companies,” said Simon Crosby, chief technology officer at Swim, a developer of streaming data processing software. “In the edge world, at the end of the day, the accelerator parts have to be used in a vertically integrated solution by someone who is going to market a hardware solution. Customers don’t care about the guts.”

Getting started with AI at the edge

The technology for AI applications at the edge is advancing so rapidly that many organizations may be reluctant to invest in a particular technology for fear that it will quickly be overtaken by more advanced capabilities. TCS’s Chakravarti said he advises companies not to wait until they think the technology has matured, but rather to start developing organizational skills.

“You can either wait for the technology to mature and be left behind, or make early investments and grow with it,” Chakravarti said. However, he advised, “don’t take the technology and look for the problems. Focus on your problems and research the technology to solve them.”
