The Next Platform

Subaru Drives Its EyeSight System Forward With AI Augmentation

Several years ago, Subaru set a goal of eliminating fatal accidents involving its cars by 2030, and it is leaning heavily on AI to reach that target.

The Japanese automaker launched an AI lab in 2020 to accelerate the development of tools that improve the safety of its vehicles, and it is expanding the set of technologies it puts in place as the AI market continues to evolve.

The company has been using machine learning for the past few years and is now incorporating generative AI into the mix, Takashi Kanai, deputy chief of Subaru Lab – where the company’s AI development happens – and manager of the automaker’s advanced driver assistance systems (ADAS) development department, told The Next Platform in an interview at last week’s Dell Technologies World conference in Las Vegas.

A key focus for Subaru is developing the next generation of its EyeSight driver assistance system, a vision-based safety feature integrated into some Subaru models. EyeSight enables a suite of functions, such as adaptive cruise control, alerting the driver when the car strays from its lane, and automatically braking and cutting fuel to the engine when a potential collision is detected. It also warns drivers when another vehicle is in their blind spot.

The EyeSight feature, first introduced in 2008, uses two color cameras mounted next to the rearview mirror to monitor traffic and road conditions. Last year, the company added a third camera to select vehicles. Subaru has said the goal is to introduce the next generation of EyeSight in cars starting in 2025, though Kanai declined to discuss timelines. The automaker first started talking about building a new generation of EyeSight with AI technologies in 2022.

According to Subaru, it has sold more than 5.5 million cars fitted with EyeSight since 2008, and EyeSight-equipped vehicles now account for about 91 percent of all the cars it sells. The automaker sold 632,086 vehicles worldwide in 2023, a 13.6 percent year-over-year increase.

The company makes the safety of its cars and SUVs a selling point in its marketing and is now looking to AI to improve features such as EyeSight, Kanai said. Two years ago, Subaru partnered with Google Cloud to improve EyeSight’s capabilities, leveraging Google’s managed services to accelerate its deep learning research, AI model development, and machine learning training.

The carmaker uses Google Cloud to analyze images generated by EyeSight. The services used have included Google’s Vertex AI development platform and Compute Engine infrastructure-as-a-service (IaaS), initially running on Nvidia’s “Ampere” A100 GPU accelerators. Amid the rapid innovation around generative AI since late 2022, Google Cloud has added its latest capabilities to many of its cloud services, including bringing Gemini 1.5 Pro and Gemini 1.5 Flash to Vertex AI, giving Subaru more AI tools to work with.
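Subaru has not published details of its training pipelines, but the general pattern of running GPU-backed training on Vertex AI is easy to sketch with Google’s google-cloud-aiplatform Python SDK. In the minimal sketch below, the project ID, staging bucket, and training script are hypothetical placeholders; only the machine and accelerator types correspond to the A100-backed instances mentioned above.

```python
# Minimal sketch: submitting a GPU-backed custom training job to Vertex AI.
# Project, bucket, and script names below are hypothetical placeholders.
from google.cloud import aiplatform

aiplatform.init(
    project="example-project",                       # hypothetical project ID
    location="us-central1",
    staging_bucket="gs://example-eyesight-staging",  # hypothetical bucket
)

# Package a local training script into a managed training job that runs
# inside one of Google's prebuilt PyTorch GPU training containers.
job = aiplatform.CustomTrainingJob(
    display_name="segmentation-training-sketch",
    script_path="train.py",                          # hypothetical script
    container_uri="us-docker.pkg.dev/vertex-ai/training/pytorch-gpu.1-13:latest",
)

# Run on a single a2-highgpu-1g machine, which pairs the VM with one
# Nvidia A100 accelerator -- the GPU type mentioned above.
job.run(
    replica_count=1,
    machine_type="a2-highgpu-1g",
    accelerator_type="NVIDIA_TESLA_A100",
    accelerator_count=1,
)
```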

That said, Subaru operates in a hybrid model. While Google Cloud analyzes the images from the EyeSight cameras, Subaru’s Asura Net neural network handles tasks like identifying objects within those images – cars and people, for instance – and semantic segmentation, a deep learning technique that assigns a label or category to every pixel in an image and, in this case, can be used to find safe, drivable areas.
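Asura Net itself is proprietary, but the shape of the semantic segmentation task is straightforward to illustrate. The sketch below runs an off-the-shelf DeepLabV3 model from torchvision over a single frame and collapses the per-class scores into one label per pixel; it is a generic stand-in for the technique, not a description of Subaru’s network, and the input filename is hypothetical.

```python
# Generic semantic segmentation sketch using torchvision's DeepLabV3;
# a stand-in for the technique, not Subaru's proprietary Asura Net.
import torch
from torchvision.io import read_image
from torchvision.models.segmentation import (
    deeplabv3_resnet50, DeepLabV3_ResNet50_Weights,
)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

frame = read_image("dashcam_frame.png")      # hypothetical camera frame
batch = preprocess(frame).unsqueeze(0)       # shape [1, 3, H, W]

with torch.no_grad():
    logits = model(batch)["out"]             # [1, num_classes, H, W]

# Collapse class scores into one label per pixel. A driving-oriented model
# would be trained on classes like "road", "vehicle", and "pedestrian",
# so regions labeled "road" mark the drivable area.
label_map = logits.argmax(dim=1).squeeze(0)  # [H, W] tensor of class IDs
print(label_map.shape, label_map.unique())
```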

Meanwhile, petabytes of camera data stay on premises while Subaru Lab weighs whether the cost of moving that data to the cloud is worth the value the company would get from doing so, according to Google.
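The back-of-envelope math behind that decision is simple enough. At the 100 Gb/sec link speeds Subaru cites for its own infrastructure, moving a single petabyte is more than a day-long job even before cloud storage and egress fees enter the picture; the utilization figure below is an assumption for illustration.

```python
# Back-of-envelope: how long does one petabyte take over a 100 Gb/s link?
PETABYTE_BITS = 1e15 * 8           # 1 PB in bits (decimal units)
LINK_BPS = 100e9                   # 100 Gb/s line rate
EFFICIENCY = 0.7                   # assumed sustained utilization

seconds = PETABYTE_BITS / (LINK_BPS * EFFICIENCY)
print(f"{seconds / 3600:.1f} hours")   # ~31.7 hours
```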

The company also is developing AI models in-house, in datacenters at the Subaru Lab and its offices in Tokyo, and is building out its on-premises infrastructure. In February, Subaru said it was adopting Dell PowerScale network-attached storage (NAS) systems to manage and use the massive amounts of data it is generating in the testing and development of the next-generation EyeSight system.

Kanai said Subaru uses the PowerScale systems – the cluster includes PowerScale 5600, F200, A3000, and A2000 machines – primarily for two jobs. The first is storing the data and images created while test driving the ADAS system; a car being driven can generate a terabyte or more of data, he said. The second is training the company’s in-house models. Subaru Lab can store about 1,000 times more files on PowerScale than it could on its previous platforms, which lets the EyeSight team’s AI better analyze images because the stored data is easier to access, Kanai noted.
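On the training side, using that repository typically means pointing a data loader at the network-mounted file system. Below is a minimal PyTorch Dataset that streams frames from a hypothetical NFS mount; the mount path and directory layout are illustrative assumptions, not details Subaru has disclosed.

```python
# Minimal sketch: a PyTorch Dataset reading test-drive frames from a
# network-mounted share (e.g., a PowerScale NFS export). The mount path
# and file layout are hypothetical.
from pathlib import Path

from torch.utils.data import Dataset, DataLoader
from torchvision.io import read_image

class DriveFrames(Dataset):
    def __init__(self, root="/mnt/powerscale/drives"):  # hypothetical mount
        self.paths = sorted(Path(root).glob("**/*.png"))

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # Decode on the worker process so the GPU host only sees tensors.
        # Assumes all frames share one resolution so they batch cleanly.
        return read_image(str(self.paths[idx])).float() / 255.0

# Multiple workers keep the network file system busy enough to hide
# per-file latency behind parallel reads.
loader = DataLoader(DriveFrames(), batch_size=16, num_workers=8)
```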

Subaru also is using 100 Gb/sec Ethernet switches and servers from a range of vendors – including Nvidia and its DGX supercomputing platform – in its datacenters as it looks to address the top infrastructure challenges that have come from supporting AI. The company is using a combination of air- and liquid-cooled servers. That combination is crucial because “AI needs so many GPUs that it causes a lot of heat, so we needed liquid cooling,” he said.

The high-speed networking is necessary to move data back and forth between Subaru’s offices and its datacenters. Test drivers often send the data they accumulate from the offices to the datacenter, according to Kanai.

Last month, Subaru announced that AMD will design circuits for an optimized system-on-a-chip (SoC) that will be based on the chipmaker’s upcoming Versal AI Edge Series Gen 2 SoC, which AMD also unveiled in April. The optimized SoC will be incorporated into the new EyeSight in the second half of the decade, Subaru said in its announcement. The carmaker will use the SoC to enhance the system’s AI inferencing capabilities and improve the recognition processing of the cameras while reducing latency and costs.

The new Versal AI Edge Series Gen 2 portfolio is designed for embedded AI systems, combining multiple kinds of processing on a single device. FPGA programmable logic handles data preprocessing, AI engines built from arrays of vector processors deliver more efficient inferencing, and the chip’s CPU cores drive postprocessing – making decisions and providing controls for safety-critical applications.
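That division of labor is a classic three-stage pipeline, and the Python sketch below mirrors its structure purely as a conceptual illustration: a preprocessing stage standing in for the FPGA fabric, an inference stage standing in for the vector AI engines, and a decision stage standing in for the CPU cores. Every function here is a hypothetical stand-in; real Versal targets are programmed through AMD’s Vitis toolchain.

```python
# Conceptual sketch of the preprocess -> inference -> postprocess split
# described above. All functions are hypothetical stand-ins; actual
# Versal devices are programmed through AMD's Vitis toolchain.
import numpy as np

def preprocess(raw_frame: np.ndarray) -> np.ndarray:
    """Stand-in for FPGA-fabric work: normalize and crop sensor data."""
    frame = raw_frame.astype(np.float32) / 255.0
    return frame[..., :224, :224]          # fixed region of interest

def infer(frame: np.ndarray) -> np.ndarray:
    """Stand-in for the vector AI engines: produce per-class scores."""
    rng = np.random.default_rng(0)         # placeholder for a real model
    return rng.random(3)                   # e.g., [car, pedestrian, clear]

def postprocess(scores: np.ndarray) -> str:
    """Stand-in for CPU-side decision logic in a safety-critical loop."""
    labels = ["brake", "warn", "cruise"]
    return labels[int(np.argmax(scores))]

raw = np.zeros((3, 480, 640), dtype=np.uint8)  # dummy camera frame
print(postprocess(infer(preprocess(raw))))
```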

With the demand for AI applications for embedded systems growing, there’s a corresponding need for single-chip products to drive the necessary acceleration in power- and area-constrained environments, Salil Raje, senior vice president and general manager of AMD’s Adaptive and Embedded Computing Group, said when announcing the new Versal AI Edge Series generation.

It will take about another year before the portfolio hits the market. AMD said samples of the Versal Series Gen 2 silicon will arrive in the first half of 2025, with evaluation kits and system-on-module samples coming in mid-2025. Production silicon is expected in late 2025.
