Compute

Google Covers Its Compute Engine Bases Because It Has To

The minute that search engine giant Google wanted to be a cloud, and the several years later that Google realized that companies were not ready to buy full-on platform services that masked the underlying hardware but wanted lower-level infrastructure services that gave them more optionality as well as more


More Analysis

AI

High Quality Data Is Key For Effective AI Agents

COMMISSIONED  As enterprises increasingly adopt GenAI-powered AI agents, making high-quality data available for these software assistants will come into sharper focus. This is why it’s more important than ever for IT leaders to get their data house in order. Unfortunately, the data houses of most IT shops may be messier

AI

Cerebras Trains Llama Models To Leap Over GPUs

It was only a few months ago that waferscale compute pioneer Cerebras Systems was bragging that a handful of its WSE-3 engines lashed together could run circles around Nvidia instances based on its "Hopper" H100 GPUs when running the open source Llama 3.1 foundation model created by Meta Platforms.

Connect

The Money Keeps Rolling In For Optical Interconnects

With the bottlenecks between compute engines, their memories, and their networking adapters growing larger with each passing generation of AI machinery, there has never been a more pressing need to shift away from copper and towards optics in datacenter systems. In fact, AI is the killer app that those developing

AI

No Slowdown At TSMC, Thanks To The AI Gold Rush

With a near monopoly on advanced chip manufacturing and packaging, it is no wonder that the world's largest foundry is making money during the AI boom. The wonder is that TSMC is not extracting even more revenue and profits from its customers than it does.

Connect

One Laser To Pump Up AI Interconnect Bandwidth By 10X

According to rumors, Nvidia is not expected to deliver optical interconnects for its GPU memory-lashing NVLink protocol until the "Rubin Ultra" GPU compute engine in 2027. And that means every accelerator under design – particularly the ones being designed in house by the hyperscalers and cloud builders