AI

Intel Xeon 6 With P-Cores Makes the Case For The Host CPU

PARTNER CONTENT  Since OpenAI first released ChatGPT into the world two years ago, generative AI has been a playground mostly for GPUs – primarily those from Nvidia – even though graphics chips from other vendors and AI-focused silicon have tried to elbow their way in.

Compute

Custom Arm CPUs Drive Many Different AI Approaches In The Datacenter

Sponsored Feature  Arm is starting to fulfill its promise of transforming the nature of compute in the datacenter, and it is getting some big help from traditional chip makers as well as from the hyperscalers and cloud builders that have massive computing requirements and that need to drive efficiency up and costs down every year.

AI

You Are Paying The Clouds To Build Better AI Than They Will Rent You

Think of it as the ultimate offload model.

One of the geniuses of the cloud – perhaps the central genius – is this: a big company that would otherwise have a large IT budget, perhaps on the order of hundreds of millions of dollars per year, and that has a certain amount of expertise instead builds a much, much larger IT organization with billions of dollars – and with AI, now tens of billions of dollars – in investments. It then rents out the vast majority of that capacity to third parties, who in effect allow the original cloud builder to get its own IT operations for close to free.

Compute

Google Covers Its Compute Engine Bases Because It Has To

The minute that search engine giant Google decided to become a cloud – and, several years later, realized that companies were not ready to buy full-on platform services that masked the underlying hardware but instead wanted lower-level infrastructure services that gave them more optionality as well as more responsibility – it was inevitable that Google Cloud would have to buy compute engines from Intel, AMD, and Nvidia for its server fleet.