Nvidia Chip Shortages Leave AI Startups Scrambling for Computing Power


Cloud computing providers are very aware that their customers are struggling for capacity. Surging demand has “caught the industry off guard a bit,” says Chetan Kapoor, a director of product management at AWS.

The time needed to acquire and install new GPUs in their data centers has put the cloud giants behind, and the specific arrangements in highest demand add further strain. Whereas most applications can operate from processors loosely distributed across the world, the training of generative AI programs has tended to perform best when GPUs are physically clustered tightly together, sometimes 10,000 chips at a time. That ties up availability like never before.

Kapoor says AWS’ typical generative AI customer is accessing hundreds of GPUs. “If there’s an ask from a particular customer that needs 1,000 GPUs tomorrow, that’s going to take some time for us to slot them in,” Kapoor says. “But if they are flexible, we can work it out.”

AWS has suggested clients adopt more expensive, customized services through its Bedrock offering, where chip needs are baked in so customers don't have to worry about them. Or customers could try AWS' own AI chips, Trainium and Inferentia, which have registered an unspecified uptick in adoption, Kapoor says. Retrofitting programs to run on those chips instead of Nvidia's has traditionally been a chore, though Kapoor says moving to Trainium now takes as little as changing two lines of software code in some cases.
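Kapoor doesn't say which two lines he has in mind. As a rough illustration only, the sketch below assumes a PyTorch training script being moved to a Trainium (trn1) instance through the Neuron SDK's torch-xla integration, where the main edits are swapping the CUDA device for an XLA device and flushing the XLA graph each step. The model, batch, and hyperparameters are placeholders, not anything AWS has published.

```python
# Hypothetical sketch of a Trainium migration via torch-xla.
# On a GPU instance the script would target CUDA:
#   device = torch.device("cuda")
import torch
import torch_xla.core.xla_model as xm  # shipped with the Neuron SDK's torch-xla packages

device = xm.xla_device()               # changed line 1: XLA device instead of "cuda"

model = torch.nn.Linear(512, 10).to(device)          # toy model standing in for a real network
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

inputs = torch.randn(32, 512).to(device)             # placeholder batch
labels = torch.randint(0, 10, (32,)).to(device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), labels)
loss.backward()
optimizer.step()
xm.mark_step()                         # changed line 2: execute the lazily built XLA graph on the accelerator
```

In practice a real workload would also involve retuning batch sizes and checking operator coverage, which is why such migrations have historically been more than a two-line job.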

Challenges abound elsewhere too. Google Cloud hasn't been able to keep up with demand for its homegrown GPU equivalent, known as a TPU, according to an employee not authorized to speak to media. A spokesperson didn't respond to a request for comment. Microsoft's Azure cloud unit has dangled refunds to customers who aren't using GPUs they reserved, The Information reported in April. Microsoft declined to comment.

Cloud companies would prefer that customers reserve capacity months to years out so those providers can better plan their own GPU purchases and installations. But startups, which generally have minimal cash and intermittent needs as they sort out their products, have been reluctant to commit, preferring buy-as-you-go plans. That has led to a surge in business for alternative cloud providers, such as Lambda Labs and CoreWeave, which have pulled in nearly $500 million from investors this year between them. Astria, the image generator startup, is among their customers.

AWS isn’t exactly happy about losing out to new market entrants, so it’s considering additional options. “We’re thinking through different solutions in the short- and the long-term to provide the experience our customers are looking for,” Kapoor says, declining to elaborate.

Shortages at the cloud vendors are cascading down to their clients, which include some big names in tech. Social media platform Pinterest is expanding its use of AI to better serve users and advertisers, according to chief technology officer Jeremy King. The company is considering using Amazon’s new chips. “We need more GPUs, like everyone,” King says. “The chip shortage is a real thing.” 


