
**CFOs Are Crashing the AI Buying Party; Six Takeaways From Nvidia Report**

By Stephanie Palazzolo

Feb 27, 2025, 7:00am PST

Happy AI Agenda Live day! I'm so excited to see many of you at our event later today, where we'll hear from executives at top AI firms like Anthropic and Google as well as venture capitalists and founders on what research breakthroughs are coming and where to invest. One topic we'll be sure to broach is the growing pressure chief financial officers are putting on their colleagues to start showing a return from their AI purchases, founders tell me.

More than 70% of the conversations that Writer, a startup building AI tools for enterprises, has with potential customers involve a CFO, up from less than 5% a year ago, said co-founder Waseem Alshikh. CFOs are often leading those conversations, instead of the chief information or technology officers who typically make cloud and software buying decisions. CFO involvement often makes sales harder, since budgets are top of mind for them, executives tell me.

That shift reflects businesses' evolving attitude toward AI. Whereas a year or so ago some businesses were trying out lots of different AI products or making splashy announcements about new chief AI officers, now the emphasis is squarely on the bottom line. AI customers are also less concerned these days about what kind of model Writer is using under the hood or about techniques like retrieval-augmented generation, Alshikh said. They just want it to work. "Today, the focus is on ROI, less on model names or sizes," Alshikh said. "Instead, [customers] prioritize secure, transparent and scalable solutions that deliver a positive return on investment."

And CFOs are discussing the costs they're trying to cut using Writer's products and telling Writer representatives to "figure it out," or hit those numbers, he said. In some cases, CFOs are telling Alshikh that they're allocating capital to AI in hopes of increasing the productivity of existing workers rather than spending money to hire more people. It's not lost on me that most of the AI applications we hear about today cut costs rather than boost revenue by, for instance, helping salespeople strike more deals. It's a noticeably more sober version of the AI dream we were promised two years ago.

**Six Takeaways From Nvidia's Earnings Call**

The strong fourth-quarter results Nvidia posted on Wednesday suggested there would be little slowdown in revenue growth during the first part of the year, and possibly the rest of it, as the artificial intelligence boom continues. CEO Jensen Huang and CFO Colette Kress may not have given investors as many bullish signs as they did last year, but there was plenty for shareholders to cheer about, even if the muted stock reaction didn't suggest as much.

At the top of the list was Nvidia's projection of 65% revenue growth, to $43 billion, in the current fiscal quarter. Given how consistently the company has beaten its projections over the past two years, and the visibility it has into the data center plans of major AI developers and cloud providers, it wouldn't be surprising if revenue growth ended up closer to the 78% Nvidia posted for the January quarter. "We will grow strongly in 2025," Huang said.

**Hitting On Inference:** Nvidia's chip challengers, along with Arm, a key supplier to Nvidia, have said there will be lots of room for new types of chips optimized for running AI models, a task known as inference. Huang on Wednesday tried to preempt that argument by emphasizing that the "vast majority" of the computing workloads Nvidia chips handle are "actually inference" and that its new Blackwell chip "takes all of that to a new level." Huang explained that Nvidia designed Blackwell with reasoning AI models in mind. Reasoning models provide better answers the more time they spend computing a response, an approach known as test-time compute.

Kress said usage of advanced models developed by OpenAI, DeepSeek and xAI is driving demand for chips used for inference.

**Watch Out, Chip Startups:** Speaking of challengers to Nvidia, Huang said customers are flocking to Nvidia chips because they handle many types of AI models, unlike custom AI chips, which are aimed at specific workloads such as inference. "I will say this, just because the chip is designed doesn't mean it gets deployed," he said. "And you've seen this, you know, over and over again. There are a lot of chips that get built, but when the time comes, a business decision has to be made." His comments seemed to be a dig at Nvidia's challengers. They include all of Nvidia's top customers, which are in various stages of using or designing in-house alternatives to Nvidia chips, as well as chip startups.

**'Software Has Changed':** Huang said that as a result of reasoning models and other developments, he expects AI computing workloads to multiply by "millions of times." He cited the rise of AI-generated software, reasoning models that can handle more complex tasks, and AI for robots and vehicles as being "around the corner." "Software has changed from hand coding that runs on" traditional processors to "machine learning and AI based software that runs on" Nvidia chips, he said. If he's right, projects like OpenAI's Stargate and Meta Platforms' proposed data center campus, which could cost $200 billion, will look reasonable in the long run.

**China Is Down But Not Out:** American regulators kneecapped Nvidia's business of selling chips to Chinese companies by implementing rules limiting the capabilities of the chips it can sell there. China accounted for 13% of sales in the year to January 2025, down from 17% the year earlier. Nvidia signaled the decline may have stopped: Kress said that "absent any change in regulations, we believe that China shipments will remain roughly at the current percentage." That percentage likely doesn't account for the robust smuggling market that moves Nvidia chips into China. Plus, many Chinese firms, including ByteDance, have accessed advanced chips through cloud providers operating outside China. Nvidia's sales in Singapore, which represented 18% of revenue last year, may be another channel for chips that ultimately reach China. "Customers use Singapore to centralize invoicing while our products are almost always shipped elsewhere," the company said in its annual securities filing on Wednesday. In fact, shipments that actually made it to Singapore accounted for less than 2% of total revenue last year, the company said.

**'Enterprise' Revenue Doubles:** Cloud providers such as Amazon Web Services and Microsoft make up about half of Nvidia's revenue, because they are buying tens of billions of dollars' worth of Nvidia chips both to develop their own AI and to rent out to their cloud customers. But Huang said that over time, he expects Nvidia's sales to other types of businesses to be "by far larger" as a percentage of sales than those to cloud providers. Nvidia said such sales doubled in the past year, without specifying a figure. In any case, Huang's comments imply that industrial firms, including automakers, could do more direct purchasing of AI chips rather than renting them from cloud providers. That's music to the ears of investors who hope Nvidia can diversify its concentrated customer base.

**No Software Revenue Disclosure:** The company has periodically raised expectations about generating revenue from selling cloud services and software to businesses, but it avoided the topic entirely on Wednesday's call. Three months ago, the company said its software, service and support business was generating $1.5 billion in annualized revenue, or $125 million a month, and it expected that figure to climb to $2 billion by the end of 2024. Since then, Nvidia has advertised the enterprise software product on NPR and elsewhere. By not mentioning it on the most recent earnings call, Nvidia may be signaling that these newer businesses aren't hitting it out of the park.