What to know about this year’s massive Stanford AI report

The temperature check covered everything from amplified investments to AI’s proficiency compared to humans.

Keep up with the innovative tech transforming business

Tech Brew keeps business leaders up-to-date on the latest innovations, automation advances, policy shifts, and more, so they can make informed decisions about tech.

The last 12 months have been a frenetic period for AI—so much so that it took Stanford University researchers a whopping 500 pages to break down the state of the technology in the 2024 edition of the university's annual AI Index report.

Published by the university’s Institute for Human-Centered Artificial Intelligence, the report covers everything from generative AI investment (on the rise) and the costs of training (also ballooning) to how the technology stacks up against humans (a mixed bag). Just want the TL;DR? Tech Brew pulled out some of the biggest takeaways from the report.

Generative AI investment is surging: While global private investment in AI fell for a second year in a row, it probably doesn’t come as a surprise that one subfield of the tech—generative AI—has clearly been having a moment. Funding for that area jumped nearly ninefold in 2023 from the year before, up to $25.2 billion.

Costs are skyrocketing, too—and they’re not just monetary: Just as investment is mounting, so too is the amount of money required to train these massive systems. The report estimated that OpenAI’s GPT-4 likely cost $78 million to train, while Google’s Gemini Ultra had an estimated price tag of upward of $191 million.

That’s not to mention the environmental cost. Carbon emissions from training can vary widely, the report said: Meta’s Llama 2 burned through 291 tons of carbon, or “16 times the amount…emitted by an average American in one year,” while the emissions from training OpenAI’s GPT-3 were estimated at 502 tons.

AI can’t beat humans at everything: While AI has now gained the edge over people at certain tasks like image classification and English language understanding, it still lags behind human baseline capabilities on others, like advanced mathematics and visual commonsense reasoning. The areas where humans have the upper hand over AI tend to involve complex cognitive abilities, the report said.

Foundation models are trending toward open-source: A major debate within the industry centers on how open large language models (LLMs) should be, with some companies choosing to keep their code under lock and key while others make it more freely available. But Stanford’s report suggests the open-source proponents may be taking the lead: Of the 149 foundation models that organizations released last year, 65.7% were open-source, up from 44.4% in 2022 and 33.3% the year before that.
