Stanford finds generative AI transparency has improved, but secrecy remains

Researchers created a report card on the openness of the most popular models.

Generative AI is becoming increasingly unavoidable as tech companies weave the technology into all kinds of products. But how much do we actually know about the data on which these models are trained, the rules that govern them, or their resource usage?

When Stanford University researchers first looked into questions like these last October, the answer was a resounding “not much.” Half a year later, a reassessment found slightly more encouraging signs, though there’s still plenty of room for improvement.

The goal of the transparency index, published by the Stanford Institute for Human-Centered Artificial Intelligence, is to encourage developers of societally consequential AI to reveal more about how these systems work. It comes as governments around the world take more steps to create guardrails around AI, often requiring disclosures about the development process and the expected impact of deployment.

Authors of the Foundation Model Transparency Index graded major players in the AI arms race, such as OpenAI, Google, and Meta, on 100 indicators of openness across their development processes, then assigned each model an overall score out of 100.

This time around, the average score on the index climbed to 58 points out of 100, up from 37 in October, as more companies disclosed information about the hardware, compute power, and energy usage behind their models in response to the index’s requests.
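For a sense of the arithmetic behind those headline numbers: the index scores each model on binary indicators, so a model’s score is effectively the share of indicators its developer satisfies, and the average is taken across developers. Here’s a minimal sketch of that tallying; the indicator names and disclosure data are hypothetical examples, not the index’s actual ratings:

```python
# Minimal sketch of binary-indicator scoring, in the spirit of the
# transparency index. Indicator names and disclosures below are
# hypothetical examples, not the index's actual ratings.

INDICATORS = [
    "training_data_sources",
    "hardware_used",
    "compute_usage",
    "energy_usage",
    "downstream_impact",
]

# Hypothetical: which indicators each developer discloses.
disclosures = {
    "DeveloperA": {"training_data_sources", "hardware_used", "compute_usage"},
    "DeveloperB": {"compute_usage", "energy_usage"},
    "DeveloperC": {"training_data_sources", "hardware_used",
                   "compute_usage", "energy_usage"},
}

def score(satisfied: set[str]) -> float:
    """A model's score: the percentage of indicators its developer satisfies."""
    return 100 * sum(ind in satisfied for ind in INDICATORS) / len(INDICATORS)

scores = {dev: score(sat) for dev, sat in disclosures.items()}
average = sum(scores.values()) / len(scores)

for dev, pts in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{dev}: {pts:.0f}/100")
print(f"Average: {average:.0f}/100")
```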

Still, the study noted that other areas remain under wraps, including copyright, personally identifiable information in training data, and “downstream impact, such as the market sectors and countries in which their models are used and how they are used there.”

“Some areas of the Index demonstrate sustained and systemic opacity, meaning almost all developers do not disclose information on these matters,” the researchers wrote in the announcement of the updated index.

Among the more transparent foundation models in the index were those from developers that bill themselves as closer to open source, including Hugging Face, IBM, and Meta. Google, OpenAI, and Amazon, on the other hand, landed nearer the bottom of the pack.

[Chart: A ranking of the models based on transparency index scores. Source: Stanford HAI]

“The Foundation Model Transparency Index continues to find that transparency ought to be improved in this nascent ecosystem, with some positive developments since October,” the authors wrote in the paper. “Moving forward, we hope that headway on transparency will demonstrably translate to better societal outcomes like greater accountability, improved science, increased innovation, and better policy.”
