In today's AI-driven world, one significant issue remains in the shadows: the lack of transparency. Transparency is essential for building trust in AI systems, especially those used in high-risk applications; when stakeholders can clearly understand how an AI system works, they are more likely to trust its decisions.
A Comprehensive Analysis of AI Company Transparency
In a collaborative effort, researchers from Stanford, MIT, and Princeton conducted a thorough third-party evaluation of transparency among foundation model developers, and the findings are noteworthy. The assessment sheds light on the state of transparency within the AI industry.
The assessment reveals that even the highest-scoring foundation model developer attained only 54 out of 100 points, underscoring the industry's fundamental lack of transparency.
Source: Stanford CRFM
Scant Transparency Across the Board
The mean score across all developers stands at a mere 37 out of 100. Startlingly, only 82 of the 100 indicators are satisfied by at least one developer, meaning 18 indicators are met by no developer at all and emphasising the room for improvement in transparency standards. Interestingly, open foundation model developers emerge as frontrunners, with two of the three achieving the top scores. These leading developers allow their model weights to be downloaded, setting a benchmark for transparency. Stability AI, the third open developer, trails closely in fourth place.
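To make the scoring scheme concrete, below is a minimal sketch of how such headline figures can be derived, assuming each of the 100 indicators is graded 0 or 1 per developer; the developer names and satisfaction patterns are purely illustrative, not the index's actual data.

```python
# Hypothetical 0/1 scorecard: developer -> {indicator: satisfied or not}.
indicators = [f"indicator_{i:03d}" for i in range(100)]

scores = {
    "developer_a": {ind: 1 if i % 2 == 0 else 0 for i, ind in enumerate(indicators)},
    "developer_b": {ind: 1 if i % 3 == 0 else 0 for i, ind in enumerate(indicators)},
    "developer_c": {ind: 1 if i % 4 == 0 else 0 for i, ind in enumerate(indicators)},
}

# A developer's score is the number of indicators it satisfies (out of 100).
totals = {dev: sum(vals.values()) for dev, vals in scores.items()}

# Mean score across developers (the index reports a mean of 37 out of 100).
mean_score = sum(totals.values()) / len(totals)

# Indicators satisfied by at least one developer (the index reports 82 of 100).
covered = [ind for ind in indicators if any(scores[dev][ind] for dev in scores)]

print(totals)
print(f"mean score: {mean_score:.1f} / 100")
print(f"indicators met by at least one developer: {len(covered)} / 100")
```

Because each developer's total is simply a count of satisfied indicators out of 100, the scores can be read interchangeably as points or as percentages.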
An Insightful Breakdown by Domains and Subdomains
The comprehensive assessment defines 100 indicators, categorised into three critical domains (a minimal roll-up sketch follows this list):
Upstream: Pertaining to the building of foundation models, this includes computational resources, data, and labour. Notably, developers fall short on data, labour, and compute subdomains, scoring just 20%, 17%, and 17% respectively.
Model: This domain focuses on the properties and functions of the foundation model. Developers exhibit transparency in areas like user data protection (67%), model development (63%), capabilities (62%), and limitations (60%).
Downstream: This domain delves into model distribution and usage, covering how the model affects users, how it is updated, and which policies govern its use.
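As a rough illustration of how the domain and subdomain figures above can be rolled up, the sketch below tags each indicator with a domain and subdomain and computes per-subdomain percentages for a single developer; the indicator names, tags, and 0/1 values are hypothetical, not drawn from the index itself.

```python
from collections import defaultdict

# Hypothetical (domain, subdomain, indicator, satisfied) rows for one developer.
rows = [
    ("upstream",   "data",         "data_sources_disclosed",     0),
    ("upstream",   "labour",       "labour_practices_disclosed", 0),
    ("upstream",   "compute",      "compute_usage_disclosed",    1),
    ("model",      "capabilities", "capabilities_described",     1),
    ("model",      "limitations",  "limitations_described",      1),
    ("downstream", "usage_policy", "usage_policy_published",     1),
]

# Subdomain transparency = share of that subdomain's indicators the developer satisfies.
totals, met = defaultdict(int), defaultdict(int)
for domain, subdomain, _indicator, satisfied in rows:
    totals[(domain, subdomain)] += 1
    met[(domain, subdomain)] += satisfied

for key in sorted(totals):
    domain, subdomain = key
    print(f"{domain}/{subdomain}: {100 * met[key] / totals[key]:.0f}%")
```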
Source: Stanford CRFM
Granular Analysis through Subdomains
While developers exhibit a degree of transparency in various subdomains, there is considerable room for improvement. For instance, none of the developers disclose how they provide access to usage data. Very few developers openly admit to the limitations of their models or permit third-party evaluations. Similarly, only three developers divulge model components, and just two disclose model sizes.
Open vs. Closed Models: The Great Divide
Whether models should be open or closed remains a contentious debate in AI circles. On transparency, open models outshine their closed counterparts, with two of the three open models surpassing even the best closed model. Much of this disparity stems from closed developers' lack of transparency in the upstream domain, particularly around data, labour, and compute details.
Source: Stanford CRFM
The Blind Spot of AI Harms
Important questions also arise about the impact of AI on society. How often do chatbots provide incorrect medical advice? Have AI search engines falsely accused individuals of wrongdoing? Are users exposed to biased content generated by AI? Sadly, these questions often remain unanswered, emphasising the necessity for transparency. Beyond that, AI can cause harm by creating explicit content, promoting misinformation, and generating other unwanted content; transparency is needed in all of these cases.
Therefore, transparency reports should define and detect harms, disclose the frequency of harmful content, and assess the effectiveness of enforcement mechanisms and safety filters. This is vital for both general-purpose and high-risk AI applications.
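As a rough sketch of what one entry in such a transparency report could record, the structure below uses hypothetical field names (not any established standard) to capture a harm definition, how it is detected, its disclosed frequency, and how effective the safety filter was:

```python
from dataclasses import dataclass

@dataclass
class HarmReportEntry:
    """One row of a hypothetical transparency report on model harms."""
    harm_category: str               # how the harm is defined, e.g. "medical misinformation"
    detection_method: str            # how occurrences are detected, e.g. "classifier + human review"
    occurrences_per_million: float   # disclosed frequency of harmful content
    filter_block_rate: float         # share of harmful generations the safety filter caught

example = HarmReportEntry(
    harm_category="medical misinformation",
    detection_method="automated classifier with human review",
    occurrences_per_million=12.5,
    filter_block_rate=0.83,
)
print(example)
```

Aggregating entries like this across harm categories and reporting periods would let outside observers track whether enforcement mechanisms actually improve over time.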
Overcoming Resistance and Legal Considerations
Companies may resist transparency reporting for various reasons, including the potential for reputational and legal risks. However, the absence of transparency can harm their reputation in the long run. The foundation model market is also highly concentrated, so greater transparency from a handful of dominant developers would be especially beneficial for consumers and for the market as a whole.
In the end, if AI companies fail to embrace transparency voluntarily, policymakers may have to step in to ensure accountability.