Just released: Stanford's annual Foundation Model Transparency Index provides a crucial snapshot of the industry.
Competitive and legal pressures have made the industry largely opaque, and the trend toward secrecy is accelerating. As the AI industry matures, model improvements become more incremental, and the competitive advantage of new models is increasingly fragile. Consequently, highly competitive labs are naturally becoming more secretive. Legal challenges also lurk, triggered, for example, by the use of copyrighted training data or by the impact of AI advice on user actions. In that context, it is not surprising that labs like Meta and OpenAI are showing decreasing transparency scores over time. The stakes are simply too high.
There are countervailing forces as well. Legal frameworks like the EU AI Act demand transparency across the AI supply chain, particularly for high-risk use cases. Organizations procuring AI are increasingly aware of the risks that enter their organization through third-party AI systems and are starting to demand transparency from suppliers. Notably, these forces toward transparency are stronger in enterprise solutions than in consumer products.
Now that all leading LLMs offer quality sufficient for many mundane tasks, transparency may emerge as a differentiator. That is exactly the big bet IBM is making. It does not pretend to beat other labs on quality; instead, it is zeroing in on transparency as a Unique Selling Proposition (USP) for the enterprise market.
With a score of 95%, IBM has emerged as the leader in the Foundation Model Transparency Index. The sharp increase (up from 64% in 2024) in transparency for IBM's Granite model series is a strong testament to the company's commitment to transparency and auditability. The ISO 42001 certification it achieved for these models signals the same dedication. Organizations committed to trustworthy and compliant AI should take note. In a world where foundation model quality is no longer the sole differentiator for use-case success, IBM provides a compelling alternative to the major AI labs.
For IBM itself, there are welcome spin-off benefits. To reach this level of transparency, IBM had to demonstrate a high degree of governance and control over the AI lifecycle. The lessons from this journey will no doubt feed into IBM's AI governance suite (watsonx.governance). Developing a foundation model is the ideal test case for demonstrating that every aspect of the lifecycle is under control, from data acquisition and curation to training, deployment, and monitoring.
Nemko Digital, an IBM partner, is excited about these developments and looks forward to what 2026 will bring for AI transparency and governance. Do you want to step up your organization's AI governance and bring AI Trust to your operations? Contact us to learn more.