As foundation models move into everyday workflows, the AI industry's practices around trust are diverging: some labs are becoming more opaque under competitive and legal pressure, while others lean into transparency as a core enterprise value proposition. The 2025 edition of Stanford's Foundation Model Transparency Index (FMTI) shows an industry-wide drop in transparency even as adoption accelerates, making procurement choices more consequential than ever.
The FMTI is an annual benchmark that evaluates how openly major AI developers document their flagship foundation models across 100 transparency indicators, spanning upstream inputs (data, labor, and compute), model details (capabilities, evaluations, and mitigations), and downstream use, policies, and impact. [CRFM overview]
The Index changes every year. In 2025, the criteria were tightened, so scores aren't directly comparable with 2024 (see the GitHub indicators & methodology for details). Still, the year-on-year changes remain meaningful: they show which companies are moving toward deeper, more substantive transparency and which are pulling back.
Importantly, the FMTI assesses disclosure quality, not safety or performance—it is neither prescriptive nor regulatory. It is a report card on corporate transparency, not a verdict on whether a model is accurate, robust, or beneficial.
In 2025, the average transparency score fell to about 40/100 (down 17 points year over year), with disclosure weakest around training data, training compute, and post-deployment usage and impact.
The result is a clear divergence: a small cluster of top performers, a mid‑pack, and a group of low scorers that reveal little about their models.
Competitive secrecy, driven by incremental model improvements and fragile advantages, and mounting legal exposure (e.g., questions over copyrighted training data and liability for downstream harms) are the key drivers of the decline. Yet enterprise buyers and evolving regulation exert countervailing pressure toward transparency, particularly in B2B contexts.
While many labs reduced disclosures in 2025, some well‑performing labs did not. The FMTI highlights Writer and IBM as notably improving; it also shows AI21 and Anthropic rising in the multi‑year subset comparison. This is evidence that B2B‑oriented companies are leaning into transparency even as the average declines.
IBM is a case in point. Its Granite models achieved 95/100, the highest score in the Index's history, with unusually detailed documentation, including practices that enable external replication and audit access to training data. IBM positions transparency as an enterprise USP, pairing disclosure with AI governance tooling (e.g., watsonx.governance) and certifications such as ISO 42001: a strong signal to buyers prioritizing auditability, supply-chain trust, and compliance readiness.
The EU AI Act's General‑Purpose AI Code of Practice (published July 10, 2025) provides a voluntary documentation pathway for model providers to demonstrate compliance on transparency, copyright, systemic‑risk, safety, and security. It introduces a standardized Model Documentation Form—exactly the sort of supply‑chain artifact enterprises can require in procurement to reduce third‑party risk. Enforcement phases in after August 2025, further incentivizing disclosure practices.
The Code does not replace the Act's obligations, but aligning with it eases regulatory engagement and elevates documentation quality—especially for organizations integrating general‑purpose models into downstream systems.
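For teams that want to track these documentation artifacts programmatically during procurement, here is a minimal checklist sketch in Python. The field names are illustrative assumptions for this example, not the official schema of the Model Documentation Form.

```python
from dataclasses import dataclass, fields

# Illustrative procurement checklist inspired by the Code of Practice's
# Model Documentation Form; these fields are assumptions for this sketch,
# not the official form's schema.
@dataclass
class ModelDocumentation:
    provider: str
    model_name: str
    training_data_summary: str | None = None   # sources, licensing basis
    compute_and_energy: str | None = None      # training compute, energy use
    evaluation_results: str | None = None      # benchmarks, methodology
    acceptable_use_policy: str | None = None   # downstream usage terms
    incident_reporting: str | None = None      # post-deployment monitoring

    def missing(self) -> list[str]:
        """Names of documentation artifacts the supplier has not provided."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]

# Hypothetical supplier submission with gaps
doc = ModelDocumentation(
    provider="ExampleLab",          # placeholder vendor
    model_name="example-model-1",   # placeholder model
    training_data_summary="public web + licensed corpora",
)
print("Missing artifacts:", doc.missing())
# -> ['compute_and_energy', 'evaluation_results',
#     'acceptable_use_policy', 'incident_reporting']
```

A checklist like this can gate procurement directly: each missing artifact becomes a question for the supplier to resolve before contract signature.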
Use FMTI scores as a first‑pass signal of vendor openness. Prioritize high‑scoring vendors and flag low scorers for deeper review around data provenance, risk mitigations, and monitoring.
Map FMTI dimensions into your evaluation criteria and scoring: training data sources and rights, compute/environmental reporting, evaluation methodology and reproducibility, and downstream policies and incident reporting (see the scoring sketch after this list). Where applicable, require the EU Model Documentation Form (or equivalent).
Convert transparency expectations into contractual obligations such as: disclosure of data provenance and licensing basis, publication of evaluation protocols, commitments to post‑deployment monitoring, environmental impact statements, and auditor access to documentation.
Re‑check vendor FMTI standings annually and align your model oversight with EU timelines. If a supplier's score declines, reassess risk, require remediation, or diversify.
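To make the rubric concrete, here is a minimal scoring sketch in Python. The dimensions, weights, and thresholds are illustrative assumptions for this example, not the FMTI's official taxonomy or weighting.

```python
from dataclasses import dataclass

# Hypothetical FMTI-aligned rubric: dimension names and weights are
# illustrative assumptions, not the Index's official methodology.
WEIGHTS = {
    "data_provenance": 0.30,       # training data sources & rights
    "compute_reporting": 0.15,     # compute / environmental disclosure
    "eval_reproducibility": 0.30,  # evaluation methodology & reproducibility
    "downstream_policies": 0.25,   # usage policies & incident reporting
}

REVIEW_THRESHOLD = 60.0   # below this composite, flag for deeper review
DECLINE_TOLERANCE = 5.0   # year-over-year drop (points) triggering reassessment

@dataclass
class VendorAssessment:
    name: str
    scores: dict                          # per-dimension scores, 0-100
    prior_composite: float | None = None  # last year's composite, if known

    def composite(self) -> float:
        """Weighted composite across the rubric's dimensions."""
        return sum(WEIGHTS[d] * self.scores[d] for d in WEIGHTS)

    def flags(self) -> list[str]:
        """Procurement flags: low transparency or a notable YoY decline."""
        out = []
        score = self.composite()
        if score < REVIEW_THRESHOLD:
            out.append("deep review: data provenance, mitigations, monitoring")
        if (self.prior_composite is not None
                and self.prior_composite - score > DECLINE_TOLERANCE):
            out.append("declining: reassess risk, require remediation, or diversify")
        return out

# Hypothetical vendor whose transparency dropped since last year's review
vendor = VendorAssessment(
    name="ExampleLab",
    scores={"data_provenance": 40, "compute_reporting": 55,
            "eval_reproducibility": 70, "downstream_policies": 65},
    prior_composite=72.0,
)
print(f"{vendor.name}: composite={vendor.composite():.1f}, flags={vendor.flags()}")
```

The same structure supports the annual re-check: feeding in last year's composite makes the decline flag the trigger for remediation requests or supplier diversification.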
The Index measures disclosure, not accuracy or robustness. Use it to shortlist disclosure‑ready vendors, but make sure to run independent safety, robustness, and bias evaluations tailored to your use case.
Transparency is diverging. The industry average fell sharply in 2025, yet a meaningful set of labs is improving or holding steady—there is real choice for buyers who value disclosure.
Enterprise buyers can shape the market. By embedding FMTI‑aligned criteria and EU documentation expectations into procurement, organizations can reward disclosure and reduce supply‑chain risk.
Transparency is necessary—but not sufficient. Treat FMTI as the starting point for governance maturity, complemented by safety/performance testing specific to your domain.
Nemko Digital helps organizations operationalize AI transparency—from RFP design and contractual clauses to audits and governance frameworks aligned with the EU AI Act and leading industry indices. If you're ready to build trustworthy, compliant AI, let's talk about a procurement and governance roadmap tailored to your risk profile and 2026 objectives.