A new study from IBM reveals a critical paradox: while 77% of UK and Irish executives expect AI to drive significant revenue by 2030, a staggering 73% believe these efforts will fail without proper governance and business integration. Compounding this challenge, only 27% have a clear vision for how to generate that revenue, exposing a gap between ambition and strategy that organizations must close to unlock AI’s full potential.
Recent research from IBM’s Institute for Business Value paints a picture of immense opportunity tempered by significant operational risk. The study, which surveyed over 2,000 C-suite executives—including 150 from the UK and Ireland—underscores that the UK Government’s projection of AI adding £400 billion to the economy by 2030 is contingent on more than just investment. Success depends on building a strong foundation of trust and pivoting to an “AI-first enterprise” where artificial intelligence and AI-enabled tools are woven into the core of the business. For many AI systems—especially those deployed in finance, customer service, and employee-facing work—this also means proactively addressing bias, discrimination, and fairness concerns to reduce potential risks and public harm.
For years, the primary driver of AI adoption has been efficiency. According to the report, nearly half (47%) of current AI spending is focused on cost-cutting and process optimization. However, a strategic pivot is on the horizon. By 2030, executives predict that 64% of AI investment will be dedicated to fostering innovation across products, services, and entire business models.
This shift from tactical efficiency to strategic growth signals a maturing understanding of AI’s transformative power. As Rahul Kalia, Managing Partner for IBM Consulting in the UK and Ireland, states, “AI is no longer just a tool for efficiency; it’s becoming a growth engine for the enterprise.” This is where a robust framework for AI governance and management systems becomes not just a compliance checkbox, but a strategic enabler—helping organizations translate technological advancements into shared benefit for customers, employees, and society.
The report’s most telling finding is the disconnect between technological ambition and organizational preparedness. While the revenue expectations are high, the overwhelming fear of failure (73%) points to a critical governance gap. As Kalia notes, “Success will hinge on integrating AI into core business strategies and reskilling the workforce. Organisations that act decisively, with the appropriate governance and controls in place for AI, will be the ones defining competitive advantage tomorrow.”
Effective governance is the bridge between AI’s potential and its practical, reliable application. It ensures that AI systems not only comply with evolving regulations such as the EU AI Act but also align with an organization’s ethical principles, human values, and strategic goals, including the protection of human rights. In practice, that means establishing accountable decision-making, deploying diligent oversight mechanisms (often through a high-level committee), and managing the interplay between innovation, self-regulation, and formal regulatory expectations.
This holistic approach is fundamental to building resilient, trustworthy AI, especially when models touch sensitive domains or personal data. It also helps address boardroom-level concerns around ESG and reputational risk, and supports measurable compliance with ISO standards and other emerging benchmarks.
The impact of AI extends deep into the workforce and leadership. Over half (51%) of executives predict that the majority of employee skills will be fundamentally transformed by 2030, and 65% expect AI to eliminate existing resource and skills constraints.
Furthermore, leadership itself is set to be reshaped. A remarkable 72% of leaders believe their roles will change significantly, 73% anticipate the emergence of entirely new leadership positions, and 25% of boards are expected to include a dedicated AI advisory role within the next six years. This transformation demands a proactive approach to reskilling and a new emphasis on AI literacy at all levels, along with an understanding of the AI laws for businesses and the broader governance frameworks that increasingly influence corporate AI adoption.
In the boardroom, the mandate is clear: AI applications must be scaled responsibly, with governance that anticipates bias and discrimination risks, tests for fairness, and documents how systems make decisions—particularly when outcomes can affect employees, customers, or access to services.
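To make that expectation concrete, the sketch below shows one simple form a fairness test can take: a demographic parity check over model decisions. The groups, sample outcomes, and the 0.8 threshold are hypothetical illustrations, not details from the IBM report.

```python
# Minimal sketch of a demographic parity check on model decisions.
# All data, group names, and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group approval rate (1.0 means parity)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    # Hypothetical decisions: (group, was the outcome favourable?)
    sample = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]
    rates = approval_rates(sample)
    ratio = disparate_impact_ratio(rates)
    print(f"approval rates: {rates}, disparate impact ratio: {ratio:.2f}")
    # A common (but context-dependent) rule of thumb flags ratios below 0.8.
    if ratio < 0.8:
        print("Fairness check flagged: review the model before wider rollout")
```

A check like this is only one slice of the documentation and testing the report calls for, but it illustrates how a governance gate can be made measurable rather than aspirational.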
Ultimately, the success of the AI revolution will be built on a foundation of public trust. While a separate IBM global study found that 56% of consumers are excited about AI-enabled services, it also revealed that two-thirds would switch brands if a company intentionally concealed its use of AI. This finding elevates trustworthy AI from a technical ideal to a critical market differentiator.
Building that trust also requires making the right technology bets. While nearly half of UK executives (48%) believe their edge will come from AI model sophistication, only 29% have a clear view of the models they will need. Most (81%) expect their capabilities to be multi-model, incorporating smaller, fit-for-purpose models rather than relying on generic, off-the-shelf solutions.
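As a rough illustration of what such a multi-model setup can look like, the sketch below routes routine requests to a smaller fit-for-purpose model and escalates longer ones to a larger general model. The model names, costs, and length-based routing rule are placeholder assumptions, not recommendations drawn from the study.

```python
# Illustrative sketch of a multi-model setup: route each task to a
# fit-for-purpose model rather than sending everything to one generic model.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ModelEndpoint:
    """A deployable model plus the metadata needed to route work to it."""
    name: str
    cost_per_call: float
    handler: Callable[[str], str]

# Hypothetical endpoints: a small domain model and a larger general-purpose one.
SMALL_MODEL = ModelEndpoint("small-domain-model", 0.001, lambda p: f"[small] {p[:40]}")
LARGE_MODEL = ModelEndpoint("large-general-model", 0.02, lambda p: f"[large] {p[:40]}")

def route(prompt: str) -> ModelEndpoint:
    """Toy routing rule: short, routine prompts go to the smaller, cheaper model."""
    return SMALL_MODEL if len(prompt) < 200 else LARGE_MODEL

if __name__ == "__main__":
    for prompt in ["Summarise this invoice", "Draft a detailed risk assessment. " * 20]:
        endpoint = route(prompt)
        print(f"{endpoint.name}: {endpoint.handler(prompt)}")
```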
The report also looks to the future, with 60% of executives stating that quantum-enabled AI will transform their industry. However, readiness is a concern, as only 36% anticipate using quantum computing by 2030, and just 37% are actively preparing their organizations to be quantum-safe. This highlights the need for a balanced approach to innovation and governance—where AI adoption is accelerated, but not at the expense of robust controls, clear accountability, and measurable protection against real-world harm.
As the AI landscape matures, the ability to prove that technology is being used responsibly will be the defining feature of market leaders. For more information on building this foundation, explore our resources on AI Trust.