Although legislation is still catching up to the risks posed by AI technologies, investing in a strict and comprehensive AI policy benefits an organization’s efficiency and reputation. Let’s take a closer look at the consequences of poor AI risk management, and at how sound policy can prevent harm.
In an economy where implementing up-to-date technologies provides a crucial advantage, companies face significant pressure to adopt AI systems. However, this race toward automation can lead to shortfalls in service quality and safety.
In 2021, McDonald’s became a pioneer among fast-food chains, experimenting with an automated drive-through process at approximately 100 locations. Using natural language processing (NLP) bots to “listen” to voice orders, the company sought to streamline and enhance the ordering experience. However, the trial was discontinued by late July 2024 due to operational challenges.
Social media posts provide insight into these challenges. For instance, one viral video highlighted a bot mistakenly placing an order for hundreds of chicken nuggets, while another showed a customer’s request for water and ice cream being supplemented with unwanted items like ketchup and butter packets. While most issues were minor, some customers experienced significant frustration, occasionally leading to PR incidents.
McDonald’s decision to end the trial reflected its commitment to safeguarding its reputation and improving future AI initiatives. Notably, the company continues to explore the integration of AI in its operations, such as automating kitchen equipment like fryers and grills.
As companies like McDonald’s advance AI-driven automation, organizations must prioritize risk management. For example, an AI malfunction in critical equipment could result in profit loss or, worse, pose safety risks to employees. By adopting comprehensive frameworks, businesses can address these challenges proactively and ensure the successful integration of AI technologies.
The NIST AI Risk Management Framework (AI RMF), a voluntary framework published by the US National Institute of Standards and Technology, can provide a starting point for identifying risks. The framework outlines three categories of AI harm: harm to people, harm to an organization, and harm to ecosystems.
| HARM TO PEOPLE | HARM TO AN ORGANIZATION | HARM TO ECOSYSTEMS |
| --- | --- | --- |
| Individual: harm to a person’s civil liberties, rights, physical or psychological safety, or economic opportunity. | Harm to an organization’s business operations. | Harm to interconnected and interdependent elements and resources. |
| Group/community: harm to a group, such as discrimination against a population sub-group. | Harm to an organization from security breaches or monetary loss. | Harm to the global financial system, supply chain, or interrelated systems. |
| Societal: harm to democratic participation or educational access. | Harm to an organization’s reputation. | Harm to natural resources, the environment, and the planet. |

(Source: Artificial Intelligence Risk Management Framework (AI RMF 1.0), nist.gov)
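To make these categories operational, some organizations encode them directly in an internal risk register so that every identified risk is tagged against the taxonomy. The sketch below shows one minimal way to do that; the class names, fields, and example entry are illustrative assumptions, not anything prescribed by NIST.

```python
# Minimal sketch of an internal risk register tagged with the NIST AI RMF harm
# categories. Class names, fields, and the example entry are illustrative
# assumptions, not defined by NIST.
from dataclasses import dataclass
from enum import Enum


class HarmCategory(Enum):
    PEOPLE = "harm to people"                 # individual, group/community, societal
    ORGANIZATION = "harm to an organization"  # operations, security or monetary loss, reputation
    ECOSYSTEM = "harm to ecosystems"          # interdependent systems, supply chains, environment


@dataclass
class RiskEntry:
    description: str
    category: HarmCategory
    probability: float  # estimated likelihood that the harm occurs, 0 to 1
    severity: int       # 1 (negligible) to 5 (critical)
    mitigation: str


register = [
    RiskEntry(
        description="Voice-ordering bot adds unwanted items to customer orders",
        category=HarmCategory.ORGANIZATION,  # reputational harm
        probability=0.05,
        severity=2,
        mitigation="Staff review of flagged orders before any wider rollout",
    ),
]
```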
It is the role of governmental bodies, businesses, and sponsors to translate the potential harms summarized in this framework into practical standards for AI use. The adaptive nature of AI systems and the complexity of their operating environments often impede the ability to detect and respond to failures. Because the data an AI system processes evolves constantly in response to its environment, the system’s updates need to be audited frequently. Industry standards that aim to mitigate potential harm must account for AI’s novel characteristics as a technology and establish new product safety guidelines in response to this novelty.
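As a concrete illustration of that auditing need, the sketch below compares a reference sample of a system’s input data against a recent production window and flags features whose distribution has shifted, which could trigger a re-audit. The statistical test, threshold, and simulated data are illustrative assumptions, not requirements from any framework.

```python
# Minimal sketch of continuous auditing: flag input-data drift between a
# reference sample and a recent production window. The test, threshold, and
# simulated data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp


def drift_report(reference: np.ndarray, recent: np.ndarray, alpha: float = 0.01) -> dict:
    """Run a two-sample Kolmogorov-Smirnov test on every feature column."""
    flagged = {}
    for col in range(reference.shape[1]):
        result = ks_2samp(reference[:, col], recent[:, col])
        if result.pvalue < alpha:  # the distribution shift is statistically significant
            flagged[col] = result.pvalue
    return flagged


rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, size=(5_000, 3))
recent = rng.normal(0.0, 1.0, size=(1_000, 3))
recent[:, 1] += 0.5  # simulate drift on one feature
print(drift_report(reference, recent))  # e.g. {1: 3.2e-40} -> schedule a re-audit
```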
The EU AI Act is expected to standardize AI-related safety regulation, mirroring conventional product safety regulation. As has always been the case, managing reasonably foreseeable risks is a key part of the development-to-deployment pipeline. AI regulators have taken inspiration from the rules governing medical equipment. A new pacemaker, for example, must be well designed before the trial phase even begins, extensively tested both pre- and post-market, compliant with standards for high-quality data, and able to exceed pre-set thresholds for the statistical significance of test results. Naturally, the potential harm caused by medical equipment is quite different from the harm caused by artificial intelligence. Below we take a closer look at how the nature of AI requires a more complex approach to risk.
AI systems are inherently socio-technical in nature, meaning they are influenced by societal dynamics and human behavior. AI risks – and benefits – can emerge from the interplay between technical aspects and societal factors: how a system is used, how it interacts with other AI systems, who operates it, and the social context in which it is deployed.
The EU AI Act takes a risk-based approach to governing AI. The new legislation mirrors the definition of risk in ISO/IEC Guide 51 and adds the infringement of fundamental rights to its definition of AI harm.
Risk is defined in ISO/IEC Guide 51 as the combination of the probability of occurrence of harm and the severity of that harm. The AI Act elaborates on these distinctions of risk severity by differentiating between four levels of risk:

- Unacceptable risk: AI practices that are prohibited outright.
- High risk: systems permitted only if they meet strict requirements for development, testing, and oversight.
- Limited risk: systems subject mainly to transparency obligations.
- Minimal risk: systems that can be deployed without additional obligations.
These four levels allow the EU to form layers of legislation suited to a wide variety of AI use cases. Through the risk-based approach, high-risk cases are scrutinized under suitably rigorous standards for development and use, while lower-risk cases can advance at a suitable pace.
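The AI Act’s four levels are legal classifications rather than the output of a numeric formula, but the ISO/IEC Guide 51 definition can be illustrated with a simple probability-times-severity score. The scales and banding thresholds in the sketch below are illustrative assumptions only.

```python
# Minimal sketch of the ISO/IEC Guide 51 definition of risk: a combination of
# the probability of occurrence of harm and the severity of that harm.
# Scales and banding thresholds are illustrative assumptions, not drawn from the AI Act.
def risk_score(probability: float, severity: int) -> float:
    """probability in [0, 1]; severity on a 1 (negligible) to 5 (critical) scale."""
    return probability * severity


def risk_band(score: float) -> str:
    if score >= 2.0:
        return "high"
    if score >= 0.5:
        return "medium"
    return "low"


# A mis-heard drive-through order: fairly likely, but low severity.
print(risk_band(risk_score(probability=0.3, severity=1)))  # low
# A malfunctioning automated fryer: unlikely, but potentially severe.
print(risk_band(risk_score(probability=0.1, severity=5)))  # medium
```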
Most of the EU legislation applies only to high-risk use cases, where risks must be managed continuously through a risk management system. The ordering bot developed by McDonald's falls under the limited-risk category. Consequently, it would not have been subject to a risk management system under Article 9 of the AI Act, as it poses only a limited risk to people's safety, their fundamental rights, and the environment. Nevertheless, McDonald's maintained a comprehensive risk register and conducted extensive testing in controlled and real-world environments before any wider rollout. This enabled the company to stop its trial before a global launch could have caused far greater harm. As this case shows, businesses that adopt a responsible AI policy exceeding current regulatory requirements can mitigate risk more quickly and easily.
A growing number of companies are adopting a comprehensive approach to AI policy, applying the requirements for high-risk AI systems uniformly across all of their AI integrations and formulating policies for employees’ internal use of AI. The guidance provided by responsible AI governance frameworks, which call for a multi-stakeholder approach to risk analysis, is a helpful resource for mitigating the reputational harm caused by AI failures. Investing in responsible AI increases efficiency, extends product reach, and strengthens your reputation as a cutting-edge organization.
According to the World Economic Forum:
“Responsible AI – the practice of designing, developing and deploying AI with good intention to fairly impact society – is not just the morally right thing to do; it also yields tangible benefits in accelerating innovation and helping organizations transition into using AI to become more competitive.”
World Economic Forum (2022)
High standards for responsible development, deployment, and integration of novel AI solutions will be expected by regulators and society. At Nemko Digital, our services can guide your organization towards both a simple process of compliance and a journey of responsible innovation.