AI - Artificial Intelligence 12 06 2025

Enterprise AI Agents: Productivity vs Compliance Risks

Enterprise AI agents revolutionize sales teams with 70% productivity gains. Navigate GDPR, EU AI Act compliance and implement governance for responsible AI use. In 2022, Salesforce conducted a study across 7K sales professionals in 38 countries. ...

Start Reading
AI Trust 12 06 2025

Why Trust in AI Needs a Global Framework

Trust in AI has become the cornerstone of successful artificial intelligence deployment worldwide. As corporations rapidly develop and deploy automation technology, establishing trustworthy AI solutions is no longer optional—it's essential for ...

Start Reading
ISO/IEC 42005, AI Framework 05 06 2025

ISO/IEC 42005: AI Impact Assessment Framework Guide

Learn how AI impact assessment frameworks evolve with ISO/IEC 42005 standards. Essential guidance for legal, compliance, and AI governance teams. An AI impact assessment framework is a structured methodology for evaluating the potential risks, ...

Start Reading
EU AI Act 14 05 2025

AI Medical Software Compliance: Navigating EU Regulations

Artificial intelligence is no longer just a buzzword in healthcare; it's becoming a core part of how AI-driven medical software is developed, used, and regulated. From supporting diagnoses to guiding treatment decisions, AI is increasingly embedded ...

Start Reading
AI Maturity 06 05 2025

AI Maturity Readiness: The 8-Dimension Approach to Success

In today's rapidly evolving technological landscape, achieving AI maturity readiness is essential for organizations aiming to harness the full potential of artificial intelligence. While AI offers immense benefits, the journey from initial concept ...

Start Reading
05 05 2025

Dubai AI Week 2025: Reflections on UAE's Accelerating AI Journey

Dubai AI Week 2025: Reflections on the Fast Acceleration of Their AI Journey

Start Reading
ISO/IEC 42001 28 04 2025

ISO 42001 Certification: The Key to AI Regulatory Compliance

In today's rapidly evolving technological landscape, artificial intelligence (AI) has become a transformative force across industries. However, with great power comes great responsibility—and increasing regulatory scrutiny. ISO 42001 standard ...

Start Reading
AI Maturity 16 04 2025

AI Maturity Governance: Why It Matters for Your Business

Many enterprises assume that AI maturity is solely about advancing and implementing new technology and tooling. In order to adapt quickly, make data-driven decisions based on predictive modeling, and drive business growth through the development of ...

Start Reading
AI Trust, AI Safety and Robustness 15 04 2025

The UAE’s Visionary Approach to AI: Innovation with Ethics at the Core

As AI continues to reshape global industries, the UAE stands out for its proactive and visionary stance. The country’s strategy is not merely about keeping pace with global trends - it’s about leading them. With a bold ambition to establish itself ...

Start Reading
AI Trust 08 04 2025

Global AI trust certification: AI Trust Mark unveiled

Providing a Global Framework to Assess Trustworthiness in AI and AI-Embedded Products. As AI becomes a core part of products and services across industries, trust has become essential. In response, Nemko Digital developed the AI Trust Mark — a ...

Start Reading
AI - Artificial Intelligence, AI Trust, EU AI Act 18 03 2025

Navigating the EU AI Act in 2025: Key Actions and Compliance Strategies

As organizations prepare for the full implementation of the European Union Artificial Intelligence Act, navigating the EU AI Act has become a critical priority for businesses across sectors. This landmark legislation, developed by the European ...

Start Reading
AI - Artificial Intelligence, AI Trust, EU AI Act 14 03 2025

Navigating the EU AI Act: A Strategic Approach to AI Product Risk Evaluation

Navigating the EU AI Act has become a critical priority as artificial intelligence (AI) continues to shape industries and consumer experiences. This landmark European Union Artificial Intelligence Act introduces comprehensive regulatory frameworks ...

Start Reading
AI - Artificial Intelligence, AI Trust, EU AI Act 04 02 2025

Fundamental Rights Impact Assessments (FRIAs) under the EU AI Act: What You Need to Know

The upcoming EU AI Act brings significant regulatory changes for organizations deploying high-risk AI systems, especially regarding the need to conduct Fundamental Rights Impact Assessments (FRIAs). These assessments are essential for ensuring ...

Start Reading
AI - Artificial Intelligence, AI Trust, AI Safety and Robustness 04 02 2025

How can inaccurate AI harm a business?

Although legislation is still catching up to the possible risks posed by AI technologies, it benefits an organization’s efficiency and reputation to invest in a strict and comprehensive AI policy. Let’s take a closer look at the consequences of poor ...

Start Reading
AI - Artificial Intelligence, AI Trust, AI Safety and Robustness 04 02 2025

The AI Lifecycle

In this article, we dive into the AI lifecycle and its role in governance, exploring how ethical, compliant, and efficient AI management integrates across all stages, referencing NIST’s Risk Management Framework.

Start Reading
AI - Artificial Intelligence, AI Trust, EU AI Act 04 02 2025

General Product Safety Regulation (GPSR)

In 2023, the EU introduced a new legislative framework for product safety titled the General Product Safety Regulation. In this article, we provide an overview of the GPSR’s history, its biggest changes from previous legislation, and how AI systems ...

Start Reading
Cyber security 27 01 2025

Cybersecurity Landscape in the Field of AI

What new challenges does AI present for cybersecurity? The increasingly rapid evolution and widespread adoption of artificial intelligence (AI) technologies pose new problems for cybersecurity. While AI systems inherit traditional cybersecurity ...

Start Reading
AI - Artificial Intelligence, AI Trust, Digital Trust 08 01 2025

A Pivotal Year for AI Governance and the Road Ahead

2024 has been a landmark year in AI governance, with the EU introducing the AI Act, the General Product Safety Regulation (GPSR), and the updated Product Liability Directive. The UK hosted the first Global AI Safety Summit, setting the stage for ...

Start Reading
AI - Artificial Intelligence, AI Trust 25 09 2024

Understanding the First-Ever International AI Treaty: A Legal Milestone

While artificial intelligence (AI) has undoubtedly revolutionized all sectors, it has also given rise to a host of ethical, legal, and human rights issues. Recognizing the urgent need for a comprehensive regulatory framework, the Council of Europe ...

Start Reading
AI - Artificial Intelligence, AI Trust 18 09 2024

Navigating the World of AI Assurance: Nemko's Strategic Move

The rapid advancement of artificial intelligence (AI) has sparked a global conversation on regulation. Governments and organizations worldwide are facing the challenge of balancing innovation with ethical and safety concerns, with the increased ...

Start Reading
AI - Artificial Intelligence, AI Trust 11 09 2024

Regulating Artificial Intelligence

On December 9, 2023, a significant milestone was achieved in the realm of artificial intelligence regulation. The European Union members, pioneering the effort, agreed on the EU AI Act. This legislation serves as the world's first dedicated law on ...

Start Reading
AI - Artificial Intelligence, AI Trust 04 09 2024

Ensuring a fair future: The crucial role of ethics in AI development

The Ethics Guidelines for Trustworthy AI form a critical framework for developing, deploying, and evaluating artificial intelligence systems in a manner that respects human rights, ensures safety, and fosters a fair and inclusive digital future. ...

Start Reading
AI - Artificial Intelligence, AI Trust, EU AI Act 28 08 2024

A Quick Dive into the EU AI Act: Key Insights & Implications

In a significant step towards regulating artificial intelligence (AI), the European Union (EU) has officially published the AI Act, setting the first comprehensive legal framework for AI by a major global economy. The AI Act introduces a ...

Start Reading
AI - Artificial Intelligence, AI Trust, ISO/IEC 42001 21 08 2024

Navigating the Future of Artificial Intelligence with ISO 42001: A Guide for Businesses

This article aims to demystify ISO/IEC 42001:2023 for businesses, highlighting its importance, applicability, and the key steps to implementation, with a specific focus on how Nemko can facilitate this process, ensuring organizations can harness ...

Start Reading
AI - Artificial Intelligence, AI Trust, AI Safety and Robustness 14 08 2024

Keeping AI in Check: The Critical Role of Human Agency and Oversight

In an era where artificial intelligence (AI) technologies play a pivotal role in various sectors, the need for ethical guidelines and human oversight has never been more critical. For organizations venturing into the development and deployment of AI ...

Start Reading
AI - Artificial Intelligence, AI Trust, AI Safety and Robustness 07 08 2024

The Foundations of AI Safety: Exploring Technical Robustness

In an era where artificial intelligence (AI) is not just a buzzword but a backbone of innovation across industries, understanding the pillars of AI safety and technical robustness has never been more crucial. AI systems, from simple predictive ...

Start Reading
AI - Artificial Intelligence, AI Trust 31 07 2024

Transparency in AI as a Competitive Advantage

With AI influencing almost every part of our lives, transparency has become crucial for ethical technology. As AI systems increasingly influence decisions in healthcare, finance, and beyond, the imperative for transparency in how these systems ...

Start Reading
AI - Artificial Intelligence, AI Trust, Data Privacy 24 07 2024

Mastering AI Privacy and Data Governance

In today's era of digital transformation, where data holds as much value as oil, ensuring privacy and implementing strong data governance are crucial for establishing Trustworthy AI. As businesses integrate AI into their operations, effectively ...

Start Reading
AI - Artificial Intelligence, AI Trust, AI Safety and Robustness 17 07 2024

Diversity, Non-Discrimination, and Fairness in AI Systems

Companies striving to innovate and harness the power of AI also face the challenge of ensuring that their AI systems are developed and deployed ethically. Part of the ethical discussion involves talking about ...

Start Reading
AI - Artificial Intelligence, AI Trust, Digital Trust 10 07 2024

Environmental and Societal Wellbeing: A Key Requirement for Trustworthy AI

In an era where artificial intelligence (AI) reshapes industries and societies, the importance of developing and deploying AI systems ethically has never been more crucial. The Ethics Guidelines for Trustworthy AI emphasise not only the importance ...

Start Reading
Nemko Digital

Webinars Library

Welcome to the Webinars Library, where you can access replays of all our previous webinars and register for upcoming ones.


Past Webinar, 2025-06-24 AI Trust Mark - Providing a Global Framework to Assess Trust in AI Products

Ready to certify your AI products for global markets? This essential webinar introduces the AI Trust Mark - a comprehensive certification combining EU AI Act, ISO 42001, and NIST frameworks. Monica Fernandez and Stuart Beck guide you through process-focused assessments and flexible certification paths. Learn how organizations achieve regulatory readiness, mitigate AI risks, and build market trust through standardized governance. Whether preparing for upcoming regulations or enhancing AI maturity, discover actionable strategies for trustworthy AI development.

Past Webinar, 2025-06-12 AI Literacy - From Awareness to Action

As AI becomes essential for business competitiveness, organizations struggle with workforce readiness and responsible implementation. This webinar provides Nemko Digital's proven 4-step AI literacy methodology covering technical skills, governance, ethics, and business application. The framework helps companies avoid costly failures while building innovation capabilities and ensuring regulatory compliance.

Past Webinar, 2025-05-27 The EU AI Act in Focus: Making Sense of Conformity Assessments

Discover how to navigate the EU AI Act’s new conformity requirements—on demand! Join Nemko Digital’s experts for a concise webinar that demystifies risk categorization, CE marking, and assessment routes for high-risk AI. Watch now to kickstart your compliance journey and secure your AI Trustmark.

Past Webinar, 2025-05-09 ISO 42001 AI Management System Explained

Learn how to implement ISO 42001 for responsible AI governance with insights from Nemko Digital experts on certification steps, business benefits, and risk management.

Past Webinar, 2025-04-22 AI Maturity for Compliance-Critical Products

Watch our insightful 30-minute webinar followed by live Q&A, where we explore the concept of AI Maturity—what it means, why it matters, and how to benchmark your organization's readiness across eight critical categories.

Past Webinar, 2025-04-17 AI Trust Mark: Providing a global framework to assess trust in AI products

Discover how to demonstrate your AI's trustworthiness with Nemko Digital's AI Trust Mark certification. This 30-minute webinar reveals a practical framework aligned with global standards that can give your AI products a competitive edge.

Past Webinar, 2025-04-11 AI Literacy: Future-Proofing Your Workforce

Watch Nemko Digital's expert webinar on implementing AI literacy programs to upskill your workforce, define AI roles, and ensure responsible AI deployment in line with EU AI Act requirements.

Past Webinar, 2025-04-11 The EU AI Act: Key Actions for 2025

Stay ahead of AI regulations with expert insights on the EU AI Act. Learn how to classify your AI systems, meet compliance requirements, and protect market access before enforcement begins.

Past Webinar, 2025-04-11 ISO 42001:2023 – Artificial Intelligence Management System Webinar

Discover the essentials of ISO 42001 certification and its impact on AI governance and compliance in our latest Nemko Digital webinar, featuring expert insights and practical case studies.

Past Webinar, 2025-04-11 Introduction to AI Governance and Nemko Digital's Services

This is the first in our series of on-demand AI trust webinars. Nemko's Head of AI Assurance, Mónica Fernández Peñalver, leads you through the latest developments in the regulatory landscape of AI.

Past Webinar, 2025-04-11 The EU AI Act: What to expect

Part of our series of on-demand AI trust webinars. Nemko's Head of AI Assurance, Mónica Fernández Peñalver, leads you through the latest developments in the regulatory landscape of AI.


Digital Portfolio

Service Brochure

Newsletters

February 2025
January 2025


  • Policy
    Author
    Area/Country
    Policy Type
    Details
  • A Pro-Innovation Approach to AI Regulation
    The UK Secretary of State for Science, Innovation and Technology
    UK
    Policy guidance
    Details
  • AB 1651 Workplace Technology Accountability Act
    California State Assembly
    USA
    Regulative proposal
    Details
  • AB 331 Automated Decision Tool
    California State Assembly
    USA
    Regulative proposal
    Details
  • AI Liability Directive (AILD)
    European Commission
    EU
    Regulative proposal [WITHDRAWN]
    Details
  • AI Security Concerns in a Nutshell
    German Federal Office for Information Security (BSI)
    Germany
    Regulative proposal
    Details
  • Algorithmic Transparency Recording Standard
    Central Digital and Data Office and Centre for Data Ethics and Innovation
    UK
    Policy guidance
    Details
  • EU Artificial Intelligence Act (AI Act)
    European Commission
    EU
    Regulation
    Details
  • Blueprint for an AI Bill of Rights
    The White House
    USA
    Policy guidance
    Details
  • California Consumer Privacy Act (CCPA)
    California State Legislature
    USA
    Regulation
    Details
  • Children's Code
    Information Commissioner's Office (ICO)
    UK
    Supervisory guidance
    Details
  • Data Act
    European Commission
    EU
    Regulative proposal
    Details
  • Data Ethics Requirements for RM6200 Artificial Intelligence Suppliers
    Crown Commercial Service
    UK
    Standard
    Details
  • Data Governance Act
    European Commission
    EU
    Regulation
    Details
  • Digital Services Act (DSA)
    European Commission
    EU
    Regulation
    Details
  • Ethics Guidelines for Trustworthy AI
    European Commission
    EU
    Policy guidance
    Details
  • European Health Data Space Regulation
    European Commission
    EU
    Regulative proposal [ADOPTED]
    Details
  • Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence
    The White House
    USA
    Executive order [REVOKED]
    Details
  • General Data Protection Regulation (GDPR)
    European Commission
    EU
    Regulation
    Details
  • Guidance to Civil Servants on Use of Generative AI
    The Cabinet Office
    UK
    Policy guidance
    Details
  • Maryland Facial Recognition Law
    Maryland General Assembly
    USA
    Regulation
    Details
  • NIST AI Risk Management Framework (AI RMF)
    National Institute of Standards and Technology (NIST)
    USA
    Policy guidance
    Details
  • New York City Automated Employment Decision Tool Law
    New York City Department of Consumer and Worker Protection
    USA
    Regulation
    Details
  • Medical Device Regulation (MDR)
    European Commission
    EU
    Regulation
    Details
  • OECD AI principles
    OECD
    Worldwide
    Policy guidance
    Details
  • Recommendation on the Ethics of Artificial Intelligence
    UNESCO
    Worldwide
    Policy guidance
    Details
  • Responsible Practices for Synthetic Media
    Partnership on AI
    Worldwide
    Policy guidance
    Details
  • The Act Concerning Artificial Intelligence, Automated Decision-Making and Personal Data Privacy
    Connecticut General Assembly
    USA
    Regulative proposal
    Details
  • The Artificial Intelligence Video Interview Act
    Illinois General Assembly
    USA
    Regulation
    Details
  • UNICEF Policy Guidance on AI for Children
    UNICEF
    Worldwide
    Policy guidance
    Details
  • NJ A4909
    New Jersey State Assembly
    USA
    Regulative proposal
    Details
  • NYC A00567
    New York State Assembly
    USA
    Regulative proposal
    Details
  • Stop Discrimination by Algorithms Act
    DC Council
    USA
    Regulative proposal
    Details
  • Federal AI Governance and Transparency Act
    The House Oversight Committee
    USA
    Regulative proposal
    Details
  • Utah Artificial Intelligence Policy Act (SB 149)
    Utah State Legislature
    USA
    Regulation
    Details
  • Living Guidelines on the Responsible Use of Generative AI in Research
    European Commission
    EU
    Policy guidance
    Details
  • AI Treaty - The Framework Convention on AI
    Council of Europe
    EU
    Regulation
    Details
  • Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile
    National Institute of Standards and Technology (NIST)
    USA
    Policy guidance
    Details
  • National Artificial Intelligence Policy Framework
    Department of Communications and Digital Technologies
    South Africa
    Policy guidance
    Details
  • South Korean AI Basic Act
    Ministry of Science and ICT
    South Korea
    Regulation
    Details


  • Title
    Subject Matter
    Status
    Audience
    AI Aspect
    Details
  • ISO/IEC 42001:2023
    Management system
    Published
    General, Providers, Developers, User
    All
    Details
  • ISO/IEC 24027:2021
    Bias in AI systems and AI-aided decision making
    Published
    Developer, Evaluator
    Fairness
    Details
  • ISO/IEC 24028:2022
    Overview of trustworthiness in artificial intelligence
    Published
    General, Developer, Evaluator, User
    All
    Details
  • ISO/IEC TR 24029-1:2021
    Assessment of the robustness of neural networks - Part 1: Overview
    Published
    Developer, Evaluator
    Robustness and Safety
    Details
  • ISO/IEC TR 24030:2024
    Use cases
    Published
    General, Developer, Evaluator, User
    N/A
    Details
  • ISO/IEC TR 24372:2021
    Overview of computational approaches for AI systems.
    Published
    General, Developer, Evaluator
    N/A
    Details
  • ISO/IEC 38507:2023
    Governance implications of the use of artificial intelligence by organizations
    Published
    General, Developer, Evaluator, User
    All
    Details
  • ISO/IEC 22989:2022
    Artificial intelligence concepts and terminology
    Published
    General, Developer, Evaluator, User
    All
    Details
  • ISO/IEC 23053:2022
    Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
    Published
    General, Developer, Evaluator, User
    All
    Details
  • ISO/IEC 23894:2023
    Guidance on risk management
    Published
    General, Developer, Evaluator, User
    All
    Details
  • ISO/IEC 5259-1:2024
    Part 1: Overview, terminology and examples
    Published
    Developer
    Robustness, Safety, Data Governance
    Details
  • ISO/IEC 5259-2
    Part 2: Data quality measures
    Under publication
    Developer
    Robustness, Safety, Data Governance
    Details
  • ISO/IEC 5259-3:2024
    Part 3: Data quality management requirements and guidelines
    Published
    Developer
    Robustness, Safety, Data Governance
    Details
  • ISO/IEC 5259-4:2024
    Part 4: Data quality process framework
    Published
    Developer
    Robustness, Safety, Data Governance
    Details
  • ISO/IEC 5259-5
    Part 5: Data quality governance
    Under publication
    Developer
    Robustness, Safety, Data Governance
    Details
  • ISO/IEC 5338:2023
    AI system life cycle processes
    Published
    General, Developer, Evaluator
    All
    Details
  • ISO/IEC 5339:2024
    Guidance for AI applications
    Published
    General, Developer, Evaluator, User
    All
    Details
  • ISO/IEC 5392:2024
    Reference architecture of knowledge engineering
    Published
    Developer
    N/A
    Details
  • ISO/IEC 5469:2024
    Functional safety and AI systems
    Published
    General, Developer, Evaluator, User
    Safety
    Details
  • ISO/IEC 6254
    Objectives and approaches for explainability of ML models and AI systems
    Under development
    General, Developer, Evaluator, User
    Transparency
    Details
  • ISO/IEC 8183:2023
    Data life cycle framework
    Published
    General, Developer, Evaluator
    Data Governance
    Details
  • ISO/IEC 8200:2024
    Controllability of automated AI systems
    Published
    General, Developer, Evaluator, User
    Human Agency and Oversight
    Details
  • ISO/IEC 12791.2
    Treatment of unwanted bias in classification and regression machine learning tasks
    Under development
    Developer, Evaluator
    Fairness
    Details
  • ISO/IEC 12792
    Transparency taxonomy of AI systems
    Under development
    General, Developer, Evaluator, User
    Transparency
    Details
  • ISO/IEC 24368:2022
    Overview of ethical and societal concerns
    Published
    General, Developer, Evaluator, User
    All
    Details
  • ISO/IEC 24668:2022
    Process management framework for big data analytics
    Published
    General, Developer, Evaluator
    Robustness, Safety, Data Governance
    Details
  • ISO/IEC 4213:2022
    Assessment of machine learning classification performance
    Published
    Developer, Evaluator
    Robustness, Safety
    Details
  • ISO/IEC 27563:2023
    Best practices
    Published
    General, Developer, Evaluator
    Data Governance, privacy, security
    Details
  • ISO/IEC 17847
    Verification and validation analysis of AI systems
    Under development
    Developer, Evaluator
    Robustness and Safety
    Details
  • Data Ethics Requirements for RM6200 Artificial Intelligence Suppliers
    N/A
    Published
    Suppliers
    Data Governance
    Details
  • Algorithmic Transparency Recording Standard
    N/A
    Published
    General
    Transparency
    Details
  • IEEE 2801:2022
    Recommended Practice for the Quality Management of Datasets for Medical Artificial Intelligence
    Published
    Developer
    Privacy and Data Governance
    Details
  • IEEE 2802:2022
    Standard for Performance and Safety Evaluation of Artificial Intelligence Based Medical Devices: Terminology
    Published
    Developer
    Robustness and Safety
    Details
  • IEEE 7000:2021
    Standard Model Process for Addressing Ethical Concerns during System Design
    Published
    Developer
    Fairness, Societal and Environmental Wellbeing
    Details
  • IEEE 7001:2021
    Standard for Transparency of Autonomous Systems
    Published
    Developer, Evaluator
    Transparency
    Details
  • DIN SPEC 92001-1
    Published
    General, Developers
    Quality, Safety
    Details
  • DIN SPEC 92001-2
    Published
    General, Developer, Provider
    Robustness, System lifecycle, System quality
    Details
  • IEEE 2801
    Published
    General, Providers, Developers
    All
    Details
  • IEEE 7000
    Published
    General, Developers
    Accountability, Bias and discrimination, Explainability and transparency, Privacy, Project management, Risk management, Stakeholder engagement and communication, Sustainability
    Details
  • NIST AI RMF 1.0
    Published
    General, Developers, Providers
    Process, Management and Governance
    Details
  • ISO/IEC Guide 51:2014
    Published
    Standardization
    Safety
    Details
  • ISO/IEC 27001:2022
    Published
    Developers
    Cybersecurity
    Details
  • Term
    Definition
    Source
  • Accessibility
    Extent to which products, systems, services, environments and facilities can be used by people from a population with the widest range of user needs, characteristics and capabilities to achieve identified goals in identified contexts of use (which includes direct use or use supported by assistive technologies).
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Accountability
    the ability to explain and justify one's actions.
    EU AI Act
  • Adaptive learning
    An adaptive AI is a system that changes its behaviour while in use. Adaptation may entail a change in the weights of the model or a change in the internal structure of the model itself. The new behaviour of the adapted system may produce different results than the previous system for the same inputs.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Adversarial machine learning (adversarial attack)
    A practice concerned with the design of ML algorithms that can resist security challenges, the study of the capabilities of attackers, and the understanding of attack consequences. Inputs in adversarial ML are purposely designed to make a mistake in its predictions despite resembling a valid input to a human.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • (AI) accuracy
    Closeness of computations or estimates to the exact or true values that the statistics were intended to measure. The goal of an AI model is to learn patterns that generalise well for unseen data. It is important to check if a trained AI model is performing well on unseen examples that have not been used for training the model. To do this, the model is used to predict the answer on the test dataset and then the predicted target is compared to the actual answer. The concept of accuracy is used to evaluate the predictive capability of the AI model. Informally, accuracy is the fraction of predictions the model got right. A number of metrics are used in machine learning (ML) to measure the predictive accuracy of a model. The choice of the accuracy metric to be used depends on the ML task. (A worked example follows this glossary.)
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • AI (or algorithmic) bias
    Harmful AI bias describes systematic and repeatable errors in AI systems that create unfair outcomes, such as placing privileged groups at systematic advantage and unprivileged groups at systematic disadvantage. Different types of bias can emerge and interact due to many factors, including but not limited to, human or system decisions and processes across the AI lifecycle. Bias can be present in AI systems resulting from pre-existing cultural, social, or institutional expectations; because of technical limitations of their design; by being used in unanticipated contexts; or by non-representative design specifications.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • AI regulatory sandbox
    a controlled environment set by authorities for testing innovative AI systems under supervision for a limited time.
    EU AI Act
  • AI system
    software that can, for a given set of input data, generate outputs such as predictions, decisions, or recommendations without being explicitly programmed for each specific input-output mapping.
    EU AI Act
  • Algorithm
    An algorithm consists of a set of step-by-step instructions to solve a problem (e.g., not including data). The algorithm can be abstract and implemented in different programming languages and software libraries.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Artificial Intelligence Office
    a part of the European Commission that oversees AI systems, models, and governance.
    EU AI Act
  • Attack
    Action targeting a learning system to cause malfunction.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Auditability of an AI system
    Auditability refers to the ability of an AI system to undergo the assessment of the system’s algorithms, data and design processes. This does not necessarily imply that information about business models and Intellectual Property related to the AI system must always be openly available. Ensuring traceability and logging mechanisms from the early design phase of the AI system can help enable the system's auditability.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Autonomy (autonomous AI system)
    Systems that maintain a set of intelligence-based capabilities to respond to situations that were not pre-programmed or anticipated (i.e., decision-based responses) prior to system deployment. Autonomous systems have a degree of self-government and self-directed behaviour (with the human’s proxy for decisions).
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Big data
    An all-encompassing term for large, complex digital data sets that need equally complex technological means to be stored, analysed, managed and processed with substantial computing power. Datasets are sometimes linked together to see how patterns in one domain affect other areas. Data can be structured into fixed fields or unstructured as free-flowing information. The analysis of big datasets, often using AI, can reveal patterns, trends, or underlying relationships that were not previously apparent to researchers.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Biometric categorisation system
    an AI system that groups people based on their biometric data, unless it's a necessary part of another service.
    EU AI Act
  • Biometric data
    Personal data from physical, physiological, or behavioural traits used to identify a person, like fingerprints or facial recognition.
    EU AI Act
  • Biometric identification
    Using AI to recognize a person's unique physical or behavioural traits to confirm who they are by comparing their data to a database.
    EU AI Act
  • Biometric verification
    confirm someone's identity by comparing their biometric data to previously stored data (one-to-one verification, including authentication).
    EU AI Act
  • CE marking of conformity (CE marking)
    A symbol that shows an AI system meets specific European Union regulations and standards for being sold.
    EU AI Act
  • Chatbot (conversational bot)
    A computer program designed to simulate conversation with a human user, usually over the internet; especially one used to provide information or assistance to the user as part of an automated service.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Classification
    A classification system is a set of “boxes” into which things are sorted. Classifications are consistent, have unique classificatory principles, and are mutually exclusive. In AI design, when the output is one of a finite set of values (such as sunny, cloudy or rainy), the learning problem is called classification, and is called Boolean or binary classification if there are only two values.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Classifier
    A model that predicts (or assigns) class labels to data input.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Data poisoning
    A type of security attack where malicious users inject false training data with the aim of corrupting the learned model, thus making the AI system learn something that it should not learn.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Data protection and transparency
    the obligations on providers and users of AI systems to protect personal data and to be transparent about the operation of their systems.
    EU AI Act
  • Deep learning
    A subset of machine learning based on artificial neural networks that employs statistics to spot underlying trends or data patterns and applies that knowledge to other layers of analysis. Some have labelled this as a way to “learn by example” and as a technique that “perform[s] classification tasks directly from images, text, or sound” and then applies that knowledge independently.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Deployer
    an organisation using an AI system under their authority except for personal non-professional use
    EU AI Act
  • Differential privacy
    Differential privacy is a method for measuring how much information the output of a computation reveals about an individual. It produces data analysis outcomes that are nearly equally likely, whether any individual is, or is not, included in the dataset. Its goal is to obscure the presence or absence of any individual (in a database), or small groups of individuals, while at the same time preserving statistical utility. (A Laplace-mechanism sketch follows this glossary.)
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Discrimination
    Unequal treatment of a person based on belonging to a category rather than on individual merit. Discrimination can be a result of societal, institutional and implicitly held individual biases or attitudes that get captured in processes across the AI lifecycle, including by AI actors and organisations, or represented in the data underlying AI systems. Discrimination biases can also emerge due to technical limitations in hardware or software, or the use of an AI system that, due to its context of application, does not treat all groups equally. Discriminatory biases can also emerge in the very context in which the AI system is used. As many forms of biases are systemic and implicit, they are not easily controlled or mitigated and require specific governance and other similar approaches.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Distributor
    an organisation, other than the provider or importer, that makes an AI system available in the EU
    EU AI Act
  • Downstream provider
    a provider of an AI system, including a general-purpose AI system, which integrates an AI model, regardless of whether the model is provided by themselves and vertically integrated or provided by another entity based on contractual relations.
    EU AI Act
  • EU Declaration of Conformity
    a document that states that an AI system has been assessed and found to meet the requirements of the EU AI Act.
    EU AI Act
  • Evaluation
    Systematic determination of the extent to which an entity meets its specified criteria.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Federated learning
    Federated learning is a machine learning approach which addresses the problem of data governance and privacy by training algorithms collaboratively without transferring the data to another location. Each federated device shares its local model parameters instead of sharing the whole dataset used to train it, and the federated learning topology defines the way parameters are shared. (A parameter-averaging sketch follows this glossary.)
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Floating-point operation
    any mathematical operation or assignment involving floating-point numbers, which are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent of a fixed base.
    EU AI Act
  • Fundamental rights and values
    the rights and values enshrined in the Charter of Fundamental Rights of the European Union.
    EU AI Act
  • General purpose AI model
    an AI model that can do many different tasks well, regardless of how it's used or integrated into other systems or applications.
    EU AI Act
  • General purpose AI system
    an AI system which is based on a general purpose AI model, that has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems;
    EU AI Act
  • Generative adversarial network (GAN)
    Generative Adversarial Networks, or GANs for short, are an approach to generative modelling using deep learning methods, such as convolutional neural networks. Generative modelling is an unsupervised learning task in machine learning that involves automatically discovering and learning the regularities or patterns in input data in such a way that the model can be used to generate or output new examples that plausibly could have been drawn from the original dataset.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Harm
    any negative impact on individuals, society, or the environment.
    EU AI Act
  • High-risk AI system
    any AI system that is likely to cause or increase harm to individuals, society, or the environment.
    EU AI Act
  • Human oversight
    the requirement for high-risk AI systems to have mechanisms in place to allow humans to override the decisions of the systems and to monitor their operation.
    EU AI Act
  • Human values for AI
    Values are idealised qualities or conditions in the world that people find good. AI systems are not value-neutral. The incorporation of human values into AI systems requires that we identify whether, how and what we want AI to mean in our societies. It implies deciding on ethical principles, governance policies, incentives, and regulations. And it also implies that we are aware of differences in interests and aims behind AI systems developed by others according to other cultures and principles.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Importer
    an organisation importing and selling into the EU market an AI system that bears the name or trademark of an organisation located outside the EU
    EU AI Act
  • Input data
    data provided to or directly acquired by an AI system on the basis of which the system produces an output.
    EU AI Act
  • Intended purpose
    the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation.
    EU AI Act
  • Law enforcement
    activities by authorities to prevent, investigate, or prosecute crimes, or to protect public safety.
    EU AI Act
  • Non-high-risk AI system
    any AI system that is not likely to cause or increase harm to individuals, society, or the environment.
    EU AI Act
  • Non-personal data
    data other than personal data
    EU AI Act
  • Notified Body
    an independent organization that is authorized by the EU Commission to assess the conformity of AI systems.
    EU AI Act
  • Operator
    the provider, the product manufacturer, the deployer, the authorised representative, the importer or the distributor
    EU AI Act
  • Personal data
    Information that can identify a person, as defined by article 4 of the GDPR
    EU AI Act
  • Post-market monitoring
    the obligation on providers of AI systems to monitor the performance of their systems and to take corrective action if necessary.
    EU AI Act
  • Post-market monitoring system
    Activities by AI system providers to collect feedback from users to see if any improvements or fixes are needed.
    EU AI Act
  • Post remote biometric identification system
    a system that identifies people but not in real-time, so there's some delay.
    EU AI Act
  • Prior conformity assessment
    the process of having an AI system assessed by an independent body to verify that it complies with the requirements of the EU AI Act.
    EU AI Act
  • Provider
    an organisation developing or commissioning the development of an AI system and selling or putting into service under their own name or trademark
    EU AI Act
  • Real-time remote biometric identification system
    a system that identifies people almost immediately without delay.
    EU AI Act
  • Reasonably foreseeable misuse
    the use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, including other AI systems.
    EU AI Act
  • Remote biometric identification system
    an AI system that identifies people from a distance without their active participation by comparing their biometric data to a database.
    EU AI Act
  • Safety
    the state of not being at risk of harm.
    EU AI Act
  • Scalability
    The ability to increase or decrease the computational resources required to execute a varying volume of tasks, processes, or services.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Socio-technical system
    Technology is always part of society, just like society is always part of technology. This also means that one cannot understand one without the other. Technology is not only design and material appearance but also sociotechnical; that is, a complex process constituted by diverse social, political, economic, cultural and technological factors.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Standard
    Standards are a set of institutionalised, agreed-upon rules for the production of (textual or material) objects. They are released by international organisations, ensure quality and safety, and set specifications for products and services. Standards are the result of negotiations among various stakeholders and are institutionalised and thus difficult to change.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Substantial modification
    A change to an AI system after it's been released that was not planned in the initial conformity assessment by the provider and affects how well it meets certain requirements or changes its intended use.
    EU AI Act
  • Technical interoperability
    The ability of software or hardware systems or components to operate together successfully with minimal effort by an end user.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Test
    Technical operation to determine one or more characteristics of or to evaluate the performance of a given product, material, equipment, organism, physical phenomenon, process or service according to a specified procedure.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Test and Evaluation, Verification and Validation (TEVV)
    A framework for assessing, incorporating methods and metrics to determine that a technology or system satisfactorily meets its design specifications and requirements, and that it is sufficient for its intended use.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Testing data
    data used for providing an independent evaluation of the AI system to confirm the expected performance
    EU AI Act
  • Training data
    data used for training an AI system through fitting its learnable parameters
    EU AI Act
  • Transparency
    the ability to be easily understood.
    EU AI Act
  • Unacceptable AI system
    any AI system that is likely to pose an unacceptable risk to human safety or fundamental rights and values.
    EU AI Act
  • Validation
    Confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use are fulfilled.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Validation data
    data used for providing an evaluation of the trained AI system and for tuning its non-learnable parameters and its learning process
    EU AI Act
  • Value sensitive design (values-by-design or ethics-by-design)
    A theoretically grounded approach to the design of technology that accounts for human values in a principled and systematic manner throughout the design process.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
  • Verification
    Provides evidence that the system or system element performs its intended functions and meets all performance requirements listed in the system performance specification.
    EU-U.S. Terminology and Taxonomy for Artificial Intelligence
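
The worked examples referenced in the glossary entries above follow here. First, the "(AI) accuracy" entry describes accuracy informally as the fraction of predictions a model gets right on held-out test data. The short Python sketch below illustrates that calculation; the function name and toy labels are illustrative assumptions and do not come from any of the standards or regulations listed above.

```python
# Minimal sketch: accuracy as the fraction of test-set predictions that
# match the true labels (labels and predictions here are toy values).

def accuracy(y_true, y_pred):
    """Return the fraction of predictions that exactly match the true labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

y_true = [1, 0, 1, 1, 0]            # held-out labels the model never saw during training
y_pred = [1, 0, 0, 1, 0]            # predictions from a trained model
print(accuracy(y_true, y_pred))     # 0.8 -> 4 of 5 test examples predicted correctly
```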
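
Second, the "Differential privacy" entry states that a differentially private computation should produce nearly identical outputs whether or not any single individual is included in the dataset. One common way to achieve this is the Laplace mechanism; the sketch below applies it to a simple count query, and the function name and parameters are hypothetical rather than part of the quoted definition.

```python
import numpy as np

def dp_count(records, epsilon=1.0):
    """Differentially private count query using the Laplace mechanism.

    A count changes by at most 1 when one person is added or removed
    (sensitivity 1), so Laplace noise with scale 1/epsilon makes the released
    value nearly equally likely whether or not any single individual is present.
    """
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return len(records) + noise

records = ["alice", "bob", "carol"]       # toy dataset
print(dp_count(records, epsilon=0.5))     # noisy count; smaller epsilon = more noise, more privacy
```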
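
Third, the "Federated learning" entry explains that each participating device shares its local model parameters rather than its raw data. The sketch below shows a FedAvg-style weighted average of client parameter vectors, one common aggregation strategy; the function, client sizes, and parameter shapes are hypothetical.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Combine locally trained parameter vectors without moving raw data.

    Each client shares only its model parameters; the server averages them,
    weighted by the size of each client's local dataset (FedAvg-style).
    """
    shares = np.array(client_sizes) / sum(client_sizes)   # each client's contribution
    stacked = np.stack(client_weights)                     # shape: (n_clients, n_params)
    return (stacked * shares[:, None]).sum(axis=0)         # weighted parameter average

local_models = [np.random.rand(4) for _ in range(3)]       # three clients, 4 parameters each
global_model = federated_average(local_models, client_sizes=[100, 250, 50])
print(global_model)
```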
What is AI Assurance?

AI Assurance refers to the process and methodologies used to ensure that AI systems operate safely, ethically, and in compliance with relevant regulations and standards.

How can I start learning about AI?

Go to our Knowledge Hub of AI Assurance and begin with our "Introduction to AI" section, which covers fundamental concepts such as the agreed definition and properties of AI.

What is AI ethics?

AI ethics involves the study and evaluation of ethical problems associated with AI and automated systems, including issues of bias, privacy, accountability, and the impact on society. In our knowledge hub, we dive into specific concepts of Trustworthy AI.

What are the latest regulatory developments in the US and EU regarding AI?

Our libraries are continuously updated with the latest regulatory developments in both the US and EU. Check the Regulations Library for the most current information.

How do US and EU regulations on AI differ?

The US focuses on sector-specific guidelines and fostering innovation, while the EU aims for a comprehensive, risk-based regulatory framework emphasizing ethical standards and societal values.

What standards and normative frameworks exist for AI?

Explore our library of standards and normative frameworks to understand the international and regional guidelines established for AI development and deployment.

How can standards help in achieving AI assurance?

Standards provide a baseline for quality, safety, and ethics in AI systems, helping organizations align their AI initiatives with best practices and regulatory requirements.

Where can I find regulations and standards related to AI?

Our Libraries section offers comprehensive access to regulations, standards, audit catalogues, and toolkits relevant to AI.

Can I contribute to the libraries?

Yes, contributions are welcome! Please contact our team for guidelines on contributing to our collaborative environment. You can add your inputs in this form.

How can the AI trust agent help me?

Our AI trust agent can provide instant answers to your questions about AI assurance, guide you through the environment, and suggest resources related to your queries. Note that the agent is currently in its beta version and steady improvements will be made over the coming weeks.

What should I do if the chatbot can't answer my question?

If the chatbot is unable to provide an answer, it will suggest additional resources for further research.

How often is the content in the AI Trust Hub updated?

We strive to keep our content current with the latest developments in AI assurance. The environment is reviewed and updated regularly.

What is the purpose of the World Map?

The World Map helps you visualize the state of AI governance across various countries at a glance. It is color-coded to distinguish between countries with specific AI policies (dark blue) and those subject to general global policies (light blue).

What kind of content can I expect in the News Feed?

We will be regularly populating and updating the News Feed with content related to the latest AI policy developments around the world, as well as with information about our newest features and services provided by Nemko Digital.

What is the AI Governance self-assessment tool?

The AI Governance self-assessment tool allows you to perform a preliminary evaluation of the extent of AI governance in your organization, identifying opportunities for improvement in one or more areas. It is compatible with Nemko’s AI Trust services, which can help you increase your self-assessment score and achieve effective and compliant AI governance within your organization.

Book Your Free Consultation Call

Ready to elevate your AI product’s trustworthiness and compliance?
Don’t wait - connect with our experts for a free 15-minute consultation call to discuss how our trusted services can help your business thrive in today's competitive landscape.