Artificial Intelligence (AI) is increasingly shaping how organisations operate, make decisions, and engage with customers and stakeholders. What was once confined to advanced analytics and automation is now embedded across core business processes, from risk assessment and forecasting to compliance monitoring, customer interaction, and decision support. As AI capabilities mature and adoption accelerates, firms are faced with a fundamental question: how can AI be deployed at scale without undermining proper governance, reliability, and accountability?

From financial services to retail, AI is no longer experimental. It is becoming a critical business capability. With this shift comes a corresponding evolution in risk. AI systems are data-driven, adaptive, and often opaque. Their outputs may influence decisions directly, whether by advising customers on actions or by automating tasks previously performed by humans. Ensuring that such systems remain well-governed and fit for purpose has therefore become a strategic imperative rather than a purely technical consideration.

Defining the scope: not all AI carries the same risk

Before considering assurance, it is important to distinguish between the different types of AI in scope. The recent surge in Generative AI (Gen AI) has blurred long-established distinctions in model risk discussions, yet different AI approaches exhibit materially different risk profiles.

Predictive machine learning models have been used for many years to support decision-making and forecasting. These models analyse historical data to classify events or estimate outcomes, such as credit risk parameters, fraud indicators, or pricing inputs. The associated risks are well understood: model stability over time, bias, explainability constraints, and the potential “black-box” nature of complex algorithms.

Gen AI introduces a different set of considerations. Rather than predicting outcomes, these systems generate new content, such as text, code, or images, based on training patterns. While powerful, they raise additional risks around hallucinated or misleading outputs, as well as conduct and reputational exposure, particularly in customer-facing or advisory contexts.

In practice, modern AI solutions often combine multiple components into integrated systems that influence – or in some cases execute – decisions. The key implication is that AI is not one-size-fits-all. Effective assurance must therefore be proportionate and outcome-focused, reflecting how AI is actually used and where the material risks arise.

 

Why this evolution matters

As AI systems move from analytical tools to embedded decision enablers, their value increases, but so does their complexity and risk exposure. Importantly, the most significant risks rarely stem from model performance alone. They arise from how outputs are interpreted, how they influence decisions, and how transparency is maintained as reliance on AI evolves.

This shift challenges traditional control frameworks. Validation, IT testing, and compliance reviews each address important aspects of risk, but they are not sufficient on their own to manage the full implications of modern AI. What is required is a broader assurance perspective that covers governance considerations and real-world use-case scenarios alongside technical performance.

 

Regulatory and supervisory expectations

Regulatory expectations are evolving in parallel with AI adoption. Supervisors have been clear that the use of advanced analytics or AI does not diminish an organisation’s responsibility to understand and control the tools that influence decisions and outcomes. These principles are not limited to banks. They are increasingly relevant across sectors where AI systems materially influence decisions, customers, or stakeholders.

At the same time, AI-specific regulation is emerging. The EU Artificial Intelligence Act introduces a risk-based framework for AI, with enhanced obligations for systems used in high-impact contexts. These obligations emphasise data governance and quality, model robustness and reliability, transparency and explainability, human oversight, and accountability across the AI lifecycle as well as post-market monitoring. International guidance similarly converges on the concept of “trustworthy AI”, reinforcing expectations around bias and fairness, safety, and the prevention of unintended harm.

These developments signal a clear path: AI innovation must be supported by demonstrable governance and assurance.

 

Balancing opportunity and risk

The rapid adoption of AI is driven by clear benefits. When appropriately governed, AI can improve operational efficiency, enhance decision support, and enable organisations to scale expertise across functions. It can support more consistent application of policies and controls, improve internal workflows, and enhance both employee and customer interactions.

However, these benefits are inseparable from risk. AI systems may generate inaccurate or misleading outputs, embed biases, or behave unpredictably as data and conditions change. Over-reliance on automated outputs can weaken human judgement, while limited transparency can make it difficult to explain or defend decisions. In interactive contexts, inappropriate or unsafe outputs can quickly translate into conduct and reputational issues.

The challenge for organisations is therefore not whether to adopt AI, but how to balance reward against risk. This is the role of AI assurance.

 

An outcome-focused AI Assurance Framework

To support responsible AI adoption, Grant Thornton Risk Advisory has developed a structured AI Assurance Framework designed to operate across the full lifecycle, from design and deployment to ongoing use and monitoring. The framework is deliberately outcome-focused. It recognises that the most material risks often arise where AI outputs influence decisions and human interactions, rather than during model development alone. Our approach combines governance review with targeted model testing and performance procedures to provide defensible, risk-aligned assurance.

Effective data governance and input data quality form the foundation of any reliable AI system. In AI-driven solutions, outputs directly reflect the inputs provided; where data is incomplete, inconsistent, or outdated, unexpected outcomes may arise even when the model itself is performing as designed. Weak data controls can undermine outputs, embed bias, and expose organisations to compliance risks, particularly where AI is deployed at scale. Robust controls over data provenance, representativeness, and input validation are therefore essential to prevent misleading outputs, manage downstream risk, and ensure that performance issues are not incorrectly attributed to model behaviour when they stem from data quality limitations.
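
By way of illustration, a minimal input-quality gate might resemble the following Python sketch. The field names, ranges, and freshness limit are hypothetical placeholders rather than recommended values; the point is that completeness, plausibility, and recency are checked before data reaches the model.

```python
from datetime import datetime, timezone

# Hypothetical schema: expected fields, a plausible value range, and a
# freshness limit. Real values would be set per use case.
EXPECTED_FIELDS = {"customer_id": str, "income": float, "region": str}
INCOME_RANGE = (0.0, 10_000_000.0)  # illustrative plausibility bounds
MAX_AGE_DAYS = 90                   # reject records older than this

def validate_record(record: dict, as_of: datetime) -> list[str]:
    """Return a list of data-quality issues for one input record."""
    issues = []
    # Completeness and type checks.
    for field, ftype in EXPECTED_FIELDS.items():
        if field not in record or record[field] is None:
            issues.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            issues.append(f"wrong type for {field}: {type(record[field]).__name__}")
    # Plausibility (range) check.
    income = record.get("income")
    if isinstance(income, float) and not (INCOME_RANGE[0] <= income <= INCOME_RANGE[1]):
        issues.append(f"income out of range: {income}")
    # Freshness check against the record's own timestamp.
    ts = record.get("updated_at")
    if isinstance(ts, datetime) and (as_of - ts).days > MAX_AGE_DAYS:
        issues.append(f"stale record: {ts.isoformat()}")
    return issues

# Example: a stale, incomplete record is flagged before it reaches the model.
rec = {"customer_id": "C-123", "income": 52_000.0,
       "updated_at": datetime(2024, 1, 1, tzinfo=timezone.utc)}
print(validate_record(rec, datetime.now(timezone.utc)))
```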

Sound model design and development remain critical. As AI models evolve through retraining or configuration changes, organisations must retain visibility over how behaviour changes and be able to explain and evidence results under defined conditions. While development practices are important, many organisations rely on established third-party models and orchestration layers, shifting risk from model training to configuration and operational deployment. Assurance must therefore confirm that models are designed and applied within defined boundaries, aligned to business purpose, and supported by safeguards that promote controlled use and operational resilience across multiple use cases.
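
A simple deployment guard illustrates the point. The pinned model identifier, parameter cap, and approved use cases below are hypothetical; the sketch simply compares a live configuration against a change-controlled baseline.

```python
# Illustrative deployment guard: all baseline values are hypothetical.
APPROVED_CONFIG = {
    "model_version": "vendor-model-2024-06",  # pinned, change-controlled release
    "max_temperature": 0.3,                   # cap on output randomness
    "approved_use_cases": {"complaint_triage", "policy_summarisation"},
}

def check_deployment(config: dict, use_case: str) -> list[str]:
    """Compare a live configuration against the approved baseline."""
    findings = []
    if config.get("model_version") != APPROVED_CONFIG["model_version"]:
        findings.append("model version drifted from the approved, pinned release")
    if config.get("temperature", 0.0) > APPROVED_CONFIG["max_temperature"]:
        findings.append("temperature exceeds the approved cap")
    if use_case not in APPROVED_CONFIG["approved_use_cases"]:
        findings.append(f"use case '{use_case}' has not been approved")
    return findings

# Example: a drifted configuration used for an unapproved purpose.
print(check_deployment({"model_version": "vendor-model-2024-09",
                        "temperature": 0.7}, "marketing_copy"))
```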

Accuracy and reliability are essential to establish trust in AI outputs. This includes managing the risk of unsupported or misleading outputs, particularly in systems that generate content or propose recommendations. Organisations must establish clear performance metrics and acceptance thresholds to assess whether outputs are correct, complete, and supported by available evidence. Defined evaluation criteria enable consistent measurement of output quality, support informed decision-making, and provide a defensible basis for determining whether model performance meets operational expectations.
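
The sketch below illustrates one way such acceptance criteria might be codified. The exact-match metric and the 0.95 threshold are deliberately simplistic stand-ins for richer, use-case-specific scoring, but they show how a defined threshold turns output quality into a pass/fail decision.

```python
# Illustrative acceptance test: the threshold is hypothetical and would be
# set per use case and risk appetite.
ACCEPTANCE_THRESHOLD = 0.95

def exact_match(output: str, reference: str) -> bool:
    """Simplistic correctness metric; real evaluations would use richer scoring."""
    return output.strip().lower() == reference.strip().lower()

def evaluate_batch(outputs: list[str], references: list[str]) -> dict:
    """Score a batch of outputs and apply the acceptance threshold."""
    matches = sum(exact_match(o, r) for o, r in zip(outputs, references))
    accuracy = matches / len(outputs)
    return {"accuracy": accuracy, "passed": accuracy >= ACCEPTANCE_THRESHOLD}

result = evaluate_batch(
    ["Approved", "Declined", "approved "],
    ["Approved", "Referred", "Approved"],
)
print(result)  # accuracy of 2/3 falls below the threshold, so the batch fails
```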

Robustness and stability address how AI systems behave as data, usage patterns, or operating conditions change. Models that perform well at launch may degrade over time if not appropriately monitored and governed. Effective oversight requires a structured monitoring framework with defined metrics, thresholds, and review frequencies to detect performance degradation, unintended behavioural changes, or emerging risks. 
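
One widely used drift indicator is the Population Stability Index (PSI), which compares the score distribution at deployment with a recent sample. The sketch below shows the idea; the alert levels quoted are common rules of thumb, not prescribed values.

```python
import math

def psi(baseline: list[float], recent: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a recent sample.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 significant shift."""
    lo, hi = min(baseline), max(baseline)

    def shares(values: list[float]) -> list[float]:
        counts = [0] * bins
        for v in values:
            # Map each value to a bin over the baseline range, clamping outliers.
            idx = int((v - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[max(0, min(idx, bins - 1))] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(values), 1e-6) for c in counts]

    b, r = shares(baseline), shares(recent)
    return sum((ri - bi) * math.log(ri / bi) for bi, ri in zip(b, r))

scores_at_launch = [i / 100 for i in range(100)]
scores_today = [min(1.0, s + 0.15) for s in scores_at_launch]  # upward shift
print(f"PSI = {psi(scores_at_launch, scores_today):.3f}")  # well above 0.25
```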

Bias and fairness considerations are increasingly prominent. AI can unintentionally disadvantage certain groups (cohorts), even where sensitive attributes are not explicitly used. Assurance must therefore consider outcomes, not just design intent, examining both cohort-level performance parity and, where applicable, human-related bias risks, to ensure that outcomes remain consistent and that safeguards exist to prevent unintended discrimination.
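
As a simplified illustration, a cohort parity check might compare approval rates across groups. The cohort labels and the four-fifths (80%) rule of thumb below are hypothetical and would be calibrated with legal and compliance input.

```python
from collections import defaultdict

def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (cohort, approved) pairs; returns approval rate per cohort."""
    totals, approved = defaultdict(int), defaultdict(int)
    for cohort, ok in decisions:
        totals[cohort] += 1
        approved[cohort] += ok
    return {c: approved[c] / totals[c] for c in totals}

def parity_ratio(rates: dict[str, float]) -> float:
    """Ratio of the lowest to the highest cohort approval rate."""
    return min(rates.values()) / max(rates.values())

rates = approval_rates([("A", True), ("A", True), ("A", False),
                        ("B", True), ("B", False), ("B", False)])
ratio = parity_ratio(rates)
# Flag for review when the ratio falls below the illustrative 0.8 threshold.
print(rates, f"parity ratio = {ratio:.2f}", "flag" if ratio < 0.8 else "ok")
```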

Safety and toxicity are particularly relevant in interactive or customer-facing applications. Clear behavioural boundaries, safeguards, and escalation processes are essential to prevent inappropriate or harmful outputs. Establishing effective guardrails and monitoring practices supports responsible AI use and strengthens stakeholder confidence.
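
A minimal guardrail might look like the sketch below. The blocked-topic list and escalation response are illustrative; production systems would typically layer policy models, content filters, and human review on top of a simple check like this.

```python
# Hypothetical list of topics the assistant must not advise on directly.
BLOCKED_TOPICS = ("medical advice", "legal advice", "account credentials")

def guard_output(draft_reply: str) -> tuple[str, bool]:
    """Return (reply, escalated). Unsafe drafts are replaced and escalated."""
    lowered = draft_reply.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        safe_reply = ("I'm not able to help with that directly; "
                      "let me connect you with a colleague.")
        return safe_reply, True  # route to a human reviewer
    return draft_reply, False

reply, escalated = guard_output("Here is some legal advice about your claim...")
print(escalated, "->", reply)
```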

Finally, traceability, auditability, and reproducibility underpin accountability. Senior management, auditors, and regulators increasingly expect to understand how AI influences decisions and outcomes. This requires comprehensive audit trails, logging of inputs and outputs, version and prompt control, and documentation sufficient to reproduce and validate results under defined conditions. Strong traceability enhances transparency, supports reproducibility, and enables organisations to demonstrate control in an evolving regulatory landscape.
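
To illustrate, an audit record might capture the elements below. The field names are hypothetical, but the principle is to log enough context, including versions, prompts, inputs, and outputs, to reproduce a result under defined conditions.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version: str, prompt_id: str,
                 user_input: str, model_output: str) -> str:
    """Build one JSON audit entry for a single model interaction."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # pinned model release
        "prompt_id": prompt_id,          # version-controlled prompt template
        "input_sha256": hashlib.sha256(user_input.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(model_output.encode()).hexdigest(),
        "output": model_output,          # or a redacted form, per policy
    }
    return json.dumps(record)            # append to a tamper-evident log

print(audit_record("vendor-model-2024-06", "triage-prompt-v3",
                   "Customer complaint text...", "Category: billing dispute"))
```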

 

Summary and outlook

We are experiencing rapid change, and this is only the beginning of a much faster transformation. AI will continue to evolve, from analytical models to intelligent systems capable of autonomous action. Organisations that invest early in robust assurance will be best positioned to innovate responsibly while maintaining trust and control. A structured AI Assurance Framework is no longer optional. It is a foundational element of sustainable AI adoption.

How can Grant Thornton Risk Advisory help?

Our Risk Advisory practice supports organisations across sectors by providing independent assurance over modern AI solutions. We help organisations align AI adoption with regulatory and governance expectations, reduce conduct, reputational, and operational risks, and strengthen transparency, accountability, and oversight.

Our approach combines deep expertise in model risk governance and regulatory compliance with a practical understanding of modern AI systems, including predictive models and generative applications. This enables us to support AI adoption that is both innovative and controlled.

 

Authors:

Andreas Spyrides, Risk Advisory Services Leader

Christina Savva, Assistant Manager, Risk Advisory