AI in Finance Education: Between Detection and Evolution

The finance education sector stands at a crossroads. As artificial intelligence becomes increasingly sophisticated, awarding bodies face an uncomfortable reality: traditional approaches to assessment integrity are no longer sustainable. The question is not whether AI will transform how we assess finance professionals, but how quickly we adapt.

The Crisis of Academic Misconduct

Recent scandals underscore the severity of the challenge. In 2024 and 2025, the Dutch arms of Deloitte, PwC, and EY were fined a combined $8.5 million by the US Public Company Accounting Oversight Board for widespread cheating in internal training and ethics assessments. This followed KPMG Netherlands receiving a record $25 million fine, the largest in PCAOB history, for exam cheating that persisted from 2017 to 2022.

The misconduct wasn't confined to junior staff. Senior leaders at Deloitte Netherlands and PwC Netherlands participated in improper answer sharing, with KPMG's former head of assurance receiving a lifetime ban and $150,000 fine. These cases reveal a systemic issue: when commercial pressures and time constraints collide with mandatory assessments, even the most prestigious firms struggle to maintain ethical standards.

Against this backdrop, in 2024, the Financial Reporting Council required the Big Four audit firms to disclose measures to prevent examination cheating facilitated by Generative AI. The emergence of GenAI has amplified these integrity challenges exponentially.

Ofqual's Shifting Position

The UK's qualifications regulator, Ofqual, has moved from resistance towards pragmatic accommodation. In April 2024, Ofqual published its regulatory approach establishing five key objectives: ensuring fairness for students, maintaining validity of qualifications, maintaining security, building public confidence, and enabling innovation.

Its stance reflects a "precautionary principle": not outright prohibition, but careful gatekeeping. Ofqual clarified in September 2023 that using AI as a sole marker does not comply with regulations because it fails to meet requirements for human-based judgement. Similarly, AI cannot be used as a sole remote invigilator, given concerns about bias, inaccuracies, and lack of transparency.

However, Ofqual acknowledges the inevitability of change. Initial reporting suggests only modest numbers of AI-related malpractice cases in non-exam assessments have been identified, though the regulator recognises that technological advances and growing user familiarity introduce uncertainty about future impact. While implementing short-term safeguards, they're actively considering longer-term interventions as the landscape evolves.

The Computing Education Parallel

This situation mirrors a profound transformation in computing education decades ago. Early computing courses focused intensively on programming: writing code from scratch, understanding algorithms, and mastering syntax. Over time, as software tools became more sophisticated and accessible, the educational focus shifted toward using these tools effectively rather than building everything from the ground up.

This transition was neither quick nor uncontroversial. It took years for curricula to acknowledge that most professionals would spend their careers using software applications rather than developing them. Today, the shift is accepted as inevitable: few question whether accountants should build their own spreadsheet software rather than master Excel.

Finance qualifications may be entering a similar inflection point. If AI tools become standard in financial analysis, risk assessment, and regulatory compliance, should assessments test the ability to perform these tasks without AI, or the competence to use AI effectively while maintaining professional judgment?

The Path Forward: Authentication Over Prevention

Several potential futures are emerging:

Process-based assessment: Rather than focusing solely on final outputs, awarding bodies could shift toward observing how learners develop their work. Professional discussions and observations can provide evidence that assessment responses were developed by the learner. This approach authenticates learning while acknowledging that the final deliverable might incorporate AI assistance.

AI literacy as core competency: The finance sector is rapidly integrating AI into daily operations. According to industry research, ACCA professionals proficient in AI tools are expected to have a 25% higher chance of employability in senior financial roles over the next three years. Finance qualifications could assess learners' ability to prompt AI effectively, verify AI-generated outputs for accuracy, identify potential errors or biases, and maintain ethical standards when AI suggests efficiency shortcuts.

Hybrid human-AI workflows: Rather than preventing AI use, assessments could evaluate how learners integrate AI into their work. Can they use AI to accelerate routine analysis while applying critical thinking to complex judgments? Do they understand when to rely on AI and when human expertise is essential? This mirrors how financial professionals actually work, using technology to enhance, not replace, professional judgment.

Industry-led standards: The FRC's review revealed that improvements are required from UK firms and all recognised qualifying bodies. As major employers define what AI competencies they expect, awarding bodies will need to align their assessments accordingly. The market may ultimately determine whether AI proficiency becomes mandatory or supplementary.

The Cost of Inaction

Delay carries significant risk.

If qualifications fail to reflect how finance professionals actually work, they risk becoming irrelevant. Graduates certified without AI literacy may find themselves disadvantaged in workplaces where these skills are standard. Conversely, those who self-educate in AI tools may bypass formal qualifications entirely, undermining the value proposition of structured professional education.

The accountancy scandals demonstrate another danger: when assessment methods don't align with reality, misconduct flourishes. One audit tutor observed that internal training is typically seen as an annoyance that gets in the way of real work, so any shortcuts that speed things up become acceptable. If learners perceive AI restrictions as artificial barriers disconnected from professional practice, compliance will remain fragile.

It's Not 'If' but 'How Quickly' We Evolve

The finance education sector cannot outrun AI development through increasingly sophisticated detection methods. While Ofqual's April 2024 policy established a precautionary principle explicitly prohibiting AI as a sole marker or sole remote invigilator, the regulator acknowledges that innovation is flourishing within these guardrails.

The computing education analogy offers a roadmap: initial resistance, gradual acceptance, eventual transformation. The shift from programming to software use took years, but it was ultimately inevitable. Finance education faces the same trajectory not because AI makes financial professionals obsolete, but because it fundamentally changes what professional competence means.

Awarding bodies that adapt proactively, defining what good AI-assisted work looks like, building assessment methods around authentic practice, and cultivating critical AI literacy will lead this transformation. Those that cling to detection-based approaches risk finding themselves certifying skills for a profession that no longer exists.

The finance professionals of tomorrow will work alongside AI as naturally as today's professionals use spreadsheets. Qualifications must prepare them for that reality.

Sources:

Business & Accountancy Daily: [Clampdown on exam cheating by accountants](https://www.accountancydaily.co/clampdown-exam-cheating-accountants)

Accountancy Age: [Big Four firms fined over exam cheating scandal](https://www.accountancyage.com/2025/06/26/big-four-firms-fined-over-exam-cheating-scandal/)

Accountancy Age: [KPMG's Dutch subsidiary hit with record £20 million fine](https://accountancyage.com/2024/04/13/kpmgs-dutch-subsidiary-hit-with-record-20-million-fine-for-exam-cheating/)

GOV.UK: [Ofqual's approach to regulating AI in qualifications](https://www.gov.uk/government/publications/ofquals-approach-to-regulating-the-use-of-artificial-intelligence-in-the-qualifications-sector)

PQ Magazine: [More exam cheating fines for Big 4 firms](https://www.pqmagazine.com/more-exam-cheating-fines-for-big-4-firms/)

SOE Global: [Understanding the Impact of AI in ACCA in 2025](https://www.soeglobal.com/impact-of-ai-in-acca/)