
Assessment Integrity in the Age of AI: NQual’s Response to Ofqual’s Call for Action

  • Writer: NQual
  • 4 min read

Artificial intelligence has transformed the way people work, study, and communicate, but with this transformation has come a growing challenge for the educational sector: the misuse of AI in coursework and assessments. This concern has been strongly echoed by Sir Ian Bauckham CBE, Chief Regulator at Ofqual, who recently highlighted the rising problem of AI‑generated submissions appearing in schools and further education settings. His message is clear: undisclosed AI use is cheating. It undermines learning, and it damages the integrity of qualifications.


Ofqual’s Position on AI Misuse


In a recent LinkedIn post, Sir Ian warned that the misuse of AI in coursework is becoming widespread, expressing concern that students are "missing out on the crucial learning that coursework is designed to provide". He called on teachers and educational experts to remain vigilant and emphasised that cheating will be "detected and sanctioned". 

This aligns with the Ofqual letter issued to Awarding Organisations, which explicitly identifies AI misuse as a key malpractice risk. The letter states that when AI is used to generate extended writing in coursework, it strips away meaningful learning and compromises the authenticity of the learner’s work. Awarding Organisations have been formally asked to tighten deterrence, detection, and candidate authentication processes.

 

What the Use of AI in Assessments Means for Awarding Organisations


Ofqual requires all regulated Awarding Organisations to:

  • Improve awareness of what counts as AI misuse 

  • Strengthen detection and deterrence 

  • Implement sanctions if necessary 

  • Enforce more rigorous learner and teacher authentication 

  • Protect the validity and reliability of assessment outcomes 


The message is unmistakably clear: AI misuse is malpractice, and it must be proactively addressed. It is our duty to uphold the integrity and quality of the assessments that we carry out.


NQual’s Position and Policy on AI Use Within Assessments



At NQual, protecting the credibility of the assessments we carry out has always been central to our mission. In response to growing sector expectations and the ever-changing landscape of technology, NQual has implemented comprehensive procedures through the NQual AI Detection Policy, Plagiarism & AI Policy, and associated quality assurance frameworks.


We have already encountered real examples of AI‑generated work appearing in assessments within our organisation, and our teams have raised concerns about AI tools joining meetings or being used inappropriately in learner submissions. These incidents reinforce why a clear and consistent approach is vital.


We apply a structured threshold approach to suspected AI use, ensuring consistent decision-making by our team of assessors and quality assurers.

 

Our thresholds are designed to:

  • Prevent malpractice 

  • Protect the credibility of qualifications 

  • Avoid penalising learners for permitted, responsible use 

  • Maintain alignment with sector standards 


Skills England states unequivocally that:

  • AI must not be used to generate full reports or portfolios 

  • AI may only support work if it reflects real workplace practice and is fully referenced 

  • Final evidence must always represent the learner’s own competence 


NQual’s policies directly integrate these principles, ensuring regulatory compliance and transparency.


Why it Matters to Detect AI Use in Assessments


AI misuse doesn't just break rules; it carries serious consequences:

  1. It undermines genuine learning 

    Learners who rely on AI instead of demonstrating real competency miss fundamental development opportunities, which can affect their future progression. 

  2. It compromises qualification integrity 

    If AI‑generated submissions are not detected, results no longer represent the learner’s ability. This can then affect employers, progression routes, and public confidence. 

  3. It introduces inequity 

    Learners who follow the rules are unfairly disadvantaged compared to those who misuse AI. 

  4. It increases sector-wide risk 

    Assessors are already encountering AI-produced work or automated attendance systems (like AI ‘bots’ appearing in meetings), which creates new compliance and safeguarding issues.

 

How NQual Is Tackling the Challenge of AI Use


Our approach includes multiple layers of protection that are accessible and transparent for the training providers, learners and employers that we work with.

 

  1. AI Detection Technology 

    We use established detection systems, supported by assessor expertise, human cross‑referencing, and authentication. 

  2. Learner Declarations 

    When enrolling to complete assessments with NQual, every learner must confirm that their work is their own, with any AI use declared and referenced where applicable. 

  3. Assessor Training 

    Our assessors receive ongoing guidance and training to spot AI‑generated text, unexpected stylistic shifts, and inconsistencies between written work and observed performance. 

  4. IQA Oversight 

    Our Internal Quality Assurance team apply consistent checks and escalate concerns where detections exceed acceptable thresholds. 

  5. Transparency and Compliance Support for Training Providers



We ensure Training Providers receive clear, consistent information on our AI policies and detection processes so they can confidently meet regulatory expectations. By openly communicating how AI misuse is identified and addressed, we aim to support Providers to reinforce authenticity requirements with their learners and maintain the integrity of assessment outcomes.

 

We are producing a dedicated explainer video that outlines the importance of authenticity and highlights common causes of AI detection flags. It will help learners understand how their work is authenticated, make informed choices about responsible AI use, and avoid practices that could compromise the validity of their assessments.


By combining robust policy, technology, assessor expertise, and clear communication, we can ensure that every assessment that we carry out genuinely reflects the knowledge and skills a learner has achieved throughout the course of their apprenticeship.
