Compliance & Standards

Credentialing and the EU AI Act: What You Need to Know

AI is no longer an optional layer in credentialing operations. It is in the scoring pipeline, in the proctoring tools, in the content generation workflows, and in the systems your vendors use behind the scenes. As of 2025, it is also a regulated activity. The EU AI Act has moved AI in credentialing from a technology decision into a compliance obligation, and the implications reach beyond Europe.

This article translates the Act into what credentialing leaders need to do now to stay defensible. It is written for boards, executives, and operational owners who need to know what to demand from their teams and their suppliers, not just what the regulation says.

AI in your credentialing pipeline is now a regulated activity

The EU AI Act treats AI used in education and vocational training as high-risk when it influences decisions about people. For credentialing bodies, that captures most of the places AI is being deployed today. Automated scoring, pass and fail recommendations, admission and placement decisions, and proctoring or anomaly detection all fall inside the scope.

The Act applies extraterritorially. If a credentialing decision affects a candidate located in the EU, the obligations attach even when the credential owner sits outside Europe. International associations and certification bodies that serve EU candidates need to assume the Act applies and design accordingly.

This is not a future concern. Prohibited practices have been enforceable since 2 February 2025. The general purpose AI rules and the governance and penalty framework have applied since 2 August 2025. The high-risk system rules are scheduled to apply from 2 August 2026.

“A reactive approach to this Act will not survive an audit or a stakeholder challenge.”

The dates that matter, and the one you cannot bank on

There is active discussion at EU level about adjusting the high-risk timeline. The Commission has proposed a Digital Omnibus package linked to the readiness of standards and guidance, with reporting suggesting a possible backstop later in 2027. Nothing is confirmed.

The defensible position is to plan for August 2026. If a delay materialises, you have bought yourself preparation time. If it does not, you are not scrambling in the final quarter to assemble evidence that should have taken six months to build.

A reactive approach to this Act will not survive an audit or a stakeholder challenge. The work is structural, and structural work runs slowly even when the team knows what it is doing.

What the Act treats as high-risk in credentialing

Annex III is the relevant section for credentialing programmes. It identifies AI as high-risk in education and vocational training when it is used in any of the following ways:

  • evaluating learning outcomes, including the steering of a learner through a programme
  • determining access or admission to a programme
  • assessing the level a person can be admitted to or assessed against
  • monitoring and detecting prohibited behaviour during tests

In credentialing terms, that maps directly to AI scoring, automated pass and fail recommendations, placement and progression decisions, and AI proctoring or cheating detection. If your platform uses AI to triage scoring, support second marking, or flag integrity concerns, you should assume those uses are in scope and document why each one is or is not high-risk under the Act.

The Act is also explicit that classifying a use as lower risk requires evidence. There is no informal exemption for assessments you consider routine. The decision needs to be recorded.
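
What "recorded" looks like matters less than that the record exists and is consistent. As a minimal sketch, assuming nothing more elaborate than one structured entry per AI use, the field names below are illustrative rather than anything the Act prescribes:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ScopeClassification:
    """One record per AI use, capturing why it is or is not high-risk."""
    use_case: str             # e.g. "AI triage of responses for second marking"
    annex_iii_category: str   # the Annex III use it maps to, or "none"
    high_risk: bool
    rationale: str            # the evidence behind the classification
    reviewed_by: str          # accountable owner who signed off
    reviewed_on: date
    next_review: date

# Illustrative entry: a scoring-support feature assumed to be in scope.
entry = ScopeClassification(
    use_case="AI triage of candidate responses for second marking",
    annex_iii_category="Evaluation of learning outcomes",
    high_risk=True,
    rationale="Output influences the pass/fail recommendation for a live credential.",
    reviewed_by="Head of Assessment Operations",
    reviewed_on=date(2025, 9, 1),
    next_review=date(2026, 3, 1),
)
```

Whether this lives in a dataclass, a spreadsheet, or a governance tool is immaterial; the point is that the rationale and the sign-off are captured in one place.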

The red line: emotion recognition is prohibited

One specific use deserves immediate attention. The Act prohibits emotion recognition in educational and workplace contexts. If a proctoring vendor in your stack claims to detect stress, deception, engagement, or any other inferred emotional state from video or audio, that is not a grey area. It is a prohibited practice, and continuing to use it creates direct regulatory exposure.

The action is simple. Audit every proctoring and integrity tool you use, including features that may have been added to existing products by vendors as part of routine updates. Disable any feature that infers emotion or affective state. Get written confirmation from vendors that these features are off and will remain off. Do this now, not in the next procurement cycle.

The audit-pack mindset

High-risk AI under the Act carries documentation expectations that should feel familiar to anyone who has worked in regulated industries. The pattern is consistent with ISO 9001 quality management discipline applied to AI-specific risks. Expect to maintain:

  • structured AI risk management for each high-risk use case, with mitigations and residual risk sign-off
  • data governance evidence covering training data, validation data, quality, and privacy
  • technical documentation describing what each system does, how it was validated, and how it should be used
  • human oversight that is real, not symbolic, with documented authority and training
  • logging and traceability for AI outputs, human overrides, and the reasons for overrides
  • monitoring for performance drift, fairness, and false positive or false negative rates
  • transparency to candidates about where AI is used and how decisions are made

This is what I mean by an audit-pack mindset. The question to ask of every AI use in your pipeline is not whether it works today, but whether you could demonstrate that the decision was sound if a regulator, an employer, or a candidate’s lawyer asked you to. If the honest answer is no, the gap is documentation discipline, not technology. The companion piece on ISO 42001 and 23894 walks through the operating model that produces that discipline.
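
To make the logging and override bullets concrete, here is a minimal sketch of a single traceable decision record, assuming a pseudonymised candidate reference and illustrative field names; the Act does not mandate this structure:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ScoringDecisionLog:
    """One immutable record per AI-influenced scoring decision."""
    candidate_ref: str      # pseudonymised candidate identifier
    assessment_id: str
    model_version: str      # which model or configuration produced the output
    ai_output: str          # what the system recommended
    human_reviewer: str
    human_decision: str     # what was actually decided
    overridden: bool        # did the human depart from the AI output?
    override_reason: str    # required whenever overridden is True
    logged_at: datetime

record = ScoringDecisionLog(
    candidate_ref="cand-48211",
    assessment_id="cert-exam-2026-03",
    model_version="scoring-model v4.2",
    ai_output="borderline fail, confidence 0.62",
    human_reviewer="senior.examiner.07",
    human_decision="pass after full manual re-mark",
    overridden=True,
    override_reason="AI penalised an unconventional but valid solution path.",
    logged_at=datetime.now(timezone.utc),
)
```

A log like this is what turns "a human was in the loop" from an assertion into evidence.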

Scale your controls by stakes

A practice diagnostic does not need the same governance as a chartered status exam. The Act does not require you to treat every AI use identically. It does require you to think clearly about which controls apply where, and to document the reasoning.

Low-stakes uses, such as practice tests and formative feedback, need transparency, basic quality assurance, logging, and an honest classification rationale. Medium-stakes uses, such as micro-credentials and modular components, need stronger bias monitoring, structured human review, and documented risk treatment. High-stakes uses, such as licensure and regulated practice gating, need the full audit pack and human authority that can stand up in an appeal.

“The trap to avoid is treating low stakes as no paperwork.”

The trap to avoid is treating low stakes as no paperwork. Even a low-stakes classification needs the rationale on file. That is a small effort that saves a large argument later.
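
One lightweight way to keep that reasoning consistent is to encode the minimum control set per tier and check each AI use against it. A minimal sketch, with the tier labels and controls drawn from the paragraphs above and everything else illustrative:

```python
# Minimum control set per stakes tier, following the tiers described above.
CONTROLS_BY_STAKES = {
    "low": {"transparency notice", "basic QA", "logging", "classification rationale"},
    "medium": {"transparency notice", "basic QA", "logging", "classification rationale",
               "bias monitoring", "structured human review", "documented risk treatment"},
    "high": {"transparency notice", "basic QA", "logging", "classification rationale",
             "bias monitoring", "structured human review", "documented risk treatment",
             "full audit pack", "human authority for appeals"},
}

def missing_controls(stakes: str, controls_in_place: set[str]) -> set[str]:
    """Return the controls still outstanding for a given AI use."""
    return CONTROLS_BY_STAKES[stakes] - controls_in_place

# Example: a micro-credential scoring aid with only the low-stakes basics in place.
print(missing_controls("medium", {"transparency notice", "basic QA",
                                  "logging", "classification rationale"}))
# -> the medium-tier controls not yet in place
```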

A 90-day plan that survives a board challenge

From a standing start, the following sequence is achievable in three months and gives you a defensible foundation.

In the first 30 days, build your AI register. Map every AI touchpoint across content creation, delivery, proctoring, scoring, and credential decisions. Include the AI features your vendors have embedded in products you already buy. Classify each use by stakes and decision impact. Assign a single accountable owner who can coordinate legal, psychometrics, assessment operations, and product. AI governance fails when it lives in a committee with no decision rights.
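
The register does not need special tooling on day one; a spreadsheet-shaped structure with one row per touchpoint is enough to start. A minimal sketch, with hypothetical column names and illustrative entries:

```python
import csv

# Columns are illustrative; the point is one row per AI touchpoint, vendor
# features included, with a stakes classification and a named owner.
REGISTER_COLUMNS = [
    "touchpoint", "pipeline_stage", "system_or_vendor",
    "stakes", "decision_impact", "accountable_owner",
]

rows = [
    {"touchpoint": "Automated essay scoring", "pipeline_stage": "scoring",
     "system_or_vendor": "In-house model v4.2", "stakes": "high",
     "decision_impact": "feeds pass/fail recommendation",
     "accountable_owner": "Head of Psychometrics"},
    {"touchpoint": "Proctoring anomaly flags", "pipeline_stage": "delivery",
     "system_or_vendor": "Vendor proctoring suite", "stakes": "high",
     "decision_impact": "triggers integrity investigation",
     "accountable_owner": "Head of Assessment Operations"},
]

with open("ai_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=REGISTER_COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```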

In days 31 to 60, run risk assessments for every medium and high-stakes use. Document validity, fairness, transparency, security, and privacy considerations. Define human oversight points: who can approve, pause, or stop an AI use, and what training they need. Update vendor contracts to require change notification, log access, audit support, and incident reporting.

In days 61 to 90, assemble the audit pack itself: construct statements for each component, validity evidence, monitoring plans, override logs, incident playbooks, and the supporting policies. Train assessment operations and leadership on AI literacy and consistent enforcement. Run a tabletop incident scenario so the playbook is not theoretical.

This is not a project that ends. It is the new operating model for credentialing in a regulated AI environment.

The 90-day playbook

  • Days 1 to 30: Build the AI register. Assign a single accountable owner with decision rights across legal, psychometrics, operations, and product.
  • Days 31 to 60: Run risk assessments for every medium and high-stakes use. Define human oversight points. Update vendor contracts.
  • Days 61 to 90: Assemble the audit pack. Train your teams. Run a tabletop incident scenario.

Trust is the credential’s moat

A credential is a public trust product. Its value depends entirely on the confidence that candidates, employers, and regulators have in the soundness of the decision. AI can sharpen that confidence when it is governed well. It can erode it overnight when it is not.

The EU AI Act is not asking credentialing bodies to do anything that good practice did not already suggest. It is formalising expectations that the market was already moving toward. The organisations that move first will have an audit pack ready when buyers and regulators ask for one. The organisations that wait will find themselves explaining gaps under pressure.

If you are unsure where your programme sits on this curve, the most useful thing you can do this quarter is build the AI register and assign the owner. Everything else follows from those two decisions. The companion articles on the AERA, APA, and NCME Testing Standards and on design-led integrity, drawing on Ofqual and JCQ, walk through the parallel work on validity evidence and assessment integrity.

Ready to make your AI use audit-ready under the EU AI Act?

Talk to our team about how Globebyte can help you build a defensible AI operating model for credentialing.

Explore our services
