You cannot govern what you have not mapped. Every credentialing organisation that has tried to build AI governance from the top down has discovered the same thing: until you have a complete inventory of where AI is being used across your assessment lifecycle, every other governance artefact rests on guesswork. The AI register is the artefact that fixes that. It is unglamorous, it is mostly housekeeping, and it is the single most useful thing your governance group can build in its first thirty days.
“You cannot govern what you have not mapped.”
This article is a practical guide to building one. It covers what each entry needs to capture, how to structure the register so it serves both regulators and operations, who owns it, how it connects to vendor management, and how to keep it from becoming the dead document that lives in a SharePoint folder no one opens.
Why the register exists
The AI register exists because AI in credentialing is rarely deployed all in one place. A typical programme has AI in item generation, in scoring, in proctoring, in candidate support, in fraud detection, in operational analytics, and increasingly in features the platform vendor added in the last release that nobody on the operations team knows about. None of these uses are inherently problematic. What is problematic is not knowing they exist.
The register serves three audiences. For your governance group, it is the inventory that tells them what they are responsible for. For your auditors, regulators, and accreditors, it is the demonstration that your organisation knows where AI lives in its operation. For your assessment design team, it is the trigger that says “here is a use that needs a construct statement, a risk assessment, and a monitoring plan”.
If you are working toward ISO 42001 alignment, the register is part of the AI management system. If you are working toward EU AI Act readiness, it is the foundation of your high-risk classification work. If you are aligning to the NCME Testing Standards, it is the inventory that tells you which uses need updated validity and fairness evidence. The same artefact serves all three purposes.
What each entry needs to capture
A workable AI register entry covers eleven fields. This is the minimum. You can add more for your context, but going below this makes the register too thin to be useful.
The first field is the unique identifier. A short code, sequentially numbered, that lets the rest of your governance pack reference this entry without ambiguity. AI-001, AI-002, and so on. Trivial, but it saves arguments later.
The second is the use case name. Short, descriptive, and specific. Not “scoring AI”, but “automated short-answer scoring for the medium-stakes professional knowledge component”. Specificity matters because the same vendor’s AI may be used in different ways across different components, and each use needs its own row.
The third is the assessment lifecycle stage. Where in the credentialing pipeline does this AI sit? Item generation, item review, registration, identity verification, delivery environment, proctoring, scoring, results processing, credential decision, post-credentialing analytics, candidate-facing support. Pick one. If a single tool spans more than one stage, split it into multiple entries.
The fourth is the system owner inside your organisation. Not the vendor, not the team. A named individual who is accountable for this use. Without an owner, the entry has no one to follow up with when something needs attention.
The fifth is the supplier or source. The vendor name where applicable, or “in-house” where the AI is built or operated by your own team. Include the specific product and version where you can.
The sixth is the construct linkage. Which assessment component does this AI affect, and what does the construct statement say? This is the field that connects the register to the construct decision. If a use case has no clear connection to a construct, that is a finding worth investigating.
The seventh is the stakes classification. Low, medium, or high. This drives the level of governance the use needs.
The eighth is the decision impact. Does this AI generate content that humans then review, support a human decision, recommend a decision, or make a decision? The further down that list a use sits, the more oversight, evidence, and audit logging it needs.
The ninth is the regulatory and standards classification. Is this use likely high-risk under the EU AI Act? Does it touch the AERA, APA, and NCME Testing Standards expectations on validity, fairness, or scoring? Is it covered by an ISO 42001 control? This is the field that maps each entry to the obligations that attach to it.
The tenth is the evidence pointer. Where do the supporting artefacts live? The risk assessment, the construct statement, the validity evidence, the monitoring plan, the change log, the incident records. Not the artefacts themselves, just the pointer. The register stays compact and the audit trail stays connected.
The eleventh is the review status and next review date. When was this entry last reviewed, and when is it next due? This is the field that keeps the register alive instead of becoming a snapshot that ages out.
Eleven fields per AI register entry
- Unique identifier — sequential code (AI-001, AI-002…), never reused
- Use case name — specific, not generic
- Assessment lifecycle stage — one stage per row
- System owner — a named accountable individual
- Supplier or source — vendor and product version, or “in-house”
- Construct linkage — which component, with the construct statement
- Stakes classification — low, medium, or high
- Decision impact — generate, support, recommend, or decide
- Regulatory and standards classification — EU AI Act, Testing Standards, ISO 42001
- Evidence pointer — where the supporting artefacts live
- Review status and next review date — keeps the register alive
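For teams that keep the register in a spreadsheet, these eleven fields map one-to-one onto columns. For teams that script their governance checks, the same structure can be expressed as a typed record. The sketch below is illustrative only: the RegisterEntry name, the enum values, and the exact field names are assumptions, not a standard schema.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class Stakes(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


class DecisionImpact(Enum):
    GENERATE = "generate"    # produces content that humans review
    SUPPORT = "support"      # informs a human decision
    RECOMMEND = "recommend"  # proposes a decision to a human
    DECIDE = "decide"        # makes the decision itself


@dataclass
class RegisterEntry:
    identifier: str                 # unique, sequential, never reused ("AI-001")
    use_case_name: str              # specific, not generic
    lifecycle_stage: str            # exactly one stage per entry
    system_owner: str               # a named accountable individual
    supplier: str                   # vendor and product version, or "in-house"
    construct_linkage: str          # affected component and construct statement
    stakes: Stakes
    decision_impact: DecisionImpact
    regulatory_classification: str  # EU AI Act, Testing Standards, ISO 42001
    evidence_pointer: str           # where the supporting artefacts live
    last_reviewed: date | None      # None until the first review
    next_review: date | None        # None only for retired entries
    status: str = "active"          # active, under review, retired, blocked
```

Splitting field eleven into last_reviewed and next_review is a design choice that makes the overdue-review check later in this article a one-line comparison.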
How to structure the register
A spreadsheet is fine to start. A purpose-built governance tool is better at scale, but the format matters less than the discipline of keeping it complete and current. Whatever you use, three structural rules help.
The first is one row per use case, not one row per system. A scoring vendor used for two components needs two rows, because the construct, stakes, and evidence will differ.
The second is a stable identifier scheme that does not get reused. Once AI-001 is retired, AI-001 stays retired. New entries get the next number. This protects the audit trail when entries are removed or archived.
The third is a clear status convention. Active, under review, retired, blocked. Knowing the difference matters when something appears in the register but is not in current operational use, or when a new use is being assessed but has not yet been approved.
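All three rules are mechanical enough to check automatically. A minimal sketch, assuming the register is exported as a list of dicts using the column names from the schema above (the function name and findings format are illustrative):

```python
ALLOWED_STATUSES = {"active", "under review", "retired", "blocked"}


def validate_register(rows: list[dict]) -> list[str]:
    """Check the three structural rules and return any findings."""
    findings = []
    seen_ids = set()
    for row in rows:
        rid = row["identifier"]
        # Rule 2: identifiers are stable and never reused.
        if rid in seen_ids:
            findings.append(f"{rid}: identifier reused")
        seen_ids.add(rid)
        # Rule 1: one row per use case means exactly one lifecycle stage.
        if "," in row["lifecycle_stage"]:
            findings.append(f"{rid}: spans multiple stages; split into separate rows")
        # Rule 3: status must follow the convention.
        if row["status"] not in ALLOWED_STATUSES:
            findings.append(f"{rid}: unknown status '{row['status']}'")
    return findings
```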
Who owns it
The register has one accountable owner. Not a committee. A named individual with the authority to add entries, request risk assessments, escalate concerns, and block deployments that have not been classified. In most credentialing organisations this sits with a head of assessment governance, a risk lead, or a director of assessment operations. In smaller organisations it can sit with the assessment director directly.
The owner is supported by a small group with decision rights. Legal, psychometrics, assessment operations, and product or technology should all be represented. This is the group that approves new entries, signs off classifications, and handles exceptions. It does not need to meet weekly. It does need clear authority and published terms of reference, so candidates and stakeholders know that decisions about AI in the credential are being made by accountable people, not by procurement default.
How the register connects to vendor management
Vendor governance is the area where most credentialing programmes find unmapped AI. Platform vendors add features. Proctoring suppliers update models. Scoring services swap in new components. Each of these changes can introduce a new AI use into your operation without anyone in the assessment team noticing.
The register closes this loop in two directions. Outbound, every vendor relationship that touches an AI use case needs a row in the register and a corresponding contractual position on change notification. Inbound, every notification of a model update or feature change should trigger a register review for the affected entries. This is where the eleventh field, the next review date, earns its place. The register tells you which vendors are due to confirm that nothing has changed since the last cycle.
“The register is the gatekeeper. Without it, change control is a wish.”
If a vendor change introduces a use case the register did not previously contain, the new use needs its own row, its own classification, and its own approval before it goes into service. The register is the gatekeeper. Without it, change control is a wish.
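In practice, the inbound half of that loop is a filter over the register: a supplier notification should surface every affected entry and move it into review. A minimal sketch, again assuming dict rows with the fields described earlier:

```python
from datetime import date


def flag_supplier_entries(rows: list[dict], supplier: str, today: date) -> list[dict]:
    """Move every active entry from a notifying supplier into review."""
    affected = [r for r in rows
                if r["supplier"].startswith(supplier) and r["status"] == "active"]
    for row in affected:
        row["status"] = "under review"
        row["next_review"] = today  # the review is due now, not next quarter
    return affected
```

If this returns nothing for a supplier you know has shipped a change, that is itself a finding: an AI use is running in your operation that the register has not captured.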
How to keep it alive
The biggest risk to an AI register is not that it gets built badly. It is that it gets built well and then forgotten. Three habits keep it alive.
The first is a fixed review cadence. Quarterly works for most credentialing programmes. Every entry is reviewed at least once a year, and every high-stakes entry is reviewed at least twice. The review is not a rewrite. It is a check that the entry is still accurate, the evidence pointers still resolve, and the monitoring plan is still being executed.
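The cadence itself can be automated. A minimal sketch of a due-review check, assuming an annual floor for every entry and a roughly six-monthly floor for high-stakes entries (the 182-day figure is an assumption, not a standard):

```python
from datetime import date, timedelta


def review_interval(stakes: str) -> timedelta:
    # High-stakes entries at least twice a year; everything else at least annually.
    return timedelta(days=182) if stakes == "high" else timedelta(days=365)


def overdue_entries(rows: list[dict], today: date) -> list[dict]:
    """Return live entries whose last review is older than their interval allows."""
    return [r for r in rows
            if r["status"] != "retired"
            # Entries that have never been reviewed count as overdue.
            and (r["last_reviewed"] is None
                 or today - r["last_reviewed"] > review_interval(r["stakes"]))]
```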
The second is integration with change management. Every new vendor procurement, every contract renewal, every product release, and every assessment redesign should include a register check. Has anything changed that needs a new row, an updated classification, or a retired entry? If yes, the change does not complete until the register is updated.
The third is visibility. The register is not a confidential document. It does not need to be public, but it should be visible inside the organisation to anyone who is making decisions that touch AI in the credential. Hiding it in a folder controlled by one team makes it impossible to use as a decision tool. Making it accessible to assessment design, product, legal, and compliance turns it into the shared reference it is supposed to be.
Three habits that keep the register alive
- A fixed review cadence — quarterly, with high-stakes entries reviewed at least twice a year
- Integration with change management — no procurement, renewal, release, or redesign completes until the register is updated
- Visibility across the organisation — accessible to assessment design, product, legal, and compliance
Common mistakes to avoid
A few patterns recur across credentialing organisations that have built registers and not got the value they expected.
Treating it as an IT inventory rather than an assessment governance artefact. The register is not a list of software. It is a list of AI uses with assessment consequences. A general-purpose chatbot used to answer candidate enquiries about exam dates is a different entry from the same chatbot embedded in an assessment workflow.
Letting it stop at vendor names. “We use Vendor X” is not an entry. “Vendor X’s automated short-answer scorer is used for the medium-stakes knowledge component, supports human reviewer decisions, classified medium under EU AI Act guidance” is an entry.
Building it without a construct statement. The register and the construct statements are the same governance artefact viewed from two angles. Build them in parallel or one will be incomplete.
Treating retirement as deletion. When a use is retired, the entry stays. The status changes, the next review date is removed, and the audit trail of the retirement decision goes into the evidence pointer field. This protects you when an old use is queried in a future audit.
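Expressed in the terms of the earlier sketches, retirement is a status change plus an audit note, never a row deletion. A minimal illustration:

```python
def retire_entry(row: dict, decision_record: str) -> None:
    """Retire a register entry without deleting it; the audit trail stays intact."""
    row["status"] = "retired"
    row["next_review"] = None  # retired entries drop out of the review cycle
    # The retirement decision itself joins the evidence trail.
    row["evidence_pointer"] += f"; retirement decision: {decision_record}"
```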
The board-ready version
For board reporting, the register needs a summary view, not the full thing. The summary is one slide: total entries by stakes classification, total entries with high decision impact, percentage with current risk assessments, percentage with current monitoring evidence, and the count of overdue reviews. That is enough to tell the board whether the AI estate is under control and where the gaps are.
The same summary feeds external reporting. Procurement asks how you govern AI in your operation. You show the summary. Accreditation reviews ask the same question. So do candidates and employers in due diligence conversations. The register is the source of truth that feeds all of these conversations without the assessment team having to reconstruct the picture each time.
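Because the register holds all of this in structured fields, the board summary is a single aggregation pass. A minimal sketch, assuming the dict rows from the earlier examples plus two illustrative boolean flags for current risk-assessment and monitoring evidence (those flags are assumptions, not core columns):

```python
from collections import Counter
from datetime import date


def board_summary(rows: list[dict], today: date) -> dict:
    """Aggregate the register into the one-slide board view."""
    live = [r for r in rows if r["status"] != "retired"]
    total = max(len(live), 1)  # guard against an empty register
    return {
        "entries_by_stakes": dict(Counter(r["stakes"] for r in live)),
        "high_decision_impact": sum(
            r["decision_impact"] in ("recommend", "decide") for r in live),
        # The two flags below are illustrative export fields, not core columns.
        "pct_current_risk_assessments": round(
            100 * sum(r["has_current_risk_assessment"] for r in live) / total),
        "pct_current_monitoring": round(
            100 * sum(r["has_current_monitoring"] for r in live) / total),
        "overdue_reviews": sum(
            r["next_review"] is not None and r["next_review"] < today
            for r in live),
    }
```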
Build the register first
Of all the AI governance work credentialing organisations need to do this year, this is the one with the highest return on the smallest effort. Two days of assessment operations time, a half day of vendor outreach, and a structured first pass through the assessment lifecycle is enough to get a workable v1. From there, every other governance artefact has somewhere to anchor.
“The biggest risk to an AI register is not that it gets built badly. It is that it gets built well and then forgotten.”
Without it, the construct decisions, the risk assessments, the monitoring plans, the vendor contracts, and the audit pack are all building on guesswork. With it, they are building on fact. That is the difference between governance that holds together when challenged and governance that does not.
Ready to build the AI register your governance has been missing?
Talk to our team about how Globebyte can help you stand up a workable v1 in days, not months.