Most of the difficult conversations credentialing programmes are having about AI start in the wrong place. They start with tools, vendors, and detection. The conversation that actually unlocks every other decision is one level up. It is the construct decision: are you measuring competence with AI present, or competence without AI? Until that question has an answer for each component of your credential, every other policy will drift.
This article makes the construct decision visible. It shows why it is the policy fork that everything else hangs off, how to write it down so the rest of your governance has something to anchor to, and why getting it on paper is the single most useful thing your assessment design team can do this quarter.
“What is not credible is leaving the question unanswered.”
The question that comes before every other AI question
When a credentialing organisation asks whether candidates can use ChatGPT in a take-home assessment, the honest answer is another question. What does this assessment certify? If the answer is unaided professional judgement under time pressure, then permitting AI invalidates the construct. If the answer is the responsible use of professional tools to reach a sound recommendation, then prohibiting AI invalidates the construct. The same tool, the same submission, the same candidate, two opposite conclusions. The construct is what tells you which one is right.
This is not a new principle. The AERA, APA, and NCME Standards for Educational and Psychological Testing have always required that validity evidence support the intended interpretation of test scores. What is new is that credentialing programmes can no longer leave the construct implicit. AI forces the question. If you do not answer it deliberately, your candidates and your scoring vendors will answer it for you, in directions you have not chosen.
Two valid positions, one explicit choice
There are only two defensible positions for any given assessment component, and both are valid in their place.
The first is competence without AI. The credential certifies what the candidate can do unaided. This is appropriate for baseline knowledge, regulated procedures, safety-critical judgement, and any context where the public reasonably expects the credential holder to perform without external assistance. A licensure exam for a regulated profession typically sits here. So does any component that gates progression and where the construct is foundational capability rather than tool-supported practice.
The second is competence with AI. The credential certifies what the candidate can do when AI is part of the working environment, because that is what real practice looks like. This is appropriate when the profession has already adopted AI tools and the credential needs to keep pace. A modern coding certification is likely here. So is any assessment of professional judgement in a domain where AI is now embedded in workflow. The construct shifts from “can the candidate do this” to “can the candidate use the tools well, verify the outputs, and take accountability for the result”.
Both positions are credible. What is not credible is leaving the question unanswered, or worse, answering it differently in different parts of the same credential without saying so.
Why the same component cannot have it both ways
The trap many credentialing programmes are walking into is the unstated middle position. AI is not banned, but it is also not assessed. Candidates may use it, but the scoring rubric does not reward verification or critical evaluation. The result is a component that measures neither unaided competence nor responsible AI use. It measures whichever one happens to dominate in any given submission, and the validity argument cannot defend either interpretation.
This is the position that creates appeals. A candidate who used AI heavily and passed cannot be distinguished from a candidate who worked entirely independently. A candidate who is later challenged for inappropriate AI use has a legitimate response: the rules did not say. The credential owner has no construct statement to point to and no evidence base to defend.
The fix is not to write a longer policy. It is to make the construct decision explicit for the component, document it in one sentence, and design the assessment, the rubric, and the candidate guidance to that decision.
The construct statement that closes the gap
The blueprint format is short and load-bearing. For each assessment component, write one line.
The construct statement template
This assessment measures [competency] under [conditions] and is designed to certify performance [WITH AI / WITHOUT AI].
Example: unaided licensure component
This assessment measures clinical decision-making in time-constrained scenarios under closed-book proctored conditions, and is designed to certify performance WITHOUT AI.
Example: tool-supported professional component
This assessment measures the secure design of cloud architecture using approved AI assistants, under open-resource conditions with mandatory disclosure, and is designed to certify performance WITH AI.
That single sentence does more work than any other governance artefact in your AI policy pack. It tells the assessment designer what to build. It tells the rubric author what to reward. It tells the candidate guidance writer what to permit and prohibit. It tells the appeals panel what evidence to weight. It tells the validator what argument to construct. And it tells your auditor that the construct is on file.
Both worked examples are defensible. Both tell every downstream stakeholder exactly what the credential is claiming. Neither leaves room for the unstated middle position.
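Because the template is a fixed sentence with three slots, it lends itself to being held as a structured record in the AI register rather than as free text. A minimal sketch, assuming a hypothetical Python schema (the class and field names are illustrative, not part of any standard):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConstructStatement:
    """Hypothetical AI-register entry; field names are illustrative only."""
    component: str    # which assessment component this statement governs
    competency: str   # the [competency] slot in the template
    conditions: str   # the [conditions] slot in the template
    with_ai: bool     # the explicit construct decision

    def render(self) -> str:
        # Render the one-line construct statement from the template.
        mode = "WITH AI" if self.with_ai else "WITHOUT AI"
        return (
            f"This assessment measures {self.competency} "
            f"under {self.conditions} and is designed to certify "
            f"performance {mode}."
        )

# The unaided licensure example from the article, as a register entry.
licensure = ConstructStatement(
    component="Clinical scenarios paper",
    competency="clinical decision-making in time-constrained scenarios",
    conditions="closed-book proctored conditions",
    with_ai=False,
)
print(licensure.render())
```

Holding the statement as data rather than prose means the register, the evidence pack, and the candidate guidance can all render the same sentence from a single source of truth.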
How the decision changes everything downstream
Once the construct is on paper, the rest of the policy stack is largely mechanical.
If the construct is without AI, then the assessment environment must enforce that. Secure conditions, controlled devices, monitored access, item types that resist pattern recall. The rubric grades the candidate’s own reasoning. Misconduct categories cover any AI use. Verification mechanisms, including viva or oral defence for take-home work, exist to confirm that the work is the candidate’s own. Vendor governance includes the obligation that no embedded AI feature creeps into the test environment.
If the construct is with AI, the work changes shape. The assessment is open-resource by design. The rubric grades judgement, verification, and accountability rather than the surface output. Disclosure is mandatory and structured: tool name, purpose, where it influenced the work, what verification the candidate performed. The misconduct framework focuses on undisclosed use, fabrication, and unsafe reliance on outputs. The validity argument now includes evidence that the assessment can distinguish responsible AI use from naive AI use, and that the cut score reflects credential-worthy performance with the tools present.
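The "largely mechanical" claim can be made concrete as a lookup keyed on the single WITH/WITHOUT flag. A hedged sketch, where the policy fields and their default wording are illustrative assumptions paraphrased from the two branches above, not a standard framework:

```python
from typing import Dict

def policy_defaults(with_ai: bool) -> Dict[str, str]:
    """Derive illustrative downstream policy defaults from the construct decision.

    The keys and values here are assumptions for the sketch; a real policy
    pack would carry far more detail per field.
    """
    if with_ai:
        return {
            "environment": "open-resource by design",
            "rubric": "grades judgement, verification, and accountability",
            "disclosure": "mandatory and structured",
            "misconduct_focus": "undisclosed use, fabrication, unsafe reliance",
        }
    return {
        "environment": "secure conditions, controlled devices, monitored access",
        "rubric": "grades the candidate's own reasoning",
        "disclosure": "not applicable",
        "misconduct_focus": "any AI use",
    }

# One flag flip changes every downstream default at once.
print(policy_defaults(with_ai=True)["rubric"])
print(policy_defaults(with_ai=False)["rubric"])
```

The point of the sketch is the shape, not the wording: every downstream field is a function of the one construct flag, which is why reversing the order and fitting the construct to tool preferences produces an incoherent stack.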
“Reverse engineering policy from tool preferences gives you a stack that does not hold together when challenged.”
The point is that almost every policy choice in your AI pack flows from the construct decision. Reverse engineering policy from tool preferences, or from a generic AI position copied from another organisation, gives you a stack that does not hold together when challenged.
What this means for components you have not yet redesigned
Most credentialing programmes have legacy components designed before AI was a serious factor. The construct was implicit, because there was no realistic alternative to unaided performance. Now there is. The legacy validity argument may no longer support the legacy interpretation of scores, because the conditions of administration have changed even if the test items have not.
The honest exercise is to walk through every component in your credential and ask three questions.
Three questions for legacy components
- What did this component originally certify?
- What does it now certify in practice, given how candidates can prepare and respond?
- Are those two things the same?
Where the answer is yes, write the construct statement and move on. Where the answer is no, you have a choice. Either re-secure the conditions so the original construct still holds, or rewrite the construct so it matches what the assessment now actually measures. Either is defensible. Doing nothing is not.
The board-level position
For boards and credential owners, the construct decision is also a strategic position. It tells the market what your credential stands for. A credential that explicitly certifies unaided competence in a regulated practice has a different market value to one that certifies responsible AI use in a tool-supported profession. Both can be commercially strong. Both communicate something specific about what holders are trusted to do.
“What erodes credential value is not the choice between with AI and without AI. It is the absence of a choice.”
What erodes credential value is not the choice between with AI and without AI. It is the absence of a choice. Stakeholders who cannot tell what the credential certifies cannot rely on it, and a credential that cannot be relied on does not retain its market position for long.
Make the decision, then govern from it
The other articles in this Compliance and Standards series talk about regulation, governance frameworks, professional standards, and assessment integrity. Every one of them assumes the construct decision has been made. Without it, the AI register has no classification logic. The risk assessments have no construct to assess against. The vendor governance has no rules to enforce. The audit pack has no anchor. The companion piece on the EU AI Act sets out the regulatory frame; the ISO 42001 and 23894 article walks through the operating model.
The work is small in scope and large in effect. Schedule a half-day with your assessment design lead, your psychometrician, and a representative from credential strategy. Walk through each component. Write the construct statement for each one. Get sign-off from the construct owner. Add the statements to your AI register and to your evidence pack.
Once those statements exist, every other AI governance decision becomes faster, fairer, and more defensible. They are the foundation. Build them first.
Ready to make the construct decision explicit across your credential?
Talk to our team about how Globebyte can help you write defensible construct statements for every component.