
The Future of AI in Credentialing

  • Writer: credentialingadvice.com
  • Jul 7
  • 5 min read

I should preface this blog post by saying these ideas are speculation based on my professional experience and personal beliefs.


Many are asking what role AI will play in the credentialing industry. It is too early to say with certainty, but we can make some reasonable predictions about the future of AI in credentialing based on how AI is being used today. A prevailing theme in this blog post will be the continued use of human subject-matter experts (SMEs) throughout the development process: now and for the foreseeable future, AI will not and should not have the authority to make, on its own, the meaningful and final decisions that affect candidates.


Looking at the first stage of the development process, job analysis, we already see AI being used to some extent. Some job analysts are using AI to generate draft versions of the exam blueprint/content outline very early in the job analysis process; importantly, this occurs before human SMEs review the draft content outline. I foresee AI also being used to summarize open-ended comments from job analysis surveys, such as missing job tasks or knowledge topics. Any other meaningful application of AI during a job analysis would most likely be an overextension. Trained job analysts should always oversee the entire process and determine when AI can and cannot be used.


At this point, I have not seen AI incorporated into the standard setting process at all. This is because determining the passing standard is filled with human opinion and judgment, from rating examination items to a governance board making the final decision. One place I can see AI being used is in drafting the hypothetical minimally competent practitioner (MCP). It is common for a group of SMEs to create a hypothetical definition of an MCP to keep in mind while providing their standard setting ratings. Even if AI produces the first draft, it remains important for SMEs to review, edit, and agree on the final MCP definition.


Item writing is definitely where AI has been used the most so far. I have seen numerous presentations, webinars, and articles on crafting AI prompts that lead to better item writing outputs. If human SMEs are ever almost completely removed from a step in the credentialing process, I believe item writing is where it will happen first. Although it isn’t perfect, psychometricians and test developers can already use an exam blueprint to craft AI prompts that generate reasonable “skeletons” of items. SMEs can also use AI to make their item writing more efficient and to generate more items. Some organizations use a process in which psychometricians/test developers help item writers perfect items at the earliest stages, and those organizations will always want human SMEs writing items. However, for organizations that don’t expect perfection during item writing and focus more effort on item review, I can see new items becoming almost entirely AI-generated in the very near future.
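
As a rough illustration of turning a blueprint into prompts, a blueprint entry can be mapped onto a reusable prompt template. The field names and prompt wording below are hypothetical, not a standard practice endorsed by any organization:

```python
# Hypothetical sketch: building an item-writing prompt for a large language
# model from one exam-blueprint entry. Field names are illustrative only.
def build_item_prompt(domain: str, task: str, cognitive_level: str, n_items: int = 3) -> str:
    """Return a draft prompt asking an LLM for item 'skeletons' tied to one blueprint entry."""
    return (
        "You are an expert item writer for a certification exam.\n"
        f"Domain: {domain}\n"
        f"Task: {task}\n"
        f"Cognitive level: {cognitive_level}\n"
        f"Write {n_items} draft multiple-choice items (stem, key, and three "
        "distractors each). Flag any item where you are unsure of the key."
    )
```

In practice, a test developer would loop such a function over every blueprint entry and hand the outputs, clearly labeled as drafts, to human SMEs for revision.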


Item review is such a dynamic, fast-paced process, incorporating the opinions of several SMEs simultaneously, that it is difficult to bring AI into it. If AI were used in item review, I’d say it could serve as a screening tool and a reference checker. One could enter newly written items into an AI prompt and ask for recommendations on how to improve and edit them. This would work better for items written by human SMEs but could also be used iteratively with AI-generated items. Although psychometricians and test developers are better at reviewing and improving the structure of items, it lets someone else make suggestions and potential improvements before the item review meetings. Additionally, anyone who has facilitated item review meetings will tell you how difficult it can be for SMEs to find reasonable references for items. When AI gets better at finding appropriate (rather than fabricated) references, it could make item referencing before or during item review meetings more efficient.
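
Some of that screening doesn’t even need AI. As a minimal sketch, a deterministic pre-screen can flag well-known item-writing flaws before a meeting; the rules below are illustrative examples I chose for this sketch, not an exhaustive or authoritative checklist:

```python
import re

# Hypothetical rule-based pre-screen for common item-writing flaws,
# run before items reach the review meeting. Rules are illustrative.
FLAW_CHECKS = [
    ("all/none of the above", re.compile(r"\b(all|none) of the above\b", re.I)),
    ("absolute term", re.compile(r"\b(always|never|all|only)\b", re.I)),
    ("negative wording", re.compile(r"\b(not|except)\b", re.I)),
]

def screen_item(stem: str, options: list[str]) -> list[str]:
    """Return human-readable warnings for one multiple-choice item."""
    warnings = []
    for label, pattern in FLAW_CHECKS:
        if pattern.search(stem) or any(pattern.search(o) for o in options):
            warnings.append(label)
    # Flag an option much longer than the rest (often a giveaway for the key)
    lengths = [len(o) for o in options]
    if lengths and max(lengths) > 2 * (sum(lengths) / len(lengths)):
        warnings.append("one option much longer than the others")
    return warnings
```

An AI-based screener would catch subtler problems, but even a list like this can thin out the obvious flaws so SMEs spend meeting time on content.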


Similar to item review, exam review meetings are dynamic and require the opinions of several SMEs. Currently, AI does not seem to be used during exam review; however, we can imagine a couple of ways it could be. The first is reviewing all of the items on a newly assembled form for enemies (items too similar in content, or one giving away the answer to another). AI currently does a good job of summarizing content, so it is conceivable that it could review item content and determine whether some items are very similar. AI would need to advance quite a lot to determine whether one item gives away the answer to another, but it is certainly possible. Another way AI could be used during exam review is to determine whether item content is accurate and current, although this too would require improvements to AI’s capabilities. Human SMEs would still be needed to determine whether item content is fair and meaningful, but AI would reduce the cognitive load on SMEs during exam review because they would have fewer things to check.
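
The first-pass version of enemy screening doesn’t require a large language model at all: plain bag-of-words cosine similarity can surface wording overlap for human review. This is a simplified sketch, and the 0.5 threshold is an arbitrary placeholder a real program would tune:

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between two items' bag-of-words vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)  # Counter returns 0 for missing words
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def flag_enemy_candidates(items: dict[str, str], threshold: float = 0.5) -> list[tuple[str, str]]:
    """Return pairs of item IDs whose stems share suspiciously similar wording."""
    ids = sorted(items)
    return [
        (a, b)
        for i, a in enumerate(ids)
        for b in ids[i + 1:]
        if cosine_similarity(items[a], items[b]) >= threshold
    ]
```

A surface-level check like this only catches shared wording; judging whether one item actually answers another still takes SMEs (or much more capable AI).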


As a psychometrician, one of the more interesting ways I see AI being used in credentialing is to generate item statistics without pretesting. For those less familiar: candidates need to respond to an item before you can generate item statistics (e.g., difficulty, discrimination) that tell you how the item performs. Having data on all of the scored items that determine whether a candidate passes or fails is what makes it possible to score a candidate’s test instantly and provide a score report immediately. Large language models use natural language processing (NLP) to predict the probability of the next word in a sentence; if you use Copilot in Microsoft Word, you see it happening every time grey words pop up as recommendations. In the same way, we can use very large item banks with statistics to predict how new items will perform. Some researchers have started investigating this possibility, but the research is in its infancy. Although the potential to use AI to generate/predict item statistics will only be available to large certification programs testing many candidates with very large item banks, it is still exciting to think about!
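
To make the idea concrete, the simplest possible version is a nearest-neighbor prediction: estimate a new item’s p-value (proportion answering correctly) as a similarity-weighted average of the observed p-values of the most similar banked items. Everything here is an assumption for illustration; the actual research uses far richer models than word overlap:

```python
from collections import Counter
import math

def _vec(text: str) -> Counter:
    """Bag-of-words vector for an item stem."""
    return Counter(text.lower().split())

def _cosine(va: Counter, vb: Counter) -> float:
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_difficulty(new_stem: str, bank: list[tuple[str, float]], k: int = 3) -> float:
    """Predict a new item's p-value as a similarity-weighted average of the
    k most similar banked items' observed p-values (illustrative toy model)."""
    vn = _vec(new_stem)
    scored = sorted(((_cosine(vn, _vec(stem)), p) for stem, p in bank), reverse=True)[:k]
    total = sum(s for s, _ in scored)
    if total == 0:
        # No lexical overlap with the bank: fall back to the bank's mean p-value
        return sum(p for _, p in bank) / len(bank)
    return sum(s * p for s, p in scored) / total
```

A toy model like this mostly shows why very large item banks matter: with few banked items, the “nearest neighbors” of a new item may not resemble it at all, and the prediction is little better than the bank average.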


We are only just beginning to consider how AI will be incorporated into the credentialing industry. But it is important to remember that AI is a tool for those working in the industry. When augmenting your processes with AI, the goal should be to improve the working conditions of credentialing professionals and the candidates they serve. There is much talk about AI replacing humans, or humans using AI replacing humans who don’t. It is important to remind everyone that the credentialing industry exists to verify and improve the qualifications and competencies of professionals. In the same way that credentialing organizations wish to expand their candidate base and benefit the profession, the credentialing industry itself should try to expand and improve. Let’s make sure that the industry’s use of AI focuses on improvement and doesn’t inadvertently reduce the number of credentialing professionals.
