An Overview of the NCCA AI-Guidance Document

  • Writer: credentialingadvice.com
  • Jun 25
  • 5 min read

Updated: Jun 25

Artificial intelligence (AI) is an emerging technology that is already reshaping how we live and work. The National Commission for Certifying Agencies (NCCA) recently released a document offering guidance on how AI affects several of the NCCA accreditation standards. Below is a brief overview of the guidance for each standard.


Standard 2: Governance and Autonomy

In short, AI systems cannot make final decisions that are central to the certification board's governance. This means AI cannot make a final decision related to eligibility standards, initial certification, recertification, disciplinary actions, test development, test administration, scoring, or selecting subject-matter experts.


Standard 3: Education, Training, and Certification


Certification programs can use AI to create educational and training materials, but those systems must not communicate with any AI system involved in core certification decisions. Training and education AI systems can use publicly available resources, such as exam content outlines, but not confidential resources, such as certification exam items or scoring rubrics. If you do use AI for education or training, document what you are doing and be ready to provide that evidence upon request.


Standard 5: Human Resources


A general theme throughout the guidance document is that qualified human personnel must oversee AI systems. Those overseeing an AI system must be competent with it: they must understand its outputs, manage potential issues, and be able to override its recommendations. They must also audit the AI system to ensure it is making fair and accurate decisions, identify biases and errors, and ensure that decisions negatively affecting candidates are reviewed by a human.
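
To make that oversight requirement concrete, here is a minimal sketch (Python, with invented names; none of this comes from the NCCA document) of a human-in-the-loop gate: the AI output is treated strictly as a recommendation, and any outcome that would negatively affect a candidate is routed to a qualified reviewer who can override it.

    from enum import Enum

    class Outcome(Enum):
        APPROVE = "approve"
        DENY = "deny"

    def final_decision(ai_recommendation, human_review):
        # The AI output is a recommendation only: any adverse outcome is
        # routed to a qualified human, who may accept or override it.
        # Favorable outcomes would still get periodic audits (not shown).
        if ai_recommendation is Outcome.DENY:
            return human_review(ai_recommendation)
        return ai_recommendation

    # Example: the reviewer overrides an AI denial after checking the record.
    print(final_decision(Outcome.DENY, lambda rec: Outcome.APPROVE))  # Outcome.APPROVE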


Standard 6: Information for Candidates


Guidance on this standard currently applies only to AI chatbots and automated communication systems. If you use one, make sure candidates can escalate an issue to a human and provide feedback. Auditing a reasonable sample of these interactions will ensure adequate quality control.
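
One simple way to operationalize that auditing, sketched below with hypothetical data and an assumed 10% audit rate, is to route every interaction the candidate escalated to a human and randomly sample the rest for quality review.

    import random

    # Hypothetical chatbot transcript log: "escalated" marks interactions
    # where the candidate asked to reach a human.
    transcripts = [{"id": i, "escalated": random.random() < 0.05} for i in range(500)]

    AUDIT_RATE = 0.10  # audit 10% of routine interactions -- an assumed policy choice

    escalated = [t for t in transcripts if t["escalated"]]  # always reach a human
    routine = [t for t in transcripts if not t["escalated"]]
    audited = random.sample(routine, int(len(routine) * AUDIT_RATE))

    print(f"{len(escalated)} escalations + {len(audited)} sampled transcripts "
          f"queued for human quality review (of {len(transcripts)} total)")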


Standard 7: Program Policies


The guidance here is, essentially, to follow the law. Regulatory requirements around AI are changing constantly, so stay aware of local, state, and federal regulations and follow how AI court cases are being litigated.


Standard 8: Awarding of Certification


The earlier point that AI systems cannot make core certification decisions is reiterated here, specifically regarding the determination of a candidate's certification status. Any process that involves AI-generated decisions must be documented and auditable by qualified human personnel.


Standard 9: Records Retention and Management Policies


If an AI system is used in records retention or document management, then the system and the humans who oversee the system must maintain data integrity, auditability, and compliance with relevant policies. AI systems should not modify confidential records unless explicitly authorized by human personnel. If AI-generated or AI-managed records are subject to a security breach, programs must have clear policies on how to respond.


Standard 10: Confidentiality


Certification programs must ensure AI systems follow all confidentiality policies. Only authorized personnel should access and use AI systems when processing confidential certification information. There should also be a verification process for AI-generated insights to ensure confidential information isn't misused or exposed.


Standard 11: Conflict of Interest


Although AI systems do not have personal biases or conflicts of interest, they can still give certain groups an unfair advantage or unintentionally create a conflict of interest. Certification programs must conduct due diligence when building their own AI systems or working with a vendor. Ultimately, certification programs will still be held accountable for vendor-created issues that arise from AI systems.


Standard 12: Security


AI systems must follow all applicable security policies to ensure the integrity and confidentiality of certification data. This can be done by using a closed AI system or creating a "walled garden."


Standard 13: Panel Composition


AI cannot, and should not, replace subject-matter experts (SMEs). Although AI can be used in many different ways throughout the development cycle to make things more efficient, qualified human SMEs must make final decisions in everything from job analysis and standard setting to item and exam development.


Standard 14: Job Analysis


If a certification program uses AI during the job analysis, it must document what was done and how qualified human personnel oversaw the process and made the final decisions.


Standard 15: Examination Specifications


AI systems can certainly help certification programs summarize job analysis results, but final decisions must be made by qualified human personnel. The guidance specifically calls out final exam weightings and requires documenting AI's role in developing the examination specifications.


Standard 16: Examination Development


Using AI to improve efficiency in exam development is perfectly fine, but qualified humans must provide oversight. Item writers can use AI to help draft items, but those items must be reviewed during item review meetings with the same rigor as non-AI-generated items. Of particular note is the guidance that certification programs must exercise due diligence to avoid infringing on others' intellectual property.


Standard 17: Setting and Maintaining Passing Standards


The guidance here follows the theme of AI being used to assist in standard setting activities, but not without qualified human SMEs critically evaluating AI recommendations.


Standard 18: Examination Administration


AI can be used to assist proctors in detecting irregularities but must not independently make decisions that invalidate candidates' exams. Certification programs should ensure AI-assisted proctoring systems do not introduce bias and that they promote a fair testing experience.
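
As an illustration of that "flag, don't decide" pattern, here is a minimal sketch (invented names, an assumed confidence threshold): the AI only queues suspected irregularities for a human proctor, and invalidating an exam remains a human decision.

    from dataclasses import dataclass

    @dataclass
    class IrregularityFlag:
        candidate_id: str
        reason: str
        confidence: float  # the AI's confidence, 0 to 1

    def proctor_queue(flags, threshold=0.5):
        # The AI only flags; invalidating an exam is always a human call.
        return [f for f in flags if f.confidence >= threshold]

    flags = [
        IrregularityFlag("C001", "gaze repeatedly off-screen", 0.82),
        IrregularityFlag("C002", "brief background noise", 0.31),
    ]
    for f in proctor_queue(flags):
        print(f"Proctor review: {f.candidate_id} -- {f.reason} ({f.confidence:.0%})")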


Standard 19: Scoring and Score Reporting


If AI systems are involved in scoring candidates or providing score-report feedback, qualified human personnel must ensure the fairness and accuracy of those decisions and outcomes. Humans may not need to review every decision directly, but there should be systematic and frequent oversight.
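
One common form such oversight could take (a sketch, not anything the guidance prescribes) is having humans independently re-score a random sample each administration and tracking agreement with the AI. Random values stand in for real scores below, and the thresholds are illustrative.

    import random

    # Stand-in data: AI scores on a 0-4 rubric for 200 responses.
    ai_scores = {f"resp{i}": random.randint(0, 4) for i in range(200)}

    # Humans independently re-score a 10% random sample (random stand-ins here).
    sample_ids = random.sample(sorted(ai_scores), 20)
    human_scores = {rid: random.randint(0, 4) for rid in sample_ids}

    exact = sum(ai_scores[r] == human_scores[r] for r in sample_ids) / len(sample_ids)
    within_one = sum(abs(ai_scores[r] - human_scores[r]) <= 1 for r in sample_ids) / len(sample_ids)

    print(f"exact agreement: {exact:.0%}, within-one-point agreement: {within_one:.0%}")
    if exact < 0.70:  # illustrative target, not an NCCA requirement
        print("Agreement below target -- escalate to full human review")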


Standard 20: Evaluation of Items and Examinations


You can use AI to make recommendations on which items should be kept/removed from an exam, but final decisions must be made by qualified human personnel (e.g., psychometricians).
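
For a sense of what such recommendations might look like, here is a sketch using classical item statistics: each item's difficulty (proportion correct) and point-biserial discrimination are computed, and out-of-range items are flagged for the psychometrician, who makes the final keep/remove call. The data and thresholds are invented for illustration.

    import statistics

    # Hypothetical response matrix: rows are candidates, columns are items (1 = correct).
    responses = [
        [1, 1, 0, 1],
        [1, 0, 0, 1],
        [0, 1, 1, 1],
        [1, 1, 0, 0],
        [1, 0, 1, 1],
    ]
    totals = [sum(row) for row in responses]

    def item_stats(col):
        scores = [row[col] for row in responses]
        p = sum(scores) / len(scores)  # difficulty: proportion correct
        m1 = statistics.mean(t for s, t in zip(scores, totals) if s == 1)
        m, sd = statistics.mean(totals), statistics.pstdev(totals)
        rpb = (m1 - m) / sd * (p / (1 - p)) ** 0.5  # point-biserial discrimination
        return p, rpb

    # Flag items for psychometrician review; thresholds are illustrative only.
    for col in range(len(responses[0])):
        p, rpb = item_stats(col)
        if not 0.2 <= p <= 0.9 or rpb < 0.15:
            print(f"Item {col}: p={p:.2f}, r_pb={rpb:.2f} -- recommend human review")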


Standard 21: Maintenance of Certification


You can use AI to help generate continuing education content and assessments, but qualified human personnel (e.g., SMEs) must review them to confirm the accuracy of the content. AI-generated recertification outputs must meet the same quality standards as traditionally developed content and evaluations.


Standard 22: Quality Assurance


If a certification program uses AI to assist in quality assurance activities, it must provide documentation showing the processes and protocols involved.


Standard 23: Maintaining Accreditation


If a certification program uses AI, it must disclose how AI is used in its initial application, 5-year re-accreditation applications, and annual reports. Using AI in core certification activities in a way that significantly changes those processes requires reporting a material change. A material change means the AI system has modified aspects of the certification program such as:

  • the legal status or governance structure,
  • the purpose or scope of the program,
  • the purpose or scope of the examination,
  • the program name/designation, or
  • the exam development, administration, or evaluation procedures.


A major theme throughout the document is that, generally speaking, the use of AI is acceptable, but AI cannot and should not replace qualified humans. Qualified personnel and SMEs should be involved whenever a certification program uses an AI system. If you'd like to access the document in its entirety, please use the following link: https://www.credentialingexcellence.org/Portals/0/NCCA%20AI%20Guidance%20Document_v1_1%20FINAL.pdf
