Solutions through Innovation

Validity Assurance

There are two important aspects of validity assurance for credentialing exams. Role delineation studies (sometimes referred to as "job analyses") assure that the content of the exam and the mechanisms of assessment are consistent with the role from the perspectives of public protection and competence. Minimum passing score studies (sometimes referred to as "cut score studies") assure that the ultimate decisions to grant or withhold the credential are soundly based on necessary elements of knowledge, skill, and ability and are neither arbitrary nor capricious.

JOB ANALYSIS

The job analysis (also known as a role delineation study) is the foundation of a valid, legally defensible exam program. Performing a job analysis and using the resulting data to develop exam specifications ensures that candidates are tested on the knowledge, skills, and abilities (KSAs) that are relevant to the role for which they are being certified or licensed. Validity (the appropriateness, meaningfulness, and usefulness of specific inferences made from test scores) is the most important consideration in test development, and a job analysis is the essential first step in establishing a valid exam.

The SMT job analysis process includes:

  • Initial research with existing documentation.
  • KSA review by a small group of subject matter experts (SMEs).
  • Recommendation of a focus group or survey-based approach to gather additional response data. For survey-based job analyses, SMT can administer the survey by mail or on the web.
  • Data analysis and content recommendations (an illustrative sketch of this step follows the list).
  • Test blueprint and exam specification development.
  • Comprehensive report of the process to establish legal defensibility of the exam.
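
As an illustration of the data analysis step, the sketch below shows one common way to turn survey ratings of importance and frequency into draft blueprint weights. The domain names, ratings, and weighting formula are hypothetical examples for illustration only, not SMT's actual scoring model.

```python
"""Illustrative sketch: deriving draft blueprint weights from job-analysis
survey ratings. All domains and ratings are hypothetical."""

# Mean importance and frequency ratings (1-5 scale) per content domain,
# averaged across survey respondents.
domains = {
    "Assessment and Diagnosis": {"importance": 4.6, "frequency": 4.2},
    "Treatment Planning":       {"importance": 4.1, "frequency": 3.8},
    "Ethics and Regulation":    {"importance": 4.8, "frequency": 3.1},
    "Documentation":            {"importance": 3.5, "frequency": 4.5},
}

# Criticality index: importance x frequency, normalized to percentages
# that can serve as draft blueprint weights for SME review.
criticality = {name: r["importance"] * r["frequency"] for name, r in domains.items()}
total = sum(criticality.values())

for name, value in criticality.items():
    print(f"{name}: {100 * value / total:.1f}% of exam content")
```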

At the core of the job analysis process is one or more meetings between SMT's psychometric staff and SMEs. In addition to their psychometric expertise, SMT personnel are highly experienced in conducting efficient and effective meetings that streamline the process.

DETERMINATION OF ASSESSMENT TYPES

Following the development of a test blueprint through a job analysis or other documentable process, SMT will work with your staff and subject matter experts (SMEs) to determine the appropriate method by which to assess the desired cognitive level. In many instances, the cognitive skills can appropriately be measured with a selected-response assessment using traditional multiple-choice questions. However, higher-level cognitive skills can now be more easily addressed through technology-enabled innovative item types. SMT has developed advanced diagnostic capabilities to determine which of these item types can be readily employed in your testing program, and offers an application suite, DaVinci's Tool Chest, to develop them.

PASSING STANDARDS

Following the job analysis study, an exam form is developed to meet the new exam specifications. The next step is to set a passing standard that differentiates successful (competent) from unsuccessful candidates. There are numerous ways to establish a passing standard, and factors such as candidate volume, item bank condition, and exam history help determine which methodology is most appropriate. The methodologies most commonly employed by SMT are cut score studies, item response theory (IRT) equating, and common item linear equating.

Cut Score Studies

A cut score study involves the judgment of SMEs in setting a passing standard for an exam form. SMT generally utilizes a modified Angoff method, in which SMEs make item difficulty judgments based upon a definition of minimal competence. A cut score study can be conducted before, during, or after exam administration, depending on how much impact data the client would like the SMEs to have when making their item difficulty judgments. Generally, cut score studies are used for the first form of an exam, when there is not enough candidate volume to employ equating methods, or when there are no common items between exam forms.
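
To make the modified Angoff arithmetic concrete, the sketch below averages each item's SME probability estimates and sums the item means to produce a recommended raw cut score. The ratings are entirely hypothetical, and this is an illustration of the general method rather than SMT's exact procedure.

```python
"""Minimal sketch of a modified Angoff cut score calculation.

Each rating is a judge's estimate of the probability that a minimally
competent candidate answers the item correctly (hypothetical values)."""

# Rows = SME judges, columns = exam items.
ratings = [
    [0.70, 0.55, 0.80, 0.60, 0.90],  # Judge 1
    [0.65, 0.60, 0.75, 0.55, 0.85],  # Judge 2
    [0.75, 0.50, 0.85, 0.65, 0.95],  # Judge 3
]

n_judges = len(ratings)
n_items = len(ratings[0])

# Average the judges' ratings for each item, then sum across items:
# the result is the expected raw score of a minimally competent candidate,
# which serves as the recommended cut score for the form.
item_means = [sum(judge[i] for judge in ratings) / n_judges for i in range(n_items)]
cut_score = sum(item_means)

print(f"Recommended raw cut score: {cut_score:.2f} out of {n_items}")
```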

IRT Equating

When an exam program has sufficient candidate volume, multiple forms of an exam, and common items between forms, IRT equating can be used to determine a passing standard. IRT equating uses a mathematical model of item characteristics and candidate ability to ensure that the passing standard on the current form is equivalent to the passing standard on previous forms. SMT recommends IRT equating whenever possible, as it is an objective method of setting a passing standard.
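
The sketch below illustrates the core idea of IRT true score equating under a three-parameter logistic (3PL) model: once the new form's items are on the same IRT scale as earlier forms (the common-item linking step is not shown), the raw cut score on the new form is the expected score at the ability level corresponding to the established passing standard. The item parameters and ability cut point here are hypothetical, and the model and linking method used for a given program may differ.

```python
import math

def p_correct(theta, a, b, c=0.0):
    """3PL probability of a correct response at ability theta."""
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def expected_raw_score(theta, items):
    """Test characteristic curve: expected raw score at ability theta."""
    return sum(p_correct(theta, *item) for item in items)

# Hypothetical item parameters (a, b, c) already placed on a common IRT scale.
new_form_items = [(1.2, -0.5, 0.20), (0.9, 0.0, 0.25), (1.5, 0.8, 0.20), (1.1, 0.3, 0.20)]

# Suppose the established passing standard corresponds to an ability of 0.25.
theta_cut = 0.25

# The equated raw cut on the new form is the expected score at that ability.
raw_cut = expected_raw_score(theta_cut, new_form_items)
print(f"Equated raw cut on new form: {raw_cut:.2f}")
```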

Common Item Linear Equating

When it is not possible to utilize IRT equating methodology, but there are multiple forms of an exam with common items, common item linear equating can be used to determine the passing standard for an exam form. Common item linear equating takes into account the previous passing standard and candidate performance on common and non-common items in order to set a passing standard for a new exam form.
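
As an illustration, the sketch below applies chained linear equating through the common (anchor) items: the old form's passing standard is mapped onto the anchor scale using the old candidate group, and then from the anchor scale onto the new form using the new group. Chained linear equating is only one of several common item linear methods (Tucker and Levine are others), and all scores shown are hypothetical.

```python
import statistics

def linear_link(score, mean_from, sd_from, mean_to, sd_to):
    """Map a score from one scale to another by matching z-scores."""
    return mean_to + sd_to * (score - mean_from) / sd_from

# Hypothetical total and anchor (common-item) scores for each candidate group.
old_total  = [68, 75, 72, 80, 65, 78, 74, 71]   # old form, old group
old_anchor = [14, 16, 15, 18, 13, 17, 16, 15]
new_total  = [66, 72, 70, 77, 63, 76, 71, 69]   # new form, new group
new_anchor = [13, 15, 14, 16, 12, 16, 15, 14]

old_form_cut = 72.0  # previously established passing standard on the old form

# Chain: old form -> anchor scale (old group), then anchor -> new form (new group).
cut_on_anchor = linear_link(
    old_form_cut,
    statistics.mean(old_total), statistics.stdev(old_total),
    statistics.mean(old_anchor), statistics.stdev(old_anchor),
)
new_form_cut = linear_link(
    cut_on_anchor,
    statistics.mean(new_anchor), statistics.stdev(new_anchor),
    statistics.mean(new_total), statistics.stdev(new_total),
)

print(f"Equated passing standard on the new form: {new_form_cut:.1f}")
```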