

Are You Really a Wizard? Assessing for Real Competency

We’ve all been tempted at some point to take free so-called “tests” or “exams” online: pop-culture tests (Ravenclaws, unite!), character tests, skills tests, IQ tests, and so on. Have you ever taken a role-playing game character test and disagreed with the results? “I’m not a Paladin. I’m a Wizard!”

It can get dangerous when people take—and take seriously—free online “tests” promising insight into competence, knowledge, skills, and so forth. These quizzes may not be psychometrically sound and may have little research behind the measures.

How would you feel if your entry into programs or even social circles were judged based on your scores on a “free IQ test” that you found online? Sounds ridiculous, right?

Well, what if you wanted to gauge the skills and fit of the CNAs in your workforce, so you went online and found something that claimed to measure everything you were looking for…

Don’t Trust Everything You Find on the Internet

Maybe that tool looks applicable to you (it’s “face valid”), so you decide to take it before giving it to your CNAs. You find most of the questions to be rather easy, common-sense affairs. On others, as you progress, you think to yourself, “Hmm, I’m not really sure how this would apply to my CNAs.”

You finish it without much thought and view your results… Congratulations, you’re the most perfect CNA ever! Never mind that you have no background as a CNA.

Enthusiastically, you set out to have your CNA workforce take this assessment. You’re ready to begin gathering and interpreting the valuable personnel data.

A few weeks later, the results are pouring in. At first, your managers are surprised. Some star employees didn’t score well, while others who were struggling on the job seemed to do just fine. Your CNAs dispute the validity of the results.

Then you and your managers begin creating individual development plans with each person based on competence results from the free assessment. A few months go by, and both the quality of care and the overall health outcomes among the individuals you serve decline.

What happened?

Not a Wizard, but a Whimper

A few things likely contributed to this scenario:

  • The free assessment, while face valid, wasn’t entirely job related.
  • The free assessment wasn’t psychometrically sound.
  • The free assessment wasn’t an assessment at all.

Honestly, there’s a good chance that all or at least a combination of these factors led to the undesirable outcome. Let’s unpack each of these and what could have happened.

Looks Can Be Deceiving

Perhaps that free online tool wasn’t truly related to everything that makes a CNA successful. It might have been face valid, meaning that it appears to be relevant to laypeople and even some subject matter experts. But it might have included questions, or even focused on entire competencies, not truly related to CNA success. It’s unlikely that the creators of a free online tool put in the time, resources, and effort to conduct a job analysis, identify critical incidents for the job, conduct content or criterion validation studies, and create a maintenance plan for the assessment to monitor for adverse impact and differential item functioning.

Think of it like this: Someone with no experience as a CNA could probably Google a few CNA job descriptions, read a few articles about their responsibilities, and write 10 to 15 questions that appear relevant to CNA work.

After further scrutiny, however, managers and CNAs will likely notice many things wrong with the questions that a layperson would not immediately notice. Terminology, setting, procedure, equipment, and so on might all be slightly or completely off. The situations may not be realistic. Even the competencies that the questions are trying to capture may not apply to CNAs at all.

Psychowhat? Psychobabble?

If you’ve never heard the term before, “psychometrically sound” likely sounds like some kind of psychobabble that scientists in white lab coats mutter to themselves when their experiments go wrong. While that’s not too far off if you replace “scientists” with “psychologists” and “lab coats” with “khakis and penny loafers,” “psychometrics” refers to the science behind testing or assessment.

In order for an assessment to hold up under the scrutiny of subject matter experts, psychologists, and even government bodies such as the Equal Employment Opportunity Commission, a certain level of scientific rigor must be applied.

This rigor involves:

  • Studying the job and all its components (job analysis) and identifying day-to-day and irregular tasks and responsibilities that contribute to success and how to best handle them (critical incidents).
  • Ensuring that the assessment measures what you claim it measures for its intended population (content validation), as well as sometimes tying the assessment to performance metrics (criterion validation).
  • Minimizing bias and discrimination as much as possible (adverse impact analyses).
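The last check above has a well-known screening heuristic behind it: the EEOC’s “four-fifths rule,” which flags potential adverse impact when any group’s selection (or pass) rate falls below 80% of the highest group’s rate. Here is a minimal sketch of that comparison (the function name, group labels, and pass rates are all invented for illustration):

```python
def adverse_impact_ratio(selection_rates):
    """Compare each group's selection rate to the highest group's rate.

    Under the EEOC four-fifths (80%) rule of thumb, a ratio below
    0.80 flags potential adverse impact worth closer review.
    """
    highest = max(selection_rates.values())
    return {group: rate / highest for group, rate in selection_rates.items()}

# Hypothetical pass rates: the fraction of each group passing the assessment.
rates = {"group_a": 0.60, "group_b": 0.45}
ratios = adverse_impact_ratio(rates)           # group_b: 0.45 / 0.60 = 0.75
flagged = [g for g, r in ratios.items() if r < 0.80]
```

A flagged result isn’t proof of discrimination; the rule is a first-pass screen that signals when deeper statistical analysis of the assessment is warranted.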

What feeds into every single factor above—the secret sauce behind each assessment—is its items, or questions. Sound item construction is vital to creating a good assessment.

What separates a good assessment from a poor assessment? Think of some features of those free online tools:

  • Questions with easy-to-guess answers. Anyone, even a layperson, could guess correctly.
  • Leading questions. They essentially teach rather than measure the knowledge.
  • Loaded items. These questions require “all of the above” or “none of the above” answers.

These are relatively easy things to spot. What can be tougher to determine without scientific analysis is the difficulty of each item, how tightly each item is tied to a core competence of the job, and how well each item measures the competence to which it’s tied.

Don the Robe and Wizard Hat

You’ve probably guessed by now that not all assessments are created equal. In fact, many tools are simply masquerading as an assessment. Don’t be fooled by tools that claim to measure competence with simplistic items constructed as more of a “check the box” endeavor than a true measure of job-related competence.

In fact, these types of tools may be more like self-guided checklists than true competency assessments. These tools give you the illusion of measuring competence by being face valid and allowing you to easily total your scores for different factors. However, they leave much to be desired in terms of scientific rigor or actionable steps to address the competency gaps.

Hocus Pocus? Or Status Quo?

If you aren’t using psychometrically sound competency assessments as part of your workforce improvement efforts, you have no objective way of knowing what gaps in competence each individual in your organization has. Nor can you gauge what competence gaps may be shared among your entire workforce. This can lead to trouble and even trigger an audit if an adverse event happens.

Furthermore, by using psychometrically sound competency assessments, you can be intentional in how you develop your workforce. By objectively measuring deficiencies and providing relevant training only on those deficiencies, you can save your organization time, resources, and money. You can also reassess after the training to show improvements. In the event of an audit, this data could prove helpful as evidence of your ongoing remediation efforts.

You Might Just Be a Wizard After All

Assessments don’t have to be confusing or require some sort of magical key to decode. You now know some of the most important factors to look for in good competency assessments. You can use this knowledge to appear almost magical to others when you spot those less-than-stellar alternatives. Go forth and play your most important role: authentic leader.

Demonstrate Clinical Competency

Staff competencies are fundamental to providing person-centered, quality care. Download our fact sheet to learn how to measure, track, and prove competency.

Read the fact sheet →
