How can school psychologists use performance validity testing to provide more accurate assessments of students?
Understanding performance validity is important any time maximum performance tests are used (e.g., tests of intelligence, aptitude, achievement, and/or neuropsychological performance). Knowing whether a student has given their best effort during assessment is critical to the accuracy of test score interpretation. This need has long been recognized in adults; however, pediatric validity testing has been strongly encouraged only in the last decade and is now considered a standard of care in child and adolescent assessments (Guilmette et al., 2020).
We sat down with Dr. Cecil Reynolds, co-author of MHS’ Pediatric Performance Validity Test Suite (PdPVTS), to discuss the importance of creating a standard around performance validity testing within educational assessment settings.
The interview below has been edited for length and clarity.
What was the motivation that began your work on the Pediatric Performance Validity Test Suite?
Research over the last several decades has shown that many children and adolescents are not giving "best effort" or displaying maximum performance on measures of intelligence, achievement, memory, and various neuropsychological functions. And, as it turns out, we as clinicians are not as good at detecting such sub-optimal effort as we thought; quantitative methods are required to be accurate. The vast majority of practitioners have been using performance validity tests (PVTs) designed for adults with children and youth. There was but one PVT on the market designed for children, and it was a single task. We know that best practice requires the use of multiple PVTs over the course of a comprehensive assessment. So, we set out to develop a battery of five co-normed PVTs explicitly designed for use with children and youth, with age-appropriate stimuli and age-corrected cutoffs. This also allows a major advance, one not available even with PVTs for adults that have been around far longer: we provide base rates of pass/fail for any and every combination of two to five of the tests that make up the PdPVTS, and we provide these base rates for non-clinical as well as nine different clinical populations, something no one else has been able to provide for any age. Base rates of pass/fail across multiple PVTs are critical to correct interpretation of the results.
PVTs are well ingrained in neuropsychological testing – but why would a school psychologist need to add these to their test batteries?
Evaluations by school psychologists lead to life-changing events in the lives of children and adolescents. Given the state of knowledge and our best practices documents and guidance from our professional organizations, it is clear that quantitative and objective assessments of effort should be compulsory in all evaluations of children and adolescents that include tests of maximum performance. Maximum performance tests include intelligence, achievement, memory, and the like.
Without documentation of appropriate levels of effort, one cannot assume effort was adequate, and test scores on maximum performance tests cannot be interpreted as intended. We have an ethical and moral obligation to get it right as often as possible, and PVTs help us. School psychologists have used symptom validity tests (SVTs) since the 1980s and so have recognized the issue of potentially invalid or compromised results on self-reports and rating scales. The addition of PVTs is just the next logical step.
A psychologist might not get best effort during assessment for many reasons, such as the examinee's boredom or simple fatigue; suboptimal effort is not always purposeful. If you're going to interpret results from whatever you may be testing for, you need to make sure you're receiving best effort the whole way through. If you do not get maximum effort from the examinee, you cannot draw inferences about levels of intelligence, academic achievement, or brain-behavior relationships.
So, how do you measure this? Most examining clinicians think it is obvious when children are not giving their best effort or are engaged in malingering or other forms of dissimulation. The research on detecting poor effort and deception indicates this is not true. In fact, we are only slightly better than chance at detecting such issues in the absence of objective, quantitative data. Our professional organizations have recognized this, and current consensus documents state that using objective PVTs is best practice.
Part of the best practice recommendations for PVTs is that they should be given throughout the course of assessment. How does this practice improve the overall assessment of the student?
With children, we know that the level of effort will change during the course of their exam. As practitioners, we think we are really good at what we do, and as stated above, we often think we know exactly the moment when effort starts to wane or when a child starts to engage in dissimulation. Again, we now know we were wrong about this. We're not as good as we think we are at detecting this. I've examined thousands of children throughout my career since I first began in 1975. I thought, like many others, "I'm really good at this. I'll recognize, and I'll know when a child is not giving me their best effort, and I'll bring them back to best effort." We've all done a lot of exams, and that's why we think we can trust our gut from a clinical standpoint. However, the research proves this wrong. We must follow the research, engage in evidence-based practice, and admit that we're not as good at detecting this as we think we are. It's a hard thing to accept. It feels like you're questioning your professional judgment.
Once you accept this, using PVTs throughout assessment, particularly with children, will make you a better examiner. It will make your exam results better, more interpretable, and more relevant to whatever assessment and intervention strategies you're aiming for.