Performance Validity Testing: Youth Assessment in Public Safety Settings

Understanding performance validity is important any time maximum performance tests are used (e.g., tests of intelligence, aptitude, achievement, and/or neuropsychological performance). Knowing whether a child or adolescent has given their best effort during assessment is critical to the accuracy of test score interpretation. This need has long been recognized in adults; however, pediatric validity testing has been strongly encouraged only in the last decade and is now considered a standard of care in child and adolescent assessments (Guilmette et al., 2020).

The reasons for not giving optimal or maximum performance can vary in a public safety setting. In some cases, a child or adolescent may not provide maximum performance in an attempt to feign symptoms to gain a more lenient sentence or access to services and medication. In other cases, the stress of being held in a facility, or simple boredom or frustration, may interfere with effort. A lack of maximum performance can be a detriment to a true analysis of an individual’s performance on a cognitive or behavioral assessment.

The ability to detect non-maximum performance without the use of performance validity measures is not as easy as you may think. In one study, three teenagers were instructed to “fake bad” on neuropsychological testing. Test results and a fabricated history of a mild to moderate head injury were sent to a representative sample of neuropsychologists, each of whom reviewed one case. Seventy-five percent of clinicians judged the cases to be abnormal and attributed the results to cortical dysfunction; no respondent detected the non-credible performance (Faust, Hart, Guilmette, & Arkes, 1988). In another study, researchers found that base rates of pediatric noncredible presentations are highest in particular groups, for example, children seen frequently by rehabilitation providers or children from families seeking disability benefits on their behalf (Kirkwood, Kirk, Blaha, & Wilson, 2010).

We sat down with Dr. Cecil Reynolds and Dr. Robert Leark, co-authors of MHS’ Pediatric Performance Validity Test Suite™ (PdPVTS™), to discuss the importance of creating a standard around performance validity testing within public safety-related settings.

The interview below has been edited for length and clarity.

What was the motivation that began your work on the Pediatric Performance Validity Test Suite?

Dr. Reynolds: The vast majority of practitioners who use performance validity tests (PVTs) have been using PVTs designed for adults with children and youth. There was only one PVT on the market designed for children, and it was a single task. We know that best practice requires the use of multiple PVTs over the course of a comprehensive assessment. So, we set out to develop a battery of five co-normed PVTs explicitly designed for use with children and youth, with age-appropriate stimuli and age-corrected cut-offs. This also allows a major advance, one that is not available even with PVTs for adults that have been around far longer: we provide base rates of pass/fail for every combination of two to five of the tests that make up the PdPVTS, and we provide these base rates for a non-clinical sample as well as nine different clinical populations, something no one else has been able to provide at any age. Base rates of pass/fail across multiple PVTs are critical to correct interpretation of the results.

Why is performance validity of crucial importance while assessing children and adolescents in public safety settings?

Dr. Leark: In a judiciary setting, we need to be able to demonstrate that the results of any assessment performed with a child or adolescent are accurate, scientifically valid, and psychometrically sound. The only way to do this is to ensure that the results of the assessment (and any potential diagnoses or identification) are accurate and that maximum performance was given during the assessment. During any sort of trial or hearing, the courts may challenge the evidence presented. For the child or adolescent at hand, their assessment results add to the decision-making process.

The use of performance validity measures applies to any cognitive and behavioral testing that is undertaken during assessment. As a clinician, you want to ensure the youth at hand is being accurately assessed and that any psychiatric symptoms are identified in a scientifically valid manner. By leveraging the PdPVTS during assessment, you can support the validity and reliability of the data brought forth in the setting of a trial or a hearing.

Further, many individuals who end up in juvenile placement have Individualized Education Program (IEP) histories. Incorporating data that distinguishes genuine inability from low scores related to insufficient effort is critical. I encourage any clinician performing an evaluation to use performance validity measures during the assessment process. This may be even more important when the juvenile being assessed obtains scores lower than average.

The PdPVTS is the only assessment on the market that provides base rates within the testing suite. How does this impact the understanding of the assessment itself?

Dr. Reynolds: If you’re going to use a PVT, you need to know the base rates of failure for the combinations of tests that you’re giving. The PdPVTS is the only set of tests for children or adults that provides base rates across more than one test. Once you complete the PdPVTS, the base rates of pass/fail are part of the report generated at the end of the assessment. As we know, other tests on the market not made for youth are being used as PVTs for youth. A practitioner may use one of these tests either independently or in combination with another PVT. What happens when an examinee passes one of these tests but fails the other? What is the base rate of that pattern within the population? No one knows. We don’t have that information.

Now suppose you give any two tests within the PdPVTS, and the youth passes one and fails the other. Whichever one is passed and whichever one is failed, you as a practitioner are provided the base rates of that pattern in the non-clinical population as well as in nine separate clinical populations. Over 600 youth diagnosed with ADHD (attention deficit hyperactivity disorder), anxiety, depression, disruptive disorders (including oppositional defiant disorder, conduct disorder, and intermittent explosive disorder), intellectual disability, language disorders, learning disabilities, and traumatic brain injuries (mild, moderate, or severe) are part of that clinical population. If you don’t know the base rate of the combination of tests you’re giving, you can’t interpret your data correctly.
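
To make the arithmetic behind that point concrete, the sketch below works through a simplified scenario. The test names and per-test failure rates are hypothetical, not PdPVTS values, and the independence assumption is a deliberate oversimplification: real PVTs are correlated, which is exactly why empirically derived base rates for each pass/fail pattern, like those reported in the PdPVTS report, are needed rather than a back-of-the-envelope calculation like this one. The sketch simply shows how quickly the chance of failing at least one test grows as more tests are given, and why each pass/fail pattern needs its own base rate.

```python
from itertools import product

# Hypothetical per-test failure rates for illustration only; these are NOT
# the PdPVTS's published base rates. Independence between tests is assumed
# here purely for arithmetic convenience.
failure_rates = {"test_A": 0.05, "test_B": 0.05, "test_C": 0.05}


def pattern_probability(pattern, rates):
    """Probability of a specific pass/fail pattern, assuming independence.

    `pattern` maps each test name to True (fail) or False (pass).
    """
    p = 1.0
    for test, failed in pattern.items():
        p *= rates[test] if failed else (1.0 - rates[test])
    return p


# Even when every individual test has a low failure rate, the probability of
# failing at least one test grows with the number of tests administered.
p_all_pass = pattern_probability({t: False for t in failure_rates}, failure_rates)
print(f"P(fail at least one of {len(failure_rates)} tests) = {1 - p_all_pass:.3f}")

# Enumerate every pass/fail combination and its probability under independence;
# in practice these pattern-level rates must come from normative data, since
# correlated tests make the independence numbers misleading.
for combo in product([False, True], repeat=len(failure_rates)):
    pattern = dict(zip(failure_rates, combo))
    labels = ", ".join(f"{t}:{'fail' if f else 'pass'}" for t, f in pattern.items())
    print(f"{labels} -> {pattern_probability(pattern, failure_rates):.4f}")
```

With three hypothetical tests at a 5% failure rate each, the chance of failing at least one by chance alone is already about 14%, which is why a single unexpected failure cannot be interpreted without knowing the base rate of that specific pattern in a comparable population.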

Are PVTs just for neuropsychologists? If not, why is it important that all psychologists within clinical settings incorporate these tests within their assessments?

Dr. Reynolds: Statistically speaking, the rate of non-credible effort within clinical populations is relatively high, around 20-25%. In a non-clinical population, the rate of non-credible effort is quite a bit lower. Most of the time, when a clinician does not get best effort during assessment, it is due to factors such as boredom or tiredness and is not purposeful. If you’re going to interpret results from whatever you may be testing for, you need to make sure you’re receiving best effort the whole way through. If you do not get maximum effort from the examinee, you cannot draw inferences about brain-behavior relationships. How do you measure this?

Most examining clinicians think it is obvious to them when children are not giving their best effort or are engaged in malingering or other forms of dissimulation. The research on detecting poor effort and deception indicates this is not true. In fact, we are only slightly better than chance at detecting such issues clinically, in the absence of objective, quantitative data. Our professional organizations have recognized this, and current consensus documents clearly state that using objective PVTs is best practice.

Part of the best practice recommendations for PVTs is that they should be given throughout the course of assessment. How does this practice improve the overall assessment of the examinee?

Dr. Reynolds: With children, we know that the level of effort will change during the course of their exam. As practitioners, we think we are really good at what we do, and as stated above, we often think we know exactly the moment when effort starts to wane or when a child starts to engage in dissimulation. Again, we know we are wrong about this. We’re not as good as we think we are at detecting it. I’ve examined thousands of children throughout my career since I first began in 1975. I thought, like many others, “I’m really good at this. I’ll recognize when a child is not giving me their best effort, and I’ll bring them back to best effort.” We’ve all done a lot of exams, and that’s why we think we can trust our gut from a clinical standpoint. However, the research proves this wrong. We must follow the research, engage in evidence-based practice, and admit that we’re not as good at detecting this as we think we are. It’s a hard thing to accept. It feels like you’re questioning your clinical judgment.

Once you accept this, using PVTs throughout assessment, particularly with children, will make you a better examiner. It will make your exam results better, more interpretable, and more relevant to whatever diagnoses and intervention strategies you’re aiming for.

Learn more about MHS’ Pediatric Performance Validity Test Suite.

Sign up for MHS’ monthly Public Safety Newsletter for exclusive content from our research and development teams, industry news, events, and product highlights delivered straight to your inbox!

 
