Enhancing Care with MHS Tools: Bridging Gaps in Open-Source Solutions


The healthcare landscape is evolving towards patient-centered care as technology advances and needs change. Measurement-Based Care (MBC) is a systematic approach to improving treatment and intervention outcomes by routinely gathering and using client-reported data. While open-source tools can provide some of this information, complementing these tools by using scientifically backed standardized measurements of care will ensure accuracy and reliability in making informed diagnostic and care decisions and tracking patient progress.

First, let’s look at how MBC improves patient care in several ways:

Data-Driven Decision-Making: MBC empowers individuals to actively participate in their care by using client-reported data. Practitioners can base their decisions on real-time data using this approach — moving away from assumptions and ensuring more accurate and effective treatment plans.

Progress Monitoring: With MBC, individuals can track treatment progress while practitioners gain reliable data to measure outcomes. MBC ensures decisions are based on valid assessments, bridging the gap between perceived and actual progress. Healthcare professionals fulfill their ethical duty to monitor interventions and demonstrate treatment effectiveness with this method. Routine data collection also helps assess the cost-efficiency of interventions and accommodation planning.

Client-Practitioner Collaboration: Routinely administered assessments offer individuals a non-confrontational way to share concerns, particularly when direct communication with practitioners may feel daunting. This approach creates a more comfortable and open environment for client-practitioner interactions.

Improved Engagement and Outcomes: Research shows that MBC is associated with symptom reduction and a decreased likelihood of symptom worsening compared to one-time screenings and infrequent assessments1,2,3. Clients also gain a better understanding of their conditions, feel more at ease sharing their experiences with practitioners, and recognize that their feedback is valued, leading to increased satisfaction with treatment.

However, MBC’s success depends on reliable, inclusive tools that also address challenges like limited stakeholder feedback, usability, and resource inequities. To tackle these barriers, experts advocate for integrating stakeholder input, leveraging digital tools that provide real-time feedback, and prioritizing equity to enhance accessibility4.

Practitioners today often rely on a combination of open-source tools and standardized assessments to support their work. Open-source solutions, which are publicly accessible and can be freely used, modified, and distributed, offer flexibility but may have limitations in validation and standardization. Here, we will explore how commercially validated tools, such as those from MHS, contribute to MBC by addressing these barriers, complementing open-source solutions, and closing gaps in patient care.

Standardized and norm-referenced 

A standardized and norm-referenced psychological measure is a test administered and scored uniformly to ensure consistency, and it compares an individual’s performance to a larger norm group. This allows for interpreting scores based on how an individual performs relative to others in the norm group. Standardized and norm-referenced measures, hallmarks of MHS tools, ensure clarity and precision in assessments. Open-source tools, by contrast, may not always capture the nuances needed for precise clinical decision-making. This limitation exists because open-source tools can vary in quality and may not undergo the same rigorous validation and standardization processes.

The Conners Continuous Performance Test™ 3rd Edition Online (Conners CPT™ 3 Online), a performance measure for attention-related issues, is one example of MHS’ commitment to rigorous standardization and norm-referencing. Like other MHS tools, normative data for the Conners CPT 3 were carefully collected from diverse and representative populations, including 1,400 participants—800 youth between the ages of 8 and 17 years, and 600 adults between the ages of 18 and 89 years—to ensure inclusivity across demographic variables such as age, gender, region, and education level.

Scores on the Conners CPT 3 Online are computed relative to these norm groups, allowing for precise comparisons between an individual’s performance and that of others with similar demographic characteristics. This approach provides insight into whether an individual’s behaviors or responses deviate from what is typical for their age, gender, and other relevant factors. These comparisons inform what is concerning relative to developmental expectations, enabling practitioners to identify areas requiring intervention, track progress over time, and make more targeted diagnostic and treatment decisions.
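To illustrate norm-referenced scoring in general terms (this is a generic psychometric convention, not MHS' proprietary scoring algorithm), raw scores are often expressed as T-scores: standardized scores with a mean of 50 and a standard deviation of 10 within the relevant norm group. A minimal sketch, using hypothetical norm-group values:

```python
def t_score(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    """Convert a raw score to a T-score (mean 50, SD 10) relative to a norm group.

    The norm_mean and norm_sd values come from the demographic norm group
    (e.g., same age band and gender) the individual is compared against.
    """
    z = (raw_score - norm_mean) / norm_sd  # standard (z) score within the norm group
    return 50 + 10 * z


# Hypothetical example: a raw score of 30 against a norm group
# with mean 20 and SD 5 yields a T-score of 70 (2 SDs above typical),
# which a practitioner might flag as an atypically elevated result.
print(t_score(30, norm_mean=20, norm_sd=5))
```

Interpreting the same raw score against different norm groups (e.g., an 8-year-old versus an adult) can yield very different T-scores, which is why representative normative samples like those described above matter.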

Reliability and validity

Open-source tools bring fresh ideas and collaboration thanks to their community-driven nature. However, they can lack the systematic, professionally managed development processes used for commercially validated tools. MHS follows structured workflows that include extensive psychometric validation, updates based on current research, strict quality control, and representative and diverse normative data. In contrast, open-source tools can vary widely in quality, sample applicability, and update frequency, which can pose challenges, particularly in high-stakes healthcare environments.

A prime example is the Conners 4th Edition™ (Conners 4®), a comprehensive assessment of symptoms and impairments associated with Attention-Deficit/Hyperactivity Disorder (ADHD) and common co-occurring problems and disorders in children and youth. Extensive studies were conducted to ensure reliability, including internal consistency checks and test-retest evaluations, resulting in scale scores that are both precise (low standard error of measurement) and consistent over time. Validity was thoroughly examined to confirm that the scales measured their intended constructs. This process involved analyzing factor structures, comparing scores with other ADHD-related measures, and assessing the scales’ ability to differentiate between clinical diagnoses. These processes highlight MHS’ commitment to providing reliable and valid tools for practitioners, with detailed documentation about the quality of these measures to build confidence in its users.
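One of the internal-consistency statistics commonly reported in this kind of reliability work is Cronbach's alpha, which estimates how consistently a set of scale items measures the same construct. A minimal sketch of the standard formula (a generic illustration, not the specific analysis pipeline used for the Conners 4), with hypothetical item-response data:

```python
def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for a scale.

    items: one list per scale item, each containing that item's
    responses across the same set of respondents.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)                 # number of items on the scale
    n = len(items[0])              # number of respondents

    def variance(xs: list[float]) -> float:
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    sum_item_vars = sum(variance(item) for item in items)
    totals = [sum(items[i][j] for i in range(k)) for j in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))
```

Values closer to 1 indicate that the items covary strongly (high internal consistency); perfectly redundant items yield exactly 1.0, while uncorrelated items drive alpha toward 0.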

Fairness

Fairness is a foundational priority for MHS tools. The Standards for Educational and Psychological Testing define fairness as a fundamental validity issue in test development, encompassing the responsiveness of a test to individual characteristics and contexts, impartial treatment of test takers, accessibility to constructs being measured, and unbiased score interpretations5. MHS aligns with these principles to create assessments that are accessible and equitable.

MHS tools are developed to uphold these principles with inclusivity and cultural sensitivity in mind. These tools are available for print on demand, ensuring flexibility for users with different needs and varying levels of access to technology. Multilingual support further reduces barriers by allowing individuals to take the assessments in their preferred language.

Recent MHS products reflect a commitment to more than surface-level inclusivity, undergoing thorough reviews to ensure cultural sensitivity and meaningfulness across diverse populations, starting from the earliest stages of development. Analyses are conducted to confirm that additional language versions are comparable to the English version. This process includes validating translations, evaluating item performance across languages, and ensuring consistent score interpretation. By empirically verifying fairness and usability, rather than assuming equivalence, this approach actively addresses systemic barriers to care.

Psychometric validation is another critical step in ensuring fairness. For example, the Conners Adult ADHD Rating Scales™ 2nd Edition (CAARS™ 2), designed to assess the core and associated symptoms of ADHD in adults, was developed to meet strict standards of psychometric fairness. Key demographic factors—such as gender, race/ethnicity, country of residence, and English language proficiency—were examined through measurement invariance, differential item functioning, and mean differences. Results showed no meaningful differences in scores across demographic groups. This evidence demonstrates that the CAARS 2 meets the fairness requirements outlined in the Standards for Educational and Psychological Testing5.

While open-source tools benefit from broad community input, they are often limited by the scope of their volunteer contributors and can be developed using convenience samples, such as undergraduate students, who may not represent the broader population. This practice can result in less comprehensive inclusivity considerations compared to commercially validated tools. By contrast, MHS tools are designed to address gaps in accessibility and fairness, ensuring that individuals receive assessments that are robust, culturally sensitive, and empirically validated to meet the highest standards of care.

Technology integration in clinical settings

Integrating assessments into existing healthcare workflows is critical for both service delivery and scalability. Many MHS tools can seamlessly integrate into existing business platforms, leveraging trusted science to drive automation and inform clinical decision-making. These integrations address operational challenges, enhance efficiency, and improve patient outcomes by reducing administrative labor and reallocating resources to clinical impact and patient engagement. For instance, one major behavioral healthcare organization using MHS tools reported saving up to 30 minutes per patient, significantly improving access to care. Both patients and providers reported higher satisfaction even as their service volume more than doubled. While valuable in specific contexts, open-source tools may not consistently offer the same operational and clinical value. They can often be adapted for use across various platforms but with varying degrees of accuracy. In contrast, MHS tools can be fully integrated into these same systems, ensuring precise, reliable measurements that remain relevant and valid over time. Additionally, every stage of the assessment lifecycle—administration, scoring, and reporting—can be customized to meet the unique needs of organizations without compromising sensitivity or clinical validity.

Actionable insights and data utilization

MHS tools offer advanced analytics and real-time reporting capabilities, enabling practitioners to make data-informed decisions easily. The Conners 4 exemplifies flexibility in behavioral assessments, offering various lengths to suit different needs: Full-length for detailed evaluations, Short for time-sensitive situations, and ADHD Index for quick checks or repeated assessments. It provides customizable report options, such as Single-Rater and Multi-Rater Reports, ensuring relevance and reliability. Additionally, response style metrics and customizable scoring options enhance the precision and validity of the assessments. These features make the Conners 4 a versatile and robust tool for practitioners, tailored to meet individual client needs effectively. While open-source tools are popular for their simplicity, short length, and quick completion times, they often lack the advanced features necessary for comprehensive data utilization. This constraint reduces their ability to address complex clinical scenarios effectively. MHS tools overcome this challenge by offering flexible form lengths, allowing practitioners to balance the need for detailed information with time constraints, all while maintaining advanced analytics capabilities.

Bridging the gaps: Complementary use of open-source tools

Open-source tools have undeniable value in specific settings, offering accessibility and cost-effectiveness. However, standardized tools complement these solutions by providing the rigor, comparability, inclusivity, and advanced analytics that open-source tools often lack.

For example, a community health center might adopt a hybrid approach, using open-source tools for initial screenings while relying on MHS tools for in-depth assessments and treatment monitoring. This balanced strategy leverages the strengths of both, creating a robust system that aligns with MBC principles.

MHS’ standardized tools embody the principles of MBC by offering reliable, inclusive, and actionable solutions to healthcare challenges. Their extensive development process, commitment to fairness, seamless integration, and advanced analytics set them apart, making them invaluable in improving clinical outcomes.

When choosing tools for your practice, consider your specific needs and how standardized rating scales and assessments can complement open-source solutions to deliver the best patient care.

Explore MHS tools and integrations today and take the next step toward enhancing outcomes in your clinical practice.

Have questions? Get in touch with a member of our team!

References

1 de Jong, K., Conijn, J. M., Gallagher, R. A. V., Reshetnikova, A. S., Heij, M., & Lutz, M. C. (2021). Using progress feedback to improve outcomes and reduce drop-out, treatment duration, and deterioration: A multilevel meta-analysis. Clinical Psychology Review, 85, Article 102002.

2 Fortney, J. C., Unützer, J., Wrenn, G., Pyne, J. M., Smith, G. R., Schoenbaum, M., & Harbin, H. T. (2017). A tipping point for measurement-based care. Psychiatric Services, 68(2), 179–188.

3 Lewis, C. C., Boyd, M., Puspitasari, A., Navarro, E., Howard, J., Kassab, H., Hoffman, M., Scott, K., Lyon, A., Douglas, S., Simon, G., & Kroenke, K. (2019). Implementing measurement-based care in behavioral health: A review. JAMA Psychiatry, 76(3), 324–335.

4 Whitmyre, E. D., Esposito-Smythers, C., López, R., et al. (2024). Implementation of measurement-based care in mental health service settings for youth: A systematic review. Clinical Child and Family Psychology Review, 27, 909–942.

5 American Educational Research Association (AERA), American Psychological Association (APA), & National Council on Measurement in Education (NCME). (2014). Standards for educational and psychological testing.