The Quality Roll-Up Indicator was introduced in D2D 2.0. It is a composite measure intended to better reflect the comprehensive nature of primary care through a single measure: one that reflects what matters to patients, considers what is important to providers, and is cost-effective and therefore more sustainable for the healthcare system. It is part of AFHTO's efforts to advance manageable, meaningful measurement in primary care.
To get a true picture of the quality of comprehensive primary care, multiple indicators and the many different ways they are measured must be considered at the same time. The Starfield Model sets the guiding principles for developing this method.
How was the Quality Roll-Up Indicator created?
Creating the indicator involved several steps, including:
Sorting the measures to minimize the number needed to generate a stable roll-up indicator, and deprioritizing measures that tended to be answered the same way as other indicators (i.e., that were highly correlated with them).
See the table below for a list of indicators and their weightings.
If teams do not submit data for one or more of the 14 indicators, a random value is assigned for each indicator with missing data. This is called imputation of missing data.
Why do we impute random values for missing data? A random number is chosen because there is no way of knowing what the actual value is. If we used an average for the indicator, we would be saying "it is likely the team is average for this." If we left the indicator out of the calculation, we would effectively be saying the same thing. If we included the indicator with no data, we would be saying "it is likely the team had 0% for this indicator." None of these are rational assumptions. So we intentionally use a random number out of recognition that we really don't have any idea what the number should be for the team.
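The imputation step described above could be sketched roughly as follows. This is an illustrative assumption, not AFHTO's actual code: the indicator indexing, the 0-1 scale, and the `impute_missing` function are all invented for the example.

```python
import random

# Hypothetical sketch: any of the 14 indicators a team did not report
# gets a random value, reflecting that we have no information about it.
def impute_missing(scores, n_indicators=14, seed=None):
    """scores: dict mapping indicator index -> reported value on a 0-1 scale."""
    rng = random.Random(seed)
    return {
        i: scores.get(i, rng.random())  # keep reported values, draw the rest at random
        for i in range(n_indicators)
    }

reported = {0: 0.82, 1: 0.75, 2: 0.60}   # only 3 of 14 indicators reported
complete = impute_missing(reported, seed=42)
assert len(complete) == 14               # every indicator now has a value
assert complete[0] == 0.82               # reported values are untouched
```

Seeding the random generator is optional; in a real run each missing value would simply be a fresh draw.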
Performance data for each of the indicators is normalized – i.e., converted to the same scale.
WHY do we normalize the data? Consider this example: The average value for readmission scores is in the 6% range, with high numbers being “bad,” whereas the average value for patient experience indicators is above 85% and high is “good.” Without normalization, these numbers are not comparable. Normalization makes it possible to compare indicators without having the high-average indicators essentially washing out all the other indicators with lower scores.
HOW do we normalize the data? To make these numbers comparable, actual performance data are converted into values based on which percentile they are at for that indicator. A value for "high" performance is assigned to teams exceeding the 75th percentile, and a value for "low" performance is assigned to teams below the 25th percentile. Teams with values between the 25th and 75th percentiles are assigned an intermediate value based on how close they are to either end.
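The percentile-based normalization could be sketched like this. The 0 and 1 scores for "low" and "high", the linear interpolation between the percentiles, and the direction flip for "lower is better" indicators are assumptions for illustration; the text does not specify AFHTO's exact values.

```python
import numpy as np

# Illustrative sketch of percentile normalization (assumed 0-1 output scale).
def normalize(values, higher_is_better=True):
    values = np.asarray(values, dtype=float)
    p25, p75 = np.percentile(values, [25, 75])
    # Linear ramp between the 25th and 75th percentiles, clipped at the ends:
    # below p25 -> 0 ("low"), above p75 -> 1 ("high"), in between -> proportional.
    scores = np.clip((values - p25) / (p75 - p25), 0.0, 1.0)
    # Flip indicators where a high raw value is "bad" (e.g. readmission rates).
    return scores if higher_is_better else 1.0 - scores

readmit = [4.0, 5.0, 6.0, 7.0, 8.0]   # % readmitted within 30 days: lower is better
norm = normalize(readmit, higher_is_better=False)
assert norm[0] == 1.0 and norm[-1] == 0.0   # best team scores 1, worst scores 0
```

This is what makes a 6%-average readmission indicator comparable with an 85%-average patient experience indicator: both end up on the same scale with the same direction.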
Normalized performance data is then weighted according to patient priorities for each indicator, depending on its importance to each domain in the patient-provider relationship.
What are “domains”? Domains refer here to the domains of the patient-provider relationship, as determined by a full literature review done with a post-doctoral fellow about 2 years ago. The domains are trust, knowledge, sensitivity, access, commitment and collaboration. A score for each domain is calculated by multiplying the normalized performance data by the domain-specific weights.
The literature review on domains has not yet been published because funding ran out and we haven’t had the internal resources at AFHTO to prepare it for publication.
How are the domains weighted?
The overall weights are included in the table below. The weights are based on results of a patient survey conducted 2 years ago.
The weights are being refreshed with a second iteration of the survey, which is out right now – please feel free to encourage your team/patients to complete it. However, please be warned that it is a very complicated survey that many think patients can't cope with, even though patients designed it and are clearly more than willing and able to complete it.
The NORMALIZED, WEIGHTED domain-specific scores are then added together and divided by the total maximum score possible.
The result is a team’s Quality Roll-Up Score.
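The weighting and roll-up steps above can be sketched end to end. The domain names come from the text, but the weight values and indicator scores here are made up for illustration; the real weights come from AFHTO's patient survey (see the table below).

```python
import numpy as np

domains = ["trust", "knowledge", "sensitivity", "access", "commitment", "collaboration"]

# Illustrative weights: one row per indicator, one column per domain.
weights = np.array([
    [3, 1, 2, 0, 1, 1],
    [2, 2, 1, 1, 0, 2],
    [1, 0, 3, 2, 2, 1],
])

normalized = np.array([0.9, 0.4, 0.7])   # normalized scores for 3 example indicators

domain_scores = normalized @ weights              # weighted sum for each domain
max_scores = np.ones_like(normalized) @ weights   # scores if every indicator were perfect
rollup = domain_scores.sum() / max_scores.sum()   # the team's Quality Roll-Up Score
assert 0.0 <= rollup <= 1.0
```

Dividing by the maximum possible score keeps the final roll-up on a 0-1 scale regardless of how many indicators or how much total weight a team has.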
Quality Roll-Up Components and Weights
Quality Roll-up components (in descending patient priority)
% of patients involved in decisions about their care as much as they want
% of patients who had opportunity to ask questions
% of patients who felt providers spent enough time with them
% of patients who can book an appointment within a reasonable time
% of patients with readmission within 30 days after hospitalization
% of visits made to patients’ regular primary care provider team
Emergency department visits per patient
Ambulatory care sensitive hospitalizations per 1000 patients
% of eligible patients screened for colorectal cancer
% of eligible patients screened for cervical cancer
% of eligible patients screened for breast cancer
% of eligible patients with diabetic management & assessment
% of eligible children immunized according to guidelines
% of patients able to get an appointment on the same or next day
Quality Roll-Up Indicator: The Movie! Check out the videos that explain what it is and why AFHTO is doing it, how it is calculated, what it means to your team, and a national perspective from Dr. Danielle Martin.