American Association for Physician Leadership

Quality and Risk

2021 Hospital Quality Star Ratings: How Public Input Pressured Reform in Hospital Compare Star Ratings Calculation

Robert Steele, MS | Connor Ford, BS | Brett Niles, DO | Janis Coffin, DO, FAAFP, FACMPE

December 8, 2021


Abstract:

The CMS website Hospital Compare gives consumers direct access to hospital quality measures so that they can make informed care decisions. In 2016, CMS created a method to summarize and rate the diverse hospital systems represented on the site. In 2018, a routine data refresh, combined with otherwise minor methodology updates, shifted the ratings of a large number of hospitals. The public critiqued the methods CMS used, and, in response, changes were implemented in 2021.




Hospital Compare and Overall Hospital Quality Star Ratings

CMS has provided the public with access to the website Hospital Compare since 2002. The Overall Hospital Quality Star Rating (OHQSR) system, established in 2016, was created to aid Medicare consumers in critically evaluating and comparing hospitals using a five-star rating scale. Hospital Compare displays many quality measures for more than 4000 hospitals. Some of these quality measures are used to calculate the hospitals’ star ratings, which directly affect patient consumer choice.(1) Beyond the financial impact, this metric is crucial in accurately representing the quality of a hospital to consumers, and any deviation from a predictable, self-auditable score has massive implications for consumers and hospital systems alike. In 2018, a data refresh created significant differences in the star ratings of many hospital systems. CMS sought public input and conducted internal audits of the comprehensive methodology report responsible for the calculation. This article outlines the previous methodologies, their difficulties, and the public’s impact on the 2021 star rating updates.

The Comprehensive Methodology Report (v3.0), which governed the calculation before the 2021 change, outlines the star rating methodology and the required measure data.(2) The collected measures represent data that all hospital systems, regardless of demographics and volume, can report for standardized comparison. Fifty-seven of these measures are aggregated by similarity of characteristics into seven “measure groups,” and a statistical regression model, the latent variable model (LVM), is applied to each group. The LVM was created by CMS and Yale New Haven Health to reduce the dimensionality of the data and make it easier to infer an underlying quality variable from the observed measures, dampening the influence of highly variable, non-parametric data. Normalizing the data is a requirement, especially when each hospital system represents a vastly different population; modeling often requires it because it places every metric on a mathematically comparable scale. The LVM produces a weighted measure group score that contributes a defined percentage to the final rating. The weighted measure group scores are then summed and converted to a one- through five-star scale, the star rating. This final rating is a public-facing value of a hospital’s quality as determined by CMS.(3)
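As a rough illustration of this aggregation step, the sketch below standardizes measure scores, combines measure-group scores with group weights, and buckets the resulting summary scores into stars. The weights approximate those published for the seven-group method, but the group names, the simple re-weighting for missing groups, and the quintile star assignment are simplifications of CMS’s actual procedure, which estimates group scores with the LVM and forms the five star categories by clustering.

```python
import statistics

# Measure-group weights approximating the v3.0 report's published
# weighting (illustrative only; the CMS report is authoritative).
WEIGHTS = {
    "mortality": 0.22,
    "safety_of_care": 0.22,
    "readmission": 0.22,
    "patient_experience": 0.22,
    "effectiveness_of_care": 0.04,
    "timeliness_of_care": 0.04,
    "efficient_use_of_imaging": 0.04,
}

def standardize(values):
    """Place raw measure scores on a common scale (z-scores) so that
    measures with different units can be aggregated."""
    mean, sd = statistics.mean(values), statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def summary_score(group_scores, weights=WEIGHTS):
    """Weighted sum of a hospital's measure-group scores, re-weighted
    over the groups the hospital actually reported (a simplification
    of CMS's handling of missing groups)."""
    reported = {g: s for g, s in group_scores.items() if s is not None}
    total_w = sum(weights[g] for g in reported)
    return sum(weights[g] * s for g, s in reported.items()) / total_w

def assign_stars(summary_scores):
    """Bucket summary scores into 1-5 stars by quintile rank; CMS's
    actual method forms the five categories by clustering."""
    ranked = sorted(summary_scores)
    return [min(5, int(ranked.index(s) / len(ranked) * 5) + 1)
            for s in summary_scores]
```

A hospital missing a measure group still receives a summary score here because the remaining weights are rescaled to sum to one; this mirrors the general idea, not the exact CMS rule.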

Complexity Begets Variability

To maintain accurate star rating metrics, CMS’s database relies on data “refreshes”: scheduled updates of each measure score on Hospital Compare that reflect the most recent available data for each hospital. In 2018, a refresh, in conjunction with a number of updates to the business rules driving the calculation, caused significant differences in the final star ratings of many hospital systems. CMS noticed that ratings had changed significantly from 2017, despite no apparent differences in the OHQSR calculation methodology. This aberrant shift produced star rating changes in 49% (n = 3692) of hospitals, with 3% of those seeing shifts of two stars or greater.(4) To avoid publishing potentially detrimental ratings while it traced the source of the changes, CMS did not report star ratings in 2018. Hospital systems subsequently inundated CMS with requests for an updated, transparent star rating methodology.

In response, CMS collaborated with outside developers to find methodology deficits and conducted an internal audit of why the statistical results shifted. The audit identified a number of changes to individual measures as potential sources of the star rating discrepancies between 2017 and the 2018 data refresh. Some measurement data formats had changed during this time, including the patient safety indicator, inpatient quality reporting, sepsis and shock, pain assessment, and pneumonia readmission measures. Other measures were updated with new business rules, several individual measures were added or removed, and others had their individual weights changed. CMS then sought public stakeholder feedback on potential updates to the star rating calculation.(5)

Equivocation: Stakeholder Input and CMS Response

Both CMS and public stakeholders understand that calculation changes can have financial consequences for hospital systems around the country. During the public input periods, hospital stakeholders covered many topics; more than 800 comments were received from 140 individual stakeholders, ranging from large corporate systems to academic centers to small community hospitals. The commentary focused on increasing simplicity, predictability, and comparability among hospitals. Among the topics that drew the most comments during the 2019 public input periods were a proposed move away from the LVM to an explicit, transparent weighting approach and changes to how CMS performs peer grouping of hospitals.(6)

The LVM’s “measure loadings” were prone to shifts between reporting periods, which could in turn shift scores and star ratings. Institutions including the American Hospital Association, the Association of American Medical Colleges, Johns Hopkins, and hospitals both large and small expressed explicit frustration with the LVM methodology. Many say that they do not believe CMS has addressed the major concerns about the methodology and usefulness of star ratings and urge CMS to continue to be receptive to public feedback. Some reported that they could not predict their expected hospital star rating under this method. Others requested that CMS cease publication of the ratings altogether. The hospital systems most vocally in favor of ceasing publication were community hospitals that often provide service in community health centers, school clinics, and mobile health units and deliver large amounts of yearly charity care. A consistent argument from many stakeholders was that the inherent difficulty of providing healthcare to high-risk, low-income populations, in teaching hospitals, and in larger hospitals depresses star ratings despite overall high-quality care for the most vulnerable patients. They note that lowering their OHQSR because of deficits in the LVM has significant long-term financial impact for many hospital systems serving vulnerable populations.(6) CMS notes that although an explicit approach is “easier to understand and explain,” it may not support the high levels of measure precision or measure reporting currently required to infer quality.(5)

Additionally, two-thirds of commenters supported peer grouping changes. The objective of peer grouping is to compare similar hospitals when establishing star ratings from the aggregate information the methodology creates. Many commenters wanted peer grouping based on differences in hospital case mix or service offerings. Others noted that volume adjustments based on hospital size contributed to lower star ratings, and that hospitals with more complex care needs or those located in underserved areas are unfairly penalized. Reliable peer grouping is required for a score that consistently reflects true peer differences, regardless of hospital system, rather than a statistical assumption on Hospital Compare. From these public input periods, CMS understood that fundamental changes to the star rating process were needed.(7)

Updates for Coverage Year 2021

Recent changes directed by CMS brought, by its account, “simplification and transparency” to the star rating methods, beginning with a new and refined star rating process for coverage year 2021.(5) For that year, 4586 hospitals were represented, and each received a one- through five-star rating. Of those, 455 received a rating of 5; 988 were rated 4; and 3142 were rated 3 or under or had no information available.(9) Under the old calculation procedure, the 2020 star ratings of 5340 hospitals were as follows: 396 received a rating of 5; 1132 were rated 4; and 3805 were rated 3 or under or had no information available.

A change in CMS’s data refresh timeline moved the typical implementation of changes from January to July 2021. In 2019, after the public input period, CMS announced it was removing the LVM methodology previously used to update star ratings. A number of studies had questioned the accuracy of this statistical model and uncovered an inherent bias against large hospitals (and academic centers) in low socioeconomic locations; these institutions, which provide expert-level care to vulnerable populations, were inadvertently penalized.(7)

Next, CMS chose to reduce the number of measure groups from seven to five and also standardized the measure group score. This now requires the score to be placed on a common scale to compare across multiple variables. Additionally, hospitals will now be placed into one of three peer groups characterized by the number of measure groups submitted. Hospitals will now be grouped by similar gross data reporting rather than demographic size, claim volume, and so on. Lastly, critical access hospitals and Veterans Affairs hospitals will now be included in the star rating system. However, these systems will still have an “opt out” option.(8)
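The new peer grouping step can be sketched as a simple bucketing by the number of measure groups a hospital reports, with ratings then assigned within each bucket. The group names and the minimum-reporting floor below are illustrative assumptions, not CMS’s exact eligibility criteria.

```python
def peer_groups(hospitals):
    """Bucket hospitals into the three 2021 peer groups by how many of
    the five measure groups they report; star ratings are then assigned
    within each peer group rather than across all hospitals.  The
    'at least 3 groups' eligibility floor is an illustrative assumption."""
    buckets = {3: [], 4: [], 5: []}
    for name, group_scores in hospitals.items():
        n = sum(1 for score in group_scores.values() if score is not None)
        if n >= 3:  # too few reported groups: no rating assigned
            buckets[min(n, 5)].append(name)
    return buckets
```

Grouping by reporting breadth, rather than by size or claim volume, means a hospital is only compared with peers submitting a similar amount of data, which is the comparability change stakeholders requested.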

The Hospital Compare website provides crucial data for patients and providers alike, helping them build informed decisions about where to obtain care. Providing patients with easy-to-access, quickly interpretable data is important, because many aspects of the future of healthcare will be driven by government compliance ratings such as the OHQSR. Because the overall star rating is a fairly recent addition to Hospital Compare, debates over the best methodologies for this important index of care are to be expected.

More than 4000 hospitals are represented on the website, and CMS must continue to provide clear and transparent statistical methodologies for calculating these simple empiric scores. Many stakeholders feel this will allow them to track and monitor their data, and the qualitative inferences drawn about their hospitals, in real time. It is clear that key stakeholders with large patient, physician, and economic footprints have concerns about the current use of mathematical modeling and complex statistical methodology to simplify and aggregate the data behind the star ratings. Therefore, it is important that CMS continue to uphold its pledge to involve key hospital stakeholders and technical experts.

Long-term updates to the calculation methodologies are extremely technical and require a hospital system to have robust data analytics and business intelligence teams. Hospital administrators and compliance officers must understand that the business of medicine going forward is data heavy. It will require in-house business intelligence experts and government compliance teams to help hospital business leaders navigate the future of star ratings as they are driven by mathematical modeling.

References

  1. Hospital Compare. Centers for Medicare & Medicaid Services. www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/HospitalCompare.

  2. Methodology resources. CMS.gov. 2019. www.qualitynet.org/outpatient/public-reporting/overall-ratings/resources#tab2.

  3. Centers for Outcomes Research & Evaluation, Yale New Haven Health Services. Overall hospital quality star rating on Hospital Compare public input request. February 2019. www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/Downloads/Overall-Hospital-Quality-Star-Rating-on-Hospital-Compare-Public-Input-Period.pdf.

  4. Map: new overall star ratings are out. How did your hospital fare? Advisory.com. January 3, 2018. www.advisory.com/blog/2018/01/new-cms-overall-star-ratings.

  5. CMS announces upcoming enhancements to the overall hospital quality star ratings. Centers for Medicare & Medicaid Services. 2019. www.cms.gov/newsroom/press-releases/cms-announces-upcoming-enhancement-overall-hospital-quality-star-ratings.

  6. Public comment summary report. Centers for Medicare & Medicaid Services. 2019. www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/Downloads/Overall-Hospital-Quality-Star-Ratings-on-Hospital-Compare.zip.

  7. Overall star ratings—challenges to credibility: new insights. Chicago Healthcare Analytics. January 21, 2019. https://webalyticos.home.blog/2019/01/21/overall-star-ratings-challenges-to-credibility-new-insights/#LVM.

  8. Department of Health and Human Services. Medicare hospital outpatient prospective payment system and ambulatory surgical center payment system final rule. https://public-inspection.federalregister.gov/2020-26819.pdf.

  9. Overall hospital quality star rating. CMS.gov. 2021. https://data.cms.gov/provider-data/topics/hospitals/overall-hospital-quality-star-rating.

Robert Steele, MS

Third-year medical student, Kansas City University, Kansas City, Missouri.


Connor Ford, BS

Third-year medical student, Kansas City University, Kansas City, Missouri.


Brett Niles, DO

Captain, United States Air Force, Family Medicine Resident Physician, Nellis Air Force Base, Nevada.


Janis Coffin, DO, FAAFP, FACMPE

Chief Transformation Officer, Augusta University, Augusta, Georgia; email: jcoffin@augusta.edu.


