
The Relative Value Scale Update Committee: Time for an Update

Author Affiliations
  • 1Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts
  • 2Department of Medical Ethics and Health Policy, Perelman School of Medicine, University of Pennsylvania, Philadelphia
JAMA. Published online September 9, 2019. doi:10.1001/jama.2019.14591

In 1992, the Centers for Medicare & Medicaid Services (CMS) introduced the Resource-Based Relative Value Scale (RBRVS) as a new system for physician payment. Rather than paying physicians their “usual, customary, and reasonable”1 charges, this system was designed to pay physicians based on the time, technical skill, and mental effort required to perform each procedure. Thus, the relative value unit (RVU) was born.

Given the detailed medical knowledge required to assign numerical values to each procedure, the American Medical Association formed the Relative Value Scale Update Committee (RUC) to assist CMS with assigning and updating RVU values. Today, the RUC has become sufficiently integral to the RVU updates that its recommendations are accepted without change by CMS more than 90% of the time.2 Because commercial insurers base their payments on a multiple of the CMS Physician Fee Schedule, the RUC also shapes private insurance payments. Consequently, the recommendations of the RUC guide 70% or more of all physician payment in the United States, equal to an estimated $500 billion each year.3

Problems With the RUC

In recent years, however, the RUC has come under criticism focused on 5 problems.2,3

First, the methodology used by the RUC to determine physician time has been challenged. Time is critical because service time accounts for more than 80% of the variability in RVU valuations.3

The RUC bases its time estimates on surveys sent to physicians who perform the procedure under review. These surveys typically have low response rates (median, 2.2%) and small absolute numbers of completed surveys (median, 52),4 with 10% of procedures valued based on data from 30 or fewer surveys.4 Even for common procedures such as hip and knee arthroplasty, which are collectively performed approximately 1 million times each year and account for more than $10 billion in direct medical costs,3 the recommendations from the RUC are based on responses from 150 and 157 physicians, respectively.3

The data also may be unreliable because they are based on human memory and subjective approximations of procedure duration. Human recall is unreliable, especially when estimating numeric quantities such as time.5 Recall is also subject to a variety of cognitive biases, including recall bias, anchoring bias, response bias, and recency bias. Moreover, because the RUC does not allow public access to its survey data, it is unclear how pervasive these biases may be. The surveys are also limited to a single clinical vignette that may or may not be representative of actual practice.

In addition, RVU valuations are updated periodically, but infrequently, and updates may not be linked to advances in technology. Each year only 2% of all Current Procedural Terminology codes are reviewed.3 This has led to many procedures being reimbursed based on data that are 5, 10, and even 20 years old. For instance, RVUs for revision hip and knee arthroplasty have not been updated since 1995, nearly 25 years ago.3

Second, by its very nature the RUC reflects a potential conflict of interest. Physicians who participate in the RUC process do so knowing that higher time estimates and higher estimates of work intensity will increase their own income. Because survey numbers are small, the opinions of only a few physicians could potentially significantly influence the data.4

Third, the RUC has been criticized for its relatively small size (31 members), lack of transparency, lack of representativeness, and inherent conflicts of interest.4 The majority of its members are appointed by specialty societies that lobby the RUC for higher payment. One study that analyzed the RVU recommendations from the RUC between 1994 and 2013 found that having a representative on the RUC was associated with a 3% to 5% increase in reimbursement for procedures that the specialty performs.6 Despite this, the RUC is self-described as an “expert panel,”7 rather than an advocacy and lobbying forum.

Fourth, the RUC and, more broadly, the continued reliance of CMS on RVUs has been criticized as perpetuating a system of adverse incentives that rewards clinicians for providing more rather than better care.8 This productivity-driven system creates an environment that fails to incentivize high-value care and could potentially harm patients by overtreatment.

Fifth, and most important, a number of empirical studies have now directly challenged the accuracy of the RUC’s recommendations,2,3,9 suggesting that the recommendations consistently overestimate physician time for some specialties. Two recent large peer-reviewed studies have provided evidence of the unreliability of the RUC’s methodology and recommendations.2,3

The first investigation2 examined data from the American College of Surgeons’ National Surgical Quality Improvement Program (NSQIP) registry and compared empirical time-stamp data from electronic health records (EHRs) with the RUC’s time estimates for 293 common procedures.2 The authors calculated the mean discrepancy between registry time-stamp data and the RUC’s estimates to be 20%, ranging from 2% to 58% per procedure. These discrepancies were estimated to lead to $400 million in potentially misappropriated payments, with some specialties receiving $130 million less and others receiving $160 million more during the study period (2011-2015). This was the largest and most complete review of the RUC’s recommendations and found “substantial absolute discrepancies” across all specialties.
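The discrepancy calculation in this type of analysis can be sketched as follows; the figures below are hypothetical illustrations, not values drawn from the study:

```python
# Hypothetical sketch: percent discrepancy between RUC time estimates
# and empirical EHR time-stamp data. All numbers are illustrative only.

# (procedure, RUC estimated intraservice minutes, median empirical minutes)
procedures = [
    ("procedure_a", 120, 100),
    ("procedure_b", 90, 88),
    ("procedure_c", 60, 38),
]

def pct_discrepancy(ruc_minutes, empirical_minutes):
    """Relative difference of the RUC estimate from the empirical time."""
    return abs(ruc_minutes - empirical_minutes) / empirical_minutes * 100

discrepancies = [pct_discrepancy(r, e) for _, r, e in procedures]
mean_discrepancy = sum(discrepancies) / len(discrepancies)
```

A per-procedure discrepancy near zero would indicate an accurate RUC estimate; the study reported a 20% mean discrepancy across 293 procedures using this general approach.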

The second investigation3 focused on a single integrated health system and compared empirical time-stamp data with the RUC’s time estimates and actual survey data for 4 common surgical procedures (original and revision total hip and knee arthroplasty).3 The authors found that the RUC overestimated intraservice times by 18% to 61% and that procedures reviewed less recently (1995 vs 2013) were significantly more overvalued. Another finding was that 10% of survey respondents estimated times twice as long as actual operating times, suggesting possible intentional skewing of survey data by respondents.

Using Empirical Data for RVU Values

The RUC process may be both inaccurate and antiquated. Two simple changes could update the process to make the RVU values used by CMS and private insurers more accurate and to reflect quality of care. The potential changes are (1) to use empirical data currently available from EHRs of the actual time it takes to perform procedures to determine RVU values and (2) to modify payment based on patient-specific complication rates such as surgical site infections.

Time-stamp data are universally available for all procedures requiring anesthesia or an operating room. These data are also collected for a wide range of nonsurgical procedures through EHRs, and a substantial proportion of these data are already available through national registries (eg, the Society of Thoracic Surgeons’ National Database or the American College of Surgeons’ NSQIP registry). However, CMS could also require hospitals and surgical centers receiving Medicare payments to report their empirical time-stamp data for each submitted claim.

Having an up-to-date database of procedure durations would allow CMS to accurately value procedures and rapidly adjust for technologic innovations. Rather than only being able to review 2% of procedures each year, all procedure durations could be adjusted with real-time data on an annual basis.
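A minimal sketch of how reported time stamps could yield an empirical duration for a single procedure code follows; the time stamps and the choice of the median as the summary statistic are assumptions for illustration, not a CMS specification:

```python
# Hypothetical sketch: deriving an empirical intraservice time for one
# procedure code from claim-level (start, end) time stamps.
from datetime import datetime
from statistics import median

# Illustrative reported time stamps for one procedure code
claims = [
    ("2019-01-03 08:02", "2019-01-03 09:35"),
    ("2019-02-11 12:15", "2019-02-11 13:40"),
    ("2019-03-22 07:50", "2019-03-22 09:05"),
]

def duration_minutes(start, end, fmt="%Y-%m-%d %H:%M"):
    """Minutes elapsed between two same-day time stamps."""
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).seconds / 60

# A robust central estimate; recomputed annually as new claims arrive
empirical_minutes = median(duration_minutes(s, e) for s, e in claims)
```

Because the inputs are routine claim data rather than recall-based surveys, such an estimate could be refreshed for every code each year rather than for only 2% of codes.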

CMS could also mandate reporting of quality data. Health care centers and individual physicians with low or improving preventable complication rates should receive higher payments. Likewise, institutions and physicians with consistently high levels of preventable complications should have their payment adjusted downward. Great care will need to be taken to adequately risk-stratify populations and not penalize physicians who provide care for medically complex and vulnerable populations.
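One way such a quality adjustment could be operationalized is sketched below as a capped payment multiplier driven by an observed-to-expected (O/E) complication ratio, where the expected rate comes from risk stratification of the physician’s patient population. The function name, the linear scaling, and the 5% cap are all assumptions for illustration:

```python
# Hypothetical sketch: a payment multiplier from risk-adjusted complication
# rates. O/E ratio below 1 earns a bonus, above 1 a penalty, capped so
# adjustments stay modest. All parameters are illustrative assumptions.

def payment_modifier(observed_rate, expected_rate, max_adjustment=0.05):
    """Return a multiplier on base payment from the O/E complication ratio."""
    if expected_rate <= 0:
        return 1.0  # no reliable risk adjustment available; leave payment unchanged
    oe_ratio = observed_rate / expected_rate
    # Bonus when O/E < 1, penalty when O/E > 1, clamped to +/- max_adjustment
    adjustment = max(-max_adjustment, min(max_adjustment, (1 - oe_ratio) * max_adjustment))
    return 1.0 + adjustment
```

Anchoring the comparison to an expected (risk-stratified) rate rather than a raw complication count is what protects physicians who care for medically complex populations.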

In 1992, the RBRVS and RVUs were introduced because the prior system of “usual, customary, and reasonable” payments was viewed as imprecise and archaic. At that time, the RUC’s incorporation of survey data was groundbreaking. After nearly 30 years and the widespread dissemination of EHRs that provide actual, empirical data, it is time for an update. It is time to base physician reimbursement on empirical data rather than inaccurate, potentially biased, and outdated survey data.

Article Information

Corresponding Author: Ezekiel J. Emanuel, MD, PhD, Department of Medical Ethics and Health Policy, Perelman School of Medicine, University of Pennsylvania, 423 Guardian Dr, Blockley Hall, 11th & 14th Floors, Philadelphia, PA 19104 (zemanuel@upenn.edu).

Published Online: September 9, 2019. doi:10.1001/jama.2019.14591

Conflict of Interest Disclosures: Dr Emanuel reported receiving personal fees from Tanner Healthcare System, Mid-Atlantic Permanente Group, American College of Radiology, Marcus Evans, Loyola University, Oncology Society of New Jersey, Good Shepherd Community Care, Remedy Partners, Medzel, Kaiser Permanente Virtual Medicine, Wallace H. Coulter Foundation, Lake Nona Institute, Allocation, Philadelphia Chamber of Commerce, Blue Cross Blue Shield Minnesota, United Health Group, Futures Without Violence, Children’s Hospital of Pennsylvania, Washington State Hospital Association, Association of Academic Health Centers, Blue Cross Blue Shield of Massachusetts, American Academy of Ophthalmology, Lumeris, Roivant Sciences Inc, Medical Specialties Distributors LLC, Vizient University Healthcare System, Center for Neuro-Degenerative Research, Colorado State University, Genentech Oncology Inc, Council of Insurance Agents and Brokers, Grifols Foundation, America's Health Insurance Plans, Montefiore Physician Leadership Academy, Greenwall Foundation, Medical Home Network, Healthcare Financial Management Association, Ecumenical Center–UT Health, American Association of Optometry, Associação Nacional de Hospitais Privados, National Alliance of Healthcare Purchaser Coalitions, Optum, Massachusetts Association of Health Plans, District of Columbia Hospital Association, and Washington University; holding stock in Gilead, Allergan, Amgen, Baxter, and United Health Group; and that he is a venture partner at Oak HC/FT. No other disclosures were reported.

References

1. Glaser WA. The politics of paying American physicians. Health Aff (Millwood). 1989;8(3):129-146. doi:10.1377/hlthaff.8.3.129
2. Chan DC, Huynh J, Studdert DM. Accuracy of valuations of surgical procedures in the Medicare fee schedule. N Engl J Med. 2019;380(16):1546-1554. doi:10.1056/NEJMsa1807379
3. Urwin JW, Gudbranson E, Graham D, Xie D, Hume E, Emanuel EJ. Accuracy of the Relative Value Scale Update Committee’s time estimates and Physician Fee Schedule for joint replacement. Health Aff (Millwood). 2019;38(7):1079-1086. doi:10.1377/hlthaff.2018.05456
4. Government Accountability Office. Medicare physician payment rates: better data and greater transparency could improve accuracy. https://www.gao.gov/assets/680/670366.pdf. Published 2015. Accessed July 17, 2019.
5. Josephs RA, Hahn ED. Bias and accuracy in estimates of task duration. Organ Behav Hum Decis Process. 1995;61(2):202-213. doi:10.1006/obhd.1995.1016
6. Gao YN. Committee representation and Medicare reimbursements—an examination of the Resource-Based Relative Value Scale. Health Serv Res. 2018;53(6):4353-4370. doi:10.1111/1475-6773.12857
7. American Medical Association. An introduction to the RUC. https://www.ama-assn.org/sites/ama-assn.org/files/corp/media-browser/public/rbrvs/introduction-to-the-ruc-updated.pdf. Accessed August 17, 2019.
8. Nurok M, Gewertz B. Relative value units and the measurement of physician performance [published online August 5, 2019]. JAMA. doi:10.1001/jama.2019.11163
9. Zuckerman S, Merrell K, Berenson R, Mitchell S, Upadhyay D, Lewis R. Collecting empirical physician time data: piloting an approach for validating work relative value units. https://www.urban.org/sites/default/files/publication/87771/2001123-collecting-empirical-physician-time-data-piloting-approach-for-validating-work-relative-value-units_1.pdf. Published 2016. Accessed December 30, 2017.
    2 Comments for this article
    Surgery vs Oncology
    Richard Reiling, Clinical Professor | Wright State University School of Medicine
    This Viewpoint seeks to show the fallacies of the RUC process for reimbursement using only data on several surgical procedures. The basic error is relying on the time of a procedure as noted from records, which would include only the time of the procedure itself, such as time in the operating room. This is a grossly inadequate measure of the actual time, which must include the pre-operative discussion with the patient and family in the hospital/clinic, scrubbing hands (at least 10 minutes), pre-procedure positioning of the patient, post-op order writing, and discussion with the family. Initially, the RUC system used the inguinal hernia as the standard for the whole system and did not include all the time involved - the 30-45 minutes in the OR is about one half of the time usually involved. In addition, there is a global surgical period, which would include hospital visits as often as necessary and outpatient care.

    The author of this study is a medical oncologist. In the initiation of the RUC, medical oncologists refused to participate because they were making income on the 'selling' of chemotherapy and didn't rely on time spent with their patients. That nirvana ended with restrictions on drug reimbursement. The medical oncologists then tried to enter the RUC system and push that their services were more important than other internal medicine specialties. This report can be read as a 'lot of hot air' by an author whose specialty was injured by the RUC system.
    Are Primary Care Physicians Overlooked by RVU Changes?
    Edward Volpintesta, MD | 155 Greenwood Avenue Bethel CT
    The changes based on ‘empirical’ evidence may help with reimbursing for medical and surgical procedures, but for primary care doctors who often spend considerable time dealing with patients’ social and emotional problems, answering questions that consultants may have overlooked or simply didn’t have the time for, it will be impossible to fairly reimburse them.

    This ‘cognitive’ function of primary care, because it does not translate as a ‘procedure,’ is undervalued, and the time has come to appraise it accurately and fairly. This may draw criticism from the ‘proceduralists’ because any increases that go to primary care will probably cause decreases for them.