In 1992, the Centers for Medicare & Medicaid Services (CMS) introduced the Resource-Based Relative Value Scale (RBRVS) as a new system for physician payment. Rather than paying physicians their “usual, customary, and reasonable”1 charges, this system was designed to pay physicians based on the time, technical skill, and mental effort required to perform each procedure. Thus, the relative value unit (RVU) was born.
Given the detailed medical knowledge required to assign numerical values to each procedure, the American Medical Association formed the Relative Value Scale Update Committee (RUC) to assist CMS with assigning and updating RVU values. Today, the RUC has become sufficiently integral to the RVU updates that its recommendations are accepted without change by CMS more than 90% of the time.2 Because commercial insurers base their payments on a multiple of the CMS Physician Fee Schedule, the RUC also shapes private insurance payments. Consequently, the recommendations of the RUC guide 70% or more of all physician payment in the United States, equal to an estimated $500 billion each year.3
In recent years, however, the RUC has come under criticism focused on 5 problems.2,3
First, the methodology used by the RUC to determine physician time has been challenged. Time is critical because service time accounts for more than 80% of the variability in RVU valuations.3
The RUC bases its time estimates on surveys sent to physicians who perform the procedure under review. These surveys typically have low response rates (median, 2.2%) and small absolute numbers of completed surveys (median, 52),4 with 10% of procedures valued based on data from 30 or fewer surveys.4 Even for common procedures such as hip and knee arthroplasty, which are collectively performed approximately 1 million times each year and account for more than $10 billion in direct medical costs,3 the recommendations from the RUC are based on responses from 150 and 157 physicians, respectively.3
The data also may be unreliable because they are based on human memory and subjective approximations of procedure duration. Human recall is unreliable, especially when estimating numeric quantities such as time.5 Recall is also subject to a variety of cognitive biases, including recall bias, anchoring bias, response bias, and recency bias. Moreover, because the RUC does not allow public access to its survey data, it is unclear how pervasive these biases may be. The surveys are also limited to a single clinical vignette that may or may not be representative of actual practice.
In addition, RVU valuations are updated periodically, but infrequently, and updates may not be linked to advances in technology. Each year only 2% of all Current Procedural Terminology codes are reviewed.3 This has led to many procedures being reimbursed based on data that are 5, 10, and even 20 years old. For instance, RVUs for revision hip and knee arthroplasty have not been updated since 1995, nearly 25 years ago.3
Second, by its very nature the RUC reflects a potential conflict of interest. Physicians who participate in the RUC process do so knowing that higher time estimates and higher estimates of work intensity will increase their own income. Because survey numbers are small, the opinions of only a few physicians could significantly influence the data.4
Third, the RUC has been criticized for its relatively small size (31 members), lack of transparency, lack of representativeness, and inherent conflicts of interest.4 The majority of its members are appointed by specialty societies that lobby the RUC for higher payment. One study that analyzed the RVU recommendations from the RUC between 1994 and 2013 found that having a representative on the RUC was associated with a 3% to 5% increase in reimbursement for procedures that the specialty performs.6 Despite this, the RUC is self-described as an “expert panel,”7 rather than an advocacy and lobbying forum.
Fourth, the RUC and, more broadly, the continued reliance of CMS on RVUs has been criticized as perpetuating a system of adverse incentives that rewards clinicians for providing more rather than better care.8 This productivity-driven system creates an environment that fails to incentivize high-value care and could harm patients through overtreatment.
Most important, a number of empirical studies have now directly challenged the accuracy of the RUC’s recommendations,2,3,9 suggesting that the recommendations consistently overestimate physician time for some specialties. Two recent large peer-reviewed studies have provided evidence of the unreliability of the RUC’s methodology and recommendations.2,3
The first investigation2 examined data from the American College of Surgeons’ National Surgical Quality Improvement Program (NSQIP) registry and compared empirical time-stamp data from electronic health records (EHRs) with the RUC’s time estimates for 293 common procedures.2 The authors calculated the mean discrepancy between registry time-stamp data and the RUC’s estimates to be 20%, ranging from 2% to 58% per procedure. These discrepancies were estimated to lead to $400 million in potentially misappropriated payments, with some specialties receiving $130 million less and others receiving $160 million more during the study period (2011-2015). This was the largest and most complete review of the RUC’s recommendations and found “substantial absolute discrepancies” across all specialties.
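The discrepancy calculation described in this study can be illustrated with a small sketch. The times and procedure names below are hypothetical placeholders, not data from the study; the study compared registry time-stamp data with the RUC's estimates procedure by procedure and averaged the results.

```python
# Illustrative sketch of a percentage-discrepancy calculation.
# All values below are hypothetical examples, not the study's data.

def pct_discrepancy(ruc_minutes: float, observed_minutes: float) -> float:
    """Absolute percentage discrepancy of a RUC time estimate
    relative to the empirically observed time."""
    return abs(ruc_minutes - observed_minutes) / observed_minutes * 100

# Hypothetical (RUC estimate, observed median EHR time) pairs, in minutes.
procedures = {
    "procedure_a": (120, 100),  # RUC overestimates by 20%
    "procedure_b": (90, 88),    # near agreement
    "procedure_c": (150, 95),   # large overestimate
}

discrepancies = {
    name: pct_discrepancy(ruc, obs)
    for name, (ruc, obs) in procedures.items()
}

# A simple unweighted mean across procedures.
mean_discrepancy = sum(discrepancies.values()) / len(discrepancies)
```

Note that the sketch uses an unweighted mean; the study's actual aggregation across procedures and specialties was more involved.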
The second investigation3 focused on a single integrated health system and compared empirical time-stamp data with the RUC’s time estimates and actual survey data for 4 common surgical procedures (original and revision total hip and knee arthroplasty).3 The authors found that the RUC overestimated intraservice times by 18% to 61% and that procedures reviewed less recently (1995 vs 2013) were significantly more overvalued. Another finding was that 10% of survey respondents estimated times twice as long as actual operating times, suggesting possible intentional skewing of survey data by respondents.
The RUC process may be both inaccurate and antiquated. Two simple changes could update the process to make the RVU values used by CMS and private insurers more accurate and to reflect quality of care. The potential changes are (1) to use empirical data currently available from EHRs of the actual time it takes to perform procedures to determine RVU values and (2) to modify payment based on patient-specific complication rates such as surgical site infections.
Time-stamp data are universally available for all procedures requiring anesthesia or an operating room. These data are also collected for a wide range of nonsurgical procedures through EHRs, and a substantial proportion of these data are already available through national registries (eg, the Society of Thoracic Surgeons’ National Database or the American College of Surgeons’ NSQIP registry). However, CMS could also require hospitals and surgical centers receiving Medicare payments to report their empirical time-stamp data for each submitted claim.
Having an up-to-date database of procedure durations would allow CMS to accurately value procedures and rapidly adjust for technologic innovations. Rather than only being able to review 2% of procedures each year, all procedure durations could be adjusted with real-time data on an annual basis.
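A minimal sketch of such an annual refresh follows. The claim records, field layout, and the choice of the median as the summary statistic are all assumptions for illustration, not CMS methodology; the CPT codes shown (27130, 27447) are the familiar codes for total hip and knee arthroplasty, but the minute values attached to them are invented.

```python
import statistics
from collections import defaultdict

# Hypothetical claim records: (CPT code, observed intraservice minutes
# taken from EHR time stamps). Values are invented for illustration.
claims = [
    ("27130", 96), ("27130", 104), ("27130", 110),
    ("27447", 85), ("27447", 90), ("27447", 95),
]

# Pool observed durations by procedure code.
durations = defaultdict(list)
for code, minutes in claims:
    durations[code].append(minutes)

# Annual refresh: one summary time value per code. The median is used
# here because it resists outliers such as unusually long teaching cases.
updated_times = {code: statistics.median(m) for code, m in durations.items()}
```

In a real system the pooled data would be far larger and would likely be stratified (eg, by case complexity) before summarizing, but the core operation, replacing a survey-based time estimate with a statistic computed from observed durations, is this simple.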
CMS could also mandate reporting of quality data. Health care centers and individual physicians with low or improving preventable complication rates should receive higher payments. Likewise, institutions and physicians with consistently high levels of preventable complications should have their payment adjusted downward. Great care will need to be taken to adequately risk-stratify populations and not penalize physicians who provide care for medically complex and vulnerable populations.
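One simple form such an adjustment could take is an observed-to-expected (O/E) complication ratio, where the expected rate comes from a risk model fitted to the physician's actual case mix. This is a hypothetical sketch; the thresholds and adjustment sizes below are illustrative inventions, not proposed policy.

```python
def payment_modifier(observed_rate: float, expected_rate: float) -> float:
    """Return a multiplicative payment adjustment from an observed-to-
    expected (O/E) complication ratio. The expected rate reflects a
    risk-stratified case mix, so physicians caring for medically complex
    populations are not penalized for that mix. All thresholds and
    adjustment sizes here are illustrative only."""
    oe = observed_rate / expected_rate
    if oe <= 0.8:    # clearly better than risk-adjusted expectation
        return 1.02  # small upward adjustment
    if oe >= 1.2:    # clearly worse than expectation
        return 0.98  # small downward adjustment
    return 1.00      # within the expected band: no change

# A physician with a 2% surgical site infection rate whose case mix
# predicts 4% performs better than expected and receives the upward
# adjustment.
modifier = payment_modifier(observed_rate=0.02, expected_rate=0.04)
```

The essential design point is the denominator: comparing each physician against a prediction for their own patients, rather than against a raw national average, is what keeps the adjustment from penalizing those who treat vulnerable populations.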
In 1992, the RBRVS and RVUs were introduced because the prior system of “usual, customary, and reasonable” payments was viewed as imprecise and archaic. At that time the RUC’s incorporation of survey data was groundbreaking. After nearly 30 years and the widespread dissemination of EHRs to provide actual, empirical data, it is time for an update. It is time to base physician reimbursement on empirical data rather than inaccurate, potentially biased, and outdated survey data.
Corresponding Author: Ezekiel J. Emanuel, MD, PhD, Department of Medical Ethics and Health Policy, Perelman School of Medicine, University of Pennsylvania, 423 Guardian Dr, Blockley Hall, 11th & 14th Floors, Philadelphia, PA 19104 (email@example.com).
Published Online: September 9, 2019. doi:10.1001/jama.2019.14591
Conflict of Interest Disclosures: Dr Emanuel reported receiving personal fees from Tanner Healthcare System, Mid-Atlantic Permanente Group, American College of Radiology, Marcus Evans, Loyola University, Oncology Society of New Jersey, Good Shepherd Community Care, Remedy Partners, Medzel, Kaiser Permanente Virtual Medicine, Wallace H. Coulter Foundation, Lake Nona Institute, Allocation, Philadelphia Chamber of Commerce, Blue Cross Blue Shield Minnesota, United Health Group, Futures Without Violence, Children’s Hospital of Pennsylvania, Washington State Hospital Association, Association of Academic Health Centers, Blue Cross Blue Shield of Massachusetts, American Academy of Ophthalmology, Lumeris, Roivant Sciences Inc, Medical Specialties Distributors LLC, Vizient University Healthcare System, Center for Neuro-Degenerative Research, Colorado State University, Genentech Oncology Inc, Council of Insurance Agents and Brokers, Grifols Foundation, America's Health Insurance Plans, Montefiore Physician Leadership Academy, Greenwall Foundation, Medical Home Network, Healthcare Financial Management Association, Ecumenical Center–UT Health, American Association of Optometry, Associação Nacional de Hospitais Privados, National Alliance of Healthcare Purchaser Coalitions, Optum, Massachusetts Association of Health Plans, District of Columbia Hospital Association, and Washington University; holding stock in Gilead, Allergan, Amgen, Baxter, and United Health Group; and that he is a venture partner at Oak HC/FT. No other disclosures were reported.
Urwin JW, Emanuel EJ. The Relative Value Scale Update Committee: Time for an Update. JAMA. Published online September 09, 2019. doi:10.1001/jama.2019.14591