Importance
Published evaluations of medical home interventions have found limited effects on quality and utilization of care.
Objective
To measure associations between participation in the Northeastern Pennsylvania Chronic Care Initiative and changes in quality and utilization of care.
Design, Setting, and Participants
The northeast region of the Pennsylvania Chronic Care Initiative began in October 2009, included 2 commercial health plans and 27 volunteering small primary care practice sites, and was designed to run for 36 months. Both participating health plans provided medical claims and enrollment data spanning October 1, 2007, to September 30, 2012 (2 years prior to and 3 years after the pilot inception date). We analyzed medical claims for 17 363 patients attributed to 27 pilot and 29 comparison practices, using difference-in-differences methods to estimate changes in quality and utilization of care associated with pilot participation.
Exposures
The intervention included learning collaboratives, disease registries, practice coaching, payments to support care manager salaries and practice transformation, and shared savings incentives (bonuses of up to 50% of any savings generated, contingent on meeting quality targets). As a condition of participation, pilot practices were required to attain recognition by the National Committee for Quality Assurance as medical homes.
Main Outcomes and Measures
Performance on 6 quality measures for diabetes and preventive care; utilization of hospital, emergency department, and ambulatory care.
Results
All pilot practices received recognition as medical homes during the intervention. By intervention year 3, relative to comparison practices, pilot practices had statistically significantly better performance on 4 process measures of diabetes care and breast cancer screening; lower rates of all-cause hospitalization (8.5 vs 10.2 per 1000 patients per month; difference, −1.7 [95% CI, −3.2 to −0.03]), lower rates of all-cause emergency department visits (29.5 vs 34.2 per 1000 patients per month; difference, −4.7 [95% CI, −8.7 to −0.9]), lower rates of ambulatory care–sensitive emergency department visits (16.2 vs 19.4 per 1000 patients per month; difference, −3.2 [95% CI, −5.7 to −0.9]), lower rates of ambulatory visits to specialists (104.9 vs 122.2 per 1000 patients per month; difference, −17.3 [95% CI, −26.6 to −8.0]); and higher rates of ambulatory primary care visits (349.0 vs 271.5 per 1000 patients per month; difference, 77.5 [95% CI, 37.3 to 120.5]).
Conclusions and Relevance
During a 3-year period, this medical home intervention, which included shared savings for participating practices, was associated with relative improvements in quality, increased primary care utilization, and lower use of emergency department, hospital, and specialty care. With further experimentation and evaluation, such interventions may continue to become more effective.
The medical home concept, which encompasses a diverse set of primary care practice models intended to achieve high quality and efficiency of care, has gained wide support.1,2 Pilot interventions that encourage primary care practices to receive recognition as medical homes generally feature new resources (such as technical assistance and per-patient per-month fees to support practice transformation) and, more recently, new payment incentives such as shared savings.3,4
Systematic reviews and studies of medical home interventions have reported mixed effects on quality of care and little evidence of reductions in utilization or costs among community-based primary care practices.5-10 However, with few exceptions,11 these studies have evaluated interventions that lacked financial incentives for practices to control the utilization or costs of care. Moreover, conveners of early medical home pilot programs have modified their approaches over time, potentially enhancing the effectiveness of their interventions.
We evaluated the northeast region of the Pennsylvania Chronic Care Initiative (PACCI), a medical home intervention that was led by experienced conveners and that featured shared savings for participating practices, thereby creating direct financial incentives to reduce costs and utilization of care. We hypothesized that this intervention would be associated with improvements in quality and efficiency during a 3-year period.
Methods
As detailed elsewhere,7,12,13 a broad coalition of payer, clinician, delivery system, and government stakeholders formulated the PACCI as a series of regional medical home pilot interventions. The northeast region intervention began in October 2009, included 2 commercial health plans and an initial cohort of 29 volunteering small primary care practice sites, and was designed to run for 36 months. The state selected these 29 practices from among all volunteering practices according to criteria available in eAppendix 1 in the Supplement. As in other PACCI regions, the northeast intervention targeted diabetes care improvement among adult patients and included learning collaboratives, web-based registries to generate quality reports, and practice coaching to facilitate practice transformation.7 However, the intervention did not exclusively target patients with diabetes; only 3 of the 14 quality benchmarks for participating practices (details in eAppendix 2 in the Supplement) focused on diabetes.
Participating practices were required to obtain National Committee for Quality Assurance (NCQA) Physician Practice Connections–Patient-Centered Medical Home (PPC-PCMH) recognition as medical homes by intervention month 18, at “Level 1 Plus” or greater based on the 2008 PPC-PCMH criteria (details in eAppendix 2 in the Supplement). Practices also received $1.50 per patient per month in “Care Management Payments,” which were earmarked for dedicated care manager salaries, and $1.50 per patient per month in “Practice Support Payments,” which could be used to support other costs of practice transformation.
Unlike most previous medical home interventions, practices participating in the northeast PACCI were eligible to receive shared savings bonuses, contingent on meeting quality benchmarks, if total spending on their patients was less than expected in a given year (ie, if savings were observed). Each health plan could decide its own method for calculating savings, and the bonus payments could range from 40% to 50% of calculated savings in each year (details in eAppendix 2 in the Supplement). Practices faced no financial penalties if observed spending was greater than expected. Participating health plans also provided each participating practice with semiannual feedback on hospital and emergency department utilization.
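To make the incentive structure concrete, here is a minimal Python sketch of a one-sided shared savings calculation of this kind. The function name and dollar figures are purely illustrative; each health plan set its own reconciliation method, which was not available to us.

```python
def shared_savings_bonus(expected_spend: float,
                         observed_spend: float,
                         share: float = 0.5,
                         quality_met: bool = True) -> float:
    """One-sided shared savings: the practice keeps a share of any savings
    if quality benchmarks are met; there is no penalty when observed
    spending exceeds expected spending."""
    savings = expected_spend - observed_spend
    if savings <= 0 or not quality_met:
        return 0.0  # no downside risk; no bonus without meeting quality targets
    return share * savings

# Illustrative: $10.0M expected, $9.4M observed, 50% share -> $300,000 bonus.
print(shared_savings_bonus(10_000_000, 9_400_000, share=0.5))
```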
On January 1, 2012 (27 months after the pilot began), the northeast PACCI joined the Medicare Advanced Primary Care Practice Demonstration, and Medicare became a participating payer. No other intervention components changed substantially during the first 3 years of this pilot.
The northeast PACCI intervention had multiple design features not present in the southeast PACCI intervention, which began 16 months earlier.7 Unlike the southeast PACCI, the northeast PACCI included shared savings incentives and provided utilization data to participating practices (including lists of each practice’s patients who had recent emergency department visits or hospitalizations). Also, the northeast PACCI placed less emphasis on early NCQA medical home recognition than did the southeast PACCI, where monthly per-patient bonuses (which were larger for higher levels of NCQA recognition) began as soon as recognition was received.
We used a difference-in-differences design to compare changes during a 3-year period in the quality and utilization of care for patients attributed to practices that participated in the northeast PACCI and comparison practices that did not participate in this medical home intervention. The study was approved by the RAND Human Subjects Protection Committee.
Based on lists of practices provided by the 2 participating health plans, a state contractor selected 29 comparison practices in northeast Pennsylvania with approximately the same composition as the pilot practices in terms of practice size and specialty (family practice, internal medicine). The comparison practices were selected after the pilot practices were identified but before the pilot intervention began. Data on quality and utilization of care were not used to select comparison practices.
As detailed elsewhere, we developed a survey instrument to measure practices’ structural capabilities, including use of disease management, registries, and electronic health records (EHRs) (instrument available from the authors on request).7,14 We mailed this survey to 1 leader of each participating and comparison practice in September 2010, querying baseline capabilities present in September 2009, and we mailed a second survey in October 2012 to assess capabilities after the third year of the pilot.
Measures of Quality and Utilization
Both participating health plans supplied medical claims and enrollment data spanning October 1, 2007, to September 30, 2012 (2 years prior to and 3 years after the pilot inception date), for their members who, at any time during this 5-year period, had 1 or more medical claims for any service with a pilot or comparison practice.
To facilitate comparison of our evaluation with evaluations of other medical home interventions, we calculated claims-based performance measures following recommendations from the Commonwealth Fund’s medical home evaluators’ collaborative.15,16 These included NCQA Healthcare Effectiveness Data and Information Set (HEDIS) process measures of quality, modified to account for duration of observation in some instances; rates of hospitalization (all-cause and ambulatory care–sensitive); emergency department visits (all-cause and ambulatory care–sensitive); and ambulatory visits. eAppendix 3 in the Supplement presents measure specifications.
Based on the qualifying services detailed in eAppendix 4 in the Supplement, we attributed patients to the primary care clinicians who provided the plurality of qualifying services in each of 4 periods (the preintervention period and intervention years 1, 2, and 3), with the most recent service breaking ties.7 In sensitivity analyses, we reattributed patients based on the majority (>50%) of qualifying services.
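To illustrate the attribution rule, the following Python sketch applies plurality attribution with a most-recent-service tiebreak within a single period. The table layout, column names, and dates are hypothetical; the actual qualifying services are those defined in eAppendix 4.

```python
import pandas as pd

# Hypothetical claims extract: one row per qualifying service.
claims = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "practice_id": ["A", "A", "B", "B", "C"],
    "service_date": pd.to_datetime(["2008-01-05", "2008-03-10",
                                    "2008-06-01", "2008-02-01",
                                    "2008-02-15"]),
})

def attribute_patients(period_claims: pd.DataFrame) -> pd.Series:
    """Attribute each patient to the practice providing the plurality of
    qualifying services in the period, breaking ties by most recent service."""
    counts = (
        period_claims
        .groupby(["patient_id", "practice_id"])
        .agg(n_services=("service_date", "size"),
             last_service=("service_date", "max"))
        .reset_index()
        # Winning practice sorts first: most services, then latest service.
        .sort_values(["patient_id", "n_services", "last_service"],
                     ascending=[True, False, False])
    )
    return counts.groupby("patient_id")["practice_id"].first()

# Patient 1 -> practice A (plurality); patient 2 -> practice C (recency tiebreak).
print(attribute_patients(claims))
```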
Statistical Analysis
We compared the preintervention characteristics and patient populations of pilot and comparison practices using Wilcoxon rank-sum and Fisher exact tests. We compared practices’ baseline and postintervention possession of structural capabilities using Liddell exact tests.
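For readers who want a concrete starting point, a Python sketch of comparable tests follows, with illustrative numbers only. To our knowledge, common Python libraries do not implement Liddell’s exact test, so the closely related exact McNemar test for paired proportions is shown in its place.

```python
from scipy.stats import ranksums, fisher_exact
from statsmodels.stats.contingency_tables import mcnemar

# Wilcoxon rank-sum test for a continuous baseline characteristic
# (e.g., practice size), pilot vs comparison (illustrative values).
stat, p_ranksum = ranksums([3, 5, 2, 8, 6], [4, 6, 7, 9, 10])

# Fisher exact test for a categorical characteristic (e.g., specialty),
# on an illustrative 2x2 pilot-vs-comparison table.
odds, p_fisher = fisher_exact([[20, 7], [18, 11]])

# Exact McNemar test on a paired baseline-vs-year-3 capability table
# (counts are illustrative; the off-diagonal cells are discordant pairs).
result = mcnemar([[12, 3], [1, 7]], exact=True)
print(p_ranksum, p_fisher, result.pvalue)
```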
It is possible that the intervention could affect patients’ likelihood of leaving or staying with their primary care practices. We tested for this possibility by fitting a linear probability model with patient retention (ie, whether a given patient attributed to a practice in the preintervention period remained so attributed during each intervention year) as the dependent variable and pilot participation interacted with intervention year as the independent variables. We reasoned that if there was evidence of selection (manifesting as differential patient retention during the intervention), patients should be assigned to practices based on preintervention attribution only (ie, an intent-to-treat approach). In the presence of selection, patient assignment using sequential cross-sectional attribution (ie, reattributing patients annually) could lead to biased estimates of intervention effects.
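A minimal version of this retention model might look like the following Python sketch using statsmodels; the synthetic panel and all variable names are hypothetical, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600
# Hypothetical patient-year panel: retained = 1 if the patient remained
# attributed to the baseline practice in that intervention year.
retention = pd.DataFrame({
    "practice_id": rng.integers(0, 56, n),   # 27 pilot + 29 comparison sites
    "year": rng.integers(1, 4, n),           # intervention years 1-3
})
retention["pilot"] = (retention["practice_id"] < 27).astype(int)
retention["retained"] = (
    rng.random(n) < 0.50 + 0.05 * retention["pilot"]
).astype(int)

# Linear probability model: pilot status interacted with intervention year,
# with standard errors clustered at the practice level.
lpm = smf.ols("retained ~ pilot * C(year)", data=retention).fit(
    cov_type="cluster", cov_kwds={"groups": retention["practice_id"]}
)
print(lpm.params)
```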
Among continuously enrolled patients attributed to a study practice at baseline, the observed rate of patient retention (ie, same-practice attribution in pilot year 3) was 57.2% among pilot practices and 50.2% among control practices (difference, 7.0% [95% CI, −1.2% to 15.2%]; P = .09 for difference). Because of this difference in patient retention rates, we performed an intent-to-treat analysis using preintervention attribution of patients to practices.
We evaluated associations between practice exposure to the northeast PACCI pilot intervention and changes in performance on quality measures by fitting linear probability models with “average treatment effect on the treated” propensity weights to balance pilot and comparison practices’ baseline shares of patients from each health plan and performance on each measure.17 For each quality measure, the dependent variable was receipt of the indicated service, and independent variables were indicators for period (preintervention and each intervention year), interactions between period and practice participation in the pilot, indicators for the health plan contributing each observation and patient enrollment in a health maintenance organization (HMO), and fixed effects (dummy variables) for each practice.
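As a sketch of how such “average treatment effect on the treated” weights can be constructed (Python with statsmodels), consider the following; the practice-level covariates, their names, and the synthetic data are illustrative assumptions, not the study’s actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
# Hypothetical practice-level baseline frame (all names illustrative).
practices = pd.DataFrame({
    "practice_id": range(56),
    "pilot": [1] * 27 + [0] * 29,        # 27 pilot, 29 comparison practices
    "plan_a_share": rng.random(56),      # baseline share of patients from plan A
    "baseline_perf": rng.random(56),     # baseline quality-measure performance
})

# Step 1: propensity score for pilot participation.
ps_fit = smf.logit("pilot ~ plan_a_share + baseline_perf",
                   data=practices).fit(disp=0)
practices["pscore"] = ps_fit.predict(practices)

# Step 2: ATT weights. Pilot practices keep weight 1; comparison practices
# are weighted pscore / (1 - pscore) so that, reweighted, they resemble
# the pilot group on the baseline covariates.
practices["att_w"] = np.where(
    practices["pilot"] == 1,
    1.0,
    practices["pscore"] / (1.0 - practices["pscore"]),
)

# Step 3 (not shown): merge att_w onto patient-level observations and fit
# the weighted linear probability model with period x pilot interactions,
# plan and HMO indicators, and practice fixed effects.
```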
For measures of utilization, we fit 2-part logistic and negative binomial models, using propensity weights to balance practices’ shares of patients from each health plan and baseline utilization rates. The dependent variables were utilization counts in each period. Independent variables were indicators for period; interactions between period and pilot participation; indicators for the health plan contributing each observation and patient enrollment in an HMO; patient age, sex, and preintervention Charlson comorbidity score18; and practice fixed effects.
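A minimal two-part sketch along these lines follows (Python with statsmodels); the synthetic data, variable names, and the use of an untruncated negative binomial in part 2 are simplifying assumptions on our part.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 800
# Hypothetical patient-period frame (all names illustrative).
d = pd.DataFrame({
    "pilot": rng.integers(0, 2, n),
    "year": rng.integers(0, 4, n),        # 0 = preintervention period
    "age": rng.integers(18, 65, n),
    "female": rng.integers(0, 2, n),
    "charlson": rng.poisson(0.8, n),      # preintervention comorbidity score
    "ed_visits": rng.poisson(0.4, n),     # utilization count in the period
})
d["any_use"] = (d["ed_visits"] > 0).astype(int)

# Part 1: logistic model for whether the patient had any utilization.
part1 = smf.logit("any_use ~ C(year)*pilot + age + female + charlson",
                  data=d).fit(disp=0)

# Part 2: negative binomial count model among patients with any use.
users = d[d["ed_visits"] > 0]
part2 = smf.negativebinomial(
    "ed_visits ~ C(year)*pilot + age + female + charlson",
    data=users).fit(disp=0)

# Expected utilization combines both parts:
# E[visits] = Pr(any use) * E[visits | any use].
d["expected_visits"] = part1.predict(d) * part2.predict(d)
```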
In all models, we used generalized estimating equations with robust standard errors to account for practice-level clustering.19,20 Because empirical standard error estimates can be sensitive to missing data, we included only continuously enrolled health plan members in the regression models. To display adjusted data from nonlinear regressions on their original measurement scales, we generated recycled predictions and used practice-level bootstrapping (1000 resamples) to generate single confidence intervals from 2-part models.21,22 To generate single P values for display purposes, we fit 1-part negative binomial models; P values from each part of the 2-part models are available in eAppendix 5 in the Supplement.
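The recycled-predictions and bootstrap steps can be sketched as follows in Python; `fit_fn`, the column names, and the percentile-based interval are illustrative assumptions rather than the study’s exact procedure.

```python
import numpy as np
import pandas as pd

def recycled_difference(model, data: pd.DataFrame) -> float:
    """Recycled predictions: predict for every observation as if it were in
    a pilot practice, then as if it were in a comparison practice, and take
    the difference in mean predictions on the original measurement scale."""
    return (model.predict(data.assign(pilot=1)).mean()
            - model.predict(data.assign(pilot=0)).mean())

def cluster_bootstrap(fit_fn, data: pd.DataFrame,
                      n_boot: int = 1000, seed: int = 0):
    """Practice-level bootstrap: resample whole practices with replacement,
    refit the model via fit_fn, and recompute the recycled difference."""
    rng = np.random.default_rng(seed)
    ids = data["practice_id"].unique()
    estimates = []
    for _ in range(n_boot):
        sampled = rng.choice(ids, size=len(ids), replace=True)
        boot = pd.concat([data[data["practice_id"] == i] for i in sampled],
                         ignore_index=True)
        estimates.append(recycled_difference(fit_fn(boot), boot))
    return np.percentile(estimates, [2.5, 97.5])  # 95% CI endpoints
```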
In sensitivity analyses, we substituted logistic for linear probability models and included patients who lacked continuous health plan enrollment. We considered P < .05 (2-tailed) significant and conducted data management and analyses using SAS version 9.2 (SAS Institute Inc) and SQL Server 2008 (Microsoft).
Results
Of the 29 practices that volunteered to participate in the pilot, 1 withdrew before the intervention began and 1 withdrew during the first intervention year. The remaining 27 practices completed the 3-year intervention as planned and are included in the analysis. The pilot and comparison practices were similar in baseline size, specialty, and patient case-mix (Table 1).
Each pilot practice received NCQA PPC-PCMH recognition during the intervention: 23 at level 3, 2 at level 2, and 2 at level 1. Twelve pilot practices were recognized under the 2008 criteria and 15 under the 2011 criteria. Two of the comparison practices received NCQA recognition during the pilot.
Twenty-three pilot practices (85%) responded to both the baseline and year 3 structural surveys, but only 6 comparison practices (21%) did so, precluding analysis of their responses. Pilot practices adopted capabilities in performance feedback, registry use, care management, patient outreach, and electronic test ordering (detailed results available in eAppendix 6 in the Supplement). All responding pilot practices had EHRs at baseline.
Pilot participation was statistically significantly associated with higher performance on all 4 examined measures of diabetes care quality and breast cancer screening but not colorectal cancer screening (Table 2). These associations emerged in intervention year 1 for each of these measures, except low-density lipoprotein cholesterol testing among patients with diabetes, for which performance was statistically significantly greater in intervention year 3 only.
By year 3, pilot participation was statistically significantly associated with lower rates of all-cause hospitalization per 1000 patients per month (−1.7 [95% CI, −3.2 to −0.03]), all-cause emergency department visits (−4.7 [95% CI, −8.7 to −0.9]), ambulatory care–sensitive emergency department visits (−3.2 [95% CI, −5.7 to −0.9]), and ambulatory visits to specialists (−17.3 [95% CI, −26.6 to −8.0]) and with higher rates of ambulatory primary care visits (77.5 [95% CI, 37.3 to 120.5]) (Table 3 and Table 4). For all-cause hospitalizations, statistically significant differences between pilot and comparison practices emerged in year 2. For all-cause and ambulatory care–sensitive emergency department visits, statistically significant differences between pilot and comparison practices were present in year 3 only. Rates of ambulatory care–sensitive hospitalization also were lower among pilot practices, but this difference was not statistically significant.
Sensitivity analyses differed from the main results in one way only: in logistic models, there was no statistically significant association between pilot participation and rates of low-density lipoprotein cholesterol testing among persons with diabetes.
Discussion
To our knowledge, the northeast region of the PACCI is the first evaluated multipayer medical home intervention to feature shared savings in addition to the financial resources, technical assistance, and recognition requirements typical of previously evaluated medical home interventions. In contrast to other recent evaluations of medical home interventions among small primary care practice sites,7-10 participation in the northeast PACCI was associated with relative improvements in the majority of quality measures examined, greater use of ambulatory primary care, and lower use of hospital, emergency department, and ambulatory specialist services.
Why did the northeast PACCI pilot intervention produce more quality improvements and utilization changes than previous medical home pilots evaluated by our team and others?7-10 Our study was not designed to identify specific mechanisms of improvement, but intervention attributes suggest several possibilities. First, the inclusion of a substantial shared savings incentive, with shared savings bonus payments being contingent on meeting quality measure benchmarks, may have been a particularly strong motivator for practices to invest and engage more effectively in care management efforts. Better care management may have contributed to the higher rate of patient retention that we observed among the pilot practices relative to comparison practices. Second, pilot practices received regular feedback from participating health plans on utilization of hospitals, emergency departments, and other medical services by their patients. Timely feedback may have enabled practices to more quickly adjust their efforts to meet quality and utilization benchmarks.
Third, the northeast PACCI intervention did not include a financial incentive tied to early achievement of medical home recognition, potentially enhancing participating practices’ abilities to focus on learning collaborative activities and other process improvement efforts. Fourth, all of the pilot practices had EHRs at baseline. Adopting new EHRs can be stressful for primary care practices and distract from other efforts to improve patient care.23 Fifth, pilot practices received relatively high levels of NCQA medical home recognition, with some recognized under the newer 2011 PPC-PCMH criteria released during the intervention. Thus, they may have been better positioned to implement case management and other advanced capabilities.
Our study was designed to evaluate a particular medical home intervention (the northeast PACCI) rather than changes associated with practice-level implementation of a particular medical home model. Despite important differences in study design and setting, we note that studies of medical home implementation within the Veterans Health Administration also have found associations with quality and utilization of care.24,25 Like the practices participating in the northeast PACCI, Veterans Health Administration primary care practices had access to data on their patients’ utilization of hospital and emergency department services, potentially enhancing their abilities to function effectively as medical homes.
To our knowledge, there are 4 prior evaluations of medical home interventions that have found statistically significant reductions in 1 or more measures of hospitalization or emergency department utilization.9-11,26 The relative rate reductions in these evaluations (6%-18% for all-cause hospitalizations11,26; approximately 30% for all-cause emergency department visits10,26; and 12% for ambulatory care–sensitive emergency department visits9) were comparable to or greater than those observed in the northeast PACCI, with mutually overlapping confidence intervals surrounding these point estimates. We note, however, that multiple other evaluations of medical home interventions have not detected such effects.5-8 Examining design differences between medical home interventions may clarify the reasons for these discrepant results.
We saw declines in quality measure performance and primary care ambulatory visit rates among comparison practices. These decreases may have been due to prospective patient attribution, which required that patients visit their primary care practices in the preintervention period but not thereafter. Also, performance on most evaluated quality measures was high at baseline, limiting room for improvement in general. It is also possible that unobserved changes in northeast Pennsylvania during the study period (which coincided with an economic recession) could have reduced primary care visit rates and use of recommended treatment and preventive services.
Our study has limitations. First, unobserved differences between pilot and comparison practices could affect our results, despite application of propensity score weighting and statistical adjustment. Second, the findings we observed may not generalize to other settings, other types of primary care practices, and other medical home initiatives. Third, the range of quality measures for which sufficient sample size existed was limited, and we did not assess changes in patient or clinician experience. Fourth, complete data on the costs of implementing the northeast PACCI intervention (eg, the costs of practice coaching) and on shared savings payments made from health plans to participating practices were unavailable to us. Therefore, financial effects of the pilot could not be estimated. Fifth, patients in the main evaluation models were enrolled continuously in commercial health plans, and no data on patients’ sociodemographic characteristics were available for analysis; these characteristics of the evaluation may limit the generalizability of findings to patients lacking continuous health insurance and to patient populations dissimilar to the one we studied. Sixth, we lacked complete data on any structural transformation that may have occurred among comparison practices.
More than 100 medical home interventions are under way in the United States.4 They vary considerably in the mix of new resources, technical assistance, contractual obligations, performance measures, and incentives available to primary care practices. We believe evaluation results from the first 3 years of the northeast Pennsylvania Chronic Care Initiative offer guidance for program designers and policy makers. Medical home interventions that incentivize activities in addition to structural transformation may produce larger improvements in patient care. In particular, providing shared savings incentives and timely availability of data on emergency department visits and hospitalizations may encourage and enable primary care practices to contain unnecessary or avoidable utilization in these settings. Additional studies will be needed to determine empirically whether these features or others are indeed the key “active ingredients” in medical home interventions. Continuing experimentation and careful evaluation of the features of medical home interventions can inform the design of future programs intended to strengthen primary care.
Corresponding Author: Mark W. Friedberg, MD, MPP, 20 Park Plaza, Ste 920, Boston, MA 02116 (mfriedbe@rand.org).
Published Online: June 1, 2015. doi:10.1001/jamainternmed.2015.2047.
Author Contributions: Dr Friedberg had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Friedberg, Volpp, Schneider.
Acquisition, analysis, or interpretation of data: Friedberg, Rosenthal, Werner, Schneider.
Drafting of the manuscript: Friedberg.
Critical revision of the manuscript for important intellectual content: Rosenthal, Werner, Volpp, Schneider.
Statistical analysis: Friedberg.
Obtained funding: Friedberg, Volpp.
Administrative, technical, or material support: Volpp.
Study supervision: Schneider.
Economic and policy analysis: Rosenthal.
Conflict of Interest Disclosures: Dr Friedberg has received compensation from the United States Department of Veterans Affairs for consultation related to medical home implementation and research support from the Patient-Centered Outcomes Research Institute via subcontract to the National Committee for Quality Assurance. Dr Volpp has received compensation as a consultant from CVS Caremark and VALHealth and has received research funding from CVS Caremark, Humana, Horizon Blue Cross Blue Shield, Weight Watchers, and Discovery (South Africa). No other authors have potential conflicts of interest to disclose.
Funding/Support: This study was sponsored by the Commonwealth Fund.
Role of the Funder/Sponsor: The Commonwealth Fund had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and the decision to submit the manuscript for publication.
Previous Presentation: The authors have published a related report7 from the same overall study, which evaluated the southeast region of the Pennsylvania Chronic Care Initiative (a “medical home” intervention with components different from the intervention evaluated in the current manuscript).
Additional Contributions: We gratefully acknowledge Aaron Kofner, MA, MS (RAND), Scott Ashwood, PhD (RAND), Scot Hickey, MA (RAND), and Samuel Hirshman, BA (RAND), for assistance with programming and data management; Claude Setodji, PhD (RAND), for statistical consultation; and Marcela Myers, MD (Commonwealth of Pennsylvania), and Michael Bailit, MM (Bailit Health Purchasing, LLC), for providing data on pilot intervention design and NCQA recognition levels and for facilitating other data collection. Mr Kofner, Dr Ashwood, Mr Hickey, Mr Hirshman, and Dr Setodji received compensation for their roles in the study.
Correction: This article was corrected online on June 2, 2015, to correct the Role of the Funder/Sponsor statement.
References
3. Friedberg MW, Lai DJ, Hussey PS, Schneider EC. A guide to the medical home as a practice-level intervention. Am J Manag Care. 2009;15(10)(suppl):S291-S299.
4. Edwards ST, Bitton A, Hong J, Landon BE. Patient-centered medical home initiatives expanded in 2009-13: providers, patients, and payment incentives increased. Health Aff (Millwood). 2014;33(10):1823-1831.
5. Peikes D, Zutshi A, Genevro JL, Parchman ML, Meyers DS. Early evaluations of the medical home: building on a promising start. Am J Manag Care. 2012;18(2):105-116.
6. Jackson GL, Powers BJ, Chatterjee R, et al. Improving patient care: the patient centered medical home: a systematic review. Ann Intern Med. 2013;158(3):169-178.
7. Friedberg MW, Schneider EC, Rosenthal MB, Volpp KG, Werner RM. Association between participation in a multipayer medical home intervention and changes in quality, utilization, and costs of care. JAMA. 2014;311(8):815-825.
8. Werner RM, Duggan M, Duey K, Zhu J, Stuart EA. The patient-centered medical home: an evaluation of a single private payer demonstration in New Jersey. Med Care. 2013;51(6):487-493.
9. Rosenthal MB, Friedberg MW, Singer SJ, Eastman D, Li Z, Schneider EC. Effect of a multipayer patient-centered medical home on health care utilization and quality: the Rhode Island chronic care sustainability initiative pilot program. JAMA Intern Med. 2013;173(20):1907-1913.
10. Fifield J, Forrest DD, Burleson JA, Martin-Peele M, Gillespie W. Quality and efficiency in small practices transitioning to patient centered medical homes: a randomized trial. J Gen Intern Med. 2013;28(6):778-786.
11. Gilfillan RJ, Tomcavage J, Rosenthal MB, et al. Value and the medical home: effects of transformed primary care. Am J Manag Care. 2010;16(8):607-614.
12. Gabbay RA, Bailit MH, Mauger DT, Wagner EH, Siminerio L. Multipayer patient-centered medical home implementation guided by the chronic care model. Jt Comm J Qual Patient Saf. 2011;37(6):265-273.
13. Gabbay RA, Friedberg MW, Miller-Day M, Cronholm PF, Adelman A, Schneider EC. A positive deviance approach to understanding key features to improving diabetes care in the medical home. Ann Fam Med. 2013;11(suppl 1):S99-S107.
14. Friedberg MW, Safran DG, Coltin KL, Dresser M, Schneider EC. Readiness for the patient-centered medical home: structural capabilities of Massachusetts primary care practices. J Gen Intern Med. 2009;24(2):162-169.
16. Rosenthal MB, Beckman HB, Forrest DD, Huang ES, Landon BE, Lewis S. Will the patient-centered medical home improve efficiency and reduce costs of care? a measurement and research agenda. Med Care Res Rev. 2010;67(4):476-484.
17. Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70(1):41-55.
18. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40(5):373-383.
19. Liang K, Zeger S. Longitudinal data analysis using generalized linear models. Biometrika. 1986;73:13-22.
22. Setodji CM, Scheuner M, Pankow JS, Blumenthal RS, Chen H, Keeler E. A graphical method for assessing risk factor threshold values using the generalized additive model: the multi-ethnic study of atherosclerosis. Health Serv Outcomes Res Methodol. 2012;12(1):62-79.
23. Nutting PA, Miller WL, Crabtree BF, Jaen CR, Stewart EE, Stange KC. Initial lessons from the first national demonstration project on practice transformation to a patient-centered medical home. Ann Fam Med. 2009;7(3):254-260.
24. Nelson KM, Helfrich C, Sun H, et al. Implementation of the patient-centered medical home in the Veterans Health Administration: associations with patient satisfaction, quality of care, staff burnout, and hospital and emergency department use. JAMA Intern Med. 2014;174(8):1350-1358.
25. Yoon J, Rose DE, Canelo I, et al. Medical home features of VHA primary care clinics and avoidable hospitalizations. J Gen Intern Med. 2013;28(9):1188-1194.
26. Reid RJ, Coleman K, Johnson EA, et al. The Group Health medical home at year two: cost savings, higher patient satisfaction, and less burnout for providers. Health Aff (Millwood). 2010;29(5):835-843.