Importance
In 2010, the Veterans Health Administration (VHA) began implementing the patient-centered medical home (PCMH) model. The Patient Aligned Care Team (PACT) initiative aims to improve health outcomes through team-based care, improved access, and care management. To track progress and evaluate outcomes at all VHA primary care clinics, we developed and validated a method to assess PCMH implementation.
Objectives
To create an index that measures the extent of PCMH implementation, describe variation in implementation, and examine the association between the implementation index and key outcomes.
Design, Setting, and Participants
We conducted an observational study using data on more than 5.6 million veterans who received care at 913 VHA hospital-based and community-based primary care clinics and 5404 primary care staff from (1) VHA clinical and administrative databases, (2) a national patient survey administered to a weighted random sample of veterans who received outpatient care from June 1 to December 31, 2012, and (3) a survey of all VHA primary care staff in June 2012. Composite scores were constructed for 8 core domains of PACT: access, continuity, care coordination, comprehensiveness, self-management support, patient-centered care and communication, shared decision making, and team-based care.
Main Outcomes and Measures
Patient satisfaction, rates of hospitalization and emergency department use, quality of care, and staff burnout.
Results
Fifty-three items were included in the PACT Implementation Progress Index (Pi2). Compared with the 87 clinics in the lowest decile of the Pi2, the 77 sites in the top decile exhibited significantly higher patient satisfaction (9.33 vs 7.53; P < .001), higher performance on 41 of 48 measures of clinical quality, lower staff burnout (Maslach Burnout Inventory emotional exhaustion subscale, 2.29 vs 2.80; P = .02), lower hospitalization rates for ambulatory care–sensitive conditions (4.42 vs 3.68 quarterly admissions for veterans 65 years or older per 1000 patients; P < .001), and lower emergency department use (188 vs 245 visits per 1000 patients; P < .001).
Conclusions and Relevance
The extent of PCMH implementation, as measured by the Pi2, was highly associated with important outcomes for both patients and providers. This measure will be used to track the effectiveness of implementing PACT over time and to elucidate the correlates of desired health outcomes.
Although the patient-centered medical home (PCMH) has been endorsed by most major primary care groups as a promising model to strengthen primary care, decrease costs, and improve quality,1 early assessments of PCMH impact have yielded mixed results.2-10 Since 2010, the Veterans Health Administration (VHA) has undertaken national adoption of a PCMH model, called PACT (Patient Aligned Care Team).11 The focus of PACT has been to restructure primary care to provide team-based care that is more comprehensive, coordinated, and patient centered.11
The PACT initiative is a multifaceted and complex intervention, creating challenges to measuring implementation across diverse clinic sites. One of the most widely used PCMH recognition tools is the National Committee for Quality Assurance (NCQA) certification process, which focuses on practice infrastructure and health information technology,12 an area in which the VHA has made considerable past investments.13,14 The VHA has a universally deployed electronic health record, electronic prescribing, patient registries, and a national quality improvement and performance measurement infrastructure for which all clinics in the VHA would receive “credit.” Many national programs for coordinating care, such as home-based primary care, integrated mental health services, and palliative care, were already widely available before PACT was initiated. The focus within the VHA has been on how effectively these extensive resources are being applied and coordinated to fulfill the goals of the PACT initiative.11
Our goal was to derive a comprehensive index from existing data and survey instruments that would have a low respondent burden and would reflect processes and attributes that are essential to effective primary care. Our approach differs from other PCMH measurement tools15 by incorporating multiple data sources, including a primary care personnel survey, patient surveys, and administrative data. We sought to develop a measure to represent areas of focus of the PACT initiative, including continuity through team-based care, patient access, care coordination, and patient-centered care.11 We desired an instrument that would facilitate comparisons across clinical sites within the VHA, assist in identifying sites that had most effectively implemented PACT, and determine the relationship between effective implementation and important outcomes, such as patient satisfaction, quality of care, provider experience, and use of health care services.
Survey Instruments and Data Sources
We used data from the previously validated Consumer Assessment of Health Plans–Patient Centered Medical Home (CAHPS PCMH) survey16 that was administered to a nationally weighted random sample of veterans who received outpatient care from June 1 to December 31, 2012. The CAHPS PCMH scales have acceptable internal consistency reliability estimates for access (Cronbach α, 0.74), comprehensiveness (0.68), self-management support (0.62), patient-centered care and communication (0.91), and shared decision making (0.61).16 To test convergent validity, we used information on patient satisfaction from another sample of veterans from the Survey of the Health Experiences of Patients, an ongoing national mailed US Department of Veterans Affairs (VA) survey that assesses the health care experiences of veterans who receive care at the VHA and uses a stratified random sampling method.17 The evaluation efforts are part of an ongoing quality improvement effort at the VHA and are not considered research activity; they are thus not subject to institutional review board review or waiver.
Primary Care Personnel Survey
The PACT Primary Care Personnel Survey was an internally developed instrument designed to measure team functioning in PACT and has been described elsewhere.18 The target population of the survey was all VHA primary care personnel, including the 4 occupations included in PACT teams: primary care providers, nurse care managers, medical associates (eg, licensed practical nurses and medical technicians), and administrative clerks. Data were collected from May 21 through June 29, 2012. Team-based care was represented by items from the primary care personnel survey related to delegation, staffing, team functioning, and team assignment.18
Information about demographics, clinical characteristics, and use of health services was obtained from the VHA Corporate Data Warehouse for fiscal year 2012 (n = 5 653 616). Using data from the Primary Care Management Module contained within the Corporate Data Warehouse, we identified all patients who were enrolled in primary care and assigned to a primary care provider.19 We included administrative data for important PACT programmatic goals,11 including (1) access to care and use of non–face-to-face care, such as telephone clinics and secure messaging; (2) continuity of care; and (3) use of VHA programs to support care coordination (eg, home telemonitoring, 2-day posthospital follow-up).
We used data collected by the VHA External Peer Review Program (EPRP) during fiscal year 2012 to assess quality of care. The EPRP is an audit program designed to assess clinical performance using standard performance criteria. National data are collected through manual abstraction of electronic health records by an independent external contractor.20 Previous studies have found high interrater reliability (κ = 0.9) within the EPRP.14
Construction of the PACT Implementation Progress Index
The method for developing the PACT Implementation Progress Index (Pi2) and a full description of all items are provided in eTable 1 in the Supplement. Briefly, we mapped data items to PACT conceptual domains, calculated domain scores based on these items, and then generated site-level rankings for each domain. Table 1 outlines Pi2 domains and provides examples of representative variable items. A Pi2 score was assigned to each clinic based on the number of domains in the top and bottom quartiles for the domain scores, ranging from 8 (all domain scores in the top quartile) to –8 (all domain scores in the bottom quartile). Using these scores, we categorized sites in the top decile of the Pi2 (score, 5 to 8) as having achieved effective implementation and those in the lowest decile of the Pi2 (score, –7 to –5) as having been less effective.
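To make the scoring rule concrete, the following is a minimal sketch of a Pi2-style calculation, assuming a hypothetical table of clinic-level domain scores (the data frame, column names, and random values are illustrative, not the actual VHA data or the authors' code). Each clinic gains a point for every domain score in the top quartile across clinics and loses a point for every domain score in the bottom quartile, yielding a score from -8 to 8.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
domains = ["access", "continuity", "coordination", "comprehensiveness",
           "self_management", "communication", "shared_decisions", "team_based"]
# Hypothetical site-level domain scores for 913 clinics (illustrative random data).
scores = pd.DataFrame(rng.normal(size=(913, 8)), columns=domains)

def pi2_score(domain_scores: pd.DataFrame) -> pd.Series:
    """Pi2 = (# domains in the top quartile) - (# domains in the bottom quartile); range -8 to 8."""
    q25, q75 = domain_scores.quantile(0.25), domain_scores.quantile(0.75)
    top = (domain_scores > q75).sum(axis=1)
    bottom = (domain_scores < q25).sum(axis=1)
    return top - bottom

pi2 = pi2_score(scores)
effective = pi2 >= 5        # top-decile group in the paper (scores 5 to 8)
less_effective = pi2 <= -5  # bottom-decile group (scores -7 to -5)
print(pi2.value_counts().sort_index())
```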
Patient- and Provider-Level Outcome Measures
Patient satisfaction was assessed by using a single item from the CAHPS PCMH survey16 as follows: “Using any number from 0 to 10, where 0 is the worst provider possible and 10 is the best provider possible, what number would you use to rate this provider?”
Staff burnout was assessed with both a single-item measure and the emotional exhaustion subscale of the Maslach Burnout Inventory, a widely used measure of burnout.21-23 The single-item measure asks, “Overall, based on your definition of burnout, how would you rate your level of burnout?” with 5 ordinal response options.24 We defined burnout as a response of 3 or higher, where 3 corresponds to “I am definitely burning out and have one or more symptoms of burnout, such as physical and emotional exhaustion.”24 We also used a 3-item version of the Maslach Burnout Inventory emotional exhaustion subscale.25 Items reflecting burnout symptoms are scored on a Likert scale ranging from 0 (never) to 6 (every day) and summed to form a scale score. We defined burnout as a score of 10 or higher (range, 0-18).
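The two burnout definitions above reduce to simple thresholds; the snippet below illustrates them for a hypothetical respondent (function names and example responses are assumptions for illustration only).

```python
import numpy as np

def burnout_single_item(response: int) -> bool:
    """Single-item measure: 5 ordinal options; 3 or higher ('definitely burning out') counts as burnout."""
    return response >= 3

def burnout_mbi_ee(item_scores) -> bool:
    """Three MBI emotional-exhaustion items, each scored 0 (never) to 6 (every day),
    summed to a 0-18 scale; a total of 10 or higher counts as burnout."""
    return int(np.sum(item_scores)) >= 10

# Hypothetical respondent: single-item answer of 2; MBI items (4, 3, 5) sum to 12.
print(burnout_single_item(2), burnout_mbi_ee([4, 3, 5]))  # False True
```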
To assess quality of care, we examined outpatient measurements from the EPRP for chronic disease management, behavioral health screening, and prevention services. These indicators include frequently used measures of the quality of prevention (eg, vaccinations, screening tests) and outpatient care of chronic diseases (eg, annual retinal examinations in patients with diabetes mellitus). The cohorts, sampling frames, and criteria for meeting each performance measure for preventive services and chronic disease care are provided in eTable 2 in the Supplement.
The EPRP selects a random sample of patient records from VHA facilities to monitor quality and appropriateness of medical care.26 The sample includes veterans who used VHA health care at least once in the 2 years before the assessment. Patients who were sampled had at least 1 primary care or specialty medical visit in the month being sampled. Among eligible patients, a random sample is drawn with oversampling of prevalent chronic conditions (eg, diabetes, heart failure).26
For patients at each primary care site, we determined the numbers of emergency department or urgent care visits, VA hospital admissions, and hospitalizations for ambulatory care–sensitive conditions (ACSCs), which are postulated to be most avoidable through provision of effective primary care.27 Hospitalizations for ACSCs were based on Agency for Healthcare Research and Quality Prevention Quality Indicators and were identified through standardized protocols using International Classification of Diseases, Ninth Revision, diagnoses and Current Procedural Terminology codes from inpatient VA records.27
To test internal consistency reliability, we calculated the Cronbach α for all items in each domain and all 53 items that make up the total scale.
Variation in PCMH Adoption
We evaluated bivariate comparisons of facility characteristics and level of implementation by using χ2 tests for categorical variables and t tests for continuous variables. We compared sites assessed to have effectively implemented PACT with those assessed as less effective according to type of facility (hospital or community-based outpatient clinic), number of patients, demographic characteristics, and Elixhauser comorbidity score.28
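A minimal sketch of these bivariate comparisons is shown below, assuming hypothetical facility-level data for the two implementation groups (group sizes, variables, and the contingency table are illustrative, not the study data).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical facility-level data: panel size for 77 effective and 87 less effective sites.
n_patients_effective = rng.normal(8000, 2500, size=77)
n_patients_less_effective = rng.normal(11000, 3000, size=87)

# Continuous characteristics (eg, number of patients): two-sample t test.
t_stat, p_t = stats.ttest_ind(n_patients_effective, n_patients_less_effective)

# Categorical characteristics (eg, hospital- vs community-based clinic): chi-square test.
table = np.array([[30, 47],   # effective sites: hospital-based, community-based
                  [40, 47]])  # less effective sites: hospital-based, community-based
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
print(round(p_t, 3), round(p_chi2, 3))
```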
Associations With Patient and Provider Outcomes
We used a nonparametric test of trend for the ranks across ordered groups (an extension of the Wilcoxon rank sum test) to test for trends in patient satisfaction and staff burnout by Pi2 scores. We tested differences in the proportions of eligible patients at each VHA clinic fulfilling each of the 48 quality indicators according to the success of PACT implementation as measured by the Pi2. We calculated rates of services at the facility level by dividing the number of patients who satisfied the EPRP quality measure by the number who met inclusion criteria for each quality measure (eTable 3 in the Supplement). For each of the 48 facility-level quality indicators, we tested the trend in proportions of patients fulfilling the EPRP quality guideline by the level of PACT implementation. We used the nonparametric test for trend developed by Cuzick, which is an extension of the Wilcoxon test.29 We adjusted for multiple comparisons using a method described by Benjamini and Yekutieli.30 To determine whether more effective implementation (as measured by Pi2) corresponded to higher performance overall, we included all 48 outcome measures in a linear mixed-effects model that accounted for correlation among outcomes from the same facility and estimated an overall implementation effect. We adjusted for implementation in this model as a linear term ranging from 1 to 5, corresponding to the grouped Pi2 scores. This approach was possible because all 48 outcomes were measured on the same scale.
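Two pieces of this analysis can be sketched compactly: the Benjamini-Yekutieli false discovery rate adjustment across the 48 per-measure P values, and the linear mixed-effects model pooling all 48 measures with a facility-level random intercept. The Cuzick trend test has no standard SciPy implementation and is not shown. All data below are hypothetical placeholders, not study results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(3)

# Benjamini-Yekutieli adjustment of 48 per-measure trend-test P values (hypothetical values).
p_values = rng.uniform(0.001, 0.20, size=48)
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_by")

# Mixed-effects model pooling all 48 quality measures, with a random intercept per facility
# and implementation entered as a linear term (1-5, corresponding to grouped Pi2 scores).
n_facilities, n_measures = 160, 48
df = pd.DataFrame({
    "facility": np.repeat(np.arange(n_facilities), n_measures),
    "implementation": np.repeat(rng.integers(1, 6, size=n_facilities), n_measures),
})
df["pct_meeting_measure"] = (70 + 1.5 * df["implementation"]
                             + rng.normal(0, 5, size=len(df)))  # illustrative outcome
model = smf.mixedlm("pct_meeting_measure ~ implementation", df, groups=df["facility"]).fit()
print(model.params["implementation"])  # estimated overall implementation effect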
We examined fiscal year 2012 emergency department and urgent care visits and total hospitalizations for sites with more effective vs less effective implementation, adjusting for patient age, community-based outpatient clinic status, and Elixhauser comorbidity scores.28 To account for temporal trends, we modeled facility-level trends for hospitalization from 2003 to 2012. The method for examining such trends has been described elsewhere.31 We estimated interrupted time-series models of ACSC and all-cause hospitalizations from October 1, 2003, through September 30, 2012, for each facility and assessed how the trends in hospitalizations changed after the start of the PACT initiative in April 2010. All regression models adjusted for facility-level patient characteristics, unemployment rate in the VA market area, quarterly dummy variables to capture seasonal variation, and a linear time trend. Patient risk was measured using mean facility-level Elixhauser comorbidity scores. Changes in admissions for ACSCs and all-cause hospitalizations after implementation of the PACT initiative were calculated as the difference between the observed rate of admissions and the predicted rate had the initiative not been implemented during the 2½-year period between April 1, 2010, and September 30, 2012. In this way, we estimated changes in admissions that might be attributed to the PACT initiative. Trend analyses for hospitalizations were stratified by age (≥65 and <65 years) to account for the substantial use of non-VA health care by Medicare-eligible veterans.32 We then compared the estimated change in admissions among facilities that had effectively implemented PACT with the change in those that had done so less effectively.
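A highly simplified sketch of this interrupted time-series approach for a single hypothetical facility appears below. It includes only the linear trend, quarterly dummies, and post-PACT level and slope terms, and omits the facility-level patient characteristics and unemployment covariates used in the actual models; the simulated quarterly ACSC rates and the counterfactual calculation are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Quarterly ACSC admissions per 1000 patients for one hypothetical facility, Oct 2003-Sep 2012.
quarters = pd.period_range("2003Q4", "2012Q3", freq="Q")
df = pd.DataFrame({"quarter": quarters})
df["t"] = np.arange(len(df))                                     # linear time trend
df["post"] = (df["quarter"] >= pd.Period("2010Q2")).astype(int)  # PACT began April 2010
df["t_post"] = df["t"] * df["post"]                              # trend change after PACT
df["season"] = df["quarter"].dt.quarter.astype(str)              # quarterly dummies
df["acsc_rate"] = 18 - 0.05 * df["t"] - 0.15 * df["t_post"] + rng.normal(0, 0.8, len(df))

fit = smf.ols("acsc_rate ~ t + post + t_post + C(season)", data=df).fit()

# Counterfactual: predicted rate had the pre-PACT trend continued (post terms set to 0).
counterfactual = df.assign(post=0, t_post=0)
attributable = (fit.predict(counterfactual) - fit.predict(df))[df["post"] == 1].mean()
print(round(attributable, 2))  # illustrative estimate of the reduction attributable to PACT
```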
The final Pi2 consisted of 53 individual items assigned to the 8 overarching PACT concepts (Table 1). Detailed descriptions of all items and descriptive statistics are provided in eTable 1 in the Supplement. Of the more than 22 000 primary care personnel employed at the time of the survey, 5404 (approximately 25% response rate) from 667 sites of care completed the PACT Primary Care Personnel Survey during the spring of 2012. Between June and December 2012, more than 75 000 veterans who were enrolled in VA primary care completed the CAHPS PCMH module included in the Survey of the Health Experiences of Patients (47% response rate).
Psychometric Properties of Pi2
The Pi2 demonstrated satisfactory levels of internal consistency for total score (Cronbach α = 0.89), access (0.63), continuity (0.67), comprehensiveness (0.81), self-management support (0.68), patient-centered care and communication (0.95), shared decision making (0.75), and team-based care (0.91). Similar to results reported by Scholle et al,16 the care coordination composite had lower internal consistency (0.51). With patient satisfaction as measured by the Survey of the Health Experiences of Patients used as a measure of convergent validity, provider ratings differed between sites with more vs less effective implementation (mean provider rating, 9.05 vs 8.37; P < .001).
Variation in PCMH Adoption
Clinical sites that had implemented PACT more effectively tended to have fewer patients than those that had been less effective (Table 2). However, the type of clinic, mean patient age, percentage of male patients, and mean Elixhauser comorbidity score were similar among all sites irrespective of how well PACT had been implemented.
Associations With Patient and Provider Outcomes
Patient satisfaction was significantly higher among sites that had effectively implemented PACT than among those that had not (range of mean rating for satisfaction with provider, 9.33-7.53; P < .001) (Table 3 and the eFigure in the Supplement). A similarly favorable pattern was observed for staff burnout as measured by the Maslach Burnout Inventory emotional exhaustion subscale (range, 2.29-2.80; P = .02) (Table 3 and the eFigure in the Supplement) but not for the 1-item burnout question.
We observed significant trends in quality of care in relation to the Pi2 score. The 77 sites that achieved the most effective implementation exhibited higher clinical quality outcome measures than less successful sites. There was a statistically significant association (P < .05) between clinical quality outcomes and the Pi2 score for 19 of 48 measures, and better performance was associated with a higher Pi2 score for all but 2. Overall, of all 48 measures, 41 were higher among sites with higher Pi2 scores. The Figure displays the difference in the percentage of patients meeting quality criteria between sites in the highest and lowest quintiles of Pi2 scores. The combined effect estimated from the mixed-effects model showed a significant increase in mean outcomes for facilities with higher Pi2 scores compared with those with lower Pi2 scores (P < .001).
In the comparison of trends across Pi2 scores, veterans with chronic disease who received care at facilities with higher Pi2 scores had small but significant improvements in quality-of-care indicators (eTable 3 in the Supplement). For example, veterans with diabetes who received care at sites with the highest Pi2 scores were more likely to have a hemoglobin A1c value less than 9% (range at high-implementation vs low-implementation sites, 84.0%-81.8%; P = .04) or a low-density lipoprotein cholesterol (LDL-C) level less than 100 mg/dL (to convert to millimoles per liter, multiply by 0.0259) (range, 70.4%-66.0%; P = .03). Veterans with hypertension were more likely to have a blood pressure reading less than 140/90 mm Hg (range, 80.2%-76.9%; P = .02). Among veterans with ischemic heart disease, those at sites with the highest Pi2 scores were more likely to have LDL-C measured (range, 97.0%-94%; P < .001), have a measured LDL-C level less than 100 mg/dL (range, 70.5%-65.3%; P < .001), and have documentation of a prescription for aspirin at their most recent visit (range, 92.9%-89.5%; P = .03). Veterans receiving care at sites that exhibited more successful implementation of PACT were more likely to receive an influenza vaccination (range, 68.5%-64.2%; P < .001 for veterans aged 50-64 years), to be screened for cervical cancer (range, 92.8%-86.7%; P = .047 for women aged 21-64 years), or to be offered medications for tobacco cessation (range, 96.2%-93.4%; P < .001).
The rate of emergency department visits was significantly lower in sites with more effective implementation than in those with less effective implementation (range, 188-245 visits per 1000 patients; P < .001; Table 4 and eFigure in the Supplement). Although the total numbers of hospitalizations in fiscal year 2012 did not differ by level of implementation (Table 4), rates of hospitalization for ACSCs during the 2½-year period after implementation of PACT were lower among sites that had more effectively implemented PACT than among those that were less effective (Table 5). Among sites with Pi2 scores in the highest decile, we estimated that there was a mean reduction of 2.28 admissions for ACSCs per 1000 patients younger than 65 years (a 13.4% decrease) attributable to the PACT initiative compared with a reduction in admissions of only 0.08 for ACSCs (a 3.0% decrease) among sites with less effective implementation. Thus, the estimated reduction in hospitalizations for ACSCs was significantly greater at the more effective sites, although the absolute number of admissions was relatively small (2.8 hospitalizations per 1000 patients). For veterans older than 65 years, the projected changes in admission rates for ACSCs were more modest, and the difference among sites was small. Similar trends of smaller magnitude were noted for all-cause hospitalization.
We constructed the Pi2, a measure to assess progress in implementing PCMH in the VHA, using primary care personnel surveys, patient surveys, and administrative data. The index was favorably and meaningfully associated with important outcomes, including patient satisfaction, staff burnout, quality of care, hospitalizations, and emergency department visits. Patient satisfaction was significantly greater (on the order of a full point higher on a 0-10 scale) and staff burnout lower at sites for which the Pi2 indicated more effective implementation. Despite the overall high level of clinical care provided at the VHA, we found measurable differences between clinics by level of PCMH implementation in terms of the proportion of veterans meeting criteria for multiple measures of quality. In addition, sites with the highest Pi2 scores exhibited modestly lower rates of hospital admission for ACSCs and larger projected decreases in rates of admission after the start of the VHA PACT initiative. These results are consistent with findings from a recent VHA study in which clinic directors’ reports about the medical home indicated that sites with better care coordination and support for transition had lower rates of hospitalizations for ACSCs.33
Previous studies have found a lack of association between measures of structural and care processes in primary care and quality of care34,35 or patient experience.36 Thus, in devising a method to assess the degree of PCMH implementation, we adopted an approach that differed substantially from the widely used NCQA recognition process, incorporating patient-reported measures and administrative data on access and continuity. We chose not to use the NCQA recognition process for several reasons: the administrative burden of determining certification for more than 900 clinics would be prohibitive, and much of the process relies on structural changes that have already been broadly implemented across the VA, thereby diminishing the ability to discriminate among sites. Previous authors have noted that the NCQA measurement for PCMH may not be able to differentiate on quality-of-care measures.10 In contrast, our measure of PCMH implementation detects differences in quality across many clinics. Our data support the notion that the assessment of PCMH needs to include both patient-level and practice-level infrastructure measures.5,6,37
The VHA has a long-standing investment in infrastructure considered a baseline prerequisite for a functioning PCMH, including a robust quality improvement and performance system.14 Our data are consistent with previous reports of the high quality of clinical care provided at the VHA.14,38 Paradoxically, this high baseline of quality makes it more difficult to demonstrate improvements than would probably be the case in other health systems where implementation of PCMH has been evaluated and the baseline quality of care was substantially lower based on comparable measures.2 The favorable findings from our analysis may partly reflect the extensive improvements in clinical and resource infrastructure that the VHA has made since the 1990s, such as the use of electronic prescribing and a universal electronic health record and deployment of pharmacists, dieticians, social workers, and mental health professionals in many primary care clinics.
These analyses have several limitations. First, several of the domain scores rely on self-reported data, which are subject to biases, including response bias and framing bias. However, we used well-validated measures, augmented these data with important team-based care domains from our primary care personnel survey, and used patient report to capture comprehensiveness of care that other measures may not include.15,39 Second, our primary care personnel survey had a low response rate, but our results were consistent with those from other surveys of primary care providers40,41 and with those from a survey of all VA employees. Third, the cross-sectional design of the study did not permit assessing change over time, although this is planned in future studies.
We found that the Pi2 score was favorably associated with patient satisfaction, staff burnout, quality of care, and use of health care services. Our results may apply only to large integrated health systems that include a robust and integrated electronic health record and a well-developed quality improvement system that provides feedback to clinics and providers. All primary care providers in the VHA, for example, have ready access to detailed information about their patient panels, including the likelihood of admission or death (updated weekly), as well as patients’ use of a range of inpatient, outpatient, and care coordination services.42 However, as accountable care organizations evolve, this type of patient-centered measurement could be adopted by other large integrated health systems.
Accepted for Publication: April 25, 2014.
Corresponding Author: Karin M. Nelson, MD, MSHS, Seattle Center of Innovation for Veteran-Centered and Value-Driven Care, VA Puget Sound Health Care System, 1100 Olive Way, Ste 1400, Seattle, WA 98108 (karin.nelson@va.gov).
Published Online: June 23, 2014. doi:10.1001/jamainternmed.2014.2488.
Author Contributions: Dr Nelson had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Study concept and design: Nelson, Helfrich, Hebert, Dolan, Wong, Hernandez, Schectman, Stark, Fihn.
Acquisition, analysis, or interpretation of data: All authors.
Drafting of the manuscript: Nelson, Helfrich, Sanders, Fihn.
Critical revision of the manuscript for important intellectual content: Nelson, Helfrich, Sun, Hebert, Liu, Dolan, Taylor, Wong, Maynard, Hernandez, Sanders, Randall, Curtis, Schectman, Stark, Fihn.
Statistical analysis: Nelson, Sun, Hebert, Liu, Dolan, Taylor, Wong, Maynard, Sanders, Randall.
Obtained funding: Fihn.
Administrative, technical, or material support: Nelson, Helfrich, Dolan, Hernandez, Sanders, Randall, Curtis, Schectman, Stark, Fihn.
Study supervision: Nelson, Schectman, Fihn.
Conflict of Interest Disclosures: None reported.
Funding/Support: This work was supported by the VHA Office of Patient Care Service.
Role of the Sponsor: This study was conducted as part of VHA Health Care Operations in accordance with VHA Handbooks 1605.1 and 1605.2. The study team had full responsibility for the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation of the manuscript. The study was reviewed through normal administrative channels.
Additional Contributions: Data for this report were developed by the national evaluation team at the PACT Demonstration Lab Coordinating Center and the VHA Office of Analytics and Business Intelligence. The VHA Office of Primary Care Operations is responsible for PACT implementation, and the VHA Office of Patient Care Services is responsible for the PACT Demonstration Lab program. John Messina, BA, was paid to assist with data acquisition and administrative support.
Previous Presentation: This study was presented at the Society for General Internal Medicine Annual Meeting; April 25, 2014; San Diego, California. This study was also presented at the Academy Health National Meeting; June 10, 2014; San Diego, California.
References
1. Stange KC, Nutting PA, Miller WL, et al. Defining and measuring the patient-centered medical home. J Gen Intern Med. 2010;25(6):601-612.
2. Werner RM, Duggan M, Duey K, Zhu J, Stuart EA. The patient-centered medical home: an evaluation of a single private payer demonstration in New Jersey. Med Care. 2013;51(6):487-493.
4. Liss DT, Fishman PA, Rutter CM, et al. Outcomes among chronically ill adults in a medical home prototype. Am J Manag Care. 2013;19(10):e348-e358.
5. Day J, Scammon DL, Kim J, et al. Quality, satisfaction, and financial efficiency associated with elements of primary care practice transformation: preliminary findings. Ann Fam Med. 2013;11(suppl 1):S50-S59.
6. Jaén CR, Ferrer RL, Miller WL, et al. Patient outcomes at 26 months in the patient-centered medical home National Demonstration Project. Ann Fam Med. 2010;8(suppl 1):S57-S67, S92.
7. Reid RJ, Fishman PA, Yu O, et al. Patient-centered medical home demonstration: a prospective, quasi-experimental, before and after evaluation. Am J Manag Care. 2009;15(9):e71-e87.
8. Jackson GL, Powers BJ, Chatterjee R, et al. The patient-centered medical home: a systematic review. Ann Intern Med. 2013;158(3):169-178.
9. Solberg LI, Asche SE, Fontaine P, Flottemesch TJ, Anderson LH. Trends in quality during medical home transformation. Ann Fam Med. 2011;9(6):515-521.
10. Solberg LI, Asche SE, Fontaine P, Flottemesch TJ, Pawlson LG, Scholle SH. Relationship of clinic medical home scores to quality and patient experience. J Ambul Care Manage. 2011;34(1):57-66.
11. Rosland AM, Nelson K, Sun H, et al. The patient-centered medical home in the Veterans Health Administration. Am J Manag Care. 2013;19(7):e263-e272.
12. Berenson RA, Hammons T, Gans DN, et al. A house is not a home: keeping patients at the center of practice redesign. Health Aff (Millwood). 2008;27(5):1219-1230.
14. Jha AK, Perlin JB, Kizer KW, Dudley RA. Effect of the transformation of the Veterans Affairs Health Care System on the quality of care. N Engl J Med. 2003;348(22):2218-2227.
15. Burton RA, Devers KJ, Berenson RA. Patient-centered medical home recognition tools: a comparison of ten surveys’ content and operational details. Washington, DC: Urban Institute; 2012. http://www.urban.org/publications/412338.html. Accessed February 1, 2014.
16. Scholle SH, Vuong O, Ding L, et al. Development of and field test results for the CAHPS PCMH Survey. Med Care. 2012;50(suppl):S2-S10.
17. Wright SM, Craig T, Campbell S, Schaefer J, Humble C. Patient satisfaction of female and male users of Veterans Health Administration services. J Gen Intern Med. 2006;21(3)(suppl 3):S26-S32.
18. Helfrich CD, Dolan ED, Simonetti J, et al. Elements of team-based care in a patient-centered medical home are associated with lower burnout among VA primary care employees. J Gen Intern Med. 2014. doi:10.1007/s11606-013-2702-z.
19. Shen Y, Hendricks A, Zhang S, Kazis LE. VHA enrollees’ health care coverage and use of care. Med Care Res Rev. 2003;60(2):253-267.
20. Perlin JB, Kolodner RM, Roswell RH. The Veterans Health Administration: quality, value, accountability, and information as transforming strategies for patient-centered care. Am J Manag Care. 2004;10(11, pt 2):828-836.
21. Schaufeli WB, Enzmann D, Girault N. Measurement of burnout: a review. In: Schaufeli WB, ed. Professional Burnout: Recent Developments in Theory and Research. Philadelphia, PA: Taylor & Francis; 1993:199-215.
22. Wheeler DL, Vassar M, Worley JA, Barnes LL. A reliability generalization meta-analysis of coefficient alpha for the Maslach Burnout Inventory. Educ Psychol Meas. 2011;71(1):231-244. doi:10.1177/0013164410391579.
23. Maslach C, Jackson SE. The measurement of experienced burnout. J Organiz Behav. 1981;2(2):99-113. doi:10.1002/job.4030020205.
24. Rohland BM, Kruse GR, Rohrer JE. Validation of a single-item measure of burnout against the Maslach Burnout Inventory among physicians. Stress Health. 2004;20(2):75-79. doi:10.1002/smi.1002.
25. Leiter MP, Shaughnessy K. The areas of worklife model of burnout: tests of mediation relationships. Ergonomia. 2006;28:327-341.
26. Goulet JL, Erdos J, Kancir S, et al. Measuring performance directly using the Veterans Health Administration electronic medical record: a comparison with external peer review. Med Care. 2007;45(1):73-79.
28. Elixhauser A, Steiner C, Harris DR, Coffey RM. Comorbidity measures for use with administrative data. Med Care. 1998;36(1):8-27.
30. Benjamini Y, Yekutieli D. The control of the false discovery rate in multiple testing under dependency. Ann Stat. 2001;29:1165-1188.
31. Hebert PL, Wong ES, Hernandez SE, et al. The economic effects and return on investment of the Veterans Health Administration’s Patient Centered Home Initiative, 2010 through 2012. Health Aff. In press.
32. Liu CF, Chapko M, Bryson CL, et al. Use of outpatient care in Veterans Health Administration and Medicare among veterans receiving primary care in community-based and hospital outpatient clinics. Health Serv Res. 2010;45(5, pt 1):1268-1286.
33. Yoon J, Rose DE, Canelo I, et al. Medical home features of VHA primary care clinics and avoidable hospitalizations. J Gen Intern Med. 2013;28(9):1188-1194.
34. Holmboe ES, Arnold GK, Weng W, Lipner R. Current yardsticks may be inadequate for measuring quality improvements from the medical home. Health Aff (Millwood). 2010;29(5):859-866.
35. Friedberg MW, Safran DG, Coltin KL, Dresser M, Schneider EC. Readiness for the patient-centered medical home: structural capabilities of Massachusetts primary care practices. J Gen Intern Med. 2009;24(2):162-169.
36. Martsolf GR, Alexander JA, Shi Y, et al. The patient-centered medical home and patient experience. Health Serv Res. 2012;47(6):2273-2295.
37. Gray BM, Weng W, Holmboe ES. An assessment of patient-based and practice infrastructure-based measures of the patient-centered medical home: do we need to ask the patient? Health Serv Res. 2012;47(1, pt 1):4-21.
38. Kerr EA, Gerzoff RB, Krein SL, et al. Diabetes care quality in the Veterans Affairs Health Care System and commercial managed care: the TRIAD study. Ann Intern Med. 2004;141(4):272-281.
39. Birnberg JM, Drum ML, Huang ES, et al. Development of a safety net medical home scale for clinics. J Gen Intern Med. 2011;26(12):1418-1425.
40. Shanafelt TD, Boone S, Tan L, et al. Burnout and satisfaction with work-life balance among US physicians relative to the general US population. Arch Intern Med. 2012;172(18):1377-1385.
41. Lewis SE, Nocon RS, Tang H, et al. Patient-centered medical home characteristics and staff morale in safety net clinics. Arch Intern Med. 2012;172(1):23-31.
42. Wang L, Porter B, Maynard C, et al. Predicting risk of hospitalization or death among patients receiving primary care in the Veterans Health Administration. Med Care. 2013;51(4):368-373.