Figure 1.
Flow Diagram of Indicator Development and Ratification

aAcceptability, feasibility, and impact were assessed by reviewers scoring them as yes, no, or not applicable. Acceptability refers to the relevance of the indicator to Australian health care in 2012 and 2013; feasibility refers to the frequency of presentation and the likelihood of documentation; and impact refers to the influence of the recommended action on patient experience, safety, or effectiveness.

Figure 2.
Sample Distribution

Populations aged 15 years and younger, as reported in the footnotes, are as estimated on December 31, 2012 (Australian Bureau of Statistics; Australian Demographic Statistics, series 3101). The percentage of the population that was metropolitan, as reported in the footnotes, was calculated from population estimates (aged ≤15 years) supplied by state departments of health. Each square and circular pin identifies a health district that was sampled within the metropolitan and regional strata; pins in the regional strata are placed approximately at the center of the sampled health district to prevent identification of individual sites. Numbers in squares and circular pinheads are the sum of general practitioners (GPs), pediatricians, and nontertiary hospitals recruited in a health district, except for 8 pediatricians (shown with an asterisk) all recruited from metropolitan South Australia (see Figure 3, footnote g). Triangular pins mark the approximate location of tertiary pediatric hospitals, and the number in each triangle indicates the number of tertiary hospitals in that location.

aQueensland: population aged ≤15 years, 976 821; percentage of population that was metropolitan, 66%; total recruited: 35 GPs, 4 pediatricians, and 12 hospitals.

bSouth Australia: population aged ≤15 years, 314 511; percentage of population that was metropolitan, 68%; total recruited: 28 GPs, 8 pediatricians, and 7 hospitals.

cNew South Wales: population aged ≤15 years, 1 479 680; percentage of population that was metropolitan, 70%; total recruited: 22 GPs, 8 pediatricians, and 15 hospitals.

Figure 3.
Sampling Structure

Health district refers to local health district in New South Wales, hospital health service in Queensland, and local health network in South Australia. GP indicates general practitioner.

aMetropolitan and regional strata are geographically defined; tertiary pediatric hospitals were sampled outside of this classification as they have statewide responsibility; and 5 of the 6 tertiary hospitals were physically located within metropolitan strata.

bNumber of health districts or tertiary hospitals selected; 1 of the 6 tertiary pediatric hospitals was located within a selected health district.

cNumber of sites of each type successfully recruited within the metropolitan or regional strata or among the tertiary pediatric hospitals.

dFive health districts were excluded: 4 were ineligible due to lack of a hospital with sufficient patient volumes, and 1 was excluded due to remoteness. Together they comprised 7.5% of the regional population aged 15 years and younger.

eOne health district was excluded as ineligible due to lack of a hospital with sufficient patient volumes; it contained 32.2% of the metropolitan population aged 15 years and younger.

fTwo health districts were randomly selected in regional Queensland initially. One, which contained 2 eligible hospitals, was removed because neither hospital responded to recruitment efforts; 2 other districts, each containing 1 eligible hospital, were nonrandomly selected to replace the lost district.

gThe study was unable to recruit any pediatricians in the eligible health districts in South Australia; all 8 pediatricians were therefore recruited from a health district that was not eligible for selection because it lacked a hospital with the required patient volumes.

Table 1.  
Exemplars of Quality of Care Indicators and Characteristics
Table 2.  
Number of Indicators by Condition, Overall and by Indicator Characteristic
Table 3.  
Characteristics of the Study Sample and Australian Population, 2012-2013, for Children and for Health Care Visits
Table 4.  
Quality of Care by Clinical Condition, 2012-2013
Table 5.  
Quality of Care by Indicator Characteristics, 2012-2013
Original Investigation
March 20, 2018

Quality of Health Care for Children in Australia, 2012-2013

Author Affiliations
  • 1Centre for Healthcare Resilience and Implementation Science, Australian Institute of Health Innovation, Macquarie University, Sydney, New South Wales, Australia
  • 2Centre for Population Health Research, Sansom Institute for Health Research, The University of South Australia, Adelaide, South Australia, Australia
  • 3Australian Patient Safety Foundation, Adelaide, South Australia, Australia
  • 4School of Women’s and Children’s Health, University of New South Wales, Sydney, New South Wales, Australia
  • 5Department of Respiratory Medicine, Sydney Children’s Hospital, Sydney Children’s Hospital Network, Randwick, New South Wales, Australia
  • 6Discipline of Paediatrics, School of Women’s and Children’s Health, University of New South Wales, Sydney, New South Wales, Australia
  • 7Sydney Children’s Hospital, Sydney Children’s Hospital Network, Randwick, New South Wales, Australia
  • 8Kids Research Institute, Sydney Children’s Hospital Network, Westmead, New South Wales, Australia
  • 9Sydney Medical School, University of Sydney, Sydney, New South Wales, Australia
  • 10Centre for Primary Health Care and Equity, Faculty of Medicine, University of New South Wales, Sydney, New South Wales, Australia
  • 11Children’s Health Queensland Hospital and Health Service, Herston, Queensland, Australia
  • 12Division of Paediatric Medicine, Women’s and Children’s Hospital, North Adelaide, South Australia, Australia
  • 13Russell Clinic, Blackwood, South Australia, Australia
  • 14Australian Commission on Safety and Quality in Health Care, Sydney, New South Wales, Australia
  • 15Southern Adelaide Local Health Network, Bedford Park, South Australia, Australia
  • 16New South Wales Ministry of Health, North Sydney, New South Wales, Australia
  • 17Clinical Excellence Division, Queensland Department of Health, Brisbane, Queensland, Australia
  • 18International Society for Quality in Health Care, Dublin, Ireland
  • 19Bupa Health Foundation Australia, Sydney, New South Wales, Australia
  • 20New South Wales Agency for Clinical Innovation, Chatswood, New South Wales, Australia
  • 21Clinical Excellence Commission, Sydney, New South Wales, Australia
  • 22London School of Hygiene and Tropical Medicine, London, United Kingdom
  • 23World Health Organization, Geneva, Switzerland
  • 24The University of Warwick, Coventry, United Kingdom
  • 25Cincinnati Children’s Hospital Medical Center, Cincinnati, Ohio
JAMA. 2018;319(11):1113-1124. doi:10.1001/jama.2018.0162
Key Points

Question  Is health care for children in Australia consistent with quality standards?

Findings  In this study of 6689 Australian children aged 15 years and younger, a comparison of clinical records against quality indicators for 17 important child health conditions, such as asthma and type 1 diabetes, estimated that overall adherence was 59.8%, with substantial variation across conditions.

Meaning  For many important child health conditions, the quality of care in Australia may not be optimal.

Abstract

Importance  The quality of routine care for children is rarely assessed, and then usually in single settings or for single clinical conditions.

Objective  To estimate the quality of health care for children in Australia in inpatient and ambulatory health care settings.

Design, Setting, and Participants  Multistage stratified sample with medical record review to assess adherence with quality indicators extracted from clinical practice guidelines for 17 common, high-burden clinical conditions (noncommunicable [n = 5], mental health [n = 4], acute infection [n = 7], and injury [n = 1]), such as asthma, attention-deficit/hyperactivity disorder, tonsillitis, and head injury. For these 17 conditions, 479 quality indicators were identified, with the number varying by condition, ranging from 9 for eczema to 54 for head injury. Four hundred medical records were targeted for sampling for each of 15 conditions while 267 records were targeted for anxiety and 133 for depression. Within each selected medical record, all visits for the 17 targeted conditions were identified, and separate quality assessments made for each. Care was evaluated for 6689 children 15 years of age and younger who had 15 240 visits to emergency departments, for inpatient admissions, or to pediatricians and general practitioners in selected urban and rural locations in 3 Australian states. These visits generated 160 202 quality indicator assessments.

Exposures  Quality indicators were identified through a systematic search of local and international guidelines. Individual indicators were extracted from guidelines and assessed using a 2-stage Delphi process.

Main Outcomes and Measures  Quality of care for each clinical condition and overall.

Results  Of 6689 children with surveyed medical records, 53.6% were aged 0 to 4 years and 55.5% were male. Adherence to quality of care indicators was estimated at 59.8% (95% CI, 57.5%-62.0%; n = 160 202) across the 17 conditions, ranging from a high of 88.8% (95% CI, 83.0%-93.1%; n = 2638) for autism to a low of 43.5% (95% CI, 36.8%-50.4%; n = 2354) for tonsillitis. The mean adherence by condition category was estimated as 60.5% (95% CI, 57.2%-63.8%; n = 41 265) for noncommunicable conditions (range, 52.8%-75.8%); 82.4% (95% CI, 79.0%-85.5%; n = 14 622) for mental health conditions (range, 71.5%-88.8%); 56.3% (95% CI, 53.2%-59.4%; n = 94 037) for acute infections (range, 43.5%-69.8%); and 78.3% (95% CI, 75.1%-81.2%; n = 10 278) for injury.

Conclusions and Relevance  Among a sample of children receiving care in Australia in 2012-2013, the overall prevalence of adherence to quality of care indicators for important conditions was not high. For many of these conditions, the quality of care may be inadequate.

Introduction

Relatively little is known about the quality of care provided across modern health systems. Knowledge of care quality is limited to targeted studies in some countries1,2; small numbers of, or single, conditions3; or particular settings.4 Previous population-level studies of adults in the United States1 and Australia2 estimated a prevalence of adherence to clinical practice guidelines (CPGs) of 55% and 57%, respectively. In child health, a large US study of multiple conditions in children remains the benchmark.5 That study, published a decade ago, examined ambulatory care delivered between 1998 and 2000 for 11 conditions in 12 metropolitan settings, and estimated adherence of 47%.

The purpose of this study was to estimate the prevalence of quality care, as measured by adherence to CPG recommendations, by undertaking a population-based study of care received by Australian pediatric patients aged 15 years or younger in 2012 and 2013.

Methods

The CareTrack Kids study methods have been published elsewhere.6,7 Briefly, this study audited medical records of children aged 0 to 15 years on the date of visit, in 2012 and 2013, across 4 health care settings: general practices, pediatricians’ offices in the community, hospital emergency departments (EDs), and hospital inpatient settings.

This study developed a facility-based recruitment and selection strategy to maximize efficiency and condition-level sample sizes, customizing methods for selecting indicators, sampling sites, and analyzing data. Seventeen child health conditions were identified on the basis of published research,8,9 burden of disease,10 frequency of presentation, and national priority areas.11-13 The 17 conditions are listed in Table 1 and organized into 4 categories: noncommunicable (n = 5), mental health (n = 4), acute infection (n = 7), and injury (n = 1). These included high prevalence conditions, such as asthma, which affects 10% of Australian children,12 and gastroesophageal reflux, a normal physiological condition in infants that needs to be distinguished from a variety of disease states. Also included were important lower-prevalence conditions such as type 1 diabetes.

Ethical Approval

Ethics approval was obtained from hospital networks and individual hospitals in each sampled state, and the Royal Australian College of General Practitioners. Australian human research ethics committees can waive requirements for patient consent for external access to medical records if the study entails minimal risk to facilities, clinicians, and patients; all relevant bodies provided this waiver. Ethical approvals for this study do not permit reporting of overall performance by health care setting. Participants were protected from litigation by gaining statutory immunity for this study as a quality assurance activity from the Federal Minister for Health under Part VC of the Australian Health Insurance Act 1973.

Development and Ratification of Clinical Indicators

The development and ratification of quality indicators is depicted in Figure 1. A modified RAND-UCLA method was applied to develop the indicators,14 commencing with a systematic search for Australian and international CPGs. Recommendations were extracted from 99 CPGs. A total of 1266 recommendations were screened for eligibility, and 322 were excluded for 1 or more of 4 reasons: (1) weak strength of wording (eg, “may” and “could”); (2) low likelihood of the information being documented (eg, standard operating procedures such as temperature measurement); (3) guiding statements without recommended actions (eg, general information such as “consideration should be given to” or “be aware that”); and (4) “structure-level” recommendations (eg, training requirements for health care professionals).15 The 944 remaining recommendations were grouped into a standardized indicator format. After consolidation of similar recommendations, 385 were available for review.6 These recommendations were categorized by the phase of care being addressed by the indicator (diagnosis, treatment, or ongoing management) and the type of quality of care addressed (underuse: actions that are recommended but not undertaken; overuse: actions that are not indicated or are contraindicated).

In total, 146 experts (104 pediatricians, 22 general practitioners, 11 psychiatrists, 5 psychologists, and 4 nurses) were recruited to undertake internal and external reviews.16 An expert coordinator was appointed to lead the reviews for each condition. Proposed indicators were ratified by experts over a 2-stage, multiround modified Delphi process, comprising an email-based 3-round internal review and an online, wiki-based 2-round external review.6 Internal reviewers (n = 55) were recruited from the research team’s professional networks, while external reviewers (n = 91) were sourced through targeted advertisements and open to all qualified applicants. Reviewers completed a conflict of interest declaration6,17 and worked independently to minimize groupthink.18

For the internal review, experts scored each of the 385 recommendations against 3 criteria (acceptability, feasibility, and impact, scored as yes/no or not applicable)6 to guide their decision to include or exclude a recommendation, and they provided additional comments. Feedback was deidentified, collated, and used to revise recommendations between rounds. Internal review resulted in the removal of 162 recommendations, by majority decision, leaving 223 for external review.

External reviewers applied the same scoring criteria as internal reviewers and used a 9-point Likert scale to score each recommendation as representative of quality care delivered to Australian children during 2012 and 2013.6,14 A mean score of 7 or more was required for retention of the item; by the end of external review, 196 recommendations remained.
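
As a concrete illustration of this retention rule, the sketch below (Python; the scores are hypothetical, not study data) keeps a recommendation only when its mean 9-point Likert score across external reviewers is 7 or higher.

```python
# Hypothetical reviewer scores on the 9-point Likert scale.
ratings = {
    "rec_001": [8, 9, 7, 8],   # mean 8.0 -> retained
    "rec_002": [5, 6, 7, 4],   # mean 5.5 -> excluded
}

# Retain a recommendation only if its mean score is 7 or more.
retained = [rec for rec, scores in ratings.items()
            if sum(scores) / len(scores) >= 7]
print(retained)  # ['rec_001']
```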

A single CPG recommendation was frequently separated into multiple quality indicators. For example, 1 recommendation relating to the treatment of children with depression required that they receive information about evidence-based management and be offered community support. This generated 2 quality indicators: one for provision of information about evidence-based management and another for the offer of community support. The 196 retained recommendations generated 479 indicator questions, which were grouped to create 17 condition-specific surveys; abdominal pain, for example, had 21 quality indicators, while fever had 47. Examples of indicators are shown in Table 1, with a full listing in eTable 1 in the Supplement. Further examples of translating CPG recommendations into study indicators are shown in eTable 2 and eAppendix 1 in the Supplement. Of the 479 indicator questions, 356 (74.3%) did not have an evidence level or grade of strength of recommendation specified in the CPGs.

Sample Size

A survey was defined as the aggregated set of condition-specific indicators assessed for each visit. For inpatient care, a visit was defined as an occasion of admitted care; for ED care, a single presentation; and for general practice (GP) and general (not subspecialty) pediatrician care, a consultation. A minimum of 400 medical record reviews per condition was required to obtain national estimates with 95% CIs and precision of ±5%. A pilot study did not contain sufficient clusters to provide an accurate estimate of the intracluster correlation coefficient, so the design effect could not be prespecified.
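
The 400-record target is consistent with the standard sample-size formula for estimating a proportion. A minimal sketch follows, assuming the conventional worst-case proportion p = 0.5 (an assumption, not stated in the text).

```python
from math import ceil

z, p, d = 1.96, 0.5, 0.05            # 95% confidence, worst-case p, ±5% precision
n = ceil(z**2 * p * (1 - p) / d**2)  # n = z^2 * p(1-p) / d^2
print(n)  # 385, consistent with the study's rounded target of 400 records
```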

Sampling targeted 400 medical records for each of 15 conditions, with anxiety and depression assigned 267 and 133 records, respectively. Anxiety and depression were initially conceptualized as a single condition for sampling purposes, because they were often discussed together in CPGs, and were allocated 400 records. During implementation, this allocation was divided in proportion to the expected prevalence of each condition; as a result, lower precision was anticipated for these conditions.

For medical records containing multiple occasions of care for a condition, a separate survey of care quality was made for each occasion. If a record sampled for one condition contained occasions of care for other conditions, a separate condition-specific survey was undertaken for each visit, for each other condition. If 2 or more conditions were cared for during a single visit, each condition was separately surveyed. Based on the pilot study, it was anticipated that loss of precision due to design effects would be partially offset by additional surveys generated by this secondary sampling.

Sampling Process

A multistage stratified random sampling process was applied. For logistical efficiency, 3 states were sampled: Queensland, New South Wales, and South Australia, which together comprised 60.0% of the estimated Australian population aged 15 years and younger on December 31, 2012 (Figure 2). Australian geographical localities are classified into remoteness categories (major cities, inner and outer regional areas, and remote and very remote areas).19 Remote and very remote regions accounted for 86% of the Australian land area and 2.3% of the population; the figures were slightly lower in the sampled states (81% of the area and 1.7% of the population) than in the nonsampled states and territories (91% of the area and 3.2% of the population).19,20

Each state’s local department of health delivers health services through administrative units (referred to as health districts), and designates these as metropolitan or regional (Figure 3). Six pediatric tertiary hospitals providing statewide coverage were sampled outside this metropolitan/regional designation and were considered a third stratum.

Health districts that contained at least 1 hospital with 2000 or more ED presentations and 500 or more pediatric inpatient discharges per year were eligible for selection. One of the 3 metropolitan health districts in South Australia, containing 32.2% of the metropolitan target population, was ineligible. Four health districts, all from regional Queensland, were also ineligible, and a fifth health district from regional Queensland was excluded due to remoteness, for logistical reasons, prior to district selection. Together, these 5 health districts contained 7.5% of the regional target population. All New South Wales health districts were eligible.

In South Australia, the regional stratum functioned as a single health district and the metropolitan stratum only contained 2 eligible districts; all 3 were selected. The study was unable to recruit any pediatricians in the eligible health districts; all pediatricians were recruited from the third (ineligible) metropolitan district, where they were clustered.

In Queensland and New South Wales, 2 eligible health districts were selected within each stratum, using equal probability sampling. One of the 2 districts randomly selected in regional Queensland, containing 2 hospitals, was removed because neither hospital responded to recruitment efforts; 2 other health districts, each containing 1 eligible hospital, were nonrandomly selected as a replacement. For additional detail on selection of health districts and hospitals, see eAppendix 2 in the Supplement.

Recruitment of Hospitals, GPs, and Pediatricians and Selection of Records

Recruitment within selected health districts was by direct mail, telephone, and face-to-face contact by study investigators, clinical peers, and study surveyors. GPs and pediatricians were recruited through advertising, internet searches, and personal contacts. Recording of recruitment, nonresponses, and refusals for GPs and pediatricians was decentralized, and records were unavailable after decommissioning of project laptops, so response rates cannot be precisely calculated. For GPs, recovered data from email communications were available for South Australia, and the recruitment rate was estimated at 24%. For pediatricians, recovered data were available in all states, and the recruitment rate was estimated at 25%. See eAppendix 2 in the Supplement for additional details.

All hospitals with the minimum patient volumes were targeted; 34 of 37 eligible hospitals approached (92%) agreed to participate, with 34 providing ED data and 31 providing inpatient data. Recruited hospitals were estimated to be responsible for 40% of all ED visits in the 3 sampled states, and 41% of all inpatient visits.

Within selected sites, a random sample of medical records for each condition was sought. For hospitals and GPs, eligible record identifiers for each condition were loaded into a Microsoft Excel spreadsheet, arranged randomly, and selected consecutively; for pediatricians, selection was performed on site by the surveyor, with instructions to select randomly. The process is described in eTable 3 in the Supplement, which lists the International Classification of Diseases and Systematized Nomenclature of Medicine codes used to identify eligible medical records in hospitals. Records were mostly electronic for GPs and hospitals, and paper-based for pediatricians.
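
The shuffle-then-take-consecutively step used for hospitals and GPs amounts to simple random sampling; a minimal sketch with hypothetical record identifiers:

```python
import random

eligible_ids = [f"MRN-{i:05d}" for i in range(1, 1001)]  # hypothetical identifier pool
target = 400

random.shuffle(eligible_ids)       # arrange randomly
selected = eligible_ids[:target]   # select consecutively from the top
```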

Surveyors

Nine surveyors, all experienced registered pediatric nurses, were engaged across the 3 states and underwent 5 days of training and competency assessment. Medical records for selected visits in 2012 and 2013 were reviewed on site at each participating facility during March to October 2016. Because participating sites were separated by up to 2000 miles, assessing interrater reliability on actual records was not feasible. Instead, mock records were assessed during the surveying task by 6 of the 9 surveyors (2 had already terminated employment and 1 was excluded because their assessments may not have been made independently) and their results compared. A good level of agreement was found: κ = 0.76 (95% CI, 0.75-0.77; n = 1895) for the child’s eligibility for indicator assessment, and κ = 0.71 (95% CI, 0.69-0.73; n = 1009) for indicator assessment.
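
For reference, an agreement statistic of this kind can be computed as Cohen's κ on paired assessments; the sketch below uses hypothetical yes/no ratings from two surveyors, not study data.

```python
from sklearn.metrics import cohen_kappa_score

surveyor_a = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
surveyor_b = ["yes", "yes", "no", "no",  "no", "no", "yes", "yes"]

# Chance-corrected agreement between the two raters.
kappa = cohen_kappa_score(surveyor_a, surveyor_b)
print(round(kappa, 2))  # 0.75 for these toy ratings
```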

Data Collection and Analysis

An electronic data-collection tool,2 incorporating indicators and recorded surveyor decisions, was adapted for the study. The tool included built-in filters to remove indicators that were not relevant to the child because of age or setting. For example, when assessing a GP visit by a 5-year-old child, indicators for children aged younger than 3 years were filtered, as were indicators restricted to ED presentations. Patients’ age and sex data, but not race/ethnicity and socioeconomic status data, were collected.
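
A minimal sketch of this kind of eligibility filter follows (field names and indicator IDs are hypothetical; the actual tool's data model is not described in the text).

```python
indicators = [
    {"id": "FEV03", "max_age": 2,  "settings": {"GP", "ED"}},  # children <3 y only
    {"id": "FEV21", "max_age": 15, "settings": {"ED"}},        # ED presentations only
    {"id": "FEV07", "max_age": 15, "settings": {"GP", "ED"}},
]

def eligible(ind, age, setting):
    # Keep an indicator only if the child's age and the visit setting match.
    return age <= ind["max_age"] and setting in ind["settings"]

# A GP visit by a 5-year-old: the first two indicators are filtered out.
to_assess = [i["id"] for i in indicators if eligible(i, age=5, setting="GP")]
print(to_assess)  # ['FEV07']
```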

A surveyor manual provided definitions, inclusion and exclusion criteria, and guidance for assessing indicator eligibility. Surveyors assessed adherence with each indicator as “yes” (care provided was consistent with the indicator), “no” (inclusion criteria met, but no documented compliance action performed), or “not applicable” (the indicator was not eligible for assessment).

For each setting, survey or register-derived data were used to estimate the proportion of visits by condition.21-24 Visits per condition were thereby estimated for each health care site, and sampling weights calculated (in the Supplement, eAppendix 4 and the eFigure detail the procedure and show the conceptual model for the survey, eTables 4-8 list the codes used to identify visits in each health care setting, and eTable 9 summarizes the level at which sampling fractions were calculated for inpatient visits in tertiary hospitals). The weights adjust for oversampling of settings and conditions.
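
A minimal sketch of the weighting idea, assuming the conventional inverse-probability form in which each sampled visit stands for the estimated number of population visits it represents; the study's exact procedure is the one detailed in eAppendix 4, and the numbers here are hypothetical.

```python
def sampling_weight(estimated_visits_at_site: float, visits_sampled: int) -> float:
    # Weight = estimated visits in the population / visits actually sampled.
    return estimated_visits_at_site / visits_sampled

w = sampling_weight(estimated_visits_at_site=1200, visits_sampled=40)
print(w)  # 30.0: each sampled visit represents about 30 population visits
```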

The maximum number of assessable quality indicators ranged from 9 for eczema to 54 for head injury. Table 2 summarizes the number of indicators by condition in total, and by type of quality of care and phase of care. At indicator and condition level, the proportion adherent to underuse indicators was calculated as the total number of yes responses divided by the total number of eligible responses, using sample weights; adherence to overuse indicators was similarly calculated, after first reversing no and yes responses. The overall assessment of care quality was the weighted mean of the 17 condition-level assessments. The overall condition category assessments were weighted averages of the included conditions.
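
The adherence calculation can be illustrated directly from this description. In the sketch below (hypothetical data), overuse indicators have their yes/no responses reversed before the weighted proportion of adherent responses is taken.

```python
def weighted_adherence(responses, weights, indicator_type="underuse"):
    if indicator_type == "overuse":
        # For overuse indicators, "no" (action not performed) is adherent.
        responses = ["yes" if r == "no" else "no" for r in responses]
    adherent = sum(w for r, w in zip(responses, weights) if r == "yes")
    return adherent / sum(weights)

print(weighted_adherence(["yes", "no", "yes"], [30.0, 30.0, 15.0]))  # 0.6
print(weighted_adherence(["yes", "no"], [10.0, 10.0], "overuse"))    # 0.5
```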

Data were analyzed in SAS/STAT software version 9.4 (SAS Institute) using the SURVEYFREQ procedure. Variance was estimated by Taylor series linearization. At condition level, state and health care setting were specified as strata or pseudostrata, and the primary sampling unit (health district) was specified as the clustering unit to account for clustering at all levels. For the overall assessment of adherence with indicators, the overall condition category assessments, and the analysis by indicator characteristics, condition was added as a stratum. Exact 95% CIs were generated using the modified Clopper-Pearson method. Domain analysis was applied to assessments of indicator characteristics (eAppendix 5 in the Supplement).
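
As a simplified reference point, the standard (unmodified) Clopper-Pearson exact interval for an unweighted proportion can be computed from beta-distribution quantiles; the study's modified version for weighted survey data is more involved and is not reproduced here.

```python
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    # Exact binomial CI from beta quantiles at alpha/2 and 1 - alpha/2.
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

lo, hi = clopper_pearson(k=240, n=400)  # 60% observed adherence
print(f"95% CI: {lo:.3f}-{hi:.3f}")     # roughly 0.55-0.65
```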

Results
Characteristics of Surveyed Medical Records

The 6689 children in this study received care for 1 to 7 separate clinical conditions (median = 1), had a total of 1 to 19 visits in which 1 or more indicators were assessed (median = 2), and had 1 to 232 indicator assessments (median = 18). A single child, for example, may have had 3 visits to a GP for targeted conditions in 2012 and 2013, 2 for asthma management, and 1 for acute abdominal pain, with 42 care quality indicators assessed across the 3 visits. Table 3 compares the age and sex composition of this study population with that of all Australia, separately for children (median age, 4 years and 55.5% male in the sample vs 7 years and 51.3% male in Australia) and for occasions of health care provided to children (median age, 3 years and 56.2% male in the sample vs 4 years and 52.4% male in Australia). The distribution of occasions of health care in the 4 settings in the study shows a much closer correspondence for age, but with an overrepresentation of children aged 0 to 4 years and males. The differences that remain may reflect differences in age-sex structure between the conditions targeted by this study and all conditions managed in these health care settings, and oversampling of some conditions and health care settings.

Of 439 704 possible indicator assessments, 97 468 (22.2%) were automatically filtered and 182 034 (41.4%) were designated as not applicable by surveyors or otherwise deemed ineligible in data cleaning (eg, if aged 16 years on the visit date). The field team conducted 160 202 eligible indicator assessments during 15 240 visits; each visit included 1 to 40 indicators (median, 10) with yes or no answers. The surveys were conducted at 139 health care sites: 85 GP sites, 20 pediatricians’ offices, and 34 hospitals. The numbers of children, visits, and indicators assessed in each setting are presented in eTable 10 in the Supplement for each of the 17 conditions.

Quality of Care Indicators

Mean prevalence of adherence with quality of care indicators, by condition, is shown in Table 4. Estimated adherence ranged from 43.5% (95% CI, 36.8%-50.4%) for tonsillitis to 88.8% (95% CI, 83.0%-93.1%) for autism. Tonsillitis was the only condition with less than 50% estimated adherence, while the 4 mental health conditions, diabetes, and head injury had estimated adherence of more than 70%. The mean adherence was estimated as 60.5% (95% CI, 57.2%-63.8%) for the 5 noncommunicable conditions (range, 52.8% for gastroesophageal reflux disease to 75.8% for diabetes); 82.4% (95% CI, 79.0%-85.5%) for the 4 mental health conditions (range, 71.5% for depression to 88.8% for autism); 56.3% (95% CI, 53.2%-59.4%) for the 7 acute infections (range, 43.5% for tonsillitis to 69.8% for croup); and 78.3% (95% CI, 75.1%-81.2%) for head injury. Overall, quality of care was estimated to be adherent for 59.8% (95% CI, 57.5%-62.0%) of indicators.

Mean adherence was also calculated by indicator characteristics (Table 5). Estimated adherence was 61.4% (95% CI, 57.3%-65.4%) for diagnosis, 57.4% (95% CI, 52.4%-62.4%) for treatment, and 58.7% (95% CI, 55.8%-61.6%) for ongoing management. Indicators associated with overuse (eg, unjustified antibiotic prescription or diagnostic testing) had an estimated adherence of 87.2% (95% CI, 80.7%-92.1%), while indicators associated with underuse had an estimated adherence of 56.2% (95% CI, 53.5%-58.9%).

Individual indicator estimates were calculated. For example, for children with asthma, among those prescribed preventer therapy in any of the 4 settings, 46.5% (95% CI, 38.4%-54.8%; n = 1070) were estimated to have had a written action plan; and among those discharged from hospital after an acute asthma episode, 91.5% (95% CI, 85.2%-95.8%; n = 125) were estimated to have had a written action plan. For gastroesophageal reflux disease indicators in any setting, of infants and children with regurgitation, only 44.4% (95% CI, 33.7%-55.5%; n = 292) were estimated to have had their height and weight documented; whereas of healthy thriving infants presenting with irritability or unexplained crying, it was estimated that 41.2% (95% CI, 15.0%-71.8%; n = 92) were prescribed acid-suppression medication at the first presentation. Children diagnosed as having type 1 diabetes in any setting received investigations for glutamic acid decarboxylase at diagnosis on an estimated 71.7% (95% CI, 50.0%-87.8%; n = 128) of occasions.

Discussion

Of the care provided to Australian children, approximately 60% met quality indicators, with considerable variation between conditions. The only condition with estimated adherence less than 50% was tonsillitis, while 6 conditions had estimated adherence greater than 70%: the 4 mental health conditions, diabetes, and head injury.

These results provide insights into the management of each condition. Consider, for example, the management of asthma, the most common chronic disease in children,26 affecting 334 million people worldwide and imposing a significant burden on health services; in Australia, 1 in 10 children has asthma.12 Written plans to manage asthma flare-ups are an important part of management and have been shown to improve asthma control, reducing time off school and contact with health facilities.27 Asthma guidelines recommend that each child has a written asthma plan, regularly updated.28 While an estimated 92% of children discharged from hospital following a flare-up were given an asthma action plan, only 47% of children prescribed a preventer were estimated to have a plan.

Poor adherence may affect patient outcomes and contribute to suboptimal use of resources. For example, infants with suspected gastroesophageal reflux disease are often treated with acid-suppressive medications. Evidence to support the effectiveness of these medications in the infant population is limited, and their use is associated with increased incidence of infections.29 This study found that 41% of infants who were healthy and thriving and presented with irritability or unexplained crying were prescribed acid-suppression medication at the first presentation.

The findings are similar to previous population-level estimates of quality of care for adults in the United States (55%)1 and Australia (57%)2 but are higher than those reported in a survey conducted almost 2 decades ago of children in ambulatory settings in the United States (47%).5 This could reflect differences in study population, this study’s addition of inpatient conditions, indicators chosen, system performance, or performance improvement over time. The substantial variation in adherence rates by condition found here was also found in the previous adult1,2 and child5 studies.

Adherence gaps and practice variation persist despite decades of development and endorsement of CPGs designed to promote the uptake of evidence into routine practice and to standardize care. The problems with CPGs have been well described and include redundancy, lack of currency, inconsistent structure and content, voluminous documents,30 and concerns about the quality of evidence on which CPGs are based.

Limitations

This study had several limitations. First, while a large sampling frame was developed, covering 60% of the Australian population 15 years of age and younger, the rest of Australia has a slightly larger proportion of remote population. Only 2.3% of the Australian population resides in remote or very remote areas, and the results may not generalize to these settings. In other settings, the estimated quality of care is likely to be generalizable. There is broad similarity between these results and other Australian2 and US1,5 studies of the quality of care, but the extent to which the results can be generalized to the United States or elsewhere is unknown.

Second, while this study was more inclusive and larger than the US children’s study,5 covering both ambulatory and inpatient care for 17 conditions in 4 care settings, it nevertheless did not include some clinicians such as clinical psychologists and psychiatrists.

Third, because the quality indicators assessed in the audits had diverse sources, it is possible that the clinicians were adhering to guidelines other than those selected. Mitigating this, a systematic search for guidelines was undertaken and a mean of 5.8 guidelines were used per condition. Additionally, indicator development included an assessment, by reviewers external to the project, to ensure that each recommendation was a relevant standard of quality care for clinicians in 2012 and 2013.

Fourth, the κ scores were consistent with other medical record reviews but, for logistical reasons, were restricted to mock records. Given the greater inconsistency of medical records in the field, this process may have overestimated agreement.

Fifth, convenience sampling of GPs and pediatricians may mean that the recruited practices were nonrepresentative of the population. Relevant data were unavailable to assess the representativeness of the sampled sites at a local level. The sample had more children aged 0 to 4 years (58.4% vs 51.1% for the Australian population), fewer children aged 10 to 15 years (18.4% vs 25.0%), and more males (56.2% vs 52.4%).

Sixth, the study had the potential for self-selection bias. The best available estimate was a recruitment rate of 25% for GPs and pediatricians. Hospital recruitment was, in contrast, high (92%). Recruitment rates reported by the other studies were 37% for the adult US study,1 8% for the adult Australian study,2 and 42% for the US child health study.5 If self-selecting GPs and pediatricians were more likely to provide adherent care, this study likely overestimated the quality of care.

Seventh, there remains a potential bias arising from the possibility that the care documented may not reflect the care delivered. All studies seeking to assess the quality of care based on medical record audit face this possibility. Alternate methods may result in an estimate of adherence approximately 10 percentage points higher in primary care.1

Conclusions

Among a sample of children receiving care in Australia in 2012-2013, the overall prevalence of adherence to quality of care indicators for important conditions was not high. For many of these conditions, the quality of care may be inadequate.

Article Information

Corresponding Author: Jeffrey Braithwaite, PhD, Australian Institute of Health Innovation, Faculty of Medicine and Health Sciences, Level 6, 75 Talavera Rd, Macquarie University, NSW 2109, Australia (jeffrey.braithwaite@mq.edu.au).

Accepted for Publication: January 19, 2018.

Author Contributions: Dr Braithwaite and Mr Hibbert had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

Concept and design: Braithwaite, Hibbert, Jaffe, White, Cowell, Harris, Runciman, Hallahan, Williams, Murphy, Hooper, Wakefield, Hughes, C. Dalton, Holt, Lachman, Muething.
Acquisition, analysis, or interpretation of data: Braithwaite, Hibbert, Jaffe, White, Harris, Hallahan, Wheaton, Williams, Molloy, Wiles, Ramanathan, Arnolda, Ting, Szabo, Schmiede, S. Dalton, Donaldson, Kelley, Lilford.

Drafting of the manuscript: Braithwaite, Hibbert, Jaffe, Molloy, Arnolda, Ting, Szabo, Schmiede.

Critical revision of the manuscript for important intellectual content: Braithwaite, Hibbert, Jaffe, White, Cowell, Harris, Runciman, Hallahan, Wheaton, Williams, Murphy, Wiles, Ramanathan, Arnolda, Ting, Hooper, Wakefield, Hughes, C. Dalton, S. Dalton, Holt, Donaldson, Kelley, Lilford, Lachman, Muething.

Statistical analysis: Braithwaite, Hibbert, Arnolda, Ting.

Obtained funding: Braithwaite, Hibbert, Jaffe, White, Cowell, Runciman, Hallahan, Murphy, Wakefield, C. Dalton, Holt.

Administrative, technical, or material support: Braithwaite, Hibbert, White, Runciman, Hallahan, Wheaton, Williams, Murphy, Molloy, Wiles, Ramanathan, Arnolda, Hooper, Szabo, Wakefield, Hughes, Schmiede, Kelley.

Supervision: Braithwaite, Hibbert, White, Cowell, Hughes, S. Dalton.

Conflict of Interest Disclosures: All authors have completed and submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Dr Williams reported being a board director of the Australian Commission on Safety and Quality in Health Care and chair of its primary care committee. Dr C. Dalton reported being the national medical director of Bupa ANZ, which provided funding for this study through the Bupa Health Foundation, and serving on the steering committee of the Bupa Health Foundation. Ms Holt reported being the chief executive of New South Wales Kids and Families at the time it authorized its funding of this study, and authorizing funds toward the CareTrack Kids Research Study. Dr Lilford reported receiving funding from the National Institute for Health Research Collaboration for Leadership in Applied Health Research and Care, West Midlands. No other disclosures were reported.

Funding/Support: The research was funded as an Australian National Health and Medical Research partnership grant (APP1065898), with contributions by the National Health and Medical Research Council, Bupa Health Foundation, Sydney Children’s Hospital Network, New South Wales Kids and Families, Children’s Health Queensland, and the South Australian Department of Health (SA Health).

Role of the Funder/Sponsor: The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.

Additional Contributions: We acknowledge with gratitude the fieldwork conducted by our surveying team, all of whom were employed by the project: Florence Bascombe, BSc (St Vincent’s Hospital, New South Wales, Australia), Jane Bollen, RN (BMP Healthcare Consulting, South Australia, Australia), Samantha King, BN (Women’s and Children’s Hospital Network), Naomi Lamberts, BN (Lyell McEwin Hospital, South Australia, Australia), Amy Lowe, RN (Children’s Health Queensland), AnnMarie McEvoy, BSc (Murdoch Children’s Research Institute, Victoria, Australia), Stephanie Richardson, BNurs (Sydney Children’s Hospital, New South Wales, Australia), Jane Summers, BN (Macquarie University, Sydney, Australia), and Annette Sutton, RN (Corporate Health Group, New South Wales, Australia). Thanks also go to Stan Goldstein, MHA (Bupa Health Foundation, Sydney, New South Wales, Australia; associate investigator; not compensated for role), Annie Lau, PhD (Macquarie University, Sydney, Australia; associate investigator; not compensated for role), and Nicole Mealing, MBiostat (University of Technology Sydney, Sydney, New South Wales, Australia; statistician; compensated for role) for their contributions to study design and planning, and Elise McPherson, BSc (Macquarie University, Sydney, Australia; compensated for role) and Baki Kocaballi, PhD (Macquarie University, Sydney, Australia; compensated for role) for their contributions to analysis and reporting. Thanks also go to the organizations that provided data for planning and analysis of this study: (1) Queensland Health, the New South Wales Ministry of Health and South Australia Health; (2) the Australian Paediatric Research Network; (3) the Bettering the Evaluation and Care of Health Program, University of Sydney; and (4) the Australian Department of Human Services. Specific uses of the data are detailed in eAppendices 2-4 in the Supplement.

References
1.
McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348(26):2635-2645.
2.
Runciman WB, Hunt TD, Hannaford NA, et al. CareTrack: assessing the appropriateness of health care delivery in Australia. Med J Aust. 2012;197(2):100-105.
3.
Hathorn C, Alateeqi N, Graham C, O’Hare A. Impact of adherence to best practice guidelines on the diagnostic and assessment services for autism spectrum disorder. J Autism Dev Disord. 2014;44(8):1859-1866.
4.
Doherty S, Jones P, Stevens H, Davis L, Ryan N, Treeve V. ‘Evidence-based implementation’ of paediatric asthma guidelines in a rural emergency department. J Paediatr Child Health. 2007;43(9):611-616.
5.
Mangione-Smith R, DeCristofaro AH, Setodji CM, et al. The quality of ambulatory care delivered to children in the United States. N Engl J Med. 2007;357(15):1515-1523.
6.
Wiles LK, Hooper TD, Hibbert PD, et al. CareTrack Kids, part 1: assessing the appropriateness of healthcare delivered to Australian children: study protocol for clinical indicator development. BMJ Open. 2015;5(4):e007748.
7.
Hooper TD, Hibbert PD, Mealing N, et al. CareTrack Kids, part 2: assessing the appropriateness of the healthcare delivered to Australian children: study protocol for a retrospective medical record review. BMJ Open. 2015;5(4):e007749.
8.
Britt H, Miller GC, Henderson J, et al. General Practice Activity in Australia 2012-13: BEACH: Bettering the Evaluation and Care of Health. Sydney, Australia: Sydney University Press; 2013.
9.
Hiscock H, Roberts G, Efron D, et al. Children Attending Paediatricians Study: a national prospective audit of outpatient practice from the Australian Paediatric Research Network. Med J Aust. 2011;194(8):392-397.
10.
Begg S, Vos T, Barker B, Stevenson C, Stanley L, Lopez AD. The burden of disease and injury in Australia 2003. https://www.aihw.gov.au/getmedia/f81b92b3-18a2-4669-aad3-653aa3a9f0f2/bodaiia03.pdf.aspx. Published May 2007. Accessed December 20, 2017.
11.
Australian Institute of Health and Welfare; Commonwealth Department of Health and Family Services. First report on national health priority areas 1996. https://www.aihw.gov.au/getmedia/11b6bbee-cfcf-4af0-9d32-d303e0d7ee3b/frnhpa96.pdf.aspx. Published 1997. Accessed December 20, 2017.
12.
Australian Institute of Health and Welfare. A picture of Australia’s children 2012. https://www.aihw.gov.au/getmedia/31c0a364-dbac-4e88-8761-d9c87bc2dc29/14116.pdf.aspx. Accessed December 20, 2017.
13.
Marks G, Reddel H, Cooper S, Poulos L, Ampon R, Waters A-M; Australian Institute of Health and Welfare, Department of Health and Ageing. Asthma in Australia 2011. https://www.aihw.gov.au/getmedia/8d7e130c-876f-41e3-b581-6ba62399fb24/11774.pdf.aspx. Accessed December 20, 2017.
14.
Fitch K, Bernstein SJ, Aguilar MD, Burnand B, LaCalle JR. The RAND/UCLA Appropriateness Method User’s Manual. Santa Monica, CA: RAND Corp; 2001.
15.
National Institute for Health and Care Excellence. Health and Social Care Directorate Quality Standards Process Guide. Manchester, England: National Institute for Health and Care Excellence; 2014.
16.
Boulkedid R, Abdoul H, Loustau M, Sibony O, Alberti C. Using and reporting the Delphi method for selecting healthcare quality indicators: a systematic review. PLoS One. 2011;6(6):e20476.
17.
National Health and Medical Research Council. Guideline Development and Conflicts of Interest: Identifying and Managing Conflicts of Interest of Prospective Members and Members of NHMRC Committees and Working Groups Developing Guidelines. Canberra, Australia: National Health and Medical Research Council; 2012.
18.
Hasson F, Keeney S. Enhancing rigour in the Delphi technique research. Technol Forecast Soc Change. 2011;78(9):1695-1704. doi:10.1016/j.techfore.2011.04.005
19.
Australian Bureau of Statistics. 3218.0 Regional population growth, Australia: table 1: estimated resident population, remoteness areas, Australia. www.abs.gov.au/ausstats/subscriber.nsf/log?openagent&32180ds0005_2003-13.xls&3218.0&Data%20Cubes&4E851AEF51EC29B8CA257CAE000ECC44&0&2012-13&03.04.2014&Latest. Published 2014. Accessed February 21, 2018.
20.
Australian Bureau of Statistics. Australian Statistical Geography Standard (ASGS): Volume 5: remoteness structure, July 2011. http://www.abs.gov.au/AUSSTATS/abs@.nsf/DetailsPage/1270.0.55.005July%202011. Published January 1, 2013. Accessed January 17, 2018.
21.
Hiscock H, Danchin MH, Efron D, et al. Trends in paediatric practice in Australia: 2008 and 2013 national audits from the Australian Paediatric Research Network. J Paediatr Child Health. 2017;53(1):55-61.
22.
Australian Institute of Health and Welfare. Australian hospital statistics 2012-13: emergency department care. https://www.aihw.gov.au/getmedia/f1a0ec92-b0eb-4a45-8648-f6f9565746f1/16299.pdf.aspx. Accessed December 20, 2017.
23.
Australian Institute of Health and Welfare. Australian hospital statistics 2012-13. https://www.aihw.gov.au/getmedia/e1d759b2-384f-40a1-a724-de7a3419307a/16772.pdf.aspx. Published 2014. Accessed December 20, 2017.
24.
Australian Institute of Health and Welfare. Principal diagnosis data cubes. http://www.aihw.gov.au/hospitals-data/principal-diagnosis-data-cubes/. Updated May 17, 2017. Accessed May 22, 2017.
25.
Australian Bureau of Statistics. Australian demographic statistics, Jun 2016: time series spreadsheets (tables 51, 53, 54, and 59). http://www.abs.gov.au/AUSSTATS/abs@.nsf/DetailsPage/3101.0Jun%202016. Released December 15, 2016. Accessed May 23, 2017.
26.
Asher I, Pearce N. Global burden of asthma among children. Int J Tuberc Lung Dis. 2014;18(11):1269-1278.
27.
Zemek RL, Bhogal SK, Ducharme FM. Systematic review of randomized controlled trials examining written action plans in children: what is the plan? Arch Pediatr Adolesc Med. 2008;162(2):157-163.
28.
National Asthma Council Australia. Australian asthma handbook. http://www.asthmahandbook.org.au/. Accessed May 24, 2017.
29.
Chung EY, Yardley J. Are there risks associated with empiric acid suppression treatment of infants and children suspected of having gastroesophageal reflux disease? Hosp Pediatr. 2013;3(1):16-23.
30.
Runciman WB, Coiera EW, Day RO, et al. Towards the delivery of appropriate health care in Australia. Med J Aust. 2012;197(2):78-81.