National Child Measurement Programme annual report
Methods report
Context
The National Child Measurement Programme (NCMP) was introduced in the 2006 to 2007 academic year. It collects annual height and weight measurements of over one million children in reception (aged 4 to 5 years) and year 6 (aged 10 to 11 years) in mainstream state-maintained schools in England. Local authorities are mandated to collect data from these schools.
The Department of Health and Social Care (DHSC) has responsibility for national oversight of the programme and responsibility for the publication of statistics from the NCMP. Local authorities have a statutory responsibility to deliver it. NHS England has responsibility for the collection and validation of NCMP data.
The national report is accompanied by this methods report and a data quality statement that provide details on:
- data collection and validation
- how body mass index (BMI) categories are derived
- guidance on using the data
- methods used for confidence intervals and significance testing
- data quality
Coverage
Local authorities are mandated to collect data from mainstream state-maintained schools. The collection of data from special schools (schools for pupils with special educational needs and pupil referral units) and independent schools is encouraged but not mandated.
Since the proportion of records from independent and special schools varies each year, reporting of NCMP data excludes such records to ensure consistency over time. There are also concerns around how representative the participating independent and special schools might be.
However, independent and special schools are encouraged to feed back the results to the parents of the children they measure.
Participation
The participation rate is the proportion of children who were measured out of those eligible for measurement. Children eligible for measurement are sometimes not measured for a range of reasons, such as the child being absent on the day of measurement or not consenting to be measured. This means that the NCMP dataset is a sample (albeit usually a very large sample) and the prevalence of the body mass index (BMI) categories in this report are estimates assumed to apply to the entire population in each age group.
For the NCMP sample to be representative, non-participation needs to be equally likely for each child. If, for example, all non-participating children were living with obesity then the sample would be biased and obesity prevalence would be underestimated.
The participation rate is the percentage of children who have been measured in mainstream state-maintained schools out of those eligible for measurement (this excludes children who could not be measured due to physical or mental impairment). Note: participation rate data was not collected or published in 2019 to 2020 and 2020 to 2021 academic years due to the impact of the COVID-19 pandemic.
Participation rate for the latest and previous collection years for each local authority, region and England are published in the obesity, physical activity and nutrition profile on Fingertips for the following indicators:
- participation rate, total
- participation rate, reception (4 to 5 years)
- participation rate, year 6 (10 to 11 years)
The participation rate can affect the accuracy of estimates derived from the data. For example, if the participation rate is very low in a local authority, then the prevalence estimates for the BMI categories should be treated with caution as those children measured may not be representative of all children in the local authority. Therefore, the participation rates should be considered when comparing local authority prevalence figures.
Calculating participation rates
Participation rates are calculated by dividing the number of valid records from mainstream state-maintained schools, submitted by the local authority, by the number of children eligible for measurement in these schools, and multiplying the result by 100.
The number of children eligible for measurement in each school year within a local authority is calculated by aggregating headcounts across the mainstream state-maintained schools within the local authority’s postcode boundary. The NCMP system provides default headcounts based on Department for Education (DfE) census data, but these can be amended by the local authority where necessary. The NCMP system validates local authority provided headcounts by checking that the number measured at a school does not exceed the number eligible for measurement. When the number measured exceeds the number eligible, the system corrects the ‘eligible’ figure by increasing it to match the number measured, thus ensuring a maximum school-level participation rate of 100%.
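The calculation described above, including the rule that caps school-level participation at 100%, can be sketched as follows (the function name and figures are illustrative, not part of the NCMP system):

```python
def participation_rate(measured: int, eligible: int) -> float:
    """Participation rate as a percentage of eligible children.

    Mirrors the NCMP validation rule: if the number measured exceeds
    the number eligible, the eligible figure is raised to match it,
    capping the school-level rate at 100%.
    """
    eligible = max(eligible, measured)
    return measured / eligible * 100

# e.g. 28 children measured out of 30 eligible at a school
rate = participation_rate(28, 30)
```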
Data quality
Data quality indicators are presented in the obesity, physical activity and nutrition profile and the spreadsheet accompanying this report.
Table 12 in the accompanying spreadsheet shows the data quality indicators by submitting local authority with breach reasons as provided on submission (if applicable).
Further commentary on how data quality is assessed is provided in the data quality statement which accompanies this report.
Missing and imprecise data
Collection of child postcode, ethnicity and NHS number has improved over time. Trend data on the collection of these variables for each local authority, region and England are published in the obesity, physical activity and nutrition profile on Fingertips for the following indicators:
- records with valid ethnicity code in the NCMP
- records with valid child postcode in the NCMP
- records with an NHS number in the NCMP
By chance, 10% of height and weight measurements would be expected to be whole numbers and 10% half numbers. However, there is some evidence of local authorities rounding heights to whole or half numbers. Rounding of height and weight measurements can be monitored using the trend data for each local authority, region and England in the obesity, physical activity and nutrition profile on Fingertips for the following indicators:
- records with height rounded to X.0 or X.5 in the NCMP
- records with weight rounded to X.0 or X.5 in the NCMP
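As a sketch of how such a rounding indicator could be computed, assuming measurements are recorded to one decimal place (the function name and sample values are illustrative):

```python
def rounded_share(values) -> float:
    """Percentage of measurements falling exactly on a whole or half
    number (X.0 or X.5). With measurements recorded to one decimal
    place, about 20% would be expected by chance (10% whole, 10% half).
    """
    rounded = sum(1 for v in values if round(v * 10) % 5 == 0)
    return rounded / len(values) * 100

# illustrative heights in cm: 112.0, 109.5 and 115.0 end in .0 or .5
heights = [110.3, 112.0, 109.5, 111.7, 115.0]
share = rounded_share(heights)  # → 60.0
```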
Data collection
Measurement
The measurement of children’s height and weight, without shoes and coats and in normal, light, indoor clothing, is overseen by healthcare professionals and undertaken in schools by trained staff. DHSC publishes operational guidance for local authorities on how to accurately measure height and weight.
Measurements can be taken at any time during the academic year. Consequently, some children may be almost 2 years older than others in the same school year at the point of measurement. This does not impact upon a child’s BMI classification since BMI centile results are adjusted for age. The age range is one year for the majority of records.
Validation
Full details about validation are provided in NHS England’s validation document and have been summarised below.
Local authorities enter data into the NCMP system which validates each data item at the point of data entry. Invalid data items (such as incorrect ethnicity codes) and missing mandatory data items are rejected and unexpected data items (such as ‘extreme’ heights) have warning flags added.
During collection, the NCMP system provides each local authority with real time data quality indicators. These are based on the data they have entered, for monitoring and to ensure the early resolution of any issues. At the end of the collection each local authority must confirm any data items with warning flags as being correct and sign off their data quality indicators. In cases where the data quality indicators breach the required thresholds local authorities are required to provide a breach reason.
After the collection has closed, NHS England carries out further data validation which includes:
- querying breach reasons that do not fully explain the reasons for the data quality issues
- comparing each local authority’s dataset with their previous year’s dataset and querying unexpected changes
- looking for clusters of unexpected data items to identify data quality issues affecting particular schools
Calculation of prevalence
The prevalence of children in a BMI category is calculated by dividing the number of children in that BMI category by the total number of children and multiplying the result by 100.
The BMI category of each child is derived by calculating the child’s BMI centile and assigning the BMI category based on the following population monitoring thresholds using the British 1990 (UK90) growth reference for BMI:
- underweight, BMI centile less than or equal to the 2nd centile
- healthy weight, BMI centile greater than the 2nd centile but less than the 85th centile
- overweight, BMI centile greater than or equal to the 85th centile but less than the 95th centile (overweight but not living with obesity)
- obesity, BMI centile greater than or equal to the 95th centile (obesity including severe obesity)
- severe obesity, BMI centile greater than or equal to the 99.6th centile (this BMI category is a subset of the ‘obesity’ category)
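Taken together, the category thresholds and the prevalence calculation above can be sketched as follows (the function names and example centiles are illustrative):

```python
def bmi_category(centile: float) -> str:
    """Assign the population monitoring BMI category from a UK90 BMI
    centile. 'severe obesity' is reported as a subset of 'obesity',
    so it is not returned as a separate category here."""
    if centile <= 2:
        return "underweight"
    if centile < 85:
        return "healthy weight"
    if centile < 95:
        return "overweight"
    return "obesity"

def prevalence(categories, target: str) -> float:
    """Prevalence of one BMI category as a percentage of all children."""
    return sum(1 for c in categories if c == target) / len(categories) * 100

# illustrative centiles for five children
cats = [bmi_category(c) for c in [1.5, 50, 90, 97, 99.8]]
obesity_prev = prevalence(cats, "obesity")  # → 40.0 (2 of 5 children)
```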
For population monitoring purposes, a child’s BMI is classed as overweight or at risk of obesity where it is on or above the 85th centile or 95th centile, respectively, based on the British 1990 (UK90) growth reference data. These population monitoring thresholds are lower than the clinical thresholds (the 91st and 98th centiles for overweight and obesity) used to assess individual children, such as in clinical settings and when providing NCMP feedback to parents. The lower thresholds capture both children in the clinical overweight or obesity BMI categories and those at high risk of moving into them. This helps ensure that adequate services are planned and delivered for the whole population to treat and prevent obesity and promote healthy growth for all children. The UK90 population thresholds were adopted by the Health Survey for England in 2002, and the NCMP has used the same thresholds since 2006.
The NCMP uses the British 1990 child growth reference (WHO-UK90) to assign each child a BMI centile. Because children are still growing, BMI is adjusted for age and sex (at birth), so the calculation takes account of each child’s height, weight, sex and age. The child’s BMI centile is a measure of how far the child’s BMI lies above or below the average BMI for their age and sex in a reference population.
In England the British 1990 growth reference (UK90) is recommended for population monitoring and clinical assessment in children aged 2 years and over. UK90 is a large representative sample of 37,700 children which was constructed by combining data from 17 separate surveys. The sample was rebased to 1990 levels and the data were then used to express BMI as a centile based on the BMI distribution, adjusted for skewness, age and sex using Cole’s LMS method described in Body mass index reference curves for the UK, 1990.
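As an illustration of how a BMI centile is derived from LMS parameters using Cole’s method (the L, M and S values below are made up for illustration and are not real UK90 entries; real applications look them up in the UK90 reference tables for the child’s exact age and sex):

```python
from math import erf, sqrt

def bmi_centile(bmi: float, L: float, M: float, S: float) -> float:
    """BMI centile via Cole's LMS method: the z-score is
    z = ((BMI / M)**L - 1) / (L * S), which adjusts for skewness (L),
    median (M) and coefficient of variation (S), then converted to a
    centile with the standard normal CDF."""
    z = ((bmi / M) ** L - 1) / (L * S)
    return 0.5 * (1 + erf(z / sqrt(2))) * 100

# a BMI equal to the reference median M sits on the 50th centile
centile = bmi_centile(16.0, L=-1.5, M=16.0, S=0.08)  # → 50.0
```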
Comparing prevalence: considerations
When comparing prevalence figures between groups and over time it is important to consider how participation and data quality might affect the calculated figures.
Comparisons between 2 groups may be affected if the groups have differing data quality or participation. This should be considered as it may partly explain any difference in prevalence figures.
Analyses looking at the impact of data quality on prevalence were carried out by the National Obesity Observatory (now part of DHSC) for the 2006 to 2007 and 2007 to 2008 collection years, and for 2007 to 2008 in the paper ‘Variations in data collection can influence outcome measures of BMI measuring programmes’.
Analysis of the NCMP datasets from 2006 to 2007 to 2008 to 2009 established a relationship between changes in Primary Care Trust (PCT) participation rates and changes in year 6 obesity prevalence. Year 6 obesity prevalence may be underestimated by around 1.3 percentage points for 2006 to 2007, around 0.8 percentage points for 2007 to 2008, and around 0.7 percentage points for 2008 to 2009 (with the impact reducing as participation rates increased). This may be because year 6 children living with obesity were less likely to participate in the NCMP than other children during these collection years. The upper confidence interval for the national year 6 obesity prevalence was therefore increased by these amounts for the 2006 to 2007, 2007 to 2008 and 2008 to 2009 collection years. For other BMI classifications the relationship was found to be negligible.
In 2009 to 2010 and 2010 to 2011 the participation rate continued to increase and the same analysis found the relationship to be negligible. As the participation rate increased again in 2011 to 2012 and had remained similar since 2012 to 2013, it was considered unnecessary to repeat the analysis in recent years. However, it is still important to consider data quality and participation when making comparisons.
Comparisons over time for year 6 obesity prevalence back to the earlier years of the NCMP should be treated with caution. The time series charts for obesity prevalence use a dotted line for the period between 2006 to 2007 and 2008 to 2009 to reflect this greater uncertainty in the results from those earlier years.
It is also important to note that, since the NCMP dataset is a sample, the prevalence figures in this report are estimates assumed to apply to the entire population. These estimates are subject to natural random variation. Confidence intervals and significance testing have been used in this report to take account of such variation.
Confidence intervals
A confidence interval gives an indication of the likely error around an estimate that has been calculated from measurements based on a sample of the population. It indicates the range within which the true value for the population as a whole can be expected to lie, taking natural random variation into account. Confidence intervals should be considered when interpreting results. When confidence intervals do not overlap, the differences are statistically significant. When confidence intervals overlap, it is not possible to tell from the intervals alone whether differences are statistically significant; the ‘Significance testing’ section below suggests a method for such cases.
Larger sample sizes lead to narrower confidence intervals, since there is less natural random variation in the results when more individuals are measured. The NCMP has relatively narrow confidence intervals because of the large size of the sample and high participation rates.
In the data presented in this report, 95% confidence intervals have been provided around the prevalence estimates. These are known as such because if it were possible to repeat the same programme under the same conditions multiple times, we would expect 95% of the confidence intervals calculated in this way to contain the true population value for that estimate.
The confidence intervals in this report have not had the finite population correction (FPC) applied and have therefore not been reduced on the basis of coverage. This approach is consistent with that used throughout the public health community. For example, census, mortality and hospital admission data represent a 100% sample, yet the associated confidence intervals are routinely calculated without the FPC adjustment.
Method for calculating confidence intervals
Confidence intervals have been calculated using the Wilson Score method described in the Fingertips technical guidance.
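A minimal sketch of the Wilson score interval, assuming a 95% interval (z = 1.96) and results expressed as percentages (the function name and example figures are illustrative):

```python
from math import sqrt

def wilson_interval(r: int, n: int, z: float = 1.96):
    """Wilson score confidence interval for a proportion of r events
    out of n trials, returned as (lower, upper) percentages.
    z = 1.96 gives a 95% interval."""
    p = r / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    adj = z * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - adj) / denom * 100, (centre + adj) / denom * 100

# e.g. 200 children with obesity out of 1,000 measured
lower, upper = wilson_interval(200, 1000)
```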
Significance testing
Significance tests have been used in this report to determine whether differences between prevalence estimates are likely to be genuine differences (statistically significant) or the result of random natural variation.
A quick and easy check of whether 2 prevalence estimates are significantly different is to compare their confidence intervals. When the confidence intervals do not overlap, the differences are considered statistically significant. This approach was used in NCMP reports prior to 2009 to 2010.
However, it is not always the case that overlapping confidence intervals indicate no significant difference. In some cases, estimates with overlapping confidence intervals will still be statistically significantly different. Consequently, some significant differences may have been missed in NCMP reports prior to 2009 to 2010. A more robust way of checking if 2 prevalence estimates are significantly different is to use significance testing.
The significance testing method used in NCMP reports since 2009 to 2010 follows the approach outlined in Statistics with Confidence.
A 95% level of significance has been used in the tests throughout this report. This means that when prevalence estimates are described as being different (for example, higher or lower, or as an increase or decrease), the probability that the difference is genuine, rather than the result of random natural variation, is 0.95.
The steps for the approach outlined by Altman et al. are:

- calculate the difference between the 2 proportions: D = p1 − p2
- then calculate the confidence limits around D as:

  lower limit = D − √((p1 − l1)² + (u2 − p2)²)

  upper limit = D + √((u1 − p1)² + (p2 − l2)²)

  where pi is the estimated prevalence for the year i, and li and ui are the lower and upper confidence limits for pi respectively

- a significant difference exists between proportions p1 and p2 if and only if zero is not included in the range covered by the confidence limits around the difference D
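The steps above can be sketched as follows (the prevalence figures in the example are illustrative):

```python
from math import sqrt

def difference_ci(p1, l1, u1, p2, l2, u2):
    """Confidence limits around the difference D = p1 - p2, following
    the method from Statistics with Confidence (Altman et al.).
    li and ui are the lower and upper confidence limits for pi."""
    d = p1 - p2
    lower = d - sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = d + sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper

def significant(p1, l1, u1, p2, l2, u2) -> bool:
    """True if zero lies outside the confidence limits around D."""
    lower, upper = difference_ci(p1, l1, u1, p2, l2, u2)
    return not (lower <= 0 <= upper)

# Illustrative prevalences: the two confidence intervals overlap
# slightly, yet the difference is still statistically significant.
result = significant(23.0, 22.4, 23.6, 22.0, 21.4, 22.6)  # → True
```

This illustrates why the significance test is more robust than simply checking for overlapping intervals: here the intervals overlap between 22.4 and 22.6, but zero lies below the lower confidence limit around the difference.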
Other data sources
The Health Survey for England has collected data on measured height and weight in children aged 2 to 15 years since 1995. However, because it is based on a much smaller sample, its estimates are less precise than those from the NCMP for reception and year 6 children, where nearly all children are measured.
The NCMP covers children attending schools in England only. Other countries in the UK publish similar reports and these are signposted below. There are differences in methods of collection and ages of the children measured which must be taken into consideration when comparing data across the UK countries. Links to the latest reports from each country are: