Measuring impact

We always measure the impact of our media campaigns: where possible, the ultimate impact (e.g. the number of lives saved) as well as the immediate behaviour change outcomes. We partner with leading academic institutions to run robust impact evaluations of all of our campaigns. In some cases, we run randomised controlled trials (RCTs), such as our recent child survival RCT in Burkina Faso.

Challenges in measuring the impact of media campaigns

The evidence base for the impact of mass media on health behaviours is limited in both size and quality, largely because of weak evaluation designs. Many campaigns have not been sufficiently well funded to run a robust impact evaluation, but there are also two structural challenges in evaluating mass media campaigns.

Firstly, it is difficult to attribute impact to the campaign itself, rather than to other factors: other media campaigns, behaviour change interventions such as community outreach, or supply-side interventions such as bednet distribution programmes. The best way to attribute impact is to compare 'intervention' zones with 'control' zones, but many mass media campaigns reach too wide an area to permit the use of controls. Even when control zones are possible, it is rarely feasible to select them randomly (a randomised controlled trial being the 'gold standard' for demonstrating impact).

Secondly, many evaluations collect data on trends in health behaviours by surveying members of their target audience and asking them about what they know and what they do ('knowledge, attitudes and practices' or 'KAP' surveys). These surveys are prone to various biases, in particular to 'reporting bias', whereby respondents give the surveyor the answer they think he or she wants to hear, rather than the truth. Ideally, evaluations should either observe people behaving in a particular way, or collect data from other sources (such as records of the number of people attending clinics) to verify that a particular behaviour has taken place. However, such data are often unavailable or unreliable.

Attributing the impact of media campaigns

The ideal design for attributing impacts to our campaign rather than to other initiatives is a randomised controlled trial, but this is not feasible or affordable in most cases. We have therefore developed a set of techniques for measuring and attributing the health impacts of our campaigns using quasi-experimental evaluation designs. For example, we undertake regular surveys that allow us to conduct time-series analysis of impact; we also compare outcomes between intervention and control areas, and analyse dose-response relationships, comparing behaviour change across target groups with low, medium and high exposure to the campaign.
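As a purely illustrative sketch (not DMI's analysis code, and using invented survey records), the core of a dose-response comparison can be reduced to tabulating the share of respondents reporting the target behaviour within each exposure group:

```python
# Hypothetical KAP survey records: (exposure_level, adopted_behaviour)
# exposure_level: 0 = low, 1 = medium, 2 = high exposure to the campaign
records = [
    (0, False), (0, False), (0, True),  (0, False),
    (1, True),  (1, False), (1, True),  (1, False),
    (2, True),  (2, True),  (2, True),  (2, False),
]

def adoption_rate_by_exposure(records):
    """Return the share of respondents reporting the target
    behaviour within each exposure group."""
    counts = {}  # exposure level -> (adopters, total respondents)
    for exposure, adopted in records:
        adopters, total = counts.get(exposure, (0, 0))
        counts[exposure] = (adopters + int(adopted), total + 1)
    return {e: adopters / total
            for e, (adopters, total) in sorted(counts.items())}

rates = adoption_rate_by_exposure(records)
print(rates)  # {0: 0.25, 1: 0.5, 2: 0.75}
```

A monotonic increase in adoption across exposure groups, as in this toy data, is consistent with a dose-response effect; a real analysis would of course use genuine survey data and appropriate statistical tests rather than a raw tabulation.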

The reason that a randomised controlled trial (RCT) of a media campaign would not normally work is the risk that people in 'control' areas would listen to radio or TV stations broadcasting from 'intervention' areas. However, DMI analysed the media landscape of every developing country in 2010, and determined that there was one country where an RCT of a media campaign was feasible: Burkina Faso. This small country in West Africa has a very localised, radio-dominated media environment, where local FM radio stations can broadcast campaign messages to intervention areas without 'leaking' into control areas. 

In 2011, DMI received funding from the Wellcome Trust and the Planet Wheeler Foundation to run a large-scale, four-year randomised controlled trial to test the impact of an intensive Saturation+ radio campaign on under-five mortality. During the 2.5-year campaign phase of the RCT, DMI broadcast a range of messages covering all of the key child health issues (malaria, diarrhoea, pneumonia, nutrition, hygiene, and newborn and maternal care) on seven radio stations, with seven control zones. Based on the midline results, this is the first RCT to demonstrate that a large-scale media campaign can improve health behaviours in a developing country.

DMI is currently planning to run further RCTs in Burkina Faso, to test the impact of media campaigns on behaviours linked to other health issues. We have recently secured funding to launch an RCT of a family planning radio campaign.

Measuring the ultimate impacts of media campaigns

We use surveys to measure trends in knowledge, attitudes and practice, but wherever possible we triangulate survey data with external data sources, ideally clinic-level data, to minimise the risk of 'reporting bias' and other inaccuracies in survey responses. This gives us a reliable estimate of the extent to which our campaigns are actually changing behaviours. Where possible, we also observe behaviours directly, but this is impossible for many health behaviours, which (like exclusive breastfeeding or handwashing) are practised in the household and often in private. 

We have also developed a ground-breaking mathematical model, in partnership with the London School of Hygiene and Tropical Medicine, to predict how many children's lives can be saved by running large-scale media campaigns targeting all the key causes of under-five mortality. The DMI/LSHTM model uses evidence from previous campaigns to predict the increase in coverage of key child survival interventions that DMI media campaigns could achieve. It then combines this data with the Lives Saved Tool and an adapted version of the Lancet Child Survival Series, which predict how many lives could be saved if coverage of key interventions (such as breastfeeding and bed nets) was increased from current levels. The model predicts that media campaigns could reduce child mortality in many low-income countries by 16% to 23%, depending on the profile of the country. The cost per life-year saved is also lower than that of any currently available intervention, at between $4 and $15 per disability-adjusted life year (DALY)*. The research was published in The Lancet in February 2015.
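The basic logic of this kind of model can be caricatured in a few lines. The sketch below is a drastic simplification, not the DMI/LSHTM model: every number (cause shares, intervention efficacies, predicted coverage gains) is invented for illustration, and the real model relies on the far more sophisticated machinery of the Lives Saved Tool.

```python
# Highly simplified illustration of the modelling logic (NOT the
# DMI/LSHTM model): each intervention averts a fraction of the deaths
# from the cause it targets, scaled by the coverage increase that a
# campaign is predicted to achieve. All figures are invented.

# (label, cause's share of under-five deaths,
#  intervention efficacy against that cause,
#  predicted coverage increase from the campaign)
interventions = [
    ("bed nets (malaria)",       0.20, 0.55, 0.10),
    ("ORS (diarrhoea)",          0.15, 0.69, 0.12),
    ("care-seeking (pneumonia)", 0.18, 0.42, 0.08),
]

def predicted_mortality_reduction(interventions):
    """Sum each intervention's contribution to the overall under-five
    mortality reduction: cause share x efficacy x coverage gain."""
    return sum(share * efficacy * coverage_gain
               for _, share, efficacy, coverage_gain in interventions)

reduction = predicted_mortality_reduction(interventions)
print(f"predicted under-five mortality reduction: {reduction:.1%}")
```

The real model additionally accounts for baseline coverage, overlapping interventions and country-specific cause-of-death profiles, which is why its predictions (16% to 23%) are far larger than this toy sum.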

Our randomised controlled trial of a child survival media campaign in Burkina Faso is testing these predictions by directly measuring under-five mortality (at baseline and endline), as well as measuring coverage of the target behaviours (at baseline, midline and endline). 

Film: The Science

What is the science that underpins DMI's work? How many lives can we actually save? What is a DALY? Professor Jimmy Whitworth (Wellcome Trust), Dr Richard Horton (The Lancet) and Roy Head (DMI) present the key concepts behind our model for predicting our impact on child survival.

* The cheapest intervention evaluated in the authoritative literature (Disease Control Priorities in Developing Countries) is childhood immunisation ($1-$8 per DALY saved in Africa, $16 in Asia). Other leading interventions include DOTS treatment for TB ($8-$263), insecticide-treated bednets for malaria ($2-$24), Integrated Management of Childhood Illness ($9-$218), increased primary care coverage for maternal and neonatal care ($82-$409), and antiretroviral treatment for HIV/AIDS ($673-$1,494). According to our model, the cost per DALY of a DMI mass media campaign in most countries is in the range of $4-$15. This would make mass media behaviour change campaigns as cost-effective as any other intervention currently used in public health. As a rule of thumb, there are around 30 DALYs per individual 'life', so the cost per life saved of our campaigns is $120-$450, depending on the country.
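The rule-of-thumb conversion in this footnote is simple arithmetic: at roughly 30 DALYs averted per life saved, a cost of $4-$15 per DALY implies $120-$450 per life saved.

```python
# Convert a cost per DALY into a cost per life saved, using the
# rule of thumb of roughly 30 DALYs averted per child life saved.
DALYS_PER_LIFE = 30

def cost_per_life(cost_per_daly):
    """Cost per life saved implied by a given cost per DALY."""
    return cost_per_daly * DALYS_PER_LIFE

low, high = cost_per_life(4), cost_per_life(15)
print(f"${low}-${high} per life saved")  # → $120-$450 per life saved
```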