Adaptation challenges

This section will help you to:

  • Understand how specific traits of climate adaptation can make evaluation challenging, and how you can overcome these challenges.
  • Identify examples, good practice and techniques which may help ensure your evaluation is robust in the context of climate change.
  • Prioritise your evaluation activities, recognising that evaluations need to be proportionate to the investment and are resource limited.

What challenges might I face?

You are likely to face challenges in understanding what has and hasn’t worked. UKCIP has worked with practitioners to identify a number of ‘tricky issues’ that are often encountered when evaluating adaptation activities.

Question to consider:

  • Which ‘tricky issues’ are likely to be relevant to your evaluation? Review the possible responses in the context of your project.

The following sections address each of these ‘tricky issues’ in turn:

Coping with uncertainty

Uncertainty is inherent in our understanding of the factors which shape future climate, the resultant impacts, and how these are experienced by individuals and systems. This makes it difficult to evaluate the success or appropriateness of an adaptation intervention, as the parameters for monitoring and evaluation are continually changing.

Possible responses:

  • Use formative evaluation approaches which focus on strengthening future adaptation interventions – adaptation is a continuous process rather than an end point.
  • Establish baselines so it is possible to track change from the start of the activity. Baselines may relate to climate and weather data, or to public perception, economic conditions or scientific knowledge. Note that establishing baselines can be time consuming.
  • Ensure that the evaluation challenges assumptions but also examines the conditions in which such assumptions were made.
  • Where uncertainty is high, the flexibility of the intervention is an important success measure – consider the robustness of the activity to a variety of possible futures.

Dealing with long timescales

There can be a substantial time gap between taking action (or making an investment) and measurable impact (or the return on the investment) when dealing with adaptation.

Possible responses:

  • View adaptation as an iterative, formative process; “climate change in the foreseeable future will not be some new stable ‘equilibrium’ climate, but rather an ongoing ‘transient’ process” (Pittock and Jones, 2000).
  • Make your assumptions clear through your Adaptation Logic Model. Long time frames mean your assumptions are likely to change with circumstances.
  • Ensure regular monitoring and evaluation is in place to track progress.
  • Use process indicators to decide whether progress is on track, even if impacts cannot yet be measured.
  • Understand the decision lifetime of your adaptation intervention. The decision lifetime is:

‘Lead time’ – the period from the first idea to the execution of the project, plus
‘Consequence time’ – the time period over which the consequences of the decision emerge

For example, the use of drought tolerant crops may have a long lead time through a programme of plant breeding, but a short consequence time, as the farmer may only grow the crop for a single season (see Stafford Smith et al., 2011). Understanding the decision lifetime will enable you to phase your M&E work more effectively.

  • Retain flexibility and so avoid becoming ‘locked in’ to a potentially maladaptive response. Consider how to evaluate whether the intervention has ‘retained flexibility’ for a range of futures.

What would have happened anyway?

An assessment of the appropriateness of an adaptation action relies on our understanding of what would have happened without this action (known as the counterfactual). This can be difficult to establish due to the myriad of possible changes to societal attitudes, scientific knowledge, the economy and technology, all of which might shape the consequences of climate change and our responses to them.

Possible responses:

  • Consider the purpose of your evaluation – how important is it to establish what would have happened without the adaptation activity? For example, if accountability or efficiency are key objectives, then establishing that an adaptation activity would not otherwise have happened and what the consequences could be might be important. The UK Treasury Green Book provides useful guidance on this (pp. 53–54).
  • Your Adaptation Logic Model can be useful in developing a counterfactual. What might happen without the intervention? What are the variables involved, and what assumptions can be made about them? Are data sources available to back up such assumptions? Be aware that existing climate change baseline data is likely to be non-linear, making it a poor basis for determining future conditions. Record all the assumptions made about the counterfactual.
  • Recognise that it may be more useful to look at the intervention as one of an infinite number of adaptation pathways. The effectiveness of the pathway (as defined in the Adaptation Logic Model) may be tested against a dynamic set of social, economic and environmental variables rather than a single counterfactual. A counterfactual should be developed only when the investment of doing so is proportionate to the scale of the intervention.

Attributing costs and benefits

Attribution of the costs and benefits of adaptation interventions can be problematic for a number of reasons – for example, long time lags mean that a variety of factors, not just the planned adaptation intervention, may have shaped the outcomes. We are also encouraged to embed adaptation within existing processes and M&E systems, which makes attribution difficult.

Possible responses:

  • Think in terms of contribution rather than attribution. Rather than attempt to demonstrate that a specific outcome is due to an intervention, it may be more appropriate to record the contribution to that outcome. This approach recognises that there are many influences which shape outcomes, particularly in the case of a complex and often long term issue such as climate adaptation. You can focus on gathering evidence to determine the type, nature and level of contribution the activities have made to (a) developments consistent with the Adaptation Logic Model and (b) any additional unplanned impacts.
  • Where adaptation is embedded within a broader set of organisational objectives, it may be helpful to frame your evaluation in different terms. For example, rather than making adaptation the subject of your evaluation, it may be beneficial to look at the contribution a project is making to the ‘achievement of a particular objective in a changing climate’.
  • Economic impact approaches can help in determining the economic costs and benefits of the project, but it is important to examine why benefits have accrued rather than focusing solely on their value.

Identifying appropriate ‘success’ measures

An action which aids adaptation in one location or community may increase vulnerability or inequality elsewhere – so who gets to define success? Adaptation actions are often characterised by trade-offs determined by assessments of vulnerability and risk. Consequently, accepting loss may be a legitimate adaptation response, meaning that an extreme event leading to damage is not necessarily an indication of adaptation failure. A further consideration is how to define success given high levels of uncertainty – does success mean planning for all eventualities (higher costs) or backing a winner (risky)?

This makes success hard to define and measure.

Possible responses:

  • Engage a wide range of groups in the design and delivery of your evaluation (see Who should I involve in the evaluation?) to give you a broader view of what success means to different people, and help you to develop a wider range of success measures.
  • Identify who will and who will not benefit from the intervention when developing the Adaptation Logic Model. Examine and test the assumptions that were made in relation to these beneficiaries and non-beneficiaries. What ‘acceptable levels of loss’ have been assumed in the Adaptation Logic Model? For example, if an intervention assumes that it was not viable to protect the residents of a particular road from storm surge because of excessive costs, the evaluation will need to determine whether these were acceptable and reasonable assumptions.
  • Where uncertainty is high, the flexibility of the intervention is an important success measure – the robustness of the intervention to a variety of possible futures is a key factor to consider in the evaluation.

Ensuring learning informs future decisions

Adaptation to climate change is a relatively new and complex challenge – monitoring and evaluation of adaptive capacity and adaptation actions must be carried out with a view to learning, rather than purely assessing the success or failure of an intervention. How do we ensure that this learning informs future decision-making within a particular organisation and adds to society’s broader understanding of how to adapt?

Possible responses:

  • Consider mechanisms for sharing across and between organisations, sectors and disciplines.
  • Combine organisational objectives with broader societal learning about adaptation and think ‘outside of the project box’ (Spearman & McGray, 2011).
  • Ensure mechanisms are in place for both formative and summative evaluations to inform future decisions, and that the timing of the evaluation fits well with key decisions about future investments. For example, an evaluation of a flood defence scheme needs to report before future flood management budgets are decided.
  • Learning must be an objective of all evaluations – make sure it does not become subordinate to your other evaluation objectives.
  • Consider the application of learning – where, when and to whom do the key learning messages need to be communicated to maximise the effectiveness of the evaluation?