
Monitoring strategy and data collection

Module 6: Data collection methods

6.2 Data collection methods – advantages and disadvantages for PVE

The Data collection methods – advantages and disadvantages for PVE tool presents a selection of data collection methods that could be used for PVE programmes. Methods need to be adapted to the context, purpose and budget of your programme. The methods examined here are:

  • Survey
  • Participatory methods
  • Counterfactual evaluation designs
  • Tools for organising and analysing data
Why use it? To help think through the types of methods you may wish to adopt, and the potential opportunities and challenges of each.

This tool is useful at all stages of the programming cycle, depending on the purpose of your data collection. It can be used together with:

4.1 Plotting levels of change

4.3 Prioritising indicators

Conducting surveys in a PVE context

Table 12: Pros and cons of selected survey methods in a PVE context

1. Survey

A series of predefined questions given to a sample. These can range from questionnaires to structured interviews.


Brief resilience and coping scale: Assesses resilience and coping strategies. Designed originally in the health and psychology fields to measure individuals’ tendencies to cope with, and recover from, stress in an adaptive manner. It involves self-assessment against statements about behaviour and actions, for example, “I look for creative ways to alter difficult situations” or “Regardless of what happens to me, I believe I can control my reaction to it.”

Advantages: Useful in assessing resilience. Refined measures are designed to measure individuals’ tendencies to cope with stress in a highly adaptive manner; particularly useful for assessing the resilience of people affected by trauma or shock (see the scoring sketch after this table).

Disadvantages: Often not adapted to context; needs tailoring to the specific context and to the needs and experience of the group.

Perceptions survey: A survey used when trying to find out how people understand or feel about their situations, institutions or services. Perceptions surveys can be used to assess needs, establish baselines, analyse trends, and inform design.

Advantages: Perceptions of individuals and communities related to key VE vulnerability or resilience factors, such as attitudes towards state institutions or a sense of grievance, can uncover people’s attitudes and beliefs related to specific VE drivers, and can show potential change if the survey is repeated at a later time.

Disadvantages: Responses are subjective, based on respondents’ individual understanding at that specific time; responses can be influenced by circumstances on the day.

Knowledge, Attitudes and Practice (KAP) survey: A quantitative method (predefined questions formatted in standardised questionnaires) that provides access to quantitative and qualitative information. It can be used to measure the extent of a known situation, confirm or disprove a hypothesis, enhance understanding of particular themes, and identify what is known and done about various subjects. A KAP survey records what was said (the respondents’ opinions rather than their actions) and is based on declarative statements.

Advantages: KAP surveys reveal perceptions or misperceptions, as well as potential barriers to behaviour change. In the PVE context, KAP surveys can be used to assess perceptions of violence, religion, exclusion, etc. They provide quantitative measures which can be tracked as indicators of change, and can help measure the effectiveness of activities designed to change behaviours. They can also be used to identify potential intervention strategies and activities that reflect local circumstances and cultural factors.

Disadvantages: Training in surveying is necessary, and the costs of employing a survey team and analysing the data can be high. KAP surveys record opinion (what was said by the respondent) and may not reflect actions or behaviour. This carries a risk of bias, as participants may not give honest answers to sensitive PVE questions or may not participate (especially in written forms). Data need to be triangulated and contextualised with qualitative methods (e.g. FGDs or KIIs).

Grievance, activism, and radicalism scales: These scales aim to test the relationship between grievance, activism and radicalism through questions about past engagement and future intention to engage in activism and radicalism, and questions about grievance (e.g. against a government or a particular group). They have been used to assess whether past grievance, activism and radicalism can predict future activism and radicalism.

Advantages: Intention to engage in activist activity in the future has been found to correlate with future radical intention. The scales assess personal or group grievance, which in some cases (not all) can be predictive of both past activism and radicalism and of future activism.

Disadvantages: Often not adapted to context; needs tailoring to the specific context and to the needs and experience of the group.
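To show how scale-based survey responses such as these can be turned into a trackable indicator, here is a minimal sketch in Python that averages Likert-style item scores per respondent and compares baseline and endline rounds. The item names, scoring range and figures are hypothetical illustrations, not part of any published scale.

```python
# Minimal sketch: scoring Likert-style survey items (1 = strongly disagree,
# 5 = strongly agree) and comparing mean scores between survey rounds.
# Item names and responses are hypothetical.

from statistics import mean

def respondent_score(answers: dict) -> float:
    """Average one respondent's item scores into a single scale score."""
    return mean(answers.values())

def round_mean(responses: list) -> float:
    """Mean scale score across all respondents in one survey round."""
    return mean(respondent_score(r) for r in responses)

baseline = [
    {"creative_ways": 2, "control_reaction": 3},
    {"creative_ways": 3, "control_reaction": 2},
]
endline = [
    {"creative_ways": 4, "control_reaction": 3},
    {"creative_ways": 3, "control_reaction": 4},
]

print(f"Baseline mean: {round_mean(baseline):.2f}")
print(f"Endline mean:  {round_mean(endline):.2f}")
print(f"Change:        {round_mean(endline) - round_mean(baseline):+.2f}")
```

In practice, validated scales specify their own scoring rules (including reverse-coded items), so the published scoring guidance should take precedence over a simple average like this.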


Table 13: Pros and cons of participatory methods in a PVE context

2. Participatory methods

Participatory methods engage a group in forming the process and generating data; they typically seek or expect to change the perception or knowledge of the participants.

Most significant change (MSC): A participatory approach developed for complex evaluations, involving generating and analysing personal accounts of change and deciding which of these accounts is the most significant, and why. There are three steps:

  1. Deciding the types of stories that should be collected.
  2. Collecting the stories and determining which stories are the most significant.
  3. Sharing the stories and discussing values with stakeholders for learning.

Advantages: MSC is particularly useful when you need different stakeholders to understand the different values that other stakeholders hold in terms of “what success looks like”: criteria and standards for outcomes, processes and the distribution of costs and benefits. In a PVE context, this allows for a nuanced understanding of VE dynamics from a community perspective.

Disadvantages: MSC works best in combination with other data collection and analysis methods which capture broader social or structural change, as it is limited in collecting impact-level data. MSC requires access to communities to collect their change stories; when working remotely or with hard-to-reach populations in PVE, this access may be hampered. It can also be resource-intensive to gather.

 


 

Community score cards (CSCs): A quantitative participatory tool most often used to solicit community members’ perceptions on the ‘quality, efficiency and transparency’ of community service providers and their performance at the local level. Service providers may include implementers, donors, companies, or local institutions, such as police departments, government and district officials, and the judiciary. CSCs provide a mechanism for actors associated with an intervention to receive feedback on their behaviour, attitude or conduct (see the aggregation sketch after this table).

Advantages: This tool can promote and empower local perspectives by enabling communities to comment directly on services. It is useful for assessing sensitivities to local dynamics, gathering perspectives within the community and identifying changes in attitudes towards particular groups. For example, CSCs could reveal that services are only being provided in one part of a community, thereby exacerbating local tensions. CSCs can also be useful for remote monitoring; for example, the results can be used to check that programmes are being implemented as intended by directly consulting the beneficiaries.

Disadvantages: A tool best suited to the local level. It is not recommended at the national or macro scale because of the degree of facilitation, mobilisation, follow-up, and analysis required.
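As an illustration of how community score card ratings could be organised for comparison across rounds, the sketch below aggregates hypothetical scores by provider and criterion. The provider names, criteria and values are invented and do not come from the toolkit.

```python
# Minimal sketch: aggregating hypothetical community score card ratings (1-5)
# by survey round, service provider and criterion, so changes between rounds
# can be compared. All records are invented for illustration.

from collections import defaultdict
from statistics import mean

# Each record: (round, provider, criterion, score given by a community group)
records = [
    ("baseline", "district office", "transparency", 2),
    ("baseline", "district office", "quality", 3),
    ("baseline", "police post", "transparency", 2),
    ("endline", "district office", "transparency", 4),
    ("endline", "district office", "quality", 3),
    ("endline", "police post", "transparency", 3),
]

scores = defaultdict(list)
for rnd, provider, criterion, score in records:
    scores[(rnd, provider, criterion)].append(score)

for rnd, provider, criterion in sorted(scores):
    avg = mean(scores[(rnd, provider, criterion)])
    print(f"{rnd:9s} | {provider:16s} | {criterion:12s} | mean score {avg:.1f}")
```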

Qualitative data collection tools for PVE

Qualitative data collection methods are useful in helping to answer the ‘why’ behind data from quantitative methods, and can provide more in-depth information. Qualitative methods can also be used to help identify hypotheses and explore assumptions around PVE (for example, at design stage for the programme ToC) or evaluation questions, and to inform and test survey design.

Table 14: Pros and cons of counterfactual evaluation methods in a PVE context

3. Counterfactual evaluation designs

These designs seek to quantify the impact of a programme compared to the counterfactual, i.e. what would have happened had the programme not occurred. Using measurements from the same tool (e.g. a questionnaire) to make comparisons, either between groups/individuals or over time, is fundamental to these approaches. Designs include the following:

Experimental design, or randomised controlled trial (RCT): RCT or randomised impact evaluation designs assign participants randomly to either an intervention/treatment group or a comparison group. Assigning people randomly (using specific methods of randomisation) to treatment or comparison minimises group differences that may bias results: the only difference between the groups should be the intervention you wish to evaluate.

Advantages: Minimises bias and isolates the impact of your intervention, giving the best estimate of the counterfactual. Randomisation can be to different interventions, and/or intervention vs. no intervention. Randomisation can also be at cluster level (such as school or district) if suitable for the programme (see the sketch after this table).

Disadvantages: Requires significant planning and expertise, and can be costly. Must be designed into the programme from the start. RCTs do not allow for programme adaptation and limit flexibility, which is a challenge in conflict and fragile contexts. Random assignment of participants is often impractical in PVE contexts where it is hard to access target groups (e.g. issues of sufficient numbers or attrition). Large sample sizes can be needed to give statistical power and therefore reliable results. Implementers often question the ethics of assigning people to treatment randomly rather than based on need; however, there are ways to manage this concern (e.g. a ‘waitlist’ design, where individuals assigned to the control group participate in the programme at a later date).

Quasi-experimental design: Participants are assigned to treatment and comparison groups through a matched design rather than at random. Individuals, communities or other units are matched on various factors to ensure that any attributable differences between the groups post-programme are due to the treatment and not another variable.

Advantages: A well-selected comparison group gives most of the advantages of an RCT. Additionally, it is possible to match people later in programme implementation, after treatment assignment, which increases practicality.

Disadvantages: Non-random assignment means that bias may be built into group selection; for example, assignment by the time of day people visit a clinic may accidentally sample groups with different demographics or working patterns.

Non-experimental design: Does not use a comparison group, but takes a measurement before and after an intervention.

Advantages: An effective way of measuring change that occurred during an intervention. Less complex than other approaches.

Disadvantages: The lack of a control or comparison group makes it harder to detect whether external circumstances have affected results, and any change is harder to attribute to your intervention.
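To make the counterfactual logic concrete, the following sketch randomly assigns a hypothetical sample to treatment and comparison groups and estimates the effect as the difference in mean endline scores. All names and numbers are invented, and a real design would also require power calculations and significance testing.

```python
# Minimal sketch, with invented data: random assignment to treatment and
# comparison groups, then a simple difference-in-means estimate of the effect
# on a survey outcome (e.g. a 1-5 attitude score).

import random
from statistics import mean

random.seed(42)  # reproducible assignment for the illustration

participants = [f"respondent_{i}" for i in range(20)]
random.shuffle(participants)
treatment, comparison = participants[:10], participants[10:]

# Hypothetical endline outcome scores collected with the same questionnaire;
# treated respondents are given a small simulated boost for illustration.
endline = {p: random.uniform(2.0, 4.0) + (0.5 if p in treatment else 0.0)
           for p in participants}

effect = mean(endline[p] for p in treatment) - mean(endline[p] for p in comparison)
print(f"Estimated effect (difference in means): {effect:.2f}")
```

For a non-experimental design, the same outcome measure would instead be compared before and after the intervention for the participant group only, which is why attribution is weaker.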

Using RCTs to assess the impacts of vocational training and cash transfers on youth support for political violence in Afghanistan

Mercy Corps, the Political Violence FieldLab at Yale University and Princeton University, supported by the United States Institute of Peace, undertook a randomised controlled trial to test the impact of a youth employability programme and cash transfers on youth attitudes toward, and willingness to support, political violence in Kandahar Province, Afghanistan. The US-funded INVEST programme's primary goal was to help vulnerable Afghan youth develop skills that are responsive to local market needs and to help them secure economic opportunities.

The study tested whether a programme designed explicitly to improve economic outcomes can also affect support for political violence. The main components of the programme were technical and vocational education and training (TVET). Unconditional cash transfers (UCT) were provided as an additional intervention to a random subsample of participants to test the effects of cash transfers on economic and violence outcomes. Additionally, Mercy Corps tested how the interventions affected psychosocial wellbeing and perceptions of the government in the short term.

Data were collected at baseline, at endline upon participants' completion of the TVET courses, and in a post-programme survey six to nine months after course completion.

The findings showed:

  • Vocational training by itself had no impact on youth support for political violence, despite helping to improve economic outcomes six to nine months post intervention. Even after experiencing those improvements, youth still showed no change in support for political violence.
  • Cash transfers reduced willingness to support violent groups in the short term; however, these positive effects quickly dissipated. Six to nine months later, the effect was reversed, with youth who only received the cash transfer registering slightly higher support for armed opposition groups.
  • The combination of vocational training and cash transfers resulted in a large reduction in willingness to engage in pro-armed opposition group actions six to nine months post intervention.

Source: J. Kurtz, B. Tesfaye and R. J. Wolfe, Can economic interventions reduce violence? Impacts of vocational training and cash transfers on youth support for political violence in Afghanistan, Washington DC: Mercy Corps, 2018.

Table 15: Pros and cons of selected tools for organising and analysing data

4. Selected tools for organising and analysing data


Media content and discourse analysis: Monitoring and analysis of different media sources and outlets, looking at content, language, visuals and tone. Discourse analysis then examines the key assumptions underpinning the discourses that influence social constructions and interactions.

Advantages: Helps measure the unmeasurable by analysing the relationship between discourse and societal effects. This could be suitable for programmes working with media or public figures and on counter-narratives.

Disadvantages: Requires significant expertise to undertake the analysis, and requires building up a picture of analysis over time to identify patterns. Data protection, online safety and rights to expression need to be considered. A distinction needs to be made between online and offline personas (whom people interact with online and in the ‘real world’).

Contribution analysis: Assesses a programme’s achievement towards an outcome. Contribution analysis focuses on questions of ‘contribution’, specifically the extent to which observed results (positive or negative) are the consequence of a programme. It involves six steps:

  1. Set out the attribution problem to be assessed.
  2. Develop a ToC/logic model.
  3. Populate the model with existing data and evidence.
  4. Assemble and assess the ‘performance story’.
  5. Seek out additional evidence.
  6. Revise the ‘performance story’.

Advantages: Contribution analysis is particularly useful for situations where designing an ‘experiment’ to test cause and effect is impractical, which is likely in PVE contexts. It is also useful when the programme has been funded on the basis of a relatively clearly articulated ToC and where there is little or no scope for varying how the programme is implemented.

Disadvantages: Contribution analysis requires experience and expertise, as well as resources to go through the six-step process. It does not uncover an implicit or unarticulated ToC. It does not provide definitive proof, but provides evidence for whether the programme has made an important contribution to the documented results.

Qualitative coding: Coding is a fundamental task in most qualitative projects. It involves gathering all the material about a particular theme or case into a node (branch/pattern/theme) for further exploration (see the coding sketch after this table).

Advantages: Enhances qualitative analysis, allowing for more nuanced and systematic analysis of qualitative data.

Disadvantages: Requires time (transcribing, coding and analysing). Software, such as NVivo, reduces the time required and increases efficiency, but licences can be expensive.
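As a simple illustration of the coding step, the sketch below uses keyword matching to tag hypothetical interview excerpts with thematic nodes and counts how many excerpts fall under each node. The codebook, keywords and excerpts are invented; in practice coding is usually done manually or in dedicated software such as NVivo, with far richer categories and judgement than keyword matching allows.

```python
# Minimal sketch: keyword-based tagging of hypothetical interview excerpts into
# thematic nodes, then counting excerpts per node. Codes and text are invented.

from collections import Counter

codebook = {
    "grievance": ["unfair", "ignored", "corrupt"],
    "trust_in_institutions": ["police", "council", "court"],
    "economic_pressure": ["job", "income", "debt"],
}

excerpts = [
    "We feel ignored by the council and there are no jobs for young people.",
    "The police treated us fairly this time.",
    "Debt keeps growing and the system feels corrupt.",
]

node_counts = Counter()
for text in excerpts:
    lowered = text.lower()
    for node, keywords in codebook.items():
        if any(word in lowered for word in keywords):
            node_counts[node] += 1  # one count per excerpt per node

for node, count in node_counts.most_common():
    print(f"{node}: {count} excerpt(s)")
```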


Micro-narratives for PVE – Jordan case study

UNDP Jordan piloted the use of a micro-narratives approach for evaluating its livelihoods programme, which has a PVE objective. The micro-narratives methodology involves project beneficiaries in interpreting their own stories for quantitative analysis, plotting responses as data points on triads (triangles) and dyads (ranges).

The research involved a series of questions, such as: “What was a recent interaction you had, either positive or negative, with local authorities?” The interviewee might then be presented with a triad with one trait at each vertex (i.e. ‘discriminatory’, ‘helpful’, ‘incompetent’) and asked to pinpoint where on the triad the local authority in the micro-narrative sits. After all interviews have been conducted, the method can produce a quantitative representation of community attitudes and behaviours. Conducted repeatedly over time, these surveys can demonstrate shifts in community attitudes and behaviours.
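To illustrate how triad placements can yield quantitative data, the sketch below treats each placement as three weights summing to one (one per vertex trait) and averages them across respondents. The traits and values are invented for illustration and are not drawn from the UNDP Jordan pilot or its tooling.

```python
# Minimal sketch: each triad placement is expressed as three weights that sum
# to 1.0, one per vertex trait; averaging the weights across respondents gives
# an aggregate picture that can be compared between survey rounds.
# Traits and values are invented for illustration.

from statistics import mean

TRAITS = ("discriminatory", "helpful", "incompetent")

# One tuple per respondent's placement on the triad (weights sum to 1.0).
placements = [
    (0.6, 0.1, 0.3),
    (0.2, 0.5, 0.3),
    (0.4, 0.2, 0.4),
]

for i, trait in enumerate(TRAITS):
    avg = mean(p[i] for p in placements)
    print(f"{trait}: average weight {avg:.2f}")
```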

Self-analysis by respondents helps to reduce interpretation bias on the part of the researcher. However, micro-narratives are a new tool in this context and need further refinement to ensure the approach is appropriate and useful for assessing PVE outcomes.

During implementation, analysis of the data from the micro-narratives approach was found to raise further questions. The data gave useful information on beneficiaries' perceptions of their situation and relationships; however, it did not explicitly address assumptions in the ToC related to the PVE context. Options for addressing these gaps could include the following:

  • Based on the data collected and implementation so far, review the programme ToC to make assumptions around PVE explicit and evidence these assumptions with available data. Carry out additional targeted PVE context analysis if required. If feasible, use this review process as part of a mid-term review to adapt the programme to contribute more explicitly to PVE objectives.
  • 'Check out' data gaps through additional consultation with beneficiaries, partners and experts to help contextualise data received and guide areas for further investigation.
  • Identify and test specific hypotheses related to PVE in the context (such as the relationship between unemployment and vulnerability) with additional data collection. Use data collection methods (such as additional focus group discussions, interview or surveys), and triangulate these data with micro-narratives data collected previously.
  • Consider gaps and untested assumptions in programme evaluation design and include these in the evaluation TOR and methodology.
