Field Monitoring and Data Collection Services

Monitoring Data

Monitoring – collecting and managing data

This section looks at the different tools and approaches for collecting and managing the information needed for monitoring. The methods fall into two broad types:

  • Methods to be used in real time by managers and practitioners to collect data throughout the process: these generally relate to output, uptake, and more immediate outcome measures, as these tend to be more tangible and observable.
  • Methods oriented towards intermediate and longer-term outcome measures: these require more time and are generally used retrospectively.

Real-time data collection methods

Generally, if the intervention is very brief and engagement with individuals is very limited (e.g. through the broadcast media), the data available for collection will be thin and may need to be supplemented with data from discrete studies.

The deeper the engagement, the more in-depth the information you can collect in real time – and the more important these methods become. Here are some of the methods.

Journals and logs

One of the most basic ways of capturing information is by keeping a journal of observations, trends, quotes, reflections, and other information. Logs are usually quantitative and simple – for example, the number of people attending an event or airtime during a radio show.

Journals are more descriptive, and either structured with a specific format and fields to be filled in (such as progress against predefined measures or changes in contextual factors) or unstructured, allowing the author to record comments.

They can be physical notebooks carried by team members or electronic (a website, database, intranet, email inbox, or even a mobile app).

One example is ODI’s ‘M&E log’, to which all staff members can contribute by sending an email to a dedicated inbox; the information is then stored on the institute’s intranet.

The unstructured approach makes it very easy for staff to submit evidence of uptake of research outputs and feedback from audiences, but it does require effort to maintain, systematize, and use.

The Accountability in Tanzania program collects journals from its 20-plus partners, each reporting on the outcomes of up to eight different actors, to understand their influence at the national and local levels in Tanzania. It asks for journals to be submitted only twice a year and has developed a database to organize the information, enable analysis and identify patterns.
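As an illustration only (the program’s actual database design is not described here, and all field names below are hypothetical), a minimal store for twice-yearly journal entries might look like this sketch using SQLite:

```python
import sqlite3

# Minimal, hypothetical sketch of a journal-entry store; field names are
# assumptions made for illustration, not the program's real schema.
conn = sqlite3.connect("journals.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS journal_entries (
        id INTEGER PRIMARY KEY,
        partner TEXT NOT NULL,      -- which of the 20-plus partners submitted the entry
        actor TEXT NOT NULL,        -- which of the up to eight actors the entry reports on
        period TEXT NOT NULL,       -- reporting period, e.g. '2014-H1' (twice-yearly submission)
        level TEXT,                 -- 'national' or 'local'
        outcome_description TEXT    -- free-text description of the observed outcome
    )
""")
conn.commit()

# Example query: count reported outcomes per actor to help identify patterns.
for actor, n in conn.execute(
    "SELECT actor, COUNT(*) FROM journal_entries GROUP BY actor ORDER BY COUNT(*) DESC"
):
    print(actor, n)
```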

After-action review

The RASA developed after-action reviews as a technique for debriefing on a tactical maneuver. They have since been adapted to organizational use and are commonly applied as part of a learning system. An after-action review is typically used after an activity has taken place, bringing together the team to reflect on three simple questions: what was supposed to happen, what actually happened, and why were there differences?

They are designed to be quick and light – not requiring a facilitator, an agenda, or too much time – and collect any information that might otherwise be forgotten and lost once the event passes. Therefore, they should be included as part of the activity itself and scheduled right at the end. Like a journal, notes from the meeting should be filed away and brought out at the next reflection meeting.

A variation on the after-action review is the ‘intense period debrief’, developed by the Innovation Network in the US as a method for advocacy evaluations. The richest moments for data collection in any policy-influencing intervention are likely to be the busiest – such as when mobilizing inputs into a parliamentary committee hearing or responding to media attention.

Data collection methods should adapt to this. The intense period debrief unpacks exactly what happened in that busy time, who was involved, what strategies were employed, how the intervention adapted, and what the outcomes were, without interrupting the momentum of the intervention.

Surveys

Surveys can be useful for obtaining stakeholder feedback, particularly when interventions have limited engagement with audiences. They are most appropriate for collecting data on uptake measures, since uptake is about reactions to and uses of intervention outputs.

Surveys can also be used for outcome measures, but the timing has to be considered since outcomes take time to emerge. If a survey template is set up prior to an intervention, it can be relatively quick and easy to roll out after each event or engagement. This could be automated with an online service like SurveyMonkey – you just provide the link to your audiences.

System or relational mapping

When the outcomes desired relate to how a system operates – for example, building relationships between actors, shifting power dynamics, changing the environment in which a policy is developed, or improving information access or flows – then it can be useful to map that system to see how the different parts fit together. The data required for this are relational (i.e. to do with relationships, connections, and interactions) rather than attributional (i.e. to do with facts, opinions, behaviors, and attitudes).

These data are usually collected through standard techniques such as surveys, interviews, and secondary sources. By asking about the existence and nature of relationships between actors, a very different picture emerges of what the system looks like. This can easily be turned into a visual map to help identify patterns and new opportunities for influence.
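As a hedged sketch of how relational data might be turned into a simple visual map (the actors and relationships below are invented for illustration), one option is to build a graph with Python’s networkx library:

```python
import networkx as nx
import matplotlib.pyplot as plt

# Hypothetical relational data: (actor A, actor B, type of relationship).
# In practice these would come from surveys, interviews, or secondary sources.
relationships = [
    ("Research institute", "Ministry of Health", "advises"),
    ("Research institute", "Local NGO", "funds"),
    ("Local NGO", "Community groups", "trains"),
    ("Media outlet", "Ministry of Health", "reports on"),
    ("Community groups", "Media outlet", "informs"),
]

G = nx.DiGraph()
for source, target, relation in relationships:
    G.add_edge(source, target, relation=relation)

# Draw a simple map of the system to help spot patterns and gaps.
pos = nx.spring_layout(G, seed=42)
nx.draw_networkx(G, pos, node_color="lightblue", node_size=1500, font_size=8)
nx.draw_networkx_edge_labels(
    G, pos, edge_labels=nx.get_edge_attributes(G, "relation"), font_size=7
)
plt.axis("off")
plt.show()
```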

One particular method is NetMap, an interactive approach that allows interviewees to use physical objects and colored pens to describe relationships between actors and their relative influence on a particular issue. It can be a useful variation if the aim is to gain perspectives across a system or network.

Another variation is influence mapping, which asks specifically about the influence one actor has on the opinions and actions of another.

An influence map can show the primary and secondary (and if needed tertiary) influences on a key decision-maker. This can help in planning or adapting influencing strategies or identifying possible individuals to consult for a bellwether interview.
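A minimal sketch of how primary and secondary influences on a decision-maker might be derived from such data follows; the actors and links are purely illustrative assumptions.

```python
# Hypothetical mapping from an actor to those who influence them (illustrative only).
influencers_of = {
    "Minister": ["Permanent secretary", "Party leadership"],
    "Permanent secretary": ["Technical advisers", "Donor group"],
    "Party leadership": ["Regional party offices"],
}

decision_maker = "Minister"
primary = influencers_of.get(decision_maker, [])
secondary = sorted(
    {
        actor
        for influencer in primary
        for actor in influencers_of.get(influencer, [])
        if actor != decision_maker and actor not in primary
    }
)

print("Primary influences on", decision_maker, ":", primary)
print("Secondary influences on", decision_maker, ":", secondary)
```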

Another case study method is the episode study, which looks at the different mechanisms leading to a change. These are not systematic assessments of how much each factor has contributed to the change, but they are nonetheless very labor- and evidence-intensive. The steps are the same as for stories of change, except that the evidence-gathering stage investigates any and all factors influencing the change, including but not limited to the intervention.

Select the Appropriate Data Collection Method

A data collection method refers to the procedure by which data are collected. Quantitative data collection methods produce countable or numerical results. Qualitative data collection methods produce non-numerical data, such as perceptions and descriptions. While performance monitoring is often associated with quantitative indicators, data collection methods for performance monitoring may be either quantitative or qualitative. The sections below present a snapshot of some commonly used data collection methods for performance monitoring:

RASA Commonly Used Data Collection Methods

Recording Data Through Administrative Actions

Recording data through administrative actions in the course of implementing activities is one of the most common methods of data collection, particularly for our implementing partners. Examples include recording attendance at training courses, grants awarded to local organizations, hours of technical assistance provided, and deliveries of food aid. Recording data through administrative actions is primarily a method of quantitative data collection.

Electronic Data Harvesting

Electronic data harvesting encompasses the collection of electronically generated data. This could include records of people’s actions in an online environment (e.g., number of downloads), engagement via texts or apps on mobile devices, social media data (e.g., “tweets” on Twitter), or data generated from cell phones and other mobile devices (such as human mobility data). Electronic data harvesting is a method of quantitative data collection.
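As a small hedged example (the file path, log format, and the assumption that downloads appear as requests for .pdf files are all invented for illustration), download counts could be harvested from a standard web server access log as follows:

```python
import re
from collections import Counter

# Assumption: a web server access log in common log format, with report
# downloads appearing as successful GET requests for .pdf files.
LOG_FILE = "access.log"
download_pattern = re.compile(r'"GET (\S+\.pdf) HTTP/[\d.]+" 200')

counts = Counter()
with open(LOG_FILE) as log:
    for line in log:
        match = download_pattern.search(line)
        if match:
            counts[match.group(1)] += 1

# Report downloads per output, e.g. to feed an uptake indicator.
for path, n in counts.most_common():
    print(f"{path}: {n} downloads")
```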

Survey and Assessment

A survey comprises a structured series of questions that respondents are asked according to a standard protocol. A survey tends to include mostly closed-ended questions, but can also include open-ended questions (e.g., to explore or better understand responses to closed-ended questions). A survey is conducted on a total population of interest (census), on a representative sample of the population of interest that through statistical methods can be generalized to the total population, or on a non-representative sample of the population of interest that is not generalizable. Surveys are primarily a method of quantitative data collection, though survey questions can be either quantitative or qualitative in nature, and can measure coverage (i.e., who received an intervention), satisfaction, perceptions, knowledge, attitudes, and reported actions or behaviors.
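To illustrate what generalizing from a representative sample can mean in practice, here is a minimal sketch of estimating a population proportion with a 95% confidence interval; the figures are invented for illustration.

```python
import math

# Invented figures: 320 of 800 sampled respondents report having used
# the intervention's outputs.
sample_size = 800
positive_responses = 320

p_hat = positive_responses / sample_size                 # sample proportion
standard_error = math.sqrt(p_hat * (1 - p_hat) / sample_size)
margin_of_error = 1.96 * standard_error                  # 95% confidence level

print(f"Estimated proportion: {p_hat:.2%}")
print(f"95% confidence interval: {p_hat - margin_of_error:.2%} to {p_hat + margin_of_error:.2%}")
```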

In-depth Interview (IDI)

An in-depth interview is usually conducted one-on-one by an interviewer who asks an interviewee about their knowledge, experiences, feelings, perceptions, and preferences on a certain topic. IDIs can also be conducted with a group, though this may not always be appropriate or optimal. The interviewer relies on a structured, semi-structured, or unstructured question guide or list of themes/points to be discussed and often encourages a free flow of ideas and information from the interviewee. A Key Informant Interview (KII) is a type of IDI, whereby an interviewee is selected for their first-hand knowledge of the topic of interest or geographical setting (e.g., community). IDIs are a method of qualitative data collection.

Focus Group Discussion (FGD)

A focus group discussion involves a skilled moderator who stimulates discussion among a group of individuals to elicit experiences, feelings, perceptions, and preferences about a topic. The moderator uses a list of topics to be discussed, ensures all voices are represented, and keeps the discussion on track. Focus group data may include information about body language, group dynamics, and tone, in addition to what is said. Typically, groups comprise 6-12 purposively selected participants; however, size and selection techniques may vary. Focus groups differ from group interviews in format, how they are facilitated, who may be chosen to participate, and the types of data that come out of the process. FGDs are a method of qualitative data collection.