In governance and management it can be defined as the duty to ensure and report that the use of authority is aligned with the rules, standards, policy and interests of the program, organization and the broader group of stakeholders. In the context of research for development, accountability can be seen as the obligation to take responsibility for performance in light of commitments, to the extent that performance is in the control of the program/project. Accountability requires ownership and acceptance of responsibility and the ability to deliver or influence the delivery of the desired results.
Describes reaching a positive result (see also: Results).
A measurable amount of work performed to convert inputs (i.e. time and resources) into outputs.
Acquisition and use of social, institutional or technological innovations.
Assessment of the adoption/use of a CG output and/or methodology in order to make a case for CGIAR contribution to the outcome, relative to other potential influencing factors.
An ex‐ante assessment of the quality, relevance, feasibility and potential for impact and sustainability of a research program or activity, usually prior to a decision on funding it.
Conditions that must be present for the causal chain behind an intervention to hold. These often relate to factors, risks or context which could affect the progress or success of a development intervention.
The causality between observed (or expected) changes and an output from research or related activity. Attribution refers to both isolating and estimating the particular contribution of a program/project to the outcome/impact.
Financial and management audit in the CGIAR provides accountability to management at the level of the Center Boards, Consortium and Fund Council on finances and assets, and also provides elements of oversight in human resources and business efficiency. Some audits also ensure compliance with other regulations such as genebank standards.
An analytical description of the situation prior to research activities, against which progress can be assessed or comparisons made.
Objectivity and impartiality on the part of evaluators (which is not guaranteed by structural independence; for example evaluators may be reluctant to be critical of people they think may provide them with future contracts).
Reference point or standard against which performance or achievements can be assessed. Note: A benchmark refers to the performance that has been achieved in the recent past by other comparable organizations, or what can be reasonably inferred to have been achieved in the circumstances.
The individuals, groups, or organizations, whether targeted or not, that benefit, directly or indirectly, from the chain of events that research has contributed to.
CIMMYT receives funding on a bilateral basis for specific projects (see also Window 1, 2 and 3 funding).
An estimate of funds allocated for development. CGIAR funding includes Window 1 (W1), Window 2 (W2), Window 3 (W3) and bilaterally raised funds by CGIAR centers, excluding leveraged funds from non-CG partners.
- W1-2: Funds allocated by the CGIAR Fund Council to a CRP.
- W3: Funds allocated by Fund Donors to a Center.
- Bilateral: Funds allocated by donors to a Center outside of the CGIAR Fund.
The budget put forth through CCAFS; it covers Window 1 (W1), Window 2 (W2), Window 3 (W3) and bilaterally raised funds by CGIAR centers, excluding leveraged funds from non-CG partners.
- W1: Funds to support the entire CGIAR program portfolio including through CRPs as well as to proposals from the consortium for support to other critical activities that are vital for successful implementation of the strategy and results framework.
- W2: Funds designated to one or more of the CRPs
- W3: Donor funds specifically earmarked to a CRP through a center
- Bilateral: Funds raised directly by CGIAR Centers through concept notes and proposals. It is the Centers, not the donors, who decide to which CRP their bilateral funds are mapped.
CGIAR platforms underpin the research of the CGIAR system portfolio. Four platforms exist: one for management of gene banks and CGIAR genetic resources policy; one as a technological platform to accelerate research on all commodities – particularly to accelerate genetic gain for yield improvement; a third in the area of the management of the ever-increasing data from many fields of research and means to cross-reference and analyze this (Big Data); and a fourth, a collaborative platform focused on Gender, housed with the CRP "Policies, Institutions and Markets" (PIM).
The primary clients of evaluation – those requesting or receiving the evaluation (for example senior managers or donors of a CRP). Elsewhere it is often used to refer to the target group for a research project.
Evaluation of a set of related activities, projects and/or programs.
Cluster of Activities
Structural components of a CRP flagship project. They are the summary description of a range of key outputs that are linked and related to each other e.g. through their contribution towards an outcome.
A multi-stage sample design, in which a sample is first drawn of geographical areas (e.g. sub-districts or villages), and then a sample of households, firms, facilities or other units is drawn from within the selected areas. The design results in larger standard errors than would occur in a simple random sample, but is often used for reasons of cost.
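The two-stage draw described above can be sketched in a few lines of Python; the village and household identifiers below are hypothetical, and the fixed seed is used only to make the illustration reproducible:

```python
import random

def two_stage_sample(areas, n_areas, n_units, seed=42):
    """Draw a two-stage cluster sample: first sample geographic areas,
    then sample units (e.g. households) within each selected area."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    selected_areas = rng.sample(sorted(areas), n_areas)
    return {a: rng.sample(sorted(areas[a]), n_units) for a in selected_areas}

# Hypothetical sampling frame: 10 villages, each with 20 household IDs
frame = {f"village_{i}": [f"hh_{i}_{j}" for j in range(20)] for i in range(10)}
sample = two_stage_sample(frame, n_areas=3, n_units=5)
```

Every sampled household belongs to one of the three selected villages, which is what concentrates fieldwork (and lowers cost) at the price of larger standard errors.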
In economic terms, a comparative advantage in producing or selling a good is possessed by an individual, firm or country with the lowest opportunity cost (as opposed to absolute cost) in producing the good. In these standards the term refers more broadly to the role and mandate of the CGIAR in producing international public goods where there are no alternative research suppliers that are better positioned to produce those goods.
A group of individuals whose characteristics are similar to those of the treatment groups (or participants) but who do not receive the intervention. Under trial conditions in which the evaluator can ensure that no confounding factors affect the comparison group it is called a control group.
Conclusions point out the factors of success and failure of the evaluated intervention, with special attention paid to the intended and unintended results and impacts, and more generally to any other strength or weakness. A conclusion draws on data collection and analyses undertaken, through a transparent chain of arguments.
Level of certainty that the true value of impact (or any other statistical estimate) will be included within a specified range.
Causal relationship in which an intervention is one of two or more causal elements leading, independently or in combination, to a change.
Special case of the comparison group, in which the evaluator can control the environment and so limit confounding factors.
Cost Benefit Analysis
Comparison of all the costs and benefits of the intervention, in which these costs and benefits are all assigned a monetary value. The advantage of CBA over cost-effectiveness analysis is that it can cope with multiple outcomes and allows comparison of the return to spending in different sectors (and so aids the efficient allocation of development resources).
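The core arithmetic of a CBA is discounting monetized cost and benefit streams to a net present value; a minimal sketch with hypothetical figures and an assumed 5% discount rate:

```python
def npv(flows, rate):
    """Net present value of yearly flows, discounted at `rate` (year 0 first)."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

# Hypothetical intervention: cost of 100 in year 0, benefits of 40/year for 4 years
costs = [100, 0, 0, 0, 0]
benefits = [0, 40, 40, 40, 40]
net = [b - c for b, c in zip(benefits, costs)]

bcr = npv(benefits, 0.05) / npv(costs, 0.05)  # benefit-cost ratio
```

A benefit-cost ratio above 1 (equivalently, a positive NPV of the net flows) indicates that discounted benefits exceed discounted costs.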
Extent to which the program has achieved or is expected to achieve its results at a lower cost compared with alternatives. Cost‐effectiveness analysis is distinct from cost‐benefit analysis, which assigns a monetary value to the measure of effect. In research programs costing of outputs is more feasible than outcomes that typically depend on conditions and activities outside of research.
Situation or condition which hypothetically may prevail for individuals, organizations, or groups were there no development intervention.
Describes issues in a project that cut across a CRP research agenda, which currently are limited to three: gender, youth, and capacity development.
Scientific credibility requires that research findings be robust and that sources of knowledge be dependable and sound. This includes a clear demonstration that data used is accurate, that the methods used to procure the data are fit for purpose, and that findings are clearly presented and logically interpreted. It also recognizes the importance of good scientific practice, such as peer review. One of four elements of "Quality of Research for Development" (see also: Relevance, Legitimacy, Effectiveness).
CRP (CGIAR Research Program)
CGIAR Research Programs are focused on two interlinked categories of agricultural research. The first of these is the “innovation in agri-food systems (AFS)” which involves adopting an integrated, agricultural systems approach to advancing productivity, sustainability, nutrition and resilience outcomes at scale. There are 8 AFS CRPs. The second category consists of 4 cross-cutting “global integrating programs (GIP)”, with CRPs framed to work closely with the eight agri-food systems CRPs within relevant agro-ecological systems. These CRPs will consider the influence of rapid urbanization and other drivers of change to ensure that research results deliver solutions at the national level that can be scaled up and out to other countries and regions.
A specific, time-bound, tangible information and knowledge product that is linked to an output. It is proof, in digital, electronic, physical or other kind of soft or hard copy of the completion of a set of activities. Examples of deliverables are: workshop reports, journal articles, datasets, training materials.
A variable believed to be predicted by or caused by one or more other variables (independent variables). The term is commonly used in regression analysis.
The breakdown of observations, usually within a common branch of a hierarchy, to a level more detailed than that at which the observations are taken.
Double loop learning
In double-loop learning the underlying causes behind the problematic action are questioned and then addressed or corrected. This leads to deeper understanding of assumptions and better decision-making in everyday operations.
The extent to which objectives have been achieved. The term is also used in the CGIAR to refer to an element of research quality concerning the extent to which research is designed and positioned to be effective.
A measure of how economically resources/inputs (funds, expertise, time, etc.) are converted to results. In a research context, assessment of efficiency refers to activities and outputs that are in the control of the research programs or cut across several CRPs, and takes into account the exploratory nature and risks inherent to research (see also Value for Money).
The final population that ultimately makes use of and is intended to benefit from the results of an intervention.
The systematic and objective assessment of an on‐going or completed project, program or policy, its design, implementation and results. In the CGIAR evaluation refers to an external, completely (IEA commissioned) or largely (CRP commissioned) independent and systematic study of an in‐depth nature that uses clear evaluation criteria. In addition to research, it applies also to central CGIAR institutions, support programs and themes, and the System as a whole. An evaluation should provide information that is credible and useful, enabling the incorporation of lessons learned into the decision‐making processes of major stakeholders.
Different aspects of quality of a program which are used internationally to develop evaluation questions and serve as a check that all major issues have been considered. In the CGIAR these include relevance, efficiency, quality of science, effectiveness, impact and sustainability.
Evaluation reference group
A structure set up to work with the evaluation managers to ensure good communication with, learning by, and appropriate accountability to primary evaluation clients and key stakeholders, while preserving the independence of evaluators.
The information presented to support a finding or conclusion. Such evidence should be sufficient and relevant. There are several sources of evidence: observational (obtained through direct observation of people or events); documentary (obtained from written information); analytical (based on computations and comparisons); self-reported (obtained through, for example, surveys); and experiential (based on professional understanding and expertise accumulated over time). All evidence should draw on credible and legitimate sources.
An evaluation that is performed before implementation of a development intervention.
Evaluation of a development intervention after it has been completed. Note: It may be undertaken directly after or long after completion. The intention is to identify the factors of success or failure, to assess the sustainability of results and impacts, and to draw conclusions that may inform other interventions.
The evaluation of a development intervention conducted by entities and/or individuals outside the donor and implementing organizations.
Focuses on program/project implementation and is improvement-oriented.
Flagship Project (FP)
A major organizational component of a CRP, and an area of research that, together, with other interrelated Flagships, forms a CGIAR Research Program. Each delivers CRP outputs and outcomes through Clusters of Activities (See Clusters of Activities).
Global public goods
These are defined as goods with the following three economic properties: 'non-rivalrous' (i.e. consumption of this good by anyone does not reduce the quantity available to others), 'non-excludable' (it is impossible to prevent anyone from consuming it) and available worldwide.
A specific statement regarding the relationship between variables. In an impact evaluation the hypothesis typically relates to the expected impact of the intervention on the outcome.
A change in state resulting from a chain of events to which research outputs and related activities have contributed. Some examples: crop yield, farm productivity, household wealth (state), income (flow), quality of water (state), water flow (flow).
Studies that estimate the causal effects of research outputs and related activities on one or more development parameters of interest. Assessing the costs of the intervention vis-à-vis the magnitude of impact achieved is important as well. Impact assessments are usually carried out after scaling has taken place and typically include estimates on the extent of use/adoption of the intervention as well as development outcomes (see also Impact evaluation).
Studies that estimate the causal effects of an intervention on one or more development parameters of interest. Assessing the costs of the intervention vis-à-vis the magnitude of impact achieved is important as well. Recently, the term impact evaluation has been used to describe studies which use experimental/quasi-experimental methods conducted to determine whether an intervention ought to be considered for scaling up (see also Impact assessment).
The causal pathway for a research project or program that outlines the expected sequence to achieve desired objectives beginning with inputs, moving through activities and outputs, and culminating in outcomes and impacts. Assumptions underpinning the causal chain and feed‐back loops are usually included (Closely related terms include Logical Framework and Theory of Change.)
In conducting an evaluation, the absence of bias in due process, in the scope and methodology, and in considering and presenting achievements and challenges. The principle applies to the clients of the evaluation, donors and partners, management, beneficiaries, and the evaluation team.
An evaluation that is carried out by entities and persons free from the control of those involved in policy making, management, or implementation of program activities. This entails both organizational and behavioural independence, protection from interference, and avoidance of conflicts of interest.
A variable believed to cause changes in the dependent variable, usually applied in regression analysis.
A quantitative or qualitative variable that represents an approximation of the characteristic, phenomenon or change of interest (for instance, efficiency, quality or outcome). Indicators can be used to monitor research or to help assess for instance organizational or research performance.
Research and development innovations are new or significantly improved (adaptive) outputs or groups of outputs - including management practices, knowledge or technologies. This could also refer to a significant research finding, method or tool. A significant improvement is one that allows the management practice, knowledge or technology to serve a new purpose or a new class of users to employ it, for example a new variety, a blend of fertilizer for a particular soil type, or a tool modified to suit a particular management practice (see Outputs).
The financial, human, and material resources used in research.
Intermediate Development Outcome (IDO)
Intermediate development outcomes constitute a hierarchical results level within the CGIAR Strategic Results Framework (SRF) which is just lower than the system-level impact (SLI) level, and higher than the sub-intermediate development outcome results area. IDOs and sub-IDOs contribute to system level impacts (see System Level Impacts).
Evaluation of a development intervention conducted by a unit and/or individuals reporting to the management of the donor, partner, or implementing organization.
International public goods
See Global public goods
A deliverable is interoperable when:
- The deliverable is published on a repository that implements a metadata schema (in the CGIAR, the CGCore Metadata Schema, which is based on the Dublin Core standard).
- The deliverable is published on a repository that implements an interoperability protocol (e.g. CGSpace https://cgspace.cgiar.org/ or Dataverse).
- The (meta) data uses a formal, accessible, shared, and broadly applicable language for knowledge representation.
- The (meta) data uses vocabularies that follow FAIR principles (Findable, Accessible, Interoperable and Reusable).
- The (meta) data includes qualified references to other (meta) data.
One of four elements of "Quality of Research for Development". Means that the research process is fair and ethical and perceived as such. This encompasses the ethical and fair representation of all involved and consideration of interests and perspectives of intended users. It suggests transparency/lack of conflict of interest, recognition of responsibilities that go with public funding, genuine recognition of partners’ contributions as well as partnerships built on trust.
Generalizations based on evaluation experiences with projects, programs, or policies that abstract from the specific circumstances to broader situations. Frequently, lessons highlight strengths or weaknesses in preparation, design, and implementation that affect performance, outcome, and impact.
Logical framework / logical model
Links inputs and activities to outputs, outcomes and impacts in a visual presentation. Logic models do not provide insights into causality. The detail tends to be in the activity and output levels. Assumptions and risks that are part of a logical framework presentation tend to be outside the control of the program. Logic models follow an agreed presentational form.
Evaluation performed towards the middle of the period of implementation of the intervention. Related term: formative evaluation.
A time bound target that reflects progress towards a planned result. Milestones include both outputs, output use and outcomes along CRP impact pathways as appropriate to the scale and maturity of the work.
The use of both quantitative and qualitative methods in an evaluation design. Sometimes called Q-squared or Q2.
A process of continuous or periodic collection and analysis of data to compare how well a project, program, or policy is being implemented against expected progress and results, in order to track performance against plans and targets, to identify reasons for under or over achievement, and to take necessary actions to improve performance.
In the context of the CGIAR, this refers to the accountability of all partners, including donors, for the efficiency of outputs, outcomes and impacts of a program, institution or policy and sustainability of research.
Actors such as national research institutions, extension organizations, NGOs and others, who access CGIAR products directly and can help CRPs reach end-users.
NARES/ NARS (National agricultural research and extension systems or National agricultural research systems)
Includes organizations and institutions created and/or funded by the government as a support for the national program of agricultural development with the purpose of improving agricultural research, management, financing, and service delivery (extension services). They comprise a variety of public or private stakeholders (universities, civil society, farmers' groups, private sector) engaged in agricultural research and which promote linkages with institutions at national, regional and international level. It is important to distinguish these from academic and research institutes in general, as funders like to know the status of NARS/NARES partnerships specifically.
A change in knowledge, skills, attitudes and/or relationships, manifest as a change in behavior, to which research outputs and related activities have contributed.
Outcome case study
An outcome case study focuses on a particular unit - a person, a site, a project. It often uses a combination of quantitative and qualitative data. Outcome case studies can be particularly useful for understanding how different elements fit together and how different elements (implementation, context and other factors) have produced the observed changes in practice/ behaviour. The outcome cases reported may be at different stages of maturity and can be used for different purposes in evaluation and reporting to funders.
Knowledge, technical or institutional advancement produced by CGIAR research, engagement and/or capacity development activities. Examples of outputs include new research methods, policy analyses, gene maps, new crop varieties and breeds, institutional innovations or other products of research work.
Evaluation method in which representatives of agencies and stakeholders (including beneficiaries) work together in designing, carrying out and interpreting an evaluation.
Organizations or individual stakeholders that the CGIAR collaborates with to achieve its goals.
A process of review involving qualified individuals within the relevant field. Peer review methods are employed to maintain standards of relevance and quality to improve performance and provide credibility. A peer review may be an input into an evaluation.
The continuous process of setting goals, measuring progress, giving feedback, coaching for improved performance, and rewarding achievements.
The ongoing monitoring, measurement and reporting of program accomplishments and progress toward pre-established goals, which involves collecting data on the level and type of activities (inputs) and the products and services delivered by the program (outputs).
PPA (Program Participant Agreement)/ Managing Partners
The institutions – CGIAR and non CGIAR - with whom a CRP has a formal contract to participate in the CRP. Typically, these are the institutions that are receiving W1/W2 funds directly from the CRP.
Data observed or collected by a researcher from first-hand experience, specifically for the research project of interest.
An evaluation of the internal dynamics of implementing organizations, their policy instruments, their service delivery mechanisms, their management practices, and the linkages among these. Related term: formative evaluation.
Evaluation of a set of interventions, marshaled to attain specific global, regional, country, or sector development objectives. Note: a development program is a time bound intervention involving multiple activities that may cut across sectors, themes and/or geographic areas. Related term: Country program/strategy evaluation.
Program Management Unit
The group of persons carrying out the day-to-day management and coordination of a CRP or CGIAR platform.
Evaluation of an individual development intervention designed to achieve specific objectives within specified resources and implementation schedules, often within the framework of a broader program.
The person in the lead and coordinating role for the project, who is ultimately accountable for the delivery of the project and for coordination with the project partners.
Quality assurance encompasses any activity that is concerned with assessing and improving the merit or the worth of a development intervention or its compliance with given standards. Note: examples of quality assurance activities include appraisal, RBM, reviews during implementation, evaluations, etc. Quality assurance may also refer to the assessment of the quality of a portfolio and its development effectiveness.
Impact evaluation designs used to determine impact in the absence of a control group from an experimental design. Many quasi-experimental methods create a comparison group using statistical procedures. The intention is to ensure that the characteristics of the treatment and comparison groups are identical in all respects, other than the intervention, as would be the case from an experimental design. Other, regression-based approaches have an implicit counterfactual, controlling for selection bias and other confounding factors through statistical procedures.
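One common way such methods construct a comparison group is matching; a deliberately crude one-dimensional nearest-neighbor sketch (the unit names and propensity scores below are hypothetical):

```python
def nearest_neighbor_match(treated, pool):
    """Match each treated unit to the comparison-pool unit whose observed
    characteristic (here a single score) is closest to its own."""
    return {name: min(pool, key=lambda p: abs(pool[p] - score))
            for name, score in treated.items()}

treated = {"t1": 0.30, "t2": 0.70}           # hypothetical propensity scores
pool = {"c1": 0.10, "c2": 0.32, "c3": 0.68}  # untreated comparison pool
matches = nearest_neighbor_match(treated, pool)  # {'t1': 'c2', 't2': 'c3'}
```

Real matching designs use many covariates and additional safeguards (common support, matching with replacement, balance checks); the point here is only the core idea of pairing each treated unit with its most similar untreated counterpart.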
An intervention design in which members of the eligible population are assigned at random to either the treatment group or the control group (i.e. random assignment). That is, whether someone is in the treatment or control group is solely a matter of chance, and not a function of any of their characteristics (either observed or unobserved).
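Random assignment is simple to express in code; a minimal sketch with hypothetical household IDs and an even split into treatment and control:

```python
import random

def random_assignment(units, seed=0):
    """Randomly split eligible units into treatment and control groups,
    so group membership is solely a matter of chance."""
    rng = random.Random(seed)  # fixed seed only to make the example reproducible
    shuffled = list(units)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

households = [f"hh_{i}" for i in range(100)]  # hypothetical eligible population
treatment, control = random_assignment(households)
```

Because assignment depends only on the shuffle, no observed or unobserved household characteristic can influence which group a household ends up in.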
Randomized control trial (RCT)
An impact evaluation design in which random assignment has been used to allocate the intervention amongst members of the eligible population. Differences in outcome between the treatment and control can be fully attributed to the intervention, i.e. there is no selection bias. However, RCTs may be subject to several types of bias and so need to follow strict protocols. Also called Experimental design.
Proposals aimed at enhancing the effectiveness, quality, or efficiency of a development intervention; at redesigning the objectives; and/or at the reallocation of resources. Recommendations should be linked to conclusions.
A statistical method which determines the association between the dependent variable and one or more independent variables.
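For the simplest case, one independent variable, the ordinary least squares fit has a closed form; a sketch with hypothetical, exactly linear data so the expected coefficients are known:

```python
def simple_ols(x, y):
    """Slope and intercept of y = a + b*x by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))  # covariance / variance
    a = my - b * mx
    return a, b

# Hypothetical data: fertilizer dose (x) vs. crop yield (y), exactly linear
x = [0, 1, 2, 3, 4]
y = [1.0, 1.5, 2.0, 2.5, 3.0]
a, b = simple_ols(x, y)  # expect a = 1.0, b = 0.5
```

Here y is the dependent variable and x the independent variable; with several independent variables the same idea generalizes to multiple regression.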
One of four elements of "Quality of Research for Development". Refers to the importance, significance and usefulness of the research objectives, processes and findings to the problem context and to society, and CGIAR’s comparative advantage to address the problems. It incorporates strategic stakeholder engagement along the AR4D continuum, explicit impact pathways, original and socially relevant research aligned to national and regional priorities, as well as the CGIAR SRF and SDGs. It also recognizes the importance of International Public Goods.
Consistency or dependability of data and evaluation judgements, with reference to the quality of the instruments, procedures and analyses used to collect and interpret evaluation data.
Independent verification of study findings. Internal replication attempts to reproduce study findings using the same dataset, whilst external replication evaluates the same intervention in a different setting or at a different time. Internal replication may be pure replication, which uses the same data and model specification, or may test robustness to different model specifications, estimation methods and software.
Research quality in the CGIAR is measured in four dimensions: relevance, scientific credibility, legitimacy and potential (ex-ante) or actual (ex-post) effectiveness (see Relevance, Credibility, Legitimacy, Effectiveness).
The output, outcome or impact of an intervention.
RBM is a management strategy by which all actors, contributing to achieving a set of results, ensure that their processes, products and services contribute to the achievement of desired results (outputs, outcomes and higher level goals or impact). The system entities in turn use information and evidence on actual results to inform decision making on the design, resourcing and delivery of programmes for accountability, adaptive management, and learning.
An assessment of the progress and performance of an intervention (including research), periodically or on an ad hoc basis. The words evaluation and review are often used interchangeably, but in the CGIAR, an evaluation refers to an external, completely (IEA commissioned) or largely (CRP commissioned) independent and systematic study of an in-depth nature using clear evaluation criteria, whereas reviews may be more flexible and narrow in focus.
An analysis or an assessment of factors (called assumptions in the logframe) that affect or are likely to affect the successful achievement of an intervention's objectives. A detailed examination of the potential unwanted and negative consequences to human life, health, property, or the environment posed by development interventions; a systematic process to provide information regarding such undesirable consequences; the process of quantification of the probabilities and expected impacts for identified risks.
Scaling up and scaling out
In agricultural development the terms are used nearly interchangeably to refer to the expansion of beneficial impacts from agricultural research and rural development. Scaling up/out relates to expanding, replicating, adapting, and sustaining successful policies, programs, or projects in geographic space or over time to reach a greater number of people. Scaling is typically preceded by piloting the model, idea or approach on a small scale. Scaling-out may refer specifically to the adoption and adaptation to local circumstances by users, while scaling-up may refer to extension and institutional support related to scaling.
Data that has been collected for another purpose, but may be reanalyzed in a subsequent study.
Potential biases introduced into a study by the selection of different types of people into treatment and comparison groups. As a result, the outcome differences may potentially be explained as a result of pre-existing differences between the groups, rather than the treatment itself.
Sex disaggregated data
Information differentiated on the basis of what pertains to women and their roles and to men and their roles.
Single loop learning
People, organizations or groups modify their actions according to the difference between expected and achieved outcomes in order to fix or avoid mistakes. Single-loop learning can also be described as a situation in which people observe the present situation, identify problems, errors, inconsistencies or impractical habits, and then adapt their behavior and actions to mitigate and improve the situation.
Sphere of control
An element of a conceptual model of the CGIAR RBM system that refers to actions under direct control of the program that result in outputs. Covers CGIAR research, innovations, services and output delivery.
Sphere of influence
An element of a conceptual model of the CGIAR RBM system that refers to actions that can be influenced directly by the program, defined as outcomes. Covers outcome research and sub-intermediate development outcomes (sub-IDOs).
Sphere of interest
An element of a conceptual model of the CGIAR RBM system that includes outcomes and impacts that can only be influenced indirectly by the program. Covers selected intermediate development outcomes (IDOs) and sub-intermediate development outcomes (sub-IDOs).
When the intervention has an impact (positive or negative) on units not in the treatment group. Ignoring spillover effects results in a biased impact estimate. If there are spillover effects, then the group of beneficiaries is larger than the group of participants. When the spillover affects members of the comparison group, this is a special case of contagion.
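The contagion case can be sketched with a small hypothetical simulation (all figures are invented for illustration): when comparison units capture part of the benefit, for example neighbours copying an improved practice, the difference in means understates the true effect.

```python
import random

random.seed(0)

TRUE_EFFECT = 10.0
SPILLOVER = 4.0   # benefit leaking to comparison units (contagion)

# Hypothetical baseline outcomes for 2000 units, split into two groups.
baseline = [random.gauss(50, 5) for _ in range(2000)]
treated = [y + TRUE_EFFECT for y in baseline[:1000]]
# Comparison units also gain part of the benefit through spillover.
comparison = [y + SPILLOVER for y in baseline[1000:]]

estimate = sum(treated) / len(treated) - sum(comparison) / len(comparison)
print(f"estimated effect: {estimate:.1f}  (true effect: {TRUE_EFFECT})")
```

Because the comparison group was partly "treated" by contagion, the estimate lands near the true effect minus the spillover, biased downward.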
Strategic Results Framework (SRF)
Defines CGIAR's vision and desired impacts and outcomes for 2016-2030.
Agencies, organizations, groups or individuals who have a direct or indirect interest in the CGIAR or one of its components, for instance a research program or its evaluation.
Sub-Intermediate Development Outcome (Sub-IDO)
Sub-intermediate development outcomes constitute a hierarchical results level within the CGIAR Strategic Results Framework (SRF), one level below the Intermediate Development Outcome level of results.
The collection of information using (1) a pre-defined sampling strategy, and (2) a survey instrument. A survey may collect data from individuals, households or other units such as firms or schools (see facility survey).
A pre-designed form (questionnaire) used to collect data during a survey. A survey will typically have more than one survey instrument, e.g. a household survey and a facility survey.
The continuation of benefits from a program intervention after research has been completed; the probability of continued long‐term benefits or scalability of the benefits; the resilience to risk of the net benefit flows over time.
The individuals or organizations for whose benefit CGIAR research or interventions are ultimately undertaken.
Unit of measure for a selected indicator.
The value expected for a selected indicator.
Terms of reference
Written document presenting the purpose and scope of the evaluation, the methods to be used, the standard against which performance is to be assessed or analyses are to be conducted, the resources and time allocated, and reporting requirements. Two other expressions sometimes used with the same meaning are “scope of work” and “evaluation mandate”.
Evaluation of a selection of development interventions, all of which address a specific development priority that cuts across countries, regions, and sectors.
Theory-based impact evaluation
A study design which combines a counterfactual analysis of impact with an analysis of the causal chain, which mostly draws on factual analysis.
Theory of Change (ToC)
An explicit, testable model of how and why change is expected to happen along an impact pathway in a particular context. A basic research-for-development ToC identifies the context and key actors in a system and specifies the causal pathways and mechanisms by which the research aims to contribute to outcomes and impacts. (Closely related terms include Logical Framework and impact pathway.)
The costs of planning, adapting and monitoring task completion. Transaction cost analysis includes comparison of transaction costs under alternative governance or operating structures.
The group of people, firms, facilities or other units that receive the intervention. Also called participants.
Triple loop learning
Learning how to learn by reflecting on how learning occurred in the first place. In this kind of learning, organizations, individuals or groups reflect on how they think about the rules themselves, not only on whether the rules should be changed. Also referred to as “double-loop learning about double-loop learning”.
The use of different sources or types of information, evaluators or types of analysis, to verify and substantiate an assessment, in order to overcome the potential bias that comes from a single source or method.
Value for Money
Achieving the best possible outcomes [over the life of an activity] relative to the total cost of effectively managing and resourcing that activity.
The extent to which the data collection strategies and instruments measure what they purport to measure.
Window 1 funding
Funding received from Funders without restriction and allocated by the CGIAR System Council, which sets CGIAR priorities and funding allocation.
Window 2 funding
Designated by Funders to specific CRPs and Platforms.
Window 3 funding
Allocated to specific Centers by Funders.
Women Empowerment in Agriculture Index
A survey-based instrument designed to measure the empowerment, agency and inclusion of women in the agriculture sector, and to identify the obstacles and constraints they face and ways to overcome them.
Last Update - January 30, 2019