One of several insights from the recently published Survey and Analysis of the Data Requirements for Stakeholders in African Agriculture is that people tend to feel their data is the best, the truest, the most suitable data for informing the decisions of others and that everyone else’s data is a bit suspect. Which is perhaps why the wide availability of vast amounts of data does not appear to be greatly improving intervention and policy decisions.
The Survey and Analysis helps explain why that is. And it helps a great deal that this is a well-written, insightful and easy-to-read paper with important messages. Each and every reader will take away a message that resonates with them, and for me that message is:
Information has value for supporting a decision only if it reduces the chance, or the cost, of being wrong.
Many project planners, managers and researchers will reply, “But of course my project, my research does just that!” Most would be wrong.
Daniel Kahneman in Thinking, Fast and Slow presents decades of research on ‘cognitive biases’, one of which is ‘expert bias’. Kahneman’s research shows that while there is such a thing as expertise, it operates within fairly narrow parameters. This and other research across many fields demonstrates quite clearly that quantitative decision analysis methods overwhelmingly outperform expert judgment in identifying the economic value of information and improving decisions.
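To make the idea of the “economic value of information” concrete, here is a minimal sketch of the standard decision-analysis calculation behind it, the expected value of perfect information (EVPI): the most a decision maker should rationally pay for data before collecting it. The scenario and all numbers are hypothetical, invented purely for illustration; they do not come from the survey.

```python
# Hypothetical decision: which crop variety to plant, under uncertain weather.
# payoffs[action][state] = net return in arbitrary units (all numbers invented).
payoffs = {
    "plant_new_variety": {"wet": 120, "dry": 40},
    "plant_old_variety": {"wet": 80,  "dry": 70},
}
p = {"wet": 0.5, "dry": 0.5}  # prior beliefs about the weather state

# Without information: choose the action with the best expected payoff now.
expected = {a: sum(p[s] * v for s, v in outcomes.items())
            for a, outcomes in payoffs.items()}
best_without_info = max(expected.values())

# With perfect information: learn the state first, then pick the best action
# for that state, averaged over how likely each state is.
best_with_info = sum(p[s] * max(payoffs[a][s] for a in payoffs) for s in p)

# EVPI is the gap: how much being right more often is actually worth.
evpi = best_with_info - best_without_info
print(f"EVPI = {evpi}")
```

The point of the exercise is exactly the message above: data earns its cost only to the extent that it changes a decision. If both actions paid the same in every state, EVPI would be zero, however precise the forecast.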
Most people, excepting experts, are inclined to agree, more or less, with that statement, until it hits home. In most fields of research, agriculture included, one of the fundamental biases that prevents so much research from being put to good use is that experts are quite confident they know best what we should be researching and what data we should be generating. As the authors so diplomatically put it:
“…developments in information technology have not been matched by impacts on improving policy decisions. It is possible that there are important differences between what researchers say they need to collect or use and what is most likely to impact decisions?”
The results of the survey suggest that ‘yes’, there are important differences. Those differences are, in part, a result of the ‘scalar disconnect’ between the interests and perceptions of stakeholder groups.
“Different stakeholder groups are routinely making sets of critical decisions related to agriculture. For example, donors and research managers decide on which projects to fund and how to monitor impacts; research scientists decide on what to research, which research design to use and what to measure; regional organizations decide on which programs to support and promote; governments decide on which policies, programs and projects to implement, and what to measure to track their performance; universities decide on how to allocate resources among departments and what material to include in their curricula; the private sector decides on which R&D projects to invest in and which products to promote; development NGOs decide on which practices to extend where; farmers decide on which crops to plant, when and how much area to plant, and what inputs to buy. All of these examples are decisions characterized by the need to allocate scarce resources among alternative actions under conditions of large uncertainty in outcomes. Applied research should support these decisions.”
As the survey shows, applied research does not always support those decisions. One of the several insights from the survey suggests why: stakeholders tend to pursue and use their own, newly collected data first. Which is why one of the survey recommendations is that:
Awareness of key decisions and what data is needed to support them should be widespread and routine. Clearly, some researchers will be more concerned with fundamental research than applied research and perhaps there are cases where sponsors recognize that. But in cases where researchers are interested in improving development outcomes – or where their sponsors believe they should be – researchers should be consistently apprised of what data they should be gathering and how it informs decisions. The habit of collecting data for the sake of having data is a practice that should be discouraged in view of limited resources.
There are many other valuable insights and recommendations here for interested readers who feel it’s time our thinking caught up with the technology. Rather than spending millions of dollars generating yet more data, perhaps it’s time we considered “better stewardship of data” and taking into consideration what the eyes of other beholders are looking for in the data.
“We welcome efforts to better connect data collection and dissemination efforts to critical development decisions through the wider use of scientifically sound methods for prioritizing measurements”. Rachel Lambert, DFID Senior Livelihoods Adviser
Citation: Clapp, A.; DauSchmidt, N.; Millar, M.; Hubbard, D.; Shepherd, K. A survey and analysis of the data requirements for stakeholders in African agriculture. World Agroforestry Centre (ICRAF), Nairobi, Kenya (2013) 28 pp.
This report was produced, as part of DFID’s contribution to the G8 initiative on Open Data for Agriculture, by the World Agroforestry Centre (ICRAF) on behalf of the CGIAR Program on Water, Land & Ecosystems, in partnership with Hubbard Decision Research.
The authors gratefully acknowledge financial support from DFID and helpful review comments from Professor Margaret Gill (DFID). We thank Grace Muinga for her dedicated support in web research, communications with stakeholders, maintaining stakeholder databases, and arranging interview schedules.