Which algorithm for international aid and development planning?
The academic field of development has proliferated and matured to the point where there is literature analysing almost every distress in contemporary society. Yet those problems are far from eradicated, because translating academic commentary into policy reform and project implementation is a Sisyphean task: the contextual complexities of a situation form an ever-changing force that evolves faster than the pace of scientific inquiry. It is a common mantra in development planning that “context matters”, and that legitimate observation is also the first reason given when a well-planned project fails to deliver the results that some donor somewhere generously paid for. For many international projects, the subsequent explanation is familiar: the donors are usually too far removed to understand the context when issuing directives, while the responsible agents on the ground have neither the time nor the capacity to meticulously study the contextual elements. This is an oversimplification of the theory-to-practice implementation gap in development aid, but it is not untrue. A recently released Project Performance Database, with scorecards for 14,000 aid projects from nine international development agencies, revealed that nearly 40 percent of all projects were deemed failures and only 14 percent were considered truly successful (Honig, 2018). Instead of joining the crowded debate on the best ways to “deliver aid differently” (Fengler and Kharas, 2010) or to better manage development projects (Diallo and Thuillier, 2005), this survey article will continue with the simplified model of analysis to present three popular modes of planning and implementation, drawing new parallels with cognitive science’s models of value representation and decision computation to reframe the comparison of appropriateness and efficacy – arguing that there is no simple catch-all solution for ensuring effective delivery of results, and that context also matters a great deal when picking a planning strategy.
Target-setting and adaptation of best practices
Perhaps the most straightforward way of achieving a goal is to establish targets, ideally ones that are specific, measurable, achievable, realistic, and timely, and then to follow a concrete plan for getting there. That is how the majority of development organisations operate, since there is a plethora of worthwhile goals, best practices, and solid implementation frameworks readily available, oftentimes recommended by multilateral institutions such as the UN agencies, the World Bank, and the OECD, or by the bilateral development funds of affluent countries (Diallo and Thuillier, 2005; Rodrik, 2008). A major attraction of setting clear goals and targets is the foundation they provide for performance benchmarking, which offers an agreeable solution to the increasing demands for impact evaluation of development initiatives while streamlining the reporting process for the responsible parties (Bamberger, Rao and Woolcock, 2010). In terms of planning and implementation, the common association between measurable impacts and the menu of best practices makes this approach appealing to donors who prefer the ease of managing straightforward portfolios, where funds can be tied to a deliverable and each dollar can be assigned a projected impact value. Additionally, many elite institutions are concerned that most aid recipients are “not operating on the efficiency frontier” (Collier and Dollar, 2001), such that significant reform based on their own tried-and-true models would be necessary to increase effectiveness. That instrumental rationality of following best practices for the best outcomes holds remnants of Rostow’s (1960) catch-up hypothesis and Kerr et al.’s (1960) convergence theory of economic development, which held that less developed countries could progress towards the desired state of more productive economies via a growth pathway trail-blazed by developed nations, as long as the right value incentives were in place.
It is tempting to suggest that positive societal impacts are the most important incentives prompting behaviour and decisions in development; in practice, however, funding and institutional legitimation are the sources of sustenance for organisational survival and operation in this ecosystem (Meehan and Jonker, 2019; Krause, 2013). While positive societal impacts remain the orienting mission, development goals are often translated into action as deliverables, which are in turn defined as a set of conditions to be met and evaluated against. Those responsible for implementing projects are rewarded for the deliverables rather than for completing the ‘mission’, a reasonable arrangement given that each organisation can contribute only a fraction to the global agenda and that there has to be some measurable indicator of positive impact for ‘development’ to be operationalised. This systematic preference for clear targets and measurable outcomes naturally promotes the convergence pathway via best practices. Adopting vetted practices aligned with internationally recognised normative targets ensures legitimation for the donors and, consequentially, monetary rewards for agents on the ground (Gulrajani and Calleja, 2019). There is also the social reward: a powerful sense of purpose associated with joining a movement of collective action and sharing the burden of work with peers on the same path.
This culture of action-based valuation and rewards is analogous to ‘model-free reinforcement learning’ in cognitive science and behavioural psychology. In a model-free computational system, the agent quickly navigates a set of decisions using existing data on the positive or negative value assigned to each action, based on past experience of similar cases, without a comprehensive analysis of the larger puzzle or outcome. The agent ranks its options primarily to optimise for rewards, analysing actions one step at a time within a limited scope and with flexibility constrained by the set of moves adjacent to its immediate state; this leaves no room for exploring the holistic puzzle but is extremely fast and computationally cheap. In building this model, the reinforcement learning comes from testing and evaluating each action based on the presence or absence of reward – for a rat, whether the button produces food; for a machine, whether the action reduces the number of steps between start and finish. After repeated trials generate enough data to reinforce the model, all possible actions can be assigned relatively accurate values to feed an even faster algorithm. Problem solving and reinforcement learning happen concurrently, so the model is updated every time the agent tries a pathway; however, the values assigned to actions correspond only to local rewards and are not adapted to changes in the larger puzzle, because the model-free agent holds no holistic causal structure of how actions relate sequentially and how the sequence relates to the outcome (see Cushman, 2013, pp. 276-279).
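To make the mechanism concrete, here is a minimal sketch of the textbook model-free update rule (Q-learning) on a toy problem. The action names, reward values, and hyperparameters are hypothetical illustrations rather than anything drawn from the cited literature; the point is only that the agent caches one value per state-action pair from experienced rewards and never represents how states connect.

```python
import random

# Toy action set and hyperparameters (all hypothetical placeholders).
ACTIONS = ["up", "down", "left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # learning rate, discount, exploration rate

Q = {}  # cached action values, learned purely from experienced rewards

def q(state, action):
    # Unseen state-action pairs default to a neutral value of 0.
    return Q.get((state, action), 0.0)

def choose_action(state):
    # Epsilon-greedy: mostly exploit whichever adjacent move has the best cached value.
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q(state, a))

def update(state, action, reward, next_state):
    # Reinforcement happens one local step at a time: the cache records
    # "how rewarding was this action here", not how states causally connect.
    best_next = max(q(next_state, a) for a in ACTIONS)
    Q[(state, action)] = q(state, action) + ALPHA * (
        reward + GAMMA * best_next - q(state, action)
    )

# A single experienced transition nudges the value of that one action:
update("start", "right", reward=1.0, next_state="midway")
```

The speed and the blindness go together: selecting an action is a cheap table lookup, but nothing in `Q` tells the agent what would happen if the puzzle itself changed.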
The desired effect of having a highly efficient computational model to weigh and select appropriate actions is that limited time and resources can be diverted into executing development programs rather than into analysing possible interventions. This is particularly helpful where small or young organisations face new challenges without the institutional knowledge or the capacity to navigate the uncertainties of trying new ideas (DiMaggio and Powell, 1983). Following an already established model is the best way for a new agent to learn about the problem terrain, and from that experience to register its own valuations of those choices for future reference. The adaptation of best practices has also proven effective for projects that aim at simple interventions, such as ensuring public health through vaccination (Levine and Kinder, 2004) or raising standards of living via infrastructure upgrades (Kenny, 2011), because the assignment has already been clearly defined as the delivery of concrete outputs. At the scale of national policies, Dussauge (2010) and Krause et al. (2012) have noted that some developing countries were able to successfully replicate budget reforms by adopting facets of policies that worked in OECD countries. The wisdom leading to success is the ability to distinguish between adaptation and insincere mimicry, so as not to conflate form and function when choosing to follow the recommendations of best practice (Pritchett, 2012).
For more complex problems, which describe most of the development agenda, a major issue with action-based valuation and rewards is the difficulty of specifying an algorithm that produces an optimal sequence of choices leading to the desired outcome without having tried all the combinations a number of times (Cushman, 2013). This type of learning is feasible in machine learning with enough time and computing power, but it is highly unreasonable, and potentially unethical, when applied back to the development context, where interventions take time and have serious consequences for human lives. Probably the largest criticism of this action-based optimisation is that the system incentivises isomorphic mimicry without guaranteeing meaningful results. Adopting best practices for legitimacy can produce an ineffective program that inflicts harm when applied in the wrong context, or it can hurt the organisation by stretching it beyond its capacity to perform “technically optimal interventions” just to meet the donor’s requirements (Fukuda-Parr, Yamin and Greenstein, 2014; Pritchett, Woolcock and Andrews, 2010). After analysing case studies of programs directly prompted by the quantitative targets of the Millennium Development Goals, Fukuda-Parr et al. (2014) found that the oversimplification of human development into a set of measurable deliverables and recommended actions succeeded in inciting mass mobilisation but, more significantly, led to unintended consequences. Two serious negative impacts were the diversion of attention away from more complex dimensions of human rights, and the rise of conceptually narrow, siloed programming. One of the architects of the MDGs later recognised that while the goals might have been defined on normative grounds – reflecting consensus on important political objectives for the global society – the quantitative targets and indicators were guided by technocratic criteria that dictated the means and ends of development without input from those ultimately receiving assistance (Vandemoortele, 2011).
Participatory planning
Many who criticise the best-practice approach as technocratic and top-down favour participatory planning and governance, which place emphasis on the involvement of stakeholders in a project from beginning to end. The idea of participation is ingrained in community development practitioners, who typically seek to enable ‘change from below’ (e.g. Ife, 2009; Dent, Dubois and Dalal-Clayton, 2013) and to oppose the dominant ideologies of the 1960s and 1970s that preached convergence and efficiency frontiers. The initial focus was on including the poor and other marginalised voices in conversations concerning their well-being (Annis and Hakim, 1988), because the conventional model of single-directional planning was accused of causing more unintentional harm than meaningful impact (Easterly, 2006). Now, participatory planning is considered a key mechanism for resolving conflict, building consensus, and ensuring success in any development project (Gaventa and Barrett, 2012). The main principles of effective participatory planning include diversity of involvement, equity in allocating resources, transparency in communication, openness to new ideas and different points of view, and accountability in carrying out policies and actions (Dent et al., 2013). The emphasis on diversity of perspectives responds to the recognition that each stakeholder has valuable insights and opinions to contribute to the understanding of the complexities around a given issue. Community participation in program and policy planning is particularly important for a more democratic redistribution of development priorities (de Sousa Santos, 1998), as it can theoretically shift the deliberation of decisions horizontally, across sectoral interest groups, ministries and communities, as well as vertically, from national to local levels, from leaders to marginalised groups and to everyone in between (Bass et al., 1995).
In the dual-system theory of cognitive processing, the counterpart of model-free is model-based reinforcement learning and decision making. The latter analyses contextual information about the final desired outcome and then assigns value to each action based on how it contributes to achieving that result. Unlike the model-free algorithm, agents in a model-based system learn by building a causal model of the world they occupy, evaluating each action on its relationship to the final outcome as well as its sequential relationship to other actions in the chain. In technical language, “this internal outcome-based representation includes information about all the states, the actions available in each state, the transitions to new states produced by selecting each action, and the reward associated with each state” (Cushman, 2013, p. 277) – unsurprisingly at a very high computational cost. The most valuable aspect of model-based reasoning is its goal-oriented policy, naturally suited to far-sighted planning and accountability in decision making rather than short-sighted reward optimisation. With adequate information, the agent can always choose the best pathway to the expected outcome, provided the contextual elements of the system remain the same as when the data was collected. Even when the model is altered, the algorithm can flexibly recalculate from the new parameters, because the agent understands how all actions and states interrelate (see Cushman, 2013, pp. 276-279). The power to compute is technically limitless, and the options to compute are equally vast because additional information can always be added for more accuracy; but comprehensive calculation without set boundaries is the core problem of model-based reasoning. As complexity increases, so do the computational demands and the chances for inefficiency, and at some point the marginal gains in accuracy no longer justify the cost (Kool, Gershman and Cushman, 2017).
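In contrast to the earlier sketch, a model-based agent plans over an explicit causal map. The toy implementation below uses simple value iteration; the states, transitions, and reward figures are hypothetical placeholders. What it shows is that once the model is held internally, a change in the world requires only editing the model and re-planning, never re-experiencing every pathway.

```python
GAMMA = 0.9  # discount factor (hypothetical)

# Explicit world model: state -> action -> (next_state, reward).
# The agent knows every state, the actions available in it, the transitions
# they produce, and the rewards attached, per the quoted definition.
MODEL = {
    "start":    {"left": ("dead_end", 0.0), "right": ("midway", 1.0)},
    "midway":   {"back": ("start", 0.0),    "right": ("goal", 10.0)},
    "dead_end": {},  # terminal
    "goal":     {},  # terminal
}

def value_iteration(model, sweeps=50):
    # Repeatedly back up outcome values through the causal structure.
    V = {s: 0.0 for s in model}
    for _ in range(sweeps):
        for s, actions in model.items():
            if actions:  # terminal states keep their value
                V[s] = max(r + GAMMA * V[s2] for (s2, r) in actions.values())
    return V

def best_action(model, V, state):
    # Far-sighted choice: rank actions by their contribution to the final outcome.
    return max(model[state],
               key=lambda a: model[state][a][1] + GAMMA * V[model[state][a][0]])

V = value_iteration(MODEL)
print(best_action(MODEL, V, "start"))  # -> "right", chosen for the distal goal
```

The cost scales with the size of the model: every additional state and action enlarges each planning sweep, which is precisely the boundary problem described above.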
Similar to how a model-based agent must construct an internal representation of the puzzle before solving it, development planners must have a holistic view of the challenges at hand before proposing solutions. Instead of reinforcement learning by trial and error, humans can learn by listening to others who hold different viewpoints on the same matter, piecing together the puzzle from the top decision-making agents down to those at the bottom. Because each stakeholder is responsible for, and knowledgeable about, different stages of project planning, implementation, and end-use, bringing them into dialogue at the same table is analogous to playing out different scenarios of how one decision upstream sparks reactions downstream, or how new information from downstream changes the calculations up top. It is straightforward to see how soliciting inputs from a broad diversity of perspectives, especially those far removed from the starting point, affords better decision making overall. However, one practical challenge of participatory planning mirrors that of designing a model-based system: it requires far more time and resources for a procedure that may prove ineffective if too few people participate, or inefficient if too many opinions are registered. The analogy breaks down here, because a computational model, however long it takes, will eventually produce some numerical estimate for each pathway by which to rank optimality; the same cannot be guaranteed for qualitative stakeholder inputs.
Dent et al. (2013, p. 95) pinpointed that “the dilemma of public authorities is that they both need and fear people’s participation”. The fear of, and inability to handle, the complexity resulting from community participation can lead policy makers to substitute passive consultation for true engagement. The case studies reviewed by Daniels and Walker (1997) revealed that in both developed and developing countries, participation has been used more as a means of information-gathering or communication than of shared decision-making – an observation that still holds in development today. Many scholars have criticised shoddy participatory planning as a mere technique for increasing input into formalistic decisions (Quick and Feldman, 2011; Monno and Khakee, 2012). Sihlongonyane (2015) called out the empty signifiers of democracy and transformation in planning documents, while others noted that questioning of the extent to which citizens’ participation can ever challenge dominant trajectories has reached a point of conceptual ‘crisis’ (Legacy, 2016; Carolini, 2017). The effectiveness of a broad range of perspectives in contributing data and insights towards a holistic representation of the puzzle can only be realised if there is genuine input from genuine stakeholders – if community participation is reduced to tokenism and passive consultation, there is no basis to justify the high cost of public engagement and model-based computation.
Problem-Driven Iterative Adaptation
Comparing model-free and model-based systems on their characteristics alone might give the misleading impression that the former is less useful than the latter because of its rudimentary processes. In the cognitive science literature, there is increasing recognition that the successful operation of each system largely depends on interactions with the other (Cushman, 2013). The first co-dependent algorithm is one through which model-free valuation screens actions for inclusion in the more complex computational model (Dayan, 2012), effectively facilitating the process by reducing the arbitrarily large scope of causal calculations from all available options down to only those worth considering (Bornstein and Daw, 2011; Graybiel, 2008). The second, in reverse, is that representations of potential but unrealised rewards generated by a causal outcome-based model can be used to calibrate the action-based values in a model-free system, similar to judging decisions based on ‘observational’ or ‘instructed’ knowledge – that is, on information derived from watching others or listening to external advice (Cushman, 2013). The difference between this type of learning and insincere isomorphic mimicry is that the holistic model-based values are not blueprints for the steps leading to immediate reward. This type of information sharing allows the agent to form a better picture of all the options that have been tried in the past and their expected effectiveness in reaching the ultimate outcome, without having to solve the puzzle independently.
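Both interactions can be given the same toy treatment. In the hypothetical sketch below, which reuses the structures of the earlier sketches, cached action values screen which options are worth feeding into the planner (the first interaction), and planner-derived outcome values are written back into the action-value cache (the second); the pruning threshold and the seeding rule are illustrative assumptions, not claims about the cited models.

```python
GAMMA = 0.9  # discount factor (hypothetical)

# Cached model-free action values, as if learned from past experience.
Q = {("start", "left"): -0.5, ("start", "right"): 2.0}

# Explicit causal model: state -> action -> (next_state, reward).
MODEL = {
    "start":    {"left": ("dead_end", 0.0), "right": ("goal", 10.0)},
    "dead_end": {},
    "goal":     {},
}

def prune_actions(state, threshold=0.0):
    # Interaction 1: cheap cached values decide which actions even enter
    # the expensive model-based computation.
    kept = {a: t for a, t in MODEL[state].items()
            if Q.get((state, a), 0.0) >= threshold}
    return kept or MODEL[state]  # never prune every option away

def seed_action_values(V):
    # Interaction 2: outcome-based values from the causal model recalibrate
    # the model-free cache, like learning from observation or instruction
    # rather than first-hand trial and error.
    for state, actions in MODEL.items():
        for action, (next_state, reward) in actions.items():
            Q[(state, action)] = reward + GAMMA * V[next_state]

print(prune_actions("start"))  # only "right" survives the screening
seed_action_values({"start": 0.0, "dead_end": 0.0, "goal": 0.0})
print(Q[("start", "right")])   # cache recalibrated to 10.0 without new trials
```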
Applying these dynamics to the development policy domain, it is conceivable that a mix of holistic outcome-based decision making and more localised action-based valuation is also beneficial. While research on human cognition and machine learning is still exploring the manifestations and applications of these interactions (Cushman, 2013), a group of development practitioners at the Harvard University Center for International Development has synthesised its own framework that harnesses both the agility of model-free action valuation and the contextual apprehension of model-based outcome valuation – without identifying it as such. As the name suggests, their Problem-Driven Iterative Adaptation (PDIA) framework is a process of facilitated emergence that focuses on understanding problems and follows a step-by-step process allowing for flexible learning and adaptation, as opposed to a rigid plan for solution implementation. The architects of PDIA, Andrews, Pritchett and Woolcock (2013), believe that the investment required to find the ‘optimal’ solution for a complex development agenda is too costly in time, resources, and political capital to be realistic. Furthermore, the emphasis on implementing the ‘right’ solution risks pushing organisations towards isomorphic mimicry (Pritchett et al., 2010). Policy solutions are, then, conceptualised “as a puzzle [that emerges] over time, given the accumulation of many pieces” (Andrews et al., 2013, p. 13). Building on the incrementalist approach of Lindblom (1959), they argue that testing small, “relatively cheap” steps towards solving local problems within a larger context of reform is an effective way to deliver relevant and politically acceptable results while creating space for active learning and flushing out systemic constraints. The necessary experimentation requires mechanisms that capture lessons and ensure these inform future programs (Andrews et al., 2013, p. 16) – similar to the mechanism of a model-free reinforcement learning algorithm calculating action-based values to inform and increase the efficiency of model-based problem solving.
PDIA also places high value on collecting ‘observational’ and ‘instructed’ knowledge from a diversity of stakeholders and decision makers, both vertically, between the organisation and the community, and horizontally, between siloed departments and interest groups. One of the key elements of PDIA is the ‘sprint session’, a facilitated workshop to which all agents related to a particular problem are invited to ask questions, share opinions, and even exchange best practices. Without placing too much emphasis on framing the background context and the associated design requirements for optimal solutions, this process of ‘facilitated emergence’ is meant to collect information equivalent to the potential but unrealised rewards generated by a causal outcome-based model, one already built internally by those who know the problem context well. The assumption is that the implementing agency need not work through the process from scratch but can re-create a fairly accurate understanding of the concerns and challenges on the ground simply by listening to those who have already spent a long time thinking about those exact concerns and challenges – similar to the mechanism of using model-based data to inform more accurate model-free decision making.
The guidelines for PDIA seem straightforward and intuitive enough, but the approach poses several obstacles that are not always easily managed in the context of implementing development projects. First, it demands a great deal of time and active participation from a large number of agents, both from the community and from official agencies. This challenge resembles that of conventional participatory planning, but PDIA requires a greater diversity of participants and, more importantly, their commitment to stay with the process – something that may be unrealistic for local participants who cannot afford the time, or for NGO and government workers stretched across a multitude of projects. Second, and relatedly, the leading agent needs an authorising environment that allows them to request that kind of participation from others. This is perhaps possible in a governmental setting but is quite difficult in an independent development project in which different agents answer to different agencies. Even where there is one authoritative voice, that person must have enough appetite for quick failures in the name of learning, because that is the other tenet of problem-driven adaptation and reinforcement learning. As noted earlier in the discussion of model-free reinforcement learning via repetition, testing ideas with the expectation of learning from failures may not be appropriate for programs that directly affect human lives and well-being. These are a few of the reasons why PDIA has primarily been promoted as a tool for governmental and institutional reform rather than as an ideal solution for all development planning.
Conclusion
The brief descriptions and explanations above by no means capture the nuances of each of these three development planning models. However, drawing an analogy with value computation, reinforcement learning, and decision making provides an interesting way to view the advantages and disadvantages of each – serving as a reminder that no model is inherently good or bad; it all depends on the challenges at hand and the resources available to solve them. Sometimes straightforward target setting and execution is the most efficient way to deliver results, and it should not be villainised for doing so. On the other hand, community participation is in vogue, but it should not consume all of the time and resources if the puzzle does not necessitate detailed analysis or if proper engagement cannot be guaranteed. PDIA is an interesting example of an innovation that attempts to produce a better hybrid, but it presents obstacles and pitfalls of its own. Understanding the context is truly the first and most important step in approaching international development and aid delivery, not just to figure out what to do in terms of a solution, but how best to go about it.
Furthermore, representing implementation strategies as computational models highlights an interesting characteristic: many of these mechanisms can run on autopilot if not treated with deliberate intention. Action-based and outcome-based valuation are both shaped by rewards and by the architecture of the algorithm, which are vulnerable to implicit and explicit biases. There is literature on behavioural psychology and biases in the development field, and some have even begun examining the “socio-cognitive constructions of international development policies and implications for development management” (Morrison, 2010), but it only scratches the surface. Several points in this paper noted that parallel comparison to cognitive science alone is inadequate; more research is needed to understand how policy makers and development agents process and navigate choices, in order to discern how they act and make decisions within a given implementation model, which might provide more insight for choosing and crafting the right hybrid for project planning and delivery in the future.
Click here for the full Bibliography