Evaluation and communication approaches and methods keep interconnecting in my mind. Take the example of Michael Quinn Patton's Utilization-Focused Evaluation (the title of his 2008 book, now in its 4th edition), or UFE for short. In essence: "Utilization-focused evaluation is evaluation done for and with specific intended primary users for specific, intended uses. Utilization-Focused Evaluation begins with the premise that evaluations should be judged by their utility and actual use; therefore, evaluators should facilitate the evaluation process and design any evaluation with careful consideration for how everything that is done, from beginning to end, will affect use. Use concerns how real people in the real world apply evaluation findings, and experience the evaluation process. Therefore, the focus of utilization-focused evaluation is on intended use by intended users." (Patton, 2008: 37)

UFE is the subject of an action-research project in which I am involved, along with several colleagues. Funding is from the International Development Research Centre (IDRC, Ottawa). We have been coaching and mentoring five regional research projects in Asia. We are learning by doing. A notable experience has been the profound shift that takes place when a project manager is given the freedom to decide what she wishes to evaluate about her project, as opposed to the donor deciding the evaluation approach. We are all so conditioned by donor-imposed evaluation requirements that this offer comes as a surprise. It takes a while for the freedom to sink in. More on our lessons below, but first I'll digress into the field of communication for development.

In the communication for development field there is much discussion these days about approaches and methods for linking research with policy. A gathering of donors and analysts took place in London this past December. A central question of interest to this community is "How can research findings be made available and relevant to policy-making?" Much work has been done to map out the complex nature of this relationship (for good examples, see ODI's work on RAPID or IDS's Working Paper 335, 'Making science of influencing: Assessing the impact of development research'). The ODI and IDS work emphasizes that good 'sticky messaging' is about one third of the challenge, while the other two thirds involve 'knit-working' (building coalitions, connecting with champions) and 'strategic opportunism' (using windows of opportunity that open up, often unexpectedly). For research projects, managing these three components is no simple feat.

Central themes in communication planning include: understand the nature of the issue; map out who needs to be involved; introduce communication functions that respond to those needs; work with affordable, accessible, and tested methods and media; research the facts and key content; pretest any materials before dissemination; determine a range of outputs and outcomes; and implement, monitor, and improve. This listing is generic, as there are multiple variations, but you get the gist of the process.

UFE is organized into 12 guiding steps: project or network readiness assessment; evaluator readiness and capability assessment; identification of primary intended users; situational analysis; identification of primary intended uses; focusing the evaluation; evaluation design; simulation of use; data collection; data analysis; facilitation of use; and meta-evaluation. In our experience, these steps are often iterative and non-linear, much as in communication planning.

It does not take much imagination to see the linkages between communication planning and UFE. While some UFE steps seem to confirm the communication planning process (communicators pre-test materials; evaluators simulate data collection), others augment it (the notion of including a meta-evaluation in any communication process is appealing). Here, however, I turn to two principles of UFE that have emerged as most relevant from our action-research project. The first is ownership of the process: Patton emphasizes this principle, and we have lived it in our project experience. Having control over every component of the evaluation has led the project teams to embrace a learning process that is reflexive and committed. The second is facilitation versus external measurement: as evaluators we have become facilitators, as opposed to external judges. We have engaged the project teams through many challenging steps, and along the way our coaching role shifted to a mentoring one: we were learning as peers. In my communication experience, this role is also the most effective.

Returning to the communication challenge of linking research with policy, I see possible directions to pursue based on UFE thinking. For example, we need to appreciate the context thoroughly (Step 1 of UFE, on readiness). Some research projects may have limited access to their policy audience, while others may be lucky enough to interact with it intensively. Similarly, some projects may address emergent, exploratory issues, while others may focus on topics with evident policy relevance. These two sets of variables give us four possible scenarios. Our own action-research project on UFE started with an emergent, exploratory issue: now that we have findings in hand, we realize that we do have evidence for policy advice, though we did not know that at the start. Furthermore, after presenting some of our findings at the Evaluation Conclave in Delhi last November, we began to realize that our target audiences were emerging.

This leads me to conclude that 'communication-focused evaluation' is not an oxymoron. It is less about messages and communication channels, and more about ownership of the process, about acknowledging that each project has a different degree of readiness, and about adjusting strategies as the project evolves.

Ricardo Ramírez, Jan. 19th, 2011