Evaluating community risk reduction (CRR) programs can be simple and straightforward if you’re evaluating an individual part of them. But when you consider that CRR encompasses all of a department’s operations (e.g., emergency response, engineering, education, enforcement), the process becomes much more challenging. I can’t include all the information one might need to develop a solid evaluation plan in one article, or even two. But I can provide an overview, some food for thought, and some resources that will help.
The measures we use to document the results we achieve in CRR include formative, process, impact, and outcome measures. They are a challenge to define in general terms because they align differently depending on which of our CRR programs is being evaluated.
Formative Evaluation
Formative evaluation refers to the developmental stage of our programs and implies that we must conduct a risk assessment to determine which community safety problems we should be addressing. The tricky part here is that it is both a step in the CRR planning process and a specific portion of our evaluation process. Textbook explanations tell us that the purpose of formative evaluation is to help us develop programs that are focused on specific problems and audiences. It can provide answers to questions including the following: Is there a link between our identified risks and our efforts to reduce them? Are our programs designed to reach the people we identified as at risk?
Because I’ve already written about risk assessment in previous articles, I’m going to focus on the fact that we can and should be concerned with how our strategies to mitigate community risks were developed; documenting that helps us and helps others as they develop their own efforts. Of course, this assumes we share the evaluation information, which isn’t mandatory but works to our mutual benefit.
Process Evaluation
Process evaluation refers to the steps we took to implement our efforts. It is here that we start looking at things like workload indicators and outputs that help us understand how our programs work so that we and others might gain a better understanding of what works and what doesn’t. It is also here that we start asking questions that lead us toward an examination of efficiency. We might look at things like the training programs we required of workers, the number of workers involved in planning and implementation, the outputs they produce (e.g., number of emergency responses, number of inspections), and the time it took them to perform certain functions.
Process is about outputs, not outcomes, though there is a relationship between the two. For example, determining how many inspectors it takes to conduct a given number of inspections is important, but it doesn’t tell us the impacts or outcomes those inspections produce. The same is true of emergency response outputs. We know that responding quickly with the proper resources can produce a given result, but other factors come into play. If the building has burned for 20 minutes prior to notification, the same output (a quick response) does not produce the same outcome (i.e., fire confined).
Process measures can be among the more valuable things we look at. For example, looking at formative research to determine which fires to investigate is kind of a “duh” moment: the fires that occur determine themselves; we don’t choose them. But deciding which of them to actively investigate for cause is a process measure, which in turn might help establish standards of practice that could help others set their own criteria for which fires to send an investigator to and which to leave to company officers. That would be a process measurement of interest to managers and policymakers everywhere. In public policy circles, process is about workload and efficiency, and sometimes that is the most important discussion during budget cycles.
Impact Measurement
Impact measures examine data indicators that tell us risk has been reduced: not incidents, not deaths, but some indication from which we can infer that risk is less. A smoke alarm installation program is a good example. Installing an alarm does not reduce fire incidence, but it lessens the chance of someone dying if a fire should occur. Documenting things like a reduction in hazards found through an inspection program implies risk reduction. Documenting that knowledge or behavior changed after educational efforts (e.g., through pre- and post-testing) implies that risk is reduced. Emergency response capabilities that provide quick responses with adequately trained staff are an indicator that the community’s risk of negative outcomes from a variety of emergencies is reduced.
So impact measurement is a strong and relatively short-term evaluation of whether our efforts could reduce risk (implied through other research) or do reduce risk (documented through pre/post-test changes in knowledge or behavior).
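For readers who want to see how a pre/post-test impact indicator might be tabulated, here is a minimal sketch; the survey scores are hypothetical and illustrative only, not data from any actual program.

```python
# Hypothetical pre/post-test scores (percent correct) from a single
# fire-safety education session; the numbers are invented for illustration.
pre_scores = [55, 60, 45, 70, 65, 50]
post_scores = [80, 85, 70, 90, 88, 75]

avg_pre = sum(pre_scores) / len(pre_scores)
avg_post = sum(post_scores) / len(post_scores)

# The change in average score is the impact indicator: it implies that
# knowledge (and, by inference, risk) changed, not that fires were prevented.
print(f"Average pre-test score:  {avg_pre:.1f}")
print(f"Average post-test score: {avg_post:.1f}")
print(f"Average gain:            {avg_post - avg_pre:.1f} points")
```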
Outcomes
The brass ring of all measurement for community risk reduction programs is looking at outcomes. It raises the question: What outcome are we trying to produce?
Do we respond to get there quickly? No, that is an output. We respond quickly to produce a particular outcome, namely the intervention that prevents an emergency from getting worse. Lives are often saved as a result of a timely response capability. That is the outcome we must measure, not just the outputs related to our process of managing emergency response resources.
Outcomes also include looking at the number of fires occurring in properties that are part of a code-enforcement inspection program or the number of fires or injuries tied to a particular educational program. What about fire investigation? What is the outcome we expect? We investigate fires to determine the cause, so an outcome we expect could be something like the percentage of fires where a cause was determined (for example, a cause determined in 150 of 200 investigated fires would be a 75 percent determination rate).
Outcomes are best looked at over time, because any given year can bounce the numbers through normal variance based solely on random chance. We need a longer view to examine outcomes.
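To make the longer-view point concrete, here is a small sketch that compares single-year counts against a multi-year average; the annual fire counts below are invented for illustration and do not describe any real community.

```python
# Hypothetical annual residential fire counts for one community.
fires_by_year = {2016: 42, 2017: 55, 2018: 38, 2019: 47, 2020: 36}

counts = list(fires_by_year.values())
multi_year_avg = sum(counts) / len(counts)

# Any single year can swing well above or below the average through normal
# variance; the trend across several years is the better outcome measure.
for year, count in fires_by_year.items():
    deviation = count - multi_year_avg
    print(f"{year}: {count} fires ({deviation:+.1f} vs. 5-year average of {multi_year_avg:.1f})")
```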
This complex issue will be the subject of further explanation in my next column, where I break down examples of each type of measurement for our different CRR strategies. Meanwhile, if you want a short online course to review the elements as they apply to prevention programs only, you can find a link to IFSTA’s Resource One site and the course on model evaluation measures through Vision 20/20’s Web site at www.strategicfire.org. There is also more in-depth training available at the National Fire Academy.