By Dale S. Rose
Hello. My name is Dale S. Rose. I am the President of 3D Group, a firm that has done program evaluation, 360-degree feedback, and employee surveys since 1994. One of the thorns in the side of Organization Development has always been the nagging lack of a cohesive theory about how to successfully create change.
Lately, I’ve noticed a surge in the use of logic models in program evaluation. Just about every evaluation RFP 3D Group responds to these days requires us (wisely) to start with a logic model. Some of these clients don’t even want data – they just want a good logic model. For the uninitiated, “logic models” are used in evaluation to identify linkages between program goals, activities, and outcomes. Researchers would recognize them as hypotheses, trainers would call them curriculum plans, HR professionals would say this is how we show the value of what we do, and business leaders probably like them because they show how organizational initiatives and measurement systems are aligned with strategy.
Hot Tip: Usually, logic models are precisely tailored to each program and all its eccentricities, but it may be possible to find common logic models for similar types of programs. For example, a shared logic model popular in the last few weeks might look something like this: “Weight loss (goal) of at least 15 lbs (outcome) will result when I exercise 3 times per week, sleep 8 hours per night, and comb my hair in the morning for 3 weeks (activities).” This example points out the intrinsic value of creating logic models – they often expose flaws in our rationale and help us be more reasonable about our activities.
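For readers who like to see structure made explicit: a logic model is, at bottom, just structured data linking a goal to activities and outcomes. Purely as an illustration (this sketch and all its names are hypothetical, not any standard evaluation tool), the weight-loss example above could be captured like this:

```python
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    """A minimal logic model: a goal, the activities meant to
    achieve it, and the measurable outcomes that signal success."""
    goal: str
    activities: list = field(default_factory=list)
    outcomes: list = field(default_factory=list)

    def summary(self) -> str:
        # One-line-per-element summary, handy for sharing a draft model
        return (f"Goal: {self.goal}\n"
                f"Activities: {'; '.join(self.activities)}\n"
                f"Outcomes: {'; '.join(self.outcomes)}")

# The (deliberately flawed) weight-loss model from the example above
model = LogicModel(
    goal="Weight loss",
    activities=["exercise 3 times per week",
                "sleep 8 hours per night",
                "comb hair each morning for 3 weeks"],
    outcomes=["lose at least 15 lbs"],
)
print(model.summary())
```

Writing the model down this plainly makes the weak link obvious: combing one’s hair has no plausible connection to the outcome, which is exactly the kind of flaw the exercise is meant to surface.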
I suspect that OD professionals could benefit from logic models, in much the same way evaluators have. Logic models became popular because evaluators got tired of looking at results they couldn’t explain (usually because the program was poorly designed to achieve the “hoped for” results).
Hot Tip: Changing organizations could be a lot more effective if we took the time to think deeply about the linkages between activities and expected (“hoped for?”) outcomes. In effect, we could treat an Organization Development intervention as a “program” being evaluated and follow the methods evaluators use every day to clearly articulate our logic model. Over time, we may be able to aggregate a number of logic models to derive solid OD theory. I know I’m going out on a limb here for OD, but if we had actual data on enough models, we might even be able to show that our “programs” can be replicated.
This week, SIOP is collaborating with the American Evaluation Association (AEA) on a set of joint blog posts for the SIOP Exchange and the aea365 tip-a-day alerts blog on the topic of evaluation and I-O psychology. Throughout the week, we will hear from SIOP experts in the field of evaluation on topics ranging from personnel evaluation to international management. Stay tuned to the SIOP Exchange every day this week for a new post!