New York in July and the International Forum of Visual Practitioners (IFVP) Conference was a fantastic experience.
I ran a workshop called ‘Did we hit our target?’. It was aimed at those who design and facilitate workshops and group events – a role many IFVP members hold.
I believe this is an important topic. The need to measure our impact as facilitators and recorders grows increasingly critical in an environment where project dollars remain tight. We must be able to quantify and qualify our worth. And some of the best data we have can come from our own experiences and those reported by our clients.
We explored two areas of interest: 1) how we measure the success of our efforts and 2) how we visualise that evaluation information for harvesting and communicating. The pivotal question for our session was:
How do we know if our meeting or workshop
has been a success?
I had the opportunity to discuss what evaluation methods, if any, were used by other professionals in the visualisation business.
At the start I did a quick poll on what kind of evaluations visual practitioners do after an event:

Scenario A: Exit stage left with materials under your arm, waving to the client.
Scenario B: Informal harvest – ‘How do you think that went?’
Scenario C: Formal, structured harvest against pre-agreed outcomes.
The general consensus in the conference group was roughly 10%:80%:10% across the three scenarios. In short, an informal question to the client – ‘How did you think that went?’ – was most common.
The group agreed that thinking more about the options under Scenario C – the formal, structured harvest – would be useful for their practice.
I presented my take on the logic model, which I’ve dubbed ‘logic model lite’. In its simplest form, it covers the INPUTS (i.e. what resources we invest in the meeting), our ACTIVITIES (i.e. what we do in the meeting) and the OUTCOMES – short, medium and long-term (i.e. what results we see).
Using the workshop we were in, we ran an example of what a logic model ‘lite’ would look like. That way, participants got a feel for what information was needed and what level it was aimed at. We built it systematically, identifying the inputs, then the activities involved, and then the short-, medium- and long-term outcomes. Finally, we identified the measures we could use to evaluate the results of the workshop.
We discussed how common practice was to check whether the activities and short-term outcomes were achieved. However, returning to our ‘workshop as an intervention’ paradigm, further investigation could be made into the results and longer-term outcomes that flow from it.
I shared some of the visual methods I employ for checking end-of-workshop outcomes with my participants.
These include my tried-and-true target board. I like the idea of linking people’s responses to the concept of ‘hitting the mark’.
Brian Tarallo of Lizard Brain Solutions shared his use of faces and emotions as a visual Likert scale for feedback.
At the end of the workshop, we checked our own short-term outcomes as a measure of success. Participants reported having more structure and concepts for approaching the evaluation of their own workshops in the future.
During this great discussion, Tracey Ezard of Jessup Ezard Consulting recorded our thoughts. Thank you, Tracey, for capturing our points, and thanks to all the participants – Lynn K., Nora H., Brian T., MJ and Lisa.
Do you want to:
Expand your professional toolkit with visual thinking skills?
Boost your effectiveness in meetings?
Add impact to your presentations?
Gain confidence in drawing and applying graphics to your work?
Be seen as a creative thinker?
If you answered yes to any of these, find out more about my premium program, Essentials of Visual Thinking & Graphics Practice, here.