When I joined the Center for Creative Leadership (CCL) around four years ago, I was asked to lead on an incoming inquiry from a client who wanted us to pitch for an upcoming leadership development need. Enthusiastically, I settled into one of our meeting rooms and took the client call. I frantically scribbled down every word, promised a swift response on a follow-up call the next day, and then reviewed the notes in my brand-new notebook.
As I read through my scribbles, I spotted words like “business challenges”, “new strategy”, and “competitive process”. But there were also phrases including “mindset shifts”, “scale”, “systemic”, “practices” and “complexity”. Alongside these were words like “sceptical audience”, “applied”, “no theory”, “time precious”, “useful” and “practical”.
During the next call, having looked at some potential ways we could develop an approach, the question of impact was raised, but in quite an interesting way: “We appreciate this isn’t a typical situation. And we are not a typical organisation. So, I’m curious to see how you might approach impact: I assume it will be more than just the usual Kirkpatrick ‘happy sheets’ type stuff?” “Absolutely not,” I replied, quickly deleting my prepared slides on Programme Reaction Evaluations. Time ran out, saving me from the embarrassment of having no credible answer beyond measuring programme reaction. But I now knew for certain that this was neither a typical client nor a typical need.
Impact is an interesting topic. In many ways, it speaks to the practical efficacy of the whole Executive Development industry. Yet it is also one of the most contentious areas. There exists a lively debate on what ‘Impact’ actually means, and how, and even whether, it can be measured. A dear colleague whom I admire even did an entire doctorate on the subject. And at CCL we are lucky to have a team of incredibly smart and talented professionals bringing clarity to the debate, as well as practical guidance on the subject.
They reminded me that, as well as ‘Reaction’, Kirkpatrick’s framework had three other levels: ‘Learning’, ‘Behaviour’ and ‘Results’. In addition, they highlighted that Kirkpatrick himself acknowledged his framework was only a starting point for the evaluation of impact. This encouraged me to sidestep the wider impact debate and instead use Kirkpatrick’s four levels as a guide to create a bespoke suite of evaluation metrics for my client. To guide me practically, they shared three simple rules. These rules, and how they guided me, are outlined here.
Rule #1: Determine metrics by identifying the goals of the leadership development
In this case, we worked out the goals of the leadership development during a new and innovative type of discovery called Leadership Labs, developed from the concept of Action Enquiry pioneered by Bill Torbert in 2004. The Labs moved away from the traditional, linear approach to discovery, diagnosis and design, which was too slow in delivering benefit for our ‘time precious’ client. Knowing that we were also dealing with a ‘sceptical [participant] audience’, we designed the Labs to provide immediate value and to be ‘practical’ and ‘useful’.
Using carefully selected ‘pathfinder’ participants, each Lab used a selection of CCL methods and content to tackle the real-world leadership challenges of 15-30 participants in a psychologically safe environment.
To ensure that we were also looking at the ‘systemic’ whole, we worked with the client using quota sampling, a more practical form of stratified sampling used in academic studies, to select those who could most reliably provide insights into the prevailing and developing leadership challenges, across a variety of geographies and functions. In particular, we included the matrix teams who had significant interdependencies across the enterprise.
By following a consistent methodology, yet adapting it for each Lab’s unique needs, over time we were able to build up an accurate picture of the common leadership themes and the Mindsets, Practices and Behaviours that currently existed in the client organisation, and how these might need to shift in practice.
These shifts in Mindsets, Practices and Behaviours gave us the agreed goals for our leadership development programme.
Rule #2: Determine if existing metrics can link to programme goals
End-of-programme evaluations are an obvious existing ‘Reaction’ metric. We designed these to show how well we had addressed specific goals, as well as the quality of the learning environment and the objectives set. These ‘Reaction’ evaluations consistently scored highly on all metrics. The metrics exploring how well participants had gained practical ‘Learning Insights’ relevant to their real-world challenges, and their commitment to meeting practical objectives, were also consistently positive.
One suite of programmes, aimed at a key population of more than 150 global leaders, was run by CCL facilitators and consistently achieved outstanding reactions and learning insights from a group normally sceptical about development initiatives, even when the programme pivoted to fully virtual delivery for three workshops due to COVID-19 restrictions. This strongly indicated that participants were supportive of their own shifts in mindset, practices and behaviour.
The other, much larger-scale suite of programmes, for less senior leaders, used CCL material but was run by volunteers from the client. We adopted this approach because our Labs had shown it matched the practical nature of the client’s people. The volunteer effort was spearheaded by those who had participated in the senior programme; CCL ran a “Train the Trainer” programme for them, alongside experienced trainers from within the client. Under a CCL licence, the client then ran workshops for a further 1,000 global leaders.
Despite the global pandemic, the volunteers’ workshops also achieved consistently outstanding reactions and learning insights from participants. Analysis of anonymous, aggregated data suggests that participants in this CCL-developed but client-run global initiative achieved practical ‘Learning Insights’ relevant to their real-world challenges. These metrics also suggest participants left the workshops supportive of enacting their intended shifts in mindset, practices and behaviour.
Rule #3: Create new metrics to fill measurement gaps
We agreed with the client that, in order to track what impact our programmes were having on changing mindsets, practices and behaviours, we needed to create some new metrics that went beyond ‘Reactions’ and ‘Learning Insights’. These had to reach beyond the programme ‘events’ and be planned carefully so as not to create artificial incentives or bias what participants were doing ‘naturally’.
The first of these explored the use of common language created by the programmes. This ‘common language’, created from the leadership methodologies we introduced, was found throughout the organisation. Messaging, memoranda and other communication, both formal and informal, used the terms as part of the organisation’s day-to-day vernacular. This demonstrated a new practice and showed that the way in which leadership challenges were discussed and framed had changed, with an increased use of the language of leadership methods that participants had been exposed to in the programme.
“Over the last year I have seen a clear shift in vocabulary used around Direction, Alignment and Commitment, with more awareness of the need to be clear in our direction for our part of the organisation to function well”, said one former participant almost a year after attending the senior level programme.
The language even spread to those who had not been participants in any programmes. Two senior executives, for example, had created blogs about their day-to-day leadership challenge polarities, using terms lifted wholesale from the new common language.
The next metric was the changed or new individual practices and behaviours of participants. A range of practices was found to have changed when participants were interviewed or polled months after the end of their respective programmes. The CCL DAC leadership framework was being used extensively to explore, examine, and elevate the quality of leadership within teams, and as a way of garnering feedback for team leaders.
“I actually used DAC as an analytic tool on multiple issues I face. Either my own issues or issues I spot with people I interact with. It helped me understand the source of issues and how to make a difference”.
Evidence emerged of changed individual practices for intentionally understanding cross-organisational stakeholders and applying greater levels of empathy.
“I’m not there yet, but I am happy to say I’m judging less and listening more”.
We also created a metric to examine changed or new shared practices and behaviours not just of participants, but throughout the enterprise. It was found that new forms of matrix teams, with or without members who had been through any of the development programmes, were regularly using CCL methodologies to support team effectiveness. There was also a large and growing community involved in sharing applications and practice, through ‘DAC Leadership in Practice’ workshops. Moreover, when the client extended the licence agreement, the contract, approach and methodology were adapted to match the new, real-world, shared practices and behaviours which had emerged.
Reflection: Using real-world insight to create real-world impact
As I reflect on the work done with this client over several years, the improved business results, alongside the other metrics described, make me confident that together we have had a meaningful impact. The three rules described helped a lot. But I also think that during that initial conversation nearly four years ago, because I was new to CCL, I was personally more attuned to the real-world problems of the “mindset shifts”, the “scale”, the “systemic” nature, the “practices” and the “complexity”. I was more curious, which is something I need to remind myself to hang onto. I was also fortunate to have a client who was keen to pay attention to real-world challenges and real-world impact, as well as the specific needs of a “time precious”, “sceptical audience” with a need for “no theory” and for “applied” work that was “useful” and “practical”. That allowed us to intentionally seek a rich understanding of the business context and the complex human systems within it. Once we and the client understood that problem, and the nature of the individual and collective leadership mindsets, practices and behaviours that needed to change within it, the three rules for tracking real-world impact were fairly straightforward to apply.