newsletter #14 | 19-Jul-2016
On my latest client project, we experienced the typical madness around recruiting. The people we set out to find didn’t define themselves the way we defined them, which forced a mid-course correction. And even with the new definition of whom we were seeking, the recruiting firm couldn’t find enough people; we had to step in and recruit for ourselves. So we had a certain feeling by the time we finally got everyone lined up and all 10 of the listening sessions completed. We felt exhausted. And we felt triumphant that we’d accomplished it only a couple of days past our self-imposed deadline. And we felt like the hardest part was finished.
Recruiting had taken 42.5 hours of our time, plus the time of the recruiter. The listening sessions had taken 13.75 hours, and getting the recordings turned into transcripts was 9.5 hours, plus the transcribers’ time. Elapsed time was four grueling weeks of scheduling madness.
At this point, most teams I work with turn to analysis expecting to give it about the same number of hours it took to collect the data. That is, they expect to spend around 14 more hours finding and recording insights from the transcripts.
In my work, I spend 10 times as many hours in analysis as I do in the listening sessions themselves. On this last project, we spent 142.5 hours. There are two reasons for this:
- I am forging deep understanding. I am not collecting a list of insights. The verb “forge” means that I am crafting something new from the materials.*
- I am searching for people’s reasoning, not their preferences or their explanations of how and what they are doing. To develop cognitive empathy, I need to understand the thinking behind what a person is doing and how it developed for that person over time. Reasoning is convoluted, and it takes time to untangle, just as it takes time to elicit during a listening session.
The 1:10 ratio is what I base my work on; it’s what informs the strength of the insights found in mental model diagrams, behavioral audience segments, and gap analysis. I have run experiments where I do a pre-pass and collect a list of insights from the transcripts. That pre-pass takes roughly twice as long as collecting the data, and it yields roughly a third of the insights. If you are being badgered for early insights by others at your organization, you can use this pre-pass approach to appease people. But the depth of understanding is better forged by dwelling for a longer period of time in the data.
On the project we just completed, we spent an additional five weeks doing this analysis. We did it on a humane schedule, taking into account that each team member had other projects on their plate at the same time. For this project we averaged 10.25 hours per person per week during analysis.
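The arithmetic behind this schedule can be sketched as a small planning calculation. This is a hypothetical helper, not anything from the project itself: the function names, and the team size of three, are my assumptions for illustration.

```python
# Hypothetical planning sketch for the 1:10 sessions-to-analysis ratio.
# Function names and the team size below are assumptions, not from the project.

def estimate_analysis_hours(session_hours, ratio=10.0):
    """Analysis hours implied by the 1:10 ratio of session time to analysis time."""
    return session_hours * ratio

def estimate_elapsed_weeks(analysis_hours, people, hours_per_person_per_week):
    """Elapsed calendar weeks when the team shares the analysis hours."""
    return analysis_hours / (people * hours_per_person_per_week)

# Numbers from the project above: 13.75 hours of listening sessions,
# 142.5 hours of analysis, 10.25 hours per person per week.
print(estimate_analysis_hours(13.75))                     # 137.5 at exactly 10x
print(round(estimate_elapsed_weeks(142.5, 3, 10.25), 1))  # 4.6 weeks, assuming a team of 3
```

At an assumed team of three, the humane 10.25-hour weekly pace lands within the five elapsed weeks the project actually took.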
If you are in a situation where people would roll their eyes and dismiss a suggestion that involves this kind of time frame and hours, then you might help them understand with this explanation of how exploring the problem space differs from generating and evaluating the solution space. It includes the warning questions that indicate when your organization needs to spend time in the problem space.
* How to forge deep understanding:
- The first step is to identify and untangle the concepts a person mentions, then re-state these concepts in a clear way that ensures you won’t have to re-understand them later on. (Since this data does not go stale, you will be re-encountering these concepts for years as you use and add to the data set.)
- The second step is to see affinities between the summaries you created in step one for different participants, and let them form into groups. These affinities are not based on the thing (noun) mentioned, but are based on the intent of the participant.
- The third step is to consider the data in relationship to your organization’s current needs and use the resulting insights to a) support the philosophies of the user, b) create different experiences for users whose philosophies differ enough, and c) employ the language and purposes of the user instead of exposing the language and features of the system.
What’s our best fit?
- “We’re trying to explore the problem space, but we’ve run into problems. Can you double check what we’re doing?”
- “We want to make sure we do the research right. And we want the skills in-house so we can keep exploring.” (mentor the team)
- “We want to explore something, but we don’t have the cycles to get involved. We want answers that are credible.”