newsletter #34 | 20-Mar-2018
Every few weeks I create an estimate for a potential client’s problem space research study. Sometimes they want a ballpark*; other times they want a detailed line-item estimate, with a timeline. We discuss the options for balancing what they need with the overall cost. And 4 out of 5 times, after a few weeks they send me a crestfallen email saying they couldn’t get budget for the project, and that they will try again with the next fiscal year.
It’s a struggle. Given what I hear, it’s not just lack of budget for hiring an outside team to help with research. It’s also a struggle for teams doing research internal to a company.
$25,000,000 …… Driving people to the website
$20,000 ………………. Customer experience (what happens when they get there)
On 10-March-2018, John Maeda released a report about Design in Tech, in which he references an NEA study by Albert Lee and Dayna Grayson saying that only 46% of late-stage (design mature) companies conduct qualitative research. Sam Ladner responded via Twitter that, in her book Practical Ethnography, she notes that the majority of this money is spent on focus groups.
So, how do we increase the proportion of money spent on design research? We can work on this together, as a tribe, and share what we find with each other. Here are a few ideas:
1. Dogfood it: Do some qualitative research internally to benchmark stakeholders’ awareness of design research. Go do five listening sessions with stakeholders. Have they been thinking about it over the past six months? If so, what is their inner reasoning about the topic? Then let’s share our findings about stakeholder inner reasoning, or lack thereof.
2. While we’re benchmarking, do some human-network-node-following to find the various persons who know how much is spent on any kind of research, throughout the various divisions of the organization. To Mark Hurst’s point, find out the advertising budget, too. And marketing. Dig below the surface of the labels to see what the research comprised, exactly.
3. For the design research projects that have occurred (if any) over the past few years, track down how those research results were used. See if you can list the outcomes. The stakeholders will want to see values assigned to these outcomes, so track that down if you can. Did it affect hitting OKRs or other business goals?
If we do these things and tell each other what we found (in terms that won’t betray our org’s IP), then we’ll have some convincing tools to use in our arguments for more budget. It will hinge on us sharing the information, because we all get caught in the “that’s the way we do it here” trap. If we talk to stakeholders about the recent history of other orgs’ research efforts in conjunction with our own, then it will provide a bit more of a jolt. Hopefully.
(Where should we share? How about via Medium posts?)
* Here’s a snapshot of ten different variables that I use to help potential clients decide how to balance needs against budget.
Q: At a talk one evening, when referring to research I’d done for an airline, I despaired over the current un-supportive state of all airline reservation tools. A person in the audience objected, saying, “But now we have access to all the possible flights. We have so much more knowledge and choice! We can go find exactly what we want!”
A: My response centered on two of their words: “go find.” Finding something requires time, cognition, and the patience to sift through and compare options. The options are only presented in terms of the airline schedules, not in terms of the passenger’s calendar of appointments. Prior to the web, there was a service where a human being (a travel agent) got to know a traveler over time, noting their preferences for time of day or seat or connecting airport or price. The travel agent even got to know a passenger’s personal life well enough to know that when a particular person had a baby, they changed their flying preferences. They wanted to arrive home earlier in the day than they did before having a child. They stopped tacking on a few extra days to a business trip to see some sights. Passengers only filled out a form once, when they first met this travel agent, and then on every subsequent trip, the travel agent took into account all they knew about the passenger, accumulating knowledge as they chatted (note: Conversational Design) about the purpose and needs of each trip. The web-based tools are much less intelligent, and do not accumulate knowledge about individual passengers over time–nor do they even accumulate knowledge in the aggregate unless reprogrammed. Yet.
The point I want to make is that there has been a solution for hundreds of years to the purpose of arranging a trip. We tend to think from a tech-point-of-view, because tech is, admittedly, pretty amazing. But the software we have provides one fixed-in-code, average way to support people. I’m hoping soon we’ll start to see individual-data-customized experiences, similar in a way to what human travel agents can do.
newsletter #33 | 20-Feb-2018
In the tech world, as well as in other industries (like science), there is an overriding push to speed up discovery. In business, the root cause is often “get ahead of the competition.” In academia, it’s “be the first to publish this.” Speed governs most of us. Exceptions exist, like the safety community, which is filled with careful processes like pre-flight checklists and routine maintenance. But the rest of us pretty much bow to speed. And so digital products aren’t always thought all the way through. Science experiments get published that no one can replicate. Yet, over the years, these very things accumulate polish, from repeated, randomly-driven attention. Experience accumulates as the result of a community of people. Tweaks get made. The outcome gets better and better. But this takes an unpredictable number of years.
Real change always takes time. Some of those follow-on years of polishing could be compressed into a few months of getting more depth and breadth up front.
How do you help stakeholders become more aware of this ubiquitous fear of time? First, you can try to identify what they are struggling with.
Struggle 1: Startups (or innovation teams) don’t have the money.
Struggle 2: Big companies can’t get past convention. (This is the way things are done here. Or, following processes because they are the processes.)
Struggle 3: A person in the power structure does not see the value. (Research does not make progress on the things we need now. We can’t put resources toward something that doesn’t immediately translate to a prototype. Or, we are afraid to find out that we’ll need to make drastic changes, or pivot, based on what we learn.)
If you see your stakeholders in Struggle 1, then it’s time to get them better connected to the people providing the money. Those investors do not want to waste their money, and having a conversation with them about the risk of making guesses generally leads to agreement. The kind of problem space research I describe to them has a time/money trade-off. If the study is being done by an internal team, it won’t cost more than the price of transcription and participant stipends, but since that internal team is always under-staffed and over-tasked, it will take many months. If the study is being done by an external team, it will be finished in 8-12 weeks, but may cost $50k-$75k. Price out the cost of failure from guessing at the wrong direction, and compare. (Also compare this cost to the marketing budget.) Do this in collaboration with investors and stakeholders. They will take strong interest.
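The “price out the cost of failure and compare” exercise can be sketched as a back-of-the-envelope calculation. Every figure below is a hypothetical assumption for illustration, not a number from this newsletter — substitute your own organization’s estimates:

```python
# Hedged sketch: comparing the cost of up-front problem space research
# against the expected loss from guessing at a direction.
# All numbers are hypothetical assumptions.

def expected_loss_from_guessing(build_cost, p_wrong_direction):
    """Expected money lost if the team builds on a guess."""
    return build_cost * p_wrong_direction

research_cost = 65_000   # assumed mid-range price for an external study
build_cost = 400_000     # assumed cost to build and launch the guess
p_wrong = 0.5            # assumed chance the guess misses the mark

risk = expected_loss_from_guessing(build_cost, p_wrong)
print(f"Expected loss from guessing: ${risk:,.0f}")           # $200,000
print(f"Cost of the research:        ${research_cost:,.0f}")  # $65,000
```

With these assumed numbers, the research costs roughly a third of the expected loss — the kind of comparison that tends to hold investors’ attention.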
Struggle 2 is more insidious. It’s hard to identify whether your team is simply producing what convention says should be produced, and it’s embarrassing to realize it, so no one wants to discuss it. I have heard of teams producing so much research that it piles up in layers, and when it comes time to make prototypes, they are drawn not from the accumulation but from standard UX approaches and generalizations about users. These are the symptoms. If you notice them, consider running a rescue operation, but do it in a way that saves face for both the stakeholders and the other practitioners involved.
For example, what is the viability of a pop-up ad within a few seconds (or a click) of someone reaching your page? You might get 4x the conversion with a popup than with an ad at the bottom of the page, but how sustainable is that — how many of those converted visitors will have a relationship with your organization four years from now? Is there a connection? Can you find out? Furthermore, can you ask actual people what goes through their minds when the popup appears? Most likely it’s not about you and your conversion. But if you take the time to find out, you’ll have pieces of the puzzle that can help pull together the layers of prior research in a concise way. When the research is consolidated (people use a mental model diagram/opportunity map for this), then it can effectively influence design and also guide development phases, use cases, inventory lists, etc.
In the third scenario, Struggle 3, you’ll want to employ evidence that taking time now helps your organization save time later. Problem space exploration is only hard because it requires a different mindset. It’s disconnected from the solution, so it seems like a bad investment. It doesn’t seem necessary. It takes the mindset that, to support people with more breadth and depth, you must understand them completely divorced from any thought of the solution.
Here’s an example. You have listening sessions with people, and then you take time to digest what you heard. What you thought you understood the person was saying takes on different meaning when you hold the entirety of what they said together. One concept they mentioned early in discussion gets illustrated with an example later, takes on an extra nuance, branches off into an explanation of where that nuance came from in their history, and then gets restated further on in a more succinct way. Go over each transcript of each listening session carefully, and it’s magic how much better you understand that person’s thinking afterward. There is clarity that forms, and then patterns across different people coalesce from that clarity. This requires more effort, more time, but results in great depth of understanding. You can create better support for them.
The value comes from that depth of understanding, but also from breadth. With breadth, there are more aspects of what a person is doing that you can support, plus there are more philosophies you can support. Not everyone comes from the same background; people have different approaches. It’s like the long tail: there is plenty more opportunity for your organization. This evidence is the sort of thing you can bring to the Struggle 3 scenario.
Teams and stakeholders all ask the right questions and wonder about risks, but unless there is strong user-centered leadership, very little is done to answer those questions accurately enough to mitigate risk. I think all this hurry has been imposed on us. It’s not the natural pace of the software craftspeople that I know, who are truly trying to create a beautiful set of code, an algorithm as close to complete as possible. It takes time to consider all the angles and potentials. Most long-lived software has its roots in craft. (Think of the founders of Google, Apple, and Microsoft.) Alan Cooper said, in a UXWeek presentation, “In those days programmers wanted to make good software, and if money came as a result, that was a plus, whereas these days it’s the opposite.” He’s got a point, but I don’t think this is true of every software engineer today — programming still takes craft. It requires a mind that is interested in taking time to consider many different possibilities.
Possibly by identifying which struggle your organization is engaged in, and by encouraging the fine craft of creating digital products, you can slowly change the attitude toward taking time to understand the people you aim to support.
Q: What are some examples of questions to ask in a listening session? You list some standard questions in Practical Empathy, like, “What were you thinking when you made that decision?” What else?
A: In a listening session, you are following what the participant is saying really closely. You ask questions based on what they said, but only when you think you might be assuming what they mean. You want 90% of the words in the recording to be from the participant. (But you also want to be supportive, especially early on, to form trust and rapport via emotional empathy. Don’t be cold and terse.) So, here are a few examples. For each thing you might be wondering, here’s how to ask it:
Why did he respect this potential mentor in the business he was in? ==> “What passed through your mind when she said ‘gym’?”
What made him go look for the contractor? ==> “Contractor …?”
Why did she say the business was such a risk? ==> “What did your inner voice say when the banker said that?”
Q: I don’t really get what you mean when you say that the scope of a study needs to be problem focused, and you’re not supposed to reference the product.
A: Developing an awareness of when you want to explore the solution space and when you want to explore the problem space is important. Most of the time you’ll want to be doing user research, which is in the solution space. At intervals, maybe once a year, you’ll want to add to your understanding of the problem, from different perspectives. If you don’t clarify the contexts in which you’d do the latter, the two will slump together, and out of habit you might end up exploring the solution space exclusively.
For example, say you work for an insurance company. A marketing need might be getting more people to become aware of different levels of insurance for their home. But that mentions the product, so what is the larger problem people are trying to address?
Solution exploration: How do we get people to pay attention to home insurance levels?
Problem exploration: How do people understand what risks are possible (and discover risks they never thought about)? Then how do they establish whether these are areas they don’t mind feeling vulnerable about?
See the difference?
indi can help you
coffee with indi
newsletter #32 | 16-Jan-2018
Saturday morning, 13-Jan-2018, an emergency alert went out to Hawaii’s alert-enabled mobile phones that a missile was inbound. Discussion ensued about how this mistake was made and what we could do about it. Cyd Harrell, who has experience with government software and processes, tweeted a thread about the complexity of both how and what can be done.
The news story made me think about our human tendency to want to explain things simply, state a theory, and set off in a certain direction based on that theory–as quickly as possible. I was listening to the Radiolab podcast episode about stereotype threats, and the replication crisis (in psychology). In particular, an experiment had previously been replicated showing that college women’s math test scores increased to equal men’s when they were told that the particular test they were given had never shown any unequal gender results. There were other variations on the experiment, but the researchers were having trouble replicating it. They were wondering whether stereotypes change over time, whether this generation is more aware and able to block the power of stereotypes, whether the level of preparation for difficult math tests was different, etc. I kept wondering: why not ask the participants what went through their minds during the test? Why is this component, hearing from the actual participants, not included in their research?
Probably because of academic methods. Possibly because each participant’s answer would be different, and those differences would be difficult to summarize into a simple explanation.
This reluctance to embrace complexity exists in the business world. In the tech space. We’re trying to support simple versions of people and scenarios that just don’t exist in any clean way. We want to reduce cognition to heuristics and pattern processing. People in tech see little difference between training a pilot to fly a plane and writing software to control a car on a road full of other cars driven by other people. I believe there is more complexity involved, specifically involving time, lived experience, continuous consideration of inputs, and emotion.
People change over time. It’s not simple rationality. They change their minds. They get influenced by others and shift perspectives. Within the industry of making algorithms, this is not viewed as a good thing. You can’t be pinned down. It’s harder to chase you and get your attention and appease you. Simplicity and minimalism, these are the watchwords. But humans are complex. The algorithms we interact with are no match for the speed of our thinking, the cultural references, jokes, and digs we make, the leaps of intuition we have, the motivations and guiding principles we follow, the empathy we feel, or the compassion and kindness we display. Continuing in the vein of “simple” has brought us to a wall.
Matthew Desmond, a sociologist, author, and professor, writes about the complexity of poverty and race systematized in American society. He summarizes the human tendency to simplify others this way: “There are two ways we de-humanize others: cleanse them of all virtue, or remove all sin from their lives. Neither is true.”
The best way I can think of to address our fear of complexity is to accept it and start investigating pieces of it, little by little. Pick a particular thing a person is trying to do, a set of contexts, and a couple of different thinking styles. Gather from people what goes through their minds as they seek to accomplish this purpose, translate, and look for patterns. The “translate” part is where you put into words the crazy-human-ness of our inner voices. When a person tells you of an event that happened when she was a teenage lifeguard at a pool, you listen. A group of boys had jumped in the deep end, and one of them was struggling to stay afloat. She had to decide whether he needed rescuing or if he was “fake drowning” to catch her attention. She wondered if he actually couldn’t swim and jumped into the deep end just to keep up appearances with his friends. “Of course I’d jump in the deep end if I didn’t know how to swim!” You have to recognize sarcasm and take into account context and mood. How do you do this unless you have a human brain equipped with life and social experience and capable of cognitive empathy?
So our job is to help those around us embrace the complexity and feel confident in exploring it, rather than racing past it with a hasty theory. We don’t want our own crisis, do we?
Indi has office hours! If you have a question, you can book time with me in one of three ways. Use the time however you need. I set up availability for connecting from all the time zones.
A new “commercial” website is on its way, aimed at managers and those whom you wish to persuade of the value of problem space knowledge. It will contain case studies from the interviews I’m conducting with people who have made use of mental model diagrams, thinking-style segments, and opportunity maps.
In the meantime, indiyoung.com now harbors a lot of free content. It includes:
- podcast episodes – transcripts and recordings, along with free webinar recordings, and videos of my talks
- newsletter archives with Q&A – forward one to your boss to give your point a little force
- articles – whitepaper versions of my Medium articles, to print and slap down on a certain person’s desk, plus articles from Interactions magazine
- diagram generator – the new app that converts your formatted data into a mental model diagram
Take advantage of the wealth of available content, and feel free to pass along links to those you are mentoring.
In my book Practical Empathy, I write about conducting listening sessions by phone. In Describing Personas, I also write, “All this is better to do by phone than in person. (In most cases.) You don’t need to see their artifacts or observe their behavior, because the knowledge you are after only exists in their minds.”
Q: (from Bobby Gonzalez 21-Dec-2016) Thank you Indi. Interesting article. Googled a bit and found some research indicating that people with prosopagnosia have a diminished capacity for empathy for others. Also, psychophysiological responses to empathy recruit a region between the inferior temporal gyrus (important for perspective taking) and the fusiform gyrus (imperative for facial recognition). Maybe “seeing a face” does activate parts of the nervous system that vitalize or enhance one’s ability to empathize?
Q: Lauren Faggella 19-Apr-2017 Would you agree that some people respond differently in person than over the phone? For example, I don’t always like talking over the phone, particularly to people I don’t know very well. But if I sit down with someone and get to know them for the first time, I’m likely to open up and speak on things that I wouldn’t ordinarily touch upon.
A: It depends. (The classic answer!) First, your own preferences are not evidence of anything, so the thought of your preference acts as a red flag to re-ask yourself about what might work best for a particular participant.
Second, during a listening session you use emotional empathy to establish rapport and generate a comfortable atmosphere for the participant to tell you their inner reasoning. You aren’t doing any perspective-taking during the listening session–that’s for later, when you’re using artifacts from the data to work on your ideas and solutions. During the listening session, you’re in the problem space, not thinking about solutions nor ideas, and not thinking about synthesizing or analyzing what you hear. You do want to help participants trust you during the listening session, and you can do that in any way that feels right to the participant. If you are making the participant feel supported and comfortable, you shouldn’t get different results whether you’re on the phone or in person, unless the comfort depends on visual cues as well as spoken word.
Third, I emphasize phone to motivate teams who don’t have the resources to get out to see participants. I want them to still gather this knowledge. I want everyone to be able to conduct listening sessions. Taking travel out of the equation helps in many cases.
Fourth, if you are doing solution space research, watching and interviewing a participant making use of a particular solution, then perspective-taking, cognitive empathy, does come into play. This is a different sphere than listening sessions, which are (usually) part of the problem space.
It always depends. Make your decisions based on your participants.
newsletter #31 | 19-Dec-2017
Welcome to the end of the year (for some cultures), when everyone seems to think about time. You might think about time left until a deadline … how you spent your time over the past year … time for changes for the new year … time to renew yourself. One of the things I’d like to hit the reset button on is how much time teams take to deeply understand the problem space. With the [mistaken] emphasis on speed in the agile process, speed through design thinking, and speed to iterate in lean development, time seems to be the stern dictator ruling our thinking … banning words like “deep understanding” and “explore” in favor of words like “hypothesis” and “automatic” and “quick survey.”
It isn’t just ideological; it’s cultural. Speeding through the process of pushing stuff out into the world is an accepted, lauded mindset in technology and design. I have nothing against speed in moving from idea to prototype or launch, but moving TO the idea with haste is risky.
So much of our technology and design culture is focused on ideas; reward systems are based on the merit of ideas. Methodologies focus on evaluating ideas and bringing those ideas to fruition. Ideas are the basis of stories you tell to prove your worth. Yet generating the ideas takes up only a small portion of the whole process. And generating ideas is mostly an act of imagination. Imagination is necessary, but it’s not the only component. There are other components such as the business perspective of ROI and risk and market. And there are important components such as supporting non-dominant thinking styles.
“We’re still working in a profit-at-all-cost model. Open it up to impact. Lower profit for some other type of gain. Point out opportunity cost to influence investment.” – Peter Falt, Director at BMW DesignworksUSA, DMI Symposium 2017
It all has to do with really focusing on the people you’re trying to support. Often those people are called the “customer” or the “users.” What I’ve been trying to do is separate that out a little farther. When we refer to people as “customers” or “users,” we are looking at the problem through the lens of an offering, an organization. We are looking at what we do to support them and how we think about them … all the knowledge we already have about them. But it tends to force you back into the mindset of problem solving. It encourages you to come up with new ideas, or dive into innovation work. I would like to insert a little bit more knowledge that has nothing to do with innovating, nothing to do with idea generation; this knowledge only has to do with understanding a problem. It’s what Practical Empathy is about. It’s the idea of taking that first step of Design Thinking and snapping it off, so that it’s not a part of a cycle. It’s not a part of improving a product or coming up with new ideas for a product.
Ideas are the exciting part. All this focus on ideas makes it tempting for people to skip over the understanding part. Instead, if you think of the problem space as understanding a person (not a “user” or a “customer”) … you can let go of your need to solve problems for a bit. This opens up your awareness that there are other approaches, thinking styles, and guiding principles out there. You can find themes and patterns, codify them, and match your priorities and strategy to them. This not only expands your market; it also lets a much broader set of people both access your services and be supported by them.
This is where you run into the pervasive problem of Cultural Fit … most of our technology has been created as “one algorithm to serve them all.” And that algorithm, written by a team of people, represents that team’s understanding of the world. It represents their culture and history. It misses the history and experience and purposes of other people. Often the team does not concern itself with the missing perspectives, because these other people are seen as low ROI. And the team believes they are low ROI because it believes the dominant culture brings more value. Is more valuable. If they use data to back up their assessment, it is most likely data that embodies historic bias in favor of the dominant culture.
Sara Wachter-Boettcher writes about this toward the end of her book Technically Wrong. She also explains the problem the tech world has with lack of diversity among team members. The defense has been that the pipeline is nearly empty. Sara points out evidence that the pipeline is actually full of diverse candidates, but when hiring managers evaluate people, they look at “cultural fit.” If this is true to any degree, it’s a terribly self-perpetuating cycle.
So, here at the end of the year, if you have time to renew yourself, pick up Sara’s book and spend a couple of hours seeing things from the point of view of her stories. Or at least peek at the reviews.
And please reach out to me if you have stories of how your own experiences have shown the value of taking time to form knowledge about the problem before jumping to idea generation.
Quick plug for the new app I released that converts your formatted data into a mental model diagram: Diagram Generator
In case you need to point someone to a concise explanation of Dark Patterns. (by UIE)
Aurelius Podcast: Episode 15 with Indi Young discussing Empathy, User Research, Synthesis and Thinking Styles (posted on December 19th 2017 by Zack Naylor)
Q: An experience map shows the problem space, doesn’t it?
A: The word “experience” often leads people to think “how customers experience our service,” so in that context it’s in the solution space. If you’re doing solution-focused work, it is about the experience people have with an org’s product. So it’s fine to say “experience map” there. But if you’re doing problem-focused work, it’s not about the org nor the product/service. In this case I call it a mental model diagram.
Q: Your advice is to stop describing age using numbers in persona descriptions. Here’s a question then: I’m on a project where >50% of the service users are over 80. And fewer than 10% are under 60. Their age defines elements of how the service is delivered. How would I legitimately exclude age in this case? In fact, every project I’ve done with this client has had important elements of service defined by age groups. (Federal government … Veterans … Pensions and compensation to veterans and their children.) Age defines certain kinds of eligibility, engagement, or delivery approach. It’s been fascinating work. (from Stephen Collins)
A: Your population is qualified by age, sure, but within and across all the brackets, the way they approach the support provided to vets varies by thinking style. There might be a thinking style “never gonna get what I need” or “reluctant to spend the time with the red tape,” etc., that could be supported with different conduits in, different language or tone, and also representations of what the system already knows about their preferences. Defining the different thinking styles (from listening sessions) will help you provide more supportive, accessible experiences.
newsletter #30 | 21-Nov-2017
I often speak about using vocabulary that clarifies meaning. For example, I encourage people to say the word “user” when they mean someone who has a relationship to the organization or the product/service, and say the word “person” when engaged in understanding the problem space. I try to clarify the difference between the solution space, where the focus is on the thing you’re creating and the ideas you have for it, and the problem space, where the focus is on the person. In the problem space, I am interested in understanding all the things running through a person’s mind as they seek to achieve an intent or purpose. And I wonder if I should settle on one word or the other: “intent” or “purpose.” Read More
newsletter #29 | 17-Oct-2017
This newsletter will be short, because I live near the Santa Rosa, Sonoma & Napa fires. It has been an intense week. I tracked friends and family, volunteered at a local evacuation center, and made donations. I saw so many other members of my community doing the same. We had more physical donations (food, clothes, bedding, pet food) than families to take them! So, it becomes a problem of getting the items to the right places, and thence to the hands of those that lost their homes. Read More
newsletter #28 | 19-Sep-2017
“Doesn’t the problem space come first? Shouldn’t it appear on the left in your diagram?” Lots of people ask me this. Research in the problem space isn’t a part of any solution cycle, so no, it does not come first. In an industry filled with process cycles (there are even figure-eight cycles), it’s hard to reconcile this idea with conventional step-wise approaches. Read More
newsletter #27 | 15-Aug-2017
When I wrote my book Practical Empathy, I chose my vocabulary carefully. I was thinking of the many clients who got distracted by the words “feelings” and “emotion,” who got great laughs by turning a listening session into a Hollywood psychoanalysis session. “How does that make you feel?” (They were pretending to ask this of their customers, who were engineers trying to solve systems problems.) Read More
newsletter #26 | 18-Jul-2017
Ideas are sexy. You get attention and credit if you have good ideas; you and your organization gain success if your ideas really catch on. But there’s not a heck of a lot of focus on where great ideas come from. We just assume they will show up, leaping like a goddess from our foreheads. Consequently we focus all our resources and effort on perfecting these already-generated ideas. It’s time to mature your practice of creating ideas–the stuff that comes before an idea forms. Read More
newsletter #25 | 20-Jun-2017
Last week I apparently caused a short Denial of Service attack on my own website when I asked listeners at the Agile UX Virtual Conference to look at the problem/solution diagram on my website. Read More