EW Resource

Newsfeeds

There is a wealth of newsfeeds on the Net bringing up-to-date information and content on a wide range of subjects.

Here are just a few relating to web development.



A List Apart: The Full Feed
  • Once Upon a Time 

    Once upon a time, I had a coworker named Bob who, when he needed help, would start the conversation in the middle and work to both ends. My phone would ring, and the first thing I heard was: “Hey, so, we need the spreadsheets on Tuesday so that Information Security can have them back to us in time for the estimates.”

    Spreadsheets? Estimates? Bob and I had never discussed either. As I had been “discouraged” from responding with “What the hell are you talking about now?” I spent the next 10 minutes of every Bob call trying to tease out the context of his proclamations.

    Clearly, Bob needed help—and not just with spreadsheets.

    Then there was Susan. When Susan wanted help, she gave me the entire life story of a project in the most polite, professional language possible. An email from Susan might go like this:

    Good morning,

    I’m working on the Super Bananas project, which we started three weeks ago and have been slowly working on since. We began with persona writing, then did some scenarios, and discussed a survey.

    [Insert two more paragraphs of the history of the project]

    I’m hoping—if you have the opportunity (due to your previous experience with [insert four of my last projects in chronological order])—you may be able to share a content-inventory template that would be appropriate for this project. If it isn’t too much trouble, when you get a chance, could you forward me the template at your earliest convenience?

    Thank you in advance for your cooperation,

    Susan

    An email that said, “Hey do you have a content-inventory template I could use on the Super Bananas Project?” would have sufficed, but Susan wanted to be professional. She believed that if I had to ask a question, she had failed to communicate properly. And, of course, that failure would weigh heavy on all our heads.

    Bob and Susan were as opposite as the tortoise and the hare, but they shared a common problem. Neither could get over the river and through the woods effectively. Specifically, they were both lousy at establishing context and getting to the point.

    We all need the help of others to build effective tools and applications. Communication skills are so critical to that endeavor that we’ve seen article after article after article—not to mention books, training classes, and job postings—stressing the importance of communication skills. Without the ability to communicate, we can neither build things right, nor build the right things, for our clients and our users.

    Still, context-setting is a tricky skill to learn. Stray too far toward Bob, and no one knows what we’re talking about. Follow Susan’s example, and people get bored and wander off before we get to the point.

    Whether we’re asking a colleague for help or nudging an end user to take action, we want them to respond a certain way. And whether we’re writing a radio ad, publishing a blog post, writing an email, or calling a colleague, we have to set the proper level of context to get the result we want.

    The most effective technique I’ve found for beginners is a process I call “Once Upon a Time.”

    Fairy tales? Seriously?

    Fairy tales are one of our oldest forms of folklore, with evidence indicating that they may stretch back to the Roman Empire. The prelude “Once upon a time” dates back to 1380, according to the Oxford English Dictionary. Wikipedia lists over 75 language variations of the stock story opener. It’s safe to say that the vast majority of us, regardless of language or culture, have heard our share of fairy tales, from the 1800s-era Brothers Grimm stories to the 1987 musical Into the Woods.

    We know how they go:

    Once upon a time, there was a [main character] living in [this situation] who [had this problem]. [Some person] knows of this need and sends the [main character] out to [complete these steps]. They [do things] but it’s really hard because [insert challenges]. They overcome [list of challenges], and everyone lives happily ever after.

    Fairy tales are effective oral storytelling techniques precisely because they follow a standard structure that always provides enough context to understand the story. Almost everything we do can be described with this structure.

    Once upon a time Anne lacked an ice cream sandwich. This forced her to get off the couch and go to the freezer, where food stayed amazingly cold. She was forced to put her hands in the icy freezer to dig the ice cream sandwich box out of the back. She overcame the cold and was rewarded with a tasty ice cream sandwich! And they all lived happily ever after.

    The structure of a fairy tale’s beginning has a lot of similarities to the journalistic Five Ws of basic information gathering: Who? What? When? Where? Why? How?

    In our communication construct, we are the main character whose situation and problem need to be succinctly described. We’ve been sent out to do a thing, we’ve hit a challenge, and now we need specific help to overcome the challenge.

    How does this help me if I’m a Bob or a Susan?

    When Bob wanted to tell his story, he didn’t start with “Once upon a time…” He started halfway through the story. If Bob was Little Red Riding Hood, he would have started by saying, “We need scissors and some rocks.” (Side note: the general lack of knowledge about how surgery works in that particular tale gives me chills.)

    When Susan wanted to tell her story, she started before “Once upon a time…” If she was Little Red Riding Hood, she started by telling you how her parents met, how long they dated, and so on, before finally getting around to mentioning that she was trapped in a wolf’s stomach.

    When we tell our stories, we have to start at the beginning—not too early, not too late. If we’re Bob, that means making sure we’ve relayed the basic facts: who we are, what our goal is, possibly who sent us, and what our challenge is. If we’re Susan, we need to make sure we limit ourselves to the facts we actually need.

    This is where we take the fairy-tale format and put it into the first person. Susan might write:

    Once upon a time, the Bananas team asked me to do the content strategy for their project. We made good progress until we had this problem: we don’t have a template for content inventories. Bob suggested I contact you. Do you have a template you can send us?

    Bob might say:

    Once upon a time, you and I were working on the data mapping of the new Information Security application. Then Information Security asked us to send the mapping to them so they could validate it. This is a problem because we only have until Tuesday to give them the unfinished spreadsheets. Otherwise we’ll hit an even bigger problem: we won’t be able to estimate the project size on Friday without the spreadsheet. Can you help me get the spreadsheet to them on time?

    Notice the parallels between the fairy tales and these drafts: we know the main character, their situation, who sent them or triggered their move, and what they need to solve their problem. In Bob’s case, this is much more information than he usually provides. In Susan’s, it’s probably much less. In both cases, we’ve distilled the situation and the request down to the basics. In both cases, the only edit needed is to remove “Once upon a time…” from the first sentence, and it’s ready to go.

    But what about…?

    Both the Bobs and the Susans I’ve worked with have had questions about this technique, especially since in both cases they thought they were already doing a pretty good job of providing context.

    The original Susan had two big concerns that led her to giving out too much information. The first was that she’d sound unprofessional if she didn’t include every last detail and nuance of business etiquette. The second was that if her recipient had questions, they’d consider her amateurish for not providing every bit of information up front.

    Susans of the world, let me assure you: clear, concise communication is professional. The message isn’t to drop “please” and “thank you”; it’s that “If it isn’t too much trouble, when you get a chance, could you please consider…” is probably overkill.

    Beyond that, no one can anticipate every question another person might have. Clear communication starts a dialogue by covering the basics and inviting questions. It also saves time; you only have to answer the questions your colleague or reader actually has. If you’re not sure whether to keep a piece of information in your story, take it out and see if the tale still makes sense.

    Bob was a tougher nut to crack, in part because he frequently didn’t realize he was starting in the middle. Bob was genuinely baffled that colleagues hadn’t read his mind to know what he was talking about. He thought he just needed the answer to one “quick” question. Once he was made aware that he was confusing—and sometimes annoying—coworkers, he could be brought back on track with gentle suggestions. “Okay Bob, let’s start over. Once upon a time you were…?”

    Begin at the beginning and stop at the end

    Using the age-old format of “Once upon a time…” gives us an incredibly sturdy framework to use for requesting action from people. We provide all of the context they need to understand our request, as well as a clear and concise description of that request.

    Clear, concise, contextual communication is professional, efficient, and much less frustrating to everyone involved, so it pays to build good habits, even if the basis of those habits seems a bit corny.

    Do you really need to start with “Once upon a time…” to tell a story or communicate a request? Well, it doesn’t hurt. The phrase is really a marker that you’re changing the way you think about your writing, for whom you’re writing it, and what you expect to gain. Soup doesn’t require stones, and business communication doesn’t require “Once upon a time…”

    But it does lead to more satisfying endings.

    And they all lived happily ever after.

  • This week's sponsor: FullStory 

    With our sponsor FULLSTORY, you get a pixel-perfect session playback tool that helps answer any question about your customer’s online experience. One easy-to-install script captures everything you need.

  • The Rich (Typefaces) Get Richer 

    There are over 1,200 font families available on Typekit. Anyone with a Typekit plan can freely use any of those typefaces, and yet we see the same small selection used absolutely everywhere on the web. Ever wonder why?

    The same phenomenon happens with other font services like Google Fonts and MyFonts. Google Fonts offers 708 font families, but we can’t browse the web for 15 minutes without encountering Open Sans and Lato. MyFonts has over 20,000 families available as web fonts, yet designers consistently reach for only a narrow selection of those.

    On my side project Typewolf, I curate daily examples of nice type in the wild. Here are the ten most popular fonts from 2015:

    1. Futura
    2. Aperçu
    3. Proxima Nova
    4. Gotham
    5. Brown
    6. Avenir
    7. Caslon
    8. Brandon Grotesque
    9. GT Walsheim
    10. Circular

    And here are the ten most popular from 2014:

    1. Brandon Grotesque
    2. Futura
    3. Avenir
    4. Aperçu
    5. Proxima Nova
    6. Franklin Gothic
    7. GT Walsheim
    8. Gotham
    9. Circular
    10. Caslon

    Notice any similarities? Nine out of the ten fonts from 2014 made the top ten again in 2015. Admittedly, Typewolf is a curated showcase, so there is bound to be some bias in the site selection process. But with 365 sites featured in a year, I think Typewolf is a solid representation of what is popular in the design community.

    Other lists of popular fonts show similar results. Or simply look around the web and take a peek at the CSS—Proxima Nova, Futura, and Brandon Grotesque dominate sites today. And these fonts aren’t just a little more popular than other fonts—they are orders of magnitude more popular.

    When it comes to typefaces, the rich get richer

    I don’t mean to imply that type designers are getting rich like Fortune 500 CEOs and flying around to type conferences in their private Learjets (although some type designers are certainly doing quite well). I’m just pointing out that a tiny percentage of fonts get the lion’s share of usage and that these “chosen few” continue to become even more popular.

    The rich get richer phenomenon (also known as the Matthew Effect) refers to something that grows in popularity due to a positive feedback loop. An app that reaches number one in the App Store will receive press because it is number one, which in turn will give it even more downloads and even more press. Popularity breeds popularity. For a cogent book that discusses this topic much more eloquently than I ever could, check out Nassim Nicholas Taleb’s The Black Swan.

    But back to typefaces.

    Designers tend to copy other designers. There’s nothing wrong with that—designers should certainly try to build upon the best practices of others. And they shouldn’t be culturally isolated and unaware of current trends. But designers also shouldn’t just mimic everything they see without putting thought into what they are doing. Unfortunately, I think this is what often happens with typeface selection.

    How does a typeface first become popular, anyway?

    I think it all begins with a forward-thinking designer who takes a chance on a new typeface. She uses it in a design that goes on to garner a lot of attention. Maybe it wins an award and is featured prominently in the design community. Another designer sees it and thinks, “Wow, I’ve never seen that typeface before—I should try using it for something.” From there it just cascades into more and more designers using this “new” typeface. But with each use, less and less thought goes into why they are choosing that particular typeface. In the end, it’s just copying.

    Or, a typeface initially becomes popular simply from being in the right place at the right time. When you hear stories about famous YouTubers, there is one thing almost all of them have in common: they got in early. Before the market is saturated, there’s a much greater chance of standing out; your popularity is much more likely to snowball. A few of the most popular typefaces on the web, such as Proxima Nova and Brandon Grotesque, tell a similar story.

    The typeface Gotham skyrocketed in popularity after its use in Obama’s 2008 presidential campaign. But although it gained enormous steam in the print world, it wasn’t available as a web font until 2013, when the company then known as Hoefler & Frere-Jones launched its subscription web font service. Proxima Nova, a typeface with a similar look, became available as a web font early, when Typekit launched in 2009. Proxima Nova is far from a Gotham knockoff—an early version, Proxima Sans, was developed before Gotham—but the two typefaces share a related, geometric aesthetic. Many corporate identities used Gotham, so when it came time to bring that identity to the web, Proxima Nova was the closest available option. This pushed Proxima Nova to the top of the bestseller charts, where it remains to this day.

    Brandon Grotesque probably gained traction for similar reasons. It has quite a bit in common with Neutraface, a typeface that is ubiquitous in the offline world—walk into any bookstore and you’ll see it everywhere. Brandon Grotesque was available early on as a web font with simple licensing, whereas Neutraface was not. If you wanted an art-deco-inspired geometric sans serif with a small x-height for your website, Brandon Grotesque was the obvious choice. It beat Neutraface to market on the web and is now one of the most sought-after web fonts.

    Once a typeface reaches a certain level of popularity, it seems likely that a psychological phenomenon known as the availability heuristic kicks in. According to the availability heuristic, people place much more importance on things that they are easily able to recall. So if a certain typeface immediately comes to mind, then people assume it must be the best option.

    For example, Proxima Nova is often thought of as incredibly readable for a sans serif due to its large x-height, low stroke contrast, open apertures, and large counters. And indeed, it works very well for setting body copy. However, there are many other sans serifs that fit that description—Avenir, FF Mark, Gibson, Texta, Averta, Museo Sans, Sofia, Lasiver, and Filson, to name a few. There’s nothing magical about Proxima Nova that makes it more readable than similar typefaces; it’s simply the first one that comes to mind for many designers, so they can’t help but assume it must be the best.

    On top of that, the mere-exposure effect suggests that people tend to prefer things simply because they are more familiar with them—the more someone encounters Proxima Nova, the more appealing they tend to find it.

    So if we are stuck in a positive feedback loop where popular fonts keep becoming even more popular, how do we break the cycle? There are a few things designers can do.

    Strive to make your brand identifiable by just your body text

    Even if it’s just something subtle, aim to make the type on your site unique in some way. If a reader can tell they are interacting with your brand solely by looking at the body of an article, then you are doing it right. This doesn’t mean that you should completely lose control and use type just for the sole purpose of standing out. Good type, some say, should be invisible. (Some say otherwise.) Show restraint and discernment. There are many small things you can do to make your type distinctive.

    Besides going with a lesser-used typeface for your body text, you can try combining two typefaces (or perhaps three, if you’re feeling frisky) in a unique way. Headlines, dates, bylines, intros, subheads, captions, pull quotes, and block quotes all offer ample opportunity for experimentation. Try using heavier and lighter weights, italics and all-caps. Using color is another option. A subtle background color or a contrasting subhead color can go a long way in making your type memorable.
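
    Here is a rough sketch of what a couple of those small touches might look like in CSS (the selectors and values are placeholders of my own, not recommendations from the article):

    article h2 {
    	font-weight: 300;               /* a lighter weight for contrast with the body text */
    	text-transform: uppercase;
    	letter-spacing: 0.08em;
    	color: #b3452c;                 /* a contrasting subhead color */
    }

    blockquote {
    	font-style: italic;
    	background: #faf6ef;            /* a subtle background tint */
    }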

    Don’t make your site look like a generic website template. Be a brand.

    Dig deeper on Typekit

    There are many other high-quality typefaces available on Typekit besides Proxima Nova and Brandon Grotesque. Spend some time browsing through their library and try experimenting with different options in your mockups. The free plan that comes with your Adobe Creative Cloud subscription gives you access to every single font in their library, so you have no excuse not to at least try to discover something that not everyone else is using.

    A good tip is to start with a designer or foundry you like and then explore other typefaces in their catalog. For example, if you’re a fan of the popular slab serif Adelle from TypeTogether, simply click the name of their foundry and you’ll discover gems like Maiola and Karmina Sans. Don’t be afraid to try something that you haven’t seen used before.

    Dig deeper on Google Fonts (but not too deep)

    As of this writing, there are 708 font families available for free on Google Fonts. There are a few dozen or so really great choices. And then there are many, many more not-so-great choices that lack italics and additional weights and that are plagued by poor kerning. So, while you should be wary of digging too deep on Google Fonts, there are definitely some less frequently used options, such as Alegreya and Fira Sans, that can hold their own against any commercial font.

    I fully support the open-source nature of Google Fonts and think that making good type accessible to the world for free is a noble mission. As time goes by, though, the good fonts available on Google Fonts will simply become the next Times New Romans and Arials—fonts that have become so overused that they feel like mindless defaults. So if you rely on Google Fonts, there will always be a limit to how unique and distinctive your brand can be.

    Try another web font service such as Fonts.com, Cloud.typography or Webtype

    It may have a great selection, but Typekit certainly doesn’t have everything. The Fonts.com library dwarfs the Typekit library, with over 40,000 fonts available. Hoefler & Co.’s high-quality collection of typefaces is only available through their Cloud.typography service. And Webtype offers selections not available on other services.

    Self-host fonts from MyFonts, FontShop or Fontspring

    Don’t be afraid to self-host web fonts. Serving fonts from your own website really isn’t that difficult and it’s still possible to have a fast-loading website if you self-host. I self-host fonts on Typewolf and my Google PageSpeed Insights scores are 90/100 for mobile and 97/100 for desktop—not bad for an image-heavy site.

    MyFonts, FontShop, and Fontspring all offer self-hosting kits that are surprisingly easy to set up. Self-hosting also offers the added benefit of not having to rely on a third-party service that could potentially go down (and take your beautiful typography with it).
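
    The CSS side of self-hosting is typically just a few lines. As a minimal sketch, assuming a hypothetical licensed typeface whose WOFF files you’ve uploaded to your own server, it might look like this:

    @font-face {
    	font-family: "Example Grotesque";                  /* hypothetical typeface name */
    	src: url("/fonts/example-grotesque.woff2") format("woff2"),
    	     url("/fonts/example-grotesque.woff") format("woff");
    	font-weight: 400;
    	font-style: normal;
    }

    body {
    	font-family: "Example Grotesque", "Helvetica Neue", Arial, sans-serif;
    }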

    Explore indie foundries

    Many small and/or independent foundries don’t make their fonts available through the major distributors, instead choosing to offer licensing directly through their own sites. In most cases, self-hosting is the only available option. But again, self-hosting isn’t difficult and most foundries will provide you with all the sample code you need to get up and running.

    Here are some great places to start, in no particular order:

    What about Massimo Vignelli?

    Before I wrap this up, I think it’s worth briefly discussing famed designer Massimo Vignelli’s infamous handful-of-basic-typefaces advice (PDF). John Boardley of I Love Typography has written an excellent critique of Vignelli’s dogma. The main points are that humans have a constant desire for improvement and refinement; we will always need new typefaces, not just so that brands can differentiate themselves from competitors, but to meet the ever-shifting demands of new technologies. And a limited variety of type would create a very bland world.

    No doubt there were those in the 16th century who shared Vignelli’s views. Every age is populated by those who think we’ve reached the apogee of progress… Vignelli’s beloved Helvetica … would never have existed but for our desire to do better, to progress, to create.
    John Boardley, “The Vignelli Twelve”

    Are web fonts the best choice for every website?

    Not necessarily. There are some instances where accessibility and site speed considerations may trump branding—in that case, it may be best just to go with system fonts. Georgia is still a pretty great typeface, and so are newer system UI fonts like San Francisco, Roboto/Noto, and Segoe.

    But if you’re working on a project where branding is important, don’t ignore the importance of type. We’re bombarded by more content now than at any other time in history; having a distinctive brand is more critical than ever.

    90 percent of design is typography. And the other 90 percent is whitespace.
    Jeffrey Zeldman, “The Year in Design”

    As designers, ask yourselves: “Is this truly the best typeface for my project? Or am I just using it to be safe, or out of laziness? Will it make my brand memorable, or will my site blend in with every other site out there?” The choice is yours. Dig deep, push your boundaries, and experiment. There are thousands of beautiful and functional typefaces out there—go use them!

  • Never Show A Design You Haven’t Tested On Users 

    It isn’t hard to find a UX designer to nag you about testing your designs with actual users. The problem is, we’re not very good at explaining why you should do user testing (or how to find the time). We say it like it’s some accepted, self-explanatory truth that deep down, any decent human knows is the right thing to do. Like “be a good person” or “be kind to animals.” Of course, if it was that self-evident, there would be a lot more user testing in this world.

    Let me be very specific about why user testing is essential. As long as you’re in the web business, your work will be exposed to users.

    If you’re already a user-testing advocate, that may seem obvious, but we often miss something that’s not as clear: how user testing impacts stakeholder communication and how we can ensure testing is built into projects, even when it seems impossible.

    The most devilish usability issues are those that haven’t even occurred to you as potential problems; you won’t find all the usability issues just by looking at your design. User testing is a way to be there when it happens, to make sure the stuff you created actually works as you intended, because best practices and common sense will get you only so far. You need to test if you want to innovate; otherwise, it’s difficult to know whether people will get it. Or want it. It’s how you find out whether you’ve created something truly intuitive.

    How testing up front saves the day

    Last fall, I was going to meet with one of our longtime clients, the charity and NGO Plan International Norway. We had an idea for a very different sign-up form than the one they were using. What they already had worked quite well, so any reasonable client would be a little skeptical. Why fix it if it isn’t broken, right? Preparing for the meeting, we realized our idea could be voted down before we had the chance to try it out.

    We decided to quickly put together a usability test before we showed the design.

    At the meeting, we began by presenting the results of the user test rather than the design itself.

    We discussed what worked well, and what needed further improvement. The conversation that followed was rational and constructive. Together, we and our partners at Plan discussed different ways of improving the first design, rather than nitpicking details that weren’t an issue in the test. It turned out to be one of the best client meetings I’ve ever had.

    Panels of photos depicting the transition from hand-drawn sketch to digital mockup

    We went from paper sketch to Illustrator sketch to InVision in a day in order to get ready for the test.

    User testing gives focus to stakeholder feedback

    Naturally, stakeholders in any project feel responsible for the end result and want to discuss suggestions, solutions, and any concerns about your design. By testing the design beforehand, you can focus on the real issues at hand.

    Don’t worry about walking into your client meeting with a few unsolved problems. You don’t need to have a solution for every user-identified issue. The goal is to show your design, make clear what you think needs fixing, and ideally, bring a new test of the improved design to the next meeting.

    By testing and explaining the problems you’ve found, stakeholders can be included in suggesting solutions, rather than hypothesizing about what might be problems. This also means that they can focus on what they know and are good at. How will this work with our CRM system? Will we be able to combine this approach with our annual campaign?

    Since last fall, I’ve been applying this dogma in all the work that I do: never show a design you haven’t tested. We’ve reversed the agenda to present results first, then a detailed walkthrough of the design. So far, our conversations about design and UX have become a lot more productive.

    Making room for user testing: sell it like you mean it

    Okay, so it’s a good idea to test. But what if the client won’t buy it or the project owner won’t give you the resources? User testing can be a hard sell—I know this from experience. Here are four ways to move past objections.

    Don’t make it optional

    It’s not unusual to look at the total sum in a proposal and go, “Uhm, this might be a little too much.” So what typically happens? Things that don’t seem essential get trimmed. That usability lab test becomes optional, and we convince ourselves that we’ll somehow persuade the client later that the usability test is actually important.

    But how do you convince them that something you made optional a couple of months ago is now really important? The client will likely feel that we’re trying to sell them something they don’t really need.

    Describe the objective, not the procedure

    A usability lab test with five people often produces valuable—but costly—insight. It also requires resources that don’t go into the test itself: e.g., recruiting and rewarding test subjects, rigging your lab and observation room, making sure the observers from the client are well taken care of (you can’t do that if you’re the one moderating the test), and so on.

    Today, rather than putting “usability lab test with five people” in the proposal, I’ll dedicate a few days to: “Quality assurance and testing: We’ll use the methods we deem most suitable at different stages of the process (e.g., usability lab test, guerilla testing, click tests, pluralistic walkthroughs, etc.) to make sure we get it right.”

    I have never had a client ask me to scale down the “get it right” part. And even if they do ask you to scale it down, you can still pull it off if you follow the next steps.

    Scale down documentation—not the testing

    If you think testing takes too much time, it might be because you spend too much time documenting the test. In a lab test, it’s a good idea to have 20 to 30 minutes between each test subject. This gives you time to summarize (and maybe even fix) the things you found in each test before you move on to the next subject. By the end of the day, you have a to-do list. No need to document it any more than that.

    List of update notifications in the Slack channel

    When user testing the Norwegian Labour party’s new crowdsourcing site, we all contributed our observations straight into our shared Slack channel.

    I’ve also found InVision’s comment mode useful for documenting issues discovered in the tests. If we have an HTML and CSS prototype, screenshots of the relevant pages can be added to InVision, with comments placed on top of the specific issues. This also makes it easy for the client to contribute to the discussion.

    Screen capture of InVision mockup, with comments from team members attached to various parts of the design

    After the test is done, we’ve already fixed some of the problems. The rest ends up in InVision as a to-do on the relevant page. The prototype is actually in HTML, CSS, and JavaScript, but the visual aspect of InVision’s comment feature makes it much easier to avoid misunderstandings.

    Scale down the prototype—not the testing

    You don’t need a full-featured website or a polished prototype to begin testing.

    • If you’re testing text, you really just need text.
    • If you’re testing a form, you just need to prototype the form.
    • If you wonder if something looks clickable, a flat Photoshop sketch will do.
    • Even a paper sketch will work to see if you’re on the right track.

    And if you test at this early stage, you’ll waste much less time later on.

    Low-cost, low-effort techniques to get you started

    You can do this. Now, I’m going to show you some very specific ways you can test, and some examples from projects I’ve worked on.

    Pluralistic walkthrough

    • Time: 15 minutes and up
    • Costs: Free

    A pluralistic walkthrough is UX jargon for asking experts to go through the design and point out potential usability issues. But putting five experts in a room for an hour is expensive (and takes time to schedule). Fortunately, getting them in the same room isn’t always necessary.

    At the start of a project, I put sketches or screenshots into InVision and post them in our Slack channels and other internal social media. I then ask my colleagues to spend a couple of minutes critiquing them. As easy as that, you’ll be able to weed out (or create hypotheses about) the biggest issues in your design.

    Team member comments posted on InVision mockup

    Before the usability test, we asked colleagues to comment (using InVision) on what they thought would work or not.

    Hit the streets

    • Time: 1–3 hours
    • Costs: Snacks

    This is a technique that works well if there’s something specific you want to test. If you’re shy, take a deep breath and get over it. This is by far the most effective way of usability testing if you’re short on resources. In the Labour Party project, we were able to test with seven people and summarize our findings within two hours. Here’s how:

    1. Get a device that’s easy to bring along. In my experience, an iPad is most approachable.
    2. Bring candy and snacks. It works well to carry a basket of snacks and rest the iPad on top of the basket, too.
    3. Go to a public place with lots of people, preferably a place where people might be waiting (e.g., a station of some sort).
    4. Approach people who look like they are bored and waiting; have your snacks (and iPad) in front of you, and say: “Excuse me, I’m from [company]. Could I borrow a couple of minutes from you? I promise it won’t take more than five minutes. And I have candy!” (This works in Norway, and I’m pretty sure food is a universal language). If you’re working in teams of two, one of you should stay in the background during the approach.
    5. If you’re alone, take notes in between each test. If there are two of you, one person can focus on taking notes while the other is moderating, but it’s still a good idea to summarize between each test.
    Two people standing in a public transportation hub, holding a large basket and an iPad

    Morten and Ida are about to go to the Central Station in Oslo, Norway, to test the Norwegian Labour Party’s new site for crowdsourcing ideas. Don’t forget snacks!

    Online testing tools

    • Time: 30 minutes and up
    • Costs: Most tools have limited free versions. Optimal Workshop charges $149 for one survey and has a yearly plan for $1990.

    There isn’t any digital testing tool that can provide the kind of insight you get from meeting real users face-to-face. Nevertheless, digital tools are a great way of going deeper into specific themes to see if you can corroborate and triangulate the data from your usability test.

    There are many tools out there, but my two favorites are Treejack and Chalkmark from Optimal Workshop. With Treejack, it rarely takes more than an hour to figure out whether your menus and information architecture are completely off or not. With click tests like Chalkmark, you can quickly get a feel for whether people understand what’s clickable or not.

    Screencapture of Illustrator mockup

    A Chalkmark test of an early Illustrator mockup of Plan’s new home page. The survey asks: “Where would you click to send a letter to your sponsored child?” The heatmap shows where users clicked.

    Diagram combining pie charts and paths

    Nothing kills arguments over menus like this baby. With Treejack, you recreate the information architecture within the survey and give users a task to solve. Here we’ve asked: “You wonder how Plan spends its funds. Where would you search for that?” The results are presented as a tree of the paths the users took.

    Using existing audience for experiments

    • Time: 30 minutes and up
    • Costs: Free (e.g., using Hotjar and Google Analytics).

    One of the things we designed for Plan was longform article pages, binding together a compelling story of text, images, and video. It struck us that these wouldn’t really fit in a usability test. What would the task be? Read the article? And what were the relevant criteria? Time spent? How far he or she scrolled? But what if the person recruited to the test wasn’t interested in the subject? How would we know if it was the design or the story that was the problem, if the person didn’t act as we hoped?

    Since we had used actual content and photos (no lorem ipsum!), we figured that users wouldn’t notice the difference between a prototype and the actual website. What if we could somehow see whether people actually read the article when they stumbled upon it in its natural context?

    The solution was for Plan to share the link to the prototyped article as if it were a regular link to their website, not mentioning that it was a prototype.

    The prototype was set up with Hotjar and Google Analytics. In addition, we had the stats from Facebook Insights. This allowed us to see whether people clicked the link, how much time they spent on the page, how far they scrolled, what they clicked, and even what they did on Plan’s main site if they came from the prototyped article. From this we could surmise that there was no indication of visual barriers (e.g., a big photo making the user think the page was finished), and that the real challenge was actually getting people to click the link in the first place.

    Side-by-side images showing the design and the heatmap resulting from user testing

    On the left is the Facebook update from Plan. On the right is the heat map from Hotjar, showing how far people scrolled, with no clear drop-out point.

    Did you get it done? Was this useful?

    • Time: A few days or a week to set up, but basically no time spent after that
    • Costs: No cost if you build your own; Task Analytics from $950 a month

    Sometimes you need harder, bigger numbers to be convincing. This often leads people to A/B testing or Google Analytics, but unless what you’re looking for is increasing a very specific conversion, even these tools can come up short. Often you’d gain more insight looking for something of a middle ground between the pure quantitative data provided by tools like Google Analytics, and the qualitative data of usability tests.

    “Was it helpful?” modules are one of those middle-ground options I try to implement in almost all of my projects. Using tools like Google Tag Manager, you can even combine the data, letting you see the pages that have the most “yes” and “no” votes on different parts of your website (content governance dream come true, right?). But the qualitative feedback is also incredibly valuable for suggesting specific things your design is lacking.

    Feedback submission buttons

    “Was this article helpful?” or “Did you find what you were looking for?” are simple questions that can give valuable insight.
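
    Stripped of styling and analytics hooks, such a module can be as simple as a tiny form. Here is a sketch, with a hypothetical endpoint standing in for whatever actually collects your feedback:

    <form method="POST" action="/feedback">	<!-- hypothetical endpoint -->
    	<p>Was this article helpful?</p>
    	<button type="submit" name="helpful" value="yes">Yes</button>
    	<button type="submit" name="helpful" value="no">No</button>
    </form>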

    This technique falls short if your users weren’t able to find a relevant article. Those folks aren’t going to leave feedback—they’re going to leave. Google Analytics isn’t of much help there, either. That high bounce rate? In most cases you can only guess why. Did they come and go because they found their answer straight away, or because the page was a total miss? Did they spend a lot of time on the page because it was interesting, or because it was impossible to understand?

    My clever colleagues made a tool to answer those kinds of questions. When we do a redesign, we run a Task Analytics survey both before and after launch to figure out not only what the top tasks are, but whether or not people were able to complete their task.

    When the user arrives, they’re asked if they want to help out. Then they’re asked to do whatever they came for and let us know when they’re done. When they’re done, we ask a) “What task did you come to do?” and b) “Did you complete the task?”

    This gives us data that is actionable and easily understood by stakeholders. At our own website, the most common task people arrive for is to contact an employee, and we learned that one in five will fail. We can fix that. And afterward, we can measure whether or not our fix really worked.

    Desktop and mobile screenshots from Task Analytics dashboard

    Why do people come to Netlife Research’s website, and do they complete their task? Screenshot from Task Analytics dashboard.

    Set up a usability lab and have a weekly drop-in test day

    • Time: 6 hours per project tested + time spent observing the test
    • Costs: rewarding subjects + the minimal costs of setting up a lab

    Setting up a usability lab is basically free in 2016:

    • A modern laptop has a microphone and camera built in. No need to buy that.
    • Want to test on mobile? Get a webcam and a flexible tripod, or just turn your laptop around.
    • Numerous screensharing and video conference tools like Skype, Google Hangouts, and GoToMeeting mean there’s no need for hefty audiovisual equipment or mirror windows.
    • Even eyetracking is becoming affordable.

    Other than that, you just need a room that’s big enough for you and a user. So even as a UX team of one, you can afford your own usability lab. Setting up a weekly drop-in test makes sense for bigger teams. If you’re at twenty people or more, I’d bet it would be a positive return on investment.

    My ingenious colleague Are Halland is responsible for the test each week. He does the recruiting, the lab setup, and the moderating. Each test day consists of tests with four different people, and each person typically gets tasks from two to three different projects that Netlife is currently working on. (Read up on why it makes sense to test with so few people.)

    By testing two to three projects at a time and having the same person organize it, we can cut down on the time spent preparing and executing the test without cutting out the actual testing.

    As a consultant, all I have to do is let Are know a few days in advance that I need to test something. Usually, I will send a link to the live stream of the test to clients to let them know we’re testing and that they’re welcome to pop in and take a look. A bonus is that clients find it surprisingly rewarding to see other clients’ tests and to get other clients’ views on their own design (we don’t put competitors in the same test).

    This has made it a lot easier to test work on short notice, and it has also reduced the time we have to spend on planning and executing tests.

    Two men sitting at a table and working on laptops, with a large screen in the background to display what they are collaborating on

    From a drop-in usability test with the Norwegian Labour Party. Eyetracking data on the screen, Morten (Labour Party) and Jørgen (front-end designer) taking notes (and instantly fixing stuff!) on the right.

    Testing is designing

    As I hope I’ve demonstrated, user testing doesn’t have to be expensive or time-consuming. So what stops us? Personally, I’ve met two big hurdles: building testing into projects to begin with and making a habit out of doing the work.

    The critical first step is to make sure that some sort of user testing is part of the approved project plan. A project manager will look at the proposal and make sure we tick that off the list. Eventually, maybe your clients will come asking for it: “But wasn’t there supposed to be some testing in this project?”

    Second, you don’t have to ask for anyone’s permission to test. User testing improves not only the quality of our work, but also the communication within teams and with stakeholders. If you’re tasked with designing something, even if you have just a few days to do it, treat testing as a part of that design task. I’ve suggested a couple of ways to do that, even with limited time and funds, and I hope you’ll share even more tips, tricks, and tools in the comments.

  • Meaningful CSS: Style Like You Mean It 

    These days, we have a world of meaningful markup at our fingertips. HTML5 introduced a lavish new set of semantically meaningful elements and attributes, ARIA defined an entire additional platform to describe a rich internet, and microformats stepped in to provide still more standardized, nuanced concepts. It’s a golden age for rich, meaningful markup.

    Yet our markup too often remains a tangle of divs, and our CSS is a morass of classes that bear little relationship to those divs. We nest div inside div inside div, and we give every div a stack of classes—but when we look in the CSS, our classes provide little insight into what we’re actually trying to define. Even when we do have semantic and meaningful markup, we end up redefining it with CSS classes that are inherently arbitrary. They have no intrinsic meaning.

    We were warned about these patterns years ago:

    In a site afflicted by classitis, every blessed tag breaks out in its own swollen, blotchy class. Classitis is the measles of markup, obscuring meaning as it adds needless weight to every page.
    Jeffrey Zeldman, Designing with Web Standards, 1st ed.

    Along the same lines, the W3C weighed in with:

    CSS gives so much power to the “class” attribute, that authors could conceivably design their own “document language” based on elements with almost no associated presentation (such as DIV and SPAN in HTML) and assigning style information through the “class” attribute… Authors should avoid this practice since the structural elements of a document language often have recognized and accepted meanings and author-defined classes may not. (emphasis mine)

    So why, exactly, does our CSS abuse classes so mercilessly, and why do we litter our markup with author-defined classes? Why can’t our CSS be as semantic and meaningful as our markup? Why can’t both be more semantic and meaningful, moving forward in tandem?

    Building better objects

    A long time ago, as we emerged from the early days of CSS and began building increasingly larger sites and systems, we struggled to develop some sound conventions to wrangle our ever-growing CSS files. Out of that mess came object-oriented CSS.

    Our systems for safely building complex, reusable components created a metastasizing classitis problem—to the point where our markup today is too often written in the service of our CSS, instead of the other way around. If we try to write semantic, accessible markup, we’re still forced to tack on author-defined meanings to satisfy our CSS. Both our markup and our CSS reflect a time when we could only define objects with what we had: divs and classes. When in doubt, add more of both. It was safer, especially for older browsers, so we oriented around the most generic objects we could find.

    Today, we can move beyond that. We can define better objects. We can create semantic, descriptive, and meaningful CSS that understands what it is describing and is as rich and accessible as the best modern markup. We can define the elephant instead of saying things like .pillar and .waterspout.

    Clearing a few things up

    But before we turn to defining better objects, let’s back up a bit and talk about what’s wrong with our objects today, with a little help from cartoonist Gary Larson.

    Larson once drew a Far Side cartoon in which a man carries around paint and marks everything he sees. “Door” drips across his front door, “Tree” marks his tree, and his cat is clearly labelled “Cat”. Satisfied, the man says, “That should clear a few things up.”

    We are all Larson’s label-happy man. We write <table class="table"> and <form class="form"> without a moment’s hesitation. Looking at GitHub, one can find plenty of examples of <main class="main">. But why? You can’t have more than one main element, so you already know how to reference it directly. The new elements in HTML5 are nearly a decade old now. We have no excuse for not using them well. We have no excuse for not expecting our fellow developers to know and understand them.

    Why reinvent the semantic meanings already defined in the spec in our own classes? Why duplicate them, or muddy them?

    An end-user may not notice or care if you stick a form class on your form element, but you should. You should care about bloating your markup and slowing down the user experience. You should care about readability. And if you’re getting paid to do this stuff, you should care about being the sort of professional who doesn’t write redundant slop. “Why should I care” was the death rattle of those advocating for table-based layouts, too.

    Start semantic

    The first step to semantic, meaningful CSS is to start with semantic, meaningful markup. Classes are arbitrary, but HTML is not. In HTML, every element has a very specific, agreed-upon meaning, and so do its attributes. Good markup is inherently expressive, descriptive, semantic, and meaningful.

    If and when the semantics of HTML5 fall short, we have ARIA, specifically designed to fill in the gaps. ARIA is too often dismissed as “just accessibility,” but really—true to its name—it’s about Accessible Rich Internet Applications. Which means it’s chock-full of expanded semantics.

    For example, if you want to define a top-of-page header, you could create your own .page-header class, which would carry no real meaning. You could use a header element, but since you can have more than one header element, that’s probably not going to work. But ARIA’s [role=banner] is already there in the spec, definitively saying, “This is a top-of-page header.”

    Once you have <header role="banner">, adding an extra class is simply redundant and messy. In our CSS, we know exactly what we’re talking about, with no possible ambiguity.

    And it’s not just about those big top-level landmark elements, either. ARIA provides a way to semantically note small, atomic-level elements like alerts, too.
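
    For instance, a small sketch of my own (not from the spec or a specific project): a status message marked up with role="alert" can be styled directly off that role.

    <p role="alert">Your changes could not be saved.</p>

    [role=alert] {
    	padding: 0.5em 1em;
    	border: 1px solid #c0392b;
    	background: #fdecea;
    }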

    A word of caution: don’t throw ARIA roles on elements that already have the same semantics. So for example, don’t write <button role="button">, because the semantics are already present in the element itself. Instead, use [role=button] on elements that should look and behave like buttons, and style accordingly:

    button,
    [role=button] {
        … 
    }

    Anything marked as semantically matching a button will also get the same styles. By leveraging semantic markup, our CSS clearly incorporates elements based on their intended usage, not arbitrary groupings. By leveraging semantic markup, our components remain reusable. Good markup does not change from project to project.

    Okay, but why?

    Because:

    • If you’re writing semantic, accessible markup already, then you dramatically reduce bloat and get cleaner, leaner, and more lightweight markup. It becomes easier for humans to read and will—in most cases—be faster to load and parse. You remove your author-defined detritus and leave the browser with known elements. Every element is there for a reason and provides meaning.
    • On the other hand, if you’re currently wrangling div-and-class soup, then you score a major improvement in accessibility, because you’re now leveraging roles and markup that help assistive technologies. In addition, you standardize markup patterns, making repeating them easier and more consistent.
    • You’re strongly encouraging a consistent visual language of reusable elements. A consistent visual language is key to a satisfactory user experience, and you’ll make your designers happy as you avoid uncanny-valley situations in which elements look mostly but not completely alike, or work slightly differently. Instead, if it looks like a duck and quacks like a duck, you’re ensuring it is, in fact, a duck, rather than a rabbit.duck.
    • There’s no context-switching between CSS and HTML, because each is clearly describing what it’s doing according to a standards-based language.
    • You’ll have more consistent markup patterns, because the right way is clear and simple, and the wrong way is harder.
    • You don’t have to think of names nearly as much. Let the specs be your guide.
    • It allows you to decouple from the CSS framework du jour.

    Here’s another, more interesting scenario. Typical form markup might look something like this (or worse):

    <form class="form" method="POST" action=".">
    	<div class="form-group">
    		<label for="id-name-field">What’s Your Name</label>
    		<input type="text" class="form-control text-input" name="name-field" id="id-name-field" />
    	</div>
    	<div class="form-group">
    		<input type="submit" class="btn btn-primary" value="Enter" />
    	</div>      
    </form>
    

    And then in the CSS, you’d see styles attached to all those classes. So we have a stack of classes describing that this is a form and that it has a couple of inputs in it. Then we add two classes to say that the button that submits this form is a button, and represents the primary action one can take with this form.

    Common vs. optimal form markup:

    • Instead of .form, use form. Most of your forms will—or at least should—follow consistent design patterns. Save additional identifiers for those that don’t. Have faith in your design patterns.
    • Instead of .form-group, use form > p or fieldset > p. The W3C recommends paragraph tags for wrapping form elements; it’s a predictable, recommended pattern.
    • Instead of .form-control or .text-input, use [type=text]. You already know it’s a text input.
    • Instead of .btn and .btn-primary, use [type=submit]. Submitting the form is inherently the primary action.

    In light of all that, here’s the new, improved markup.

    <form method="POST" action=".">
    	<p>
    		<label for="id-name-field">What’s Your Name</label>
    		<input type="text" name="name-field" id="id-name-field" />
    	</p>
    	<p>
    		<button type="submit">Enter</button>
    	</p>
    </form>
    

    The functionality is exactly the same.
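
    And the CSS can now speak in the same terms as the markup. A minimal sketch, with placeholder declarations:

    form > p {
    	margin-bottom: 1em;
    }

    form [type=text] {
    	/* text-input styles */
    }

    form [type=submit] {
    	/* primary-action styles */
    }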

    Or consider this CSS. You should be able to see exactly what it’s describing and exactly what it’s doing:

    [role=tab] {
    	display: inline-block;
    }
    [role=tab][aria-selected=true] {
    	background: tomato;
    }
    
    [role=tabpanel] {
    	display: none;
    }
    [role=tabpanel][aria-expanded=true] {
    	display: block;
    }

    Note that [aria-hidden] is more semantic than a utility .hide class, and could also be used here, but aria-expanded seems more appropriate. Neither necessarily needs to be tied to tabpanels.
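
    For comparison, a sketch of the [aria-hidden] approach:

    [aria-hidden=true] {
    	display: none;	/* hidden from everyone, not just visually */
    }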

    In some cases, you’ll find no element or attribute in the spec that suits your needs. This is the exact problem that microformats and microdata were designed to solve, so you can often press them into service. Again, you’re retaining a standardized, semantic markup and having your CSS reflect that.

    At first glance, it might seem like this would fail in the exact scenario that CSS naming structures were built to suit best: large projects, large teams. This is not necessarily the case. CSS class-naming patterns place rigid demands on the markup that must be followed. In other words, the CSS dictates the final HTML. The significant difference is that with a meaningful CSS technique, the styles reflect the markup rather than the other way around. One is not inherently more or less scalable. Both come with expectations.

    One possible argument might be that ensuring all team members understand the correct markup patterns will be too hard. On the other hand, if there is any baseline level of knowledge we should expect of all web developers, surely that should be a solid working knowledge of HTML itself, not memorizing arcane class-naming rules. If nothing else, the patterns a team follows will be clear, established, well documented by the spec itself, and repeatable. Good markup and good CSS, reinforcing each other.

    To suggest we shouldn’t write good markup and good CSS because some team members can’t understand basic HTML structures and semantics is a cop-out. Our industry can—and should—expect better. Otherwise, we’d still be building sites in tables because CSS layout is supposedly hard for inexperienced developers to understand. It’s an embarrassing argument.

    Probably the hardest part of meaningful CSS is understanding when classes remain helpful and desirable. The goal is to use classes as they were intended to be used: as arbitrary groupings of elements. You’d want to create custom classes most often for a few cases:

    • When there are no existing elements, attributes, or standardized data structures you can use. In some cases, you might truly have an object that the HTML spec, ARIA, and microformats never accounted for. It shouldn’t happen often, but it is possible. Just be sure you’re not sticking a horn on a horse when you’re defining .unicorn.
    • When you wish to arbitrarily group differing markup into one visual style. In this example, you want objects that are not the same to look like they are. In most cases, they should probably be the same, semantically, but you may have valid reasons for wanting to differentiate them.
    • When you’re building a utility mixin.

    Another concern might be building up giant stacks of selectors. In some cases, building a wrapper class might be helpful, but generally speaking, you shouldn’t have a big stack of selectors, because semantically different elements shouldn’t be sharing that many styles in the first place. The point of meaningful CSS is that you know from your CSS that button or [role=button] applies to all buttons, while [type=submit] is always the primary action item on the form.

    We have so many more powerful attributes at our disposal today that we shouldn’t need big stacks of selectors. To have them would indicate sloppy thinking about what things truly are and how they are intended to be used within the overall system.

    It’s time to up our CSS game. We can remain dogmatically attached to patterns developed in a time and place we have left behind, or we can move forward with CSS and markup that correspond to defined specs and standards. We can use real objects now, instead of creating abstract representations of them. The browser support is there. The standards and references are in place. We can start today. Only habit is stopping us.

  • Prototypal Object-Oriented Programming using JavaScript 

    Douglas Crockford accurately described JavaScript as the world’s most misunderstood language. A lot of programmers tend to think of it as not a “proper” language because it lacks the common object-oriented programming concepts. I myself developed the same opinion after my first JavaScript project ended up a hodgepodge, as I couldn’t find a way to organize code into classes. But as we will see, JavaScript comes packed with a rich system of object-oriented programming that many programmers don’t know about.

    Back in the time of the First Browser War, executives at Netscape hired a smart guy called Brendan Eich to put together a language that would run in the browser. Unlike class-based languages like C++ and Java, this language, which was at some point called LiveScript, was designed to implement a prototype-based inheritance model. Prototypal OOP, which is conceptually different from the class-based systems, had been invented just a few years before to solve some problems that class-based OOP presented and it fit very well with LiveScript’s dynamic nature.

    Unfortunately, this new language had to “look like Java” for marketing reasons. Java was the cool new thing in the tech world and Netscape’s executives wanted to market their shiny new language as “Java’s little brother.” This seems to be why its name was changed to JavaScript. The prototype-based OOP system, however, didn’t look anything like Java’s classes. To make this prototype-based system look like a class-based system, JavaScript’s designers came up with the keyword new and a novel way to use constructor functions. The existence of this pattern and the ability to write “pseudo class-based” code has led to a lot of confusion among developers.

    Understanding the rationale behind prototype-based programming was my “aha” moment with JavaScript and resolved most of the gripes I had with the language. I hope learning about prototype-based OOP brings you the same peace of mind it brought me. And I hope that exploring a technique that has not been fully explored excites you as much as it excites me.

    Prototype-based OOP

    Conceptually, in class-based OOP, we first create a class to serve as a “blueprint” for objects, and then create objects based on this blueprint. To build more specific types of objects, we create “child” classes; i.e., we make some changes to the blueprint and use the resulting new blueprint to construct the more specific objects.

    For a real-world analogy, if you were to build a chair, you would first create a blueprint on paper and then manufacture chairs based on this blueprint. The blueprint here is the class, and chairs are the objects. If you wanted to build a rocking chair, you would take the blueprint, make some modifications, and manufacture rocking chairs using the new blueprint.

    Now take this example into the world of prototypes: you don’t create blueprints or classes here, you just create the object. You take some wood and hack together a chair. This chair, an actual object, can function fully as a chair and also serve as a prototype for future chairs. In the world of prototypes, you build a chair and simply create “clones” of it. If you want to build a rocking chair, all you have to do is pick a chair you’ve manufactured earlier, attach two rockers to it, and voilà! You have a rocking chair. You didn’t really need a blueprint for that. Now you can just use this rocking chair for yourself, or perhaps use it as a prototype to create more rocking chairs.

    JavaScript and prototype-based OOP

    Following is an example that demonstrates this kind of OOP in JavaScript. We start by creating an animal object:

    var genericAnimal = Object.create(null);

    Object.create(null) creates a new empty object. (We will discuss Object.create() in further detail later.) Next, we add some properties and functions to our new object:

    genericAnimal.name = 'Animal';
    genericAnimal.gender = 'female';
    genericAnimal.description = function() {
    	return 'Gender: ' + this.gender + '; Name: ' + this.name;
    };

    genericAnimal is a proper object and can be used like one:

    console.log(genericAnimal.description());
    //Gender: female; Name: Animal

    We can create other, more specific animals by using our sample object as a prototype. Think of this as cloning the object, just like we took a chair and created a clone in the real world.

    var cat = Object.create(genericAnimal);

    We just created a cat as a clone of the generic animal. We can add properties and functions to this:

    cat.purr = function() {
    	return 'Purrrr!';
    };

    We can use our cat as a prototype and create a few more cats:

    var colonel = Object.create(cat);
    colonel.name = 'Colonel Meow';
    
    var puff = Object.create(cat);
    puff.name = 'Puffy';

    You can also observe that properties/methods from parents were properly carried over:

    console.log(puff.description());
    //Gender: female; Name: Puffy

    The new keyword and the constructor function

    JavaScript has the concept of a new keyword used in conjunction with constructor functions. This feature was built into JavaScript to make it look familiar to people trained in class-based programming. You may have seen JavaScript OOP code that looks like this:

    function Person(name) {
    	this.name = name;
    	this.sayName = function() {
    		return "Hi, I'm " + this.name;
    	};
    }
    var adam = new Person('Adam');
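
    For completeness, a quick usage check (a tiny sketch; the output in the comment follows from the constructor above):

    console.log(adam.sayName());
    //Hi, I'm Adam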

    Implementing inheritance using JavaScript’s default method looks more complicated. We define Ninja as a sub-class of Person. Ninjas have a name, since they are people, and they can also have a primary weapon, such as a shuriken.

    function Ninja(name, weapon) {
      Person.call(this, name);
      this.weapon = weapon;
    }
    Ninja.prototype = Object.create(Person.prototype);
    Ninja.prototype.constructor = Ninja;

    While the constructor pattern might look more attractive to an eye that’s familiar with class-based OOP, it is considered problematic by many. What’s happening behind the scenes is prototypal OOP, and the constructor function obfuscates the language’s natural implementation of OOP. This just looks like an odd way of doing class-based OOP without real classes, and leaves the programmer wondering why they didn’t implement proper class-based OOP.

    Since it’s not really a class, it’s important to understand what a call to a constructor does. It first creates an empty object, then sets the prototype of this object to the prototype property of the constructor, then calls the constructor function with this pointing to the newly-created object, and finally returns the object. It’s an indirect way of doing prototype-based OOP that looks like class-based OOP.
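
    To make that sequence concrete, here is a rough sketch of what new Person('Adam') does behind the scenes. (simulateNew is a hypothetical helper written only for illustration; it is not part of the language, and real constructors handle a few more edge cases.)

    function simulateNew(Constructor) {
    	//1. Create an empty object whose prototype is Constructor.prototype
    	var obj = Object.create(Constructor.prototype);
    	//2. Call the constructor with `this` pointing at the new object
    	var args = Array.prototype.slice.call(arguments, 1);
    	var result = Constructor.apply(obj, args);
    	//3. Return the new object (unless the constructor itself returned an object)
    	return (typeof result === 'object' && result !== null) ? result : obj;
    }
    
    var eve = simulateNew(Person, 'Eve'); //roughly equivalent to: new Person('Eve')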

    The problem with JavaScript’s constructor pattern is succinctly summed up by Douglas Crockford:

    JavaScript’s constructor pattern did not appeal to the classical crowd. It also obscured JavaScript’s true prototypal nature. As a result, there are very few programmers who know how to use the language effectively.

    The most effective way to work with OOP in JavaScript is to understand prototypal OOP, whether the constructor pattern is used or not.

    Understanding delegation and the implementation of prototypes

    So far, we’ve seen how prototypal OOP differs from traditional OOP in that there are no classes—only objects that can inherit from other objects.

    Every object in JavaScript holds a reference to its parent (prototype) object. When an object is created through Object.create, the passed object—meant to be the prototype for the new object—is set as the new object’s prototype. For the purpose of understanding, let’s assume that this reference is called __proto__ (see footnote 1). Some examples from the previous code can illustrate this point:

    The line below creates a new empty object with __proto__ as null.

    var genericAnimal = Object.create(null); 

    The code below then creates a new empty object with __proto__ set to the genericAnimal object, i.e. rodent.__proto__ points to genericAnimal.

    var rodent = Object.create(genericAnimal);
     rodent.size = 'S';

    The following line will create an empty object with __proto__ pointing to rodent.

    var capybara = Object.create(rodent);
    //capybara.__proto__ points to rodent
    //capybara.__proto__.__proto__ points to genericAnimal
    //capybara.__proto__.__proto__.__proto__ is null

    As we can see, every object holds a reference to its prototype. Looking at Object.create without knowing what exactly it does, it might look like the function actually “clones” from the parent object, and that properties of the parent are copied over to the child, but this is not true. When capybara is created from rodent, capybara is an empty object with only a reference to rodent.

    But then—if we were to call capybara.size right after creation, we would get S, which was the size we had set in the parent object. What blood-magic is that? capybara doesn’t have a size property yet. But still, when we write capybara.size, we somehow manage to see the prototype’s size property.

    The answer is in JavaScript’s method of implementing inheritance: delegation. When we call capybara.size, JavaScript first looks for that property in the capybara object. If not found, it looks for the property in capybara.__proto__. If it didn’t find it in capybara.__proto__, it would look in capybara.__proto__.__proto__. This is known as the prototype chain.

    If we called capybara.description(), the JavaScript engine would start searching up the prototype chain for the description function and finally discover it in capybara.__proto__.__proto__ as it was defined in genericAnimal. The function would then be called with this pointing to capybara.
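
    As a rough illustration of that lookup, the hypothetical helper below walks the chain the same way, using the standard Object.getPrototypeOf (a simplified sketch; the engine’s real algorithm also handles getters and other details):

    function lookup(obj, propertyName) {
    	var current = obj;
    	while (current !== null) {
    		if (Object.prototype.hasOwnProperty.call(current, propertyName)) {
    			return current[propertyName]; //found on this link of the chain
    		}
    		current = Object.getPrototypeOf(current); //move one level up
    	}
    	return undefined; //reached the end of the chain
    }
    
    lookup(capybara, 'size');        //'S', found on rodent
    lookup(capybara, 'description'); //the function defined on genericAnimal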

    Setting a property is a little different. When we set capybara.size = 'XXL', a new property called size is created in the capybara object. Next time we try to access capybara.size, we find it directly in the object, set to 'XXL'.

    Since the prototype property is a reference, changing the prototype object’s properties at runtime will affect all objects using the prototype. For example, if we rewrote the description function or added a new function in genericAnimal after creating rodent and capybara, they would be immediately available for use in rodent and capybara, thanks to delegation.
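
    For example (a small sketch building on the objects above), adding a method to genericAnimal after the fact is immediately visible through the chain:

    genericAnimal.sleep = function() {
    	return this.name + ' is sleeping';
    };
    
    console.log(capybara.sleep());
    //'Animal is sleeping': capybara picks up both sleep and name by delegation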

    Creating Object.create

    When JavaScript was developed, its default way of creating objects was the keyword new. Then many notable JavaScript developers campaigned for Object.create, and eventually it was included in the standard. However, some browsers don’t support Object.create (you know the one I mean). For that reason, Douglas Crockford recommends including the following code in your JavaScript applications to ensure that Object.create is created if it is not there:

    if (typeof Object.create !== 'function') {
    	Object.create = function (o) {
    		function F() {}
    		F.prototype = o;
    		return new F();
    	};
    }

    Object.create in action

    If you wanted to extend JavaScript’s Math object, how would you do it? Suppose that we would like to redefine the random function without modifying the original Math object, as other scripts might be using it. JavaScript’s flexibility provides many options. But I find using Object.create a breeze:

    var myMath = Object.create(Math);

    Couldn’t possibly get any simpler than that. You could, if you prefer, write a new constructor, set its prototype to a clone of Math, augment the prototype with the functions you like, and then construct the actual object. But why go through all that pain to make it look like a class, when prototypes are so simple?

    We can now redefine the random function in our myMath object. In this case, I wrote a function that returns random whole numbers within a range if the user specifies one. Otherwise, it just calls the parent’s random function.

    myMath.random = function() {
    	var uber = Object.getPrototypeOf(this);
    	if (typeof(arguments[0]) === 'number' && typeof(arguments[1]) === 'number' && arguments[0] < arguments[1]) {
    		var rand = uber.random();
    		var min = Math.floor(arguments[0]);
    		var max = Math.ceil(arguments[1]);
    		return this.round(rand * (max - min)) + min;
    	}
    	return uber.random();
    };

    There! Now myMath.random(-5,5) gets you a random whole number between −5 and 5, while myMath.random() gets the usual. And since myMath has Math as its prototype, it has all the functionality of the Math object built into it.
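
    A quick usage sketch (the exact values are random, of course):

    console.log(myMath.random(-5, 5)); //e.g. 3, a whole number between -5 and 5
    console.log(myMath.random());      //e.g. 0.6183..., the usual Math.random()
    console.log(myMath.PI);            //3.141592653589793, delegated from Math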

    Class-based OOP vs. prototype-based OOP

    Prototype-based OOP and class-based OOP are both great ways of doing OOP; both approaches have pros and cons. Both have been researched and debated in the academic world since before I was born. Is one better than the other? There is no consensus on that. But the key points everyone can agree on are that prototypal OOP is simpler to understand, more flexible, and more dynamic.

    To get a glimpse of its dynamic nature, take the following example: you write code that extensively uses the indexOf function in arrays. After writing it all down and testing in a good browser, you grudgingly test it out in Internet Explorer 8. As expected, you face problems. This time it’s because indexOf is not defined in IE8.

    So what do you do? In the class-based world, you could solve this by defining the function, perhaps in another “helper” class which takes an array or List or ArrayList or whatever as input, and replacing all the calls in your code. Or perhaps you could sub-class the List or ArrayList and define the function in the sub-class, and use your new sub-class instead of the ArrayList.

    But JavaScript and prototype-based OOP’s dynamic nature makes it simple. Every array is an object and points to a parent prototype object. If we can define the function in the prototype, then our code will work as is without any modification!

    if (!Array.prototype.indexOf) {
    	Array.prototype.indexOf = function(elem) {
    		//Your magical fix code goes here.
    	};
    }
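
    Purely for illustration, the elided “magical fix” could be as simple as a linear search. (A rough sketch; a real polyfill would also handle the optional fromIndex argument, sparse arrays, and other spec details.)

    if (!Array.prototype.indexOf) {
    	Array.prototype.indexOf = function(elem) {
    		for (var i = 0; i < this.length; i++) {
    			if (this[i] === elem) {
    				return i; //found it
    			}
    		}
    		return -1; //not found
    	};
    }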

    You can do many cool things once you ditch classes and objects for JavaScript’s prototypes and dynamic objects. You can extend existing prototypes to add new functionality—extending prototypes like we did above is how the well known and aptly named library Prototype.js adds its magic to JavaScript’s built-in objects. You can create all sorts of interesting inheritance schemes, such as one that inherits selectively from multiple objects. Its dynamic nature means you don’t even run into the problems with inheritance that the Gang of Four book famously warns about. (In fact, solving these problems with inheritance was what prompted researchers to invent prototype-based OOP—but all that is beyond our scope for this article.)

    Class-based OOP emulation can go wrong

    Consider the following very simple example written with pseudo-classes:

    function Animal(){
        this.offspring=[];
    }
    
    Animal.prototype.makeBaby = function(){ 
        var baby = new Animal();
        this.offspring.push(baby);
        return baby;
    };
    
    //create Cat as a sub-class of Animal
    function Cat() {
    }
    
    //Inherit from Animal
    Cat.prototype = new Animal();
    
    var puff = new Cat();
    puff.makeBaby();
    var colonel = new Cat();
    colonel.makeBaby();

    The example looks innocent enough. This is an inheritance pattern that you will see in many places all over the internet. However, something funny is going on here—if you check colonel.offspring and puff.offspring, you will notice that each of them contains the same two babies! That’s probably not what you intended—unless you are coding a quantum physics thought experiment.

    JavaScript tried to make our lives easier by making it look like we have good old class-based OOP going on. But it turns out it’s not that simple. Simulating class-based OOP without completely understanding prototype-based OOP can lead to unexpected results. To understand why this problem occurred, you must understand prototypes and how constructors are just one way to build objects from other objects.

    What happened in the above code is very clear if you think in terms of prototypes. The offspring array is created when the Animal constructor is called—and because of the line Cat.prototype = new Animal(), it is created on the Cat.prototype object. All individual objects created with the Cat constructor use Cat.prototype as their prototype, and Cat.prototype is where offspring resides. When we call makeBaby, the JavaScript engine searches for the offspring property on the individual cat object (puff or colonel) and fails to find it. It then finds the property on Cat.prototype—and pushes the new baby into the shared array that both cats inherit.
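
    You can verify this in the console (a small check based on the example above):

    console.log(puff.hasOwnProperty('offspring'));     //false: it lives on the prototype
    console.log(puff.offspring === colonel.offspring); //true: both cats share one array
    console.log(puff.offspring.length);                //2: both babies ended up in it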

    So now that we understand what the problem is, thanks to our knowledge of the prototype-based system, how do we solve it? The solution is that the offspring property needs to be created in the object itself rather than somewhere in the prototype chain. There are many ways to solve it. One way is that makeBaby ensures that the object on which the function is called has its own offspring property:

    Animal.prototype.makeBaby = function() {
    	var baby = new Animal();
    	if (!this.hasOwnProperty('offspring')) {
    		this.offspring = [];
    	}
    	this.offspring.push(baby);
    	return baby;
    };
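
    Another way (a sketch that mirrors the Person.call pattern shown earlier) is to give every cat its own offspring array by running the Animal constructor for each new Cat, and to set up the prototype chain with Object.create instead of new Animal():

    function Cat() {
    	Animal.call(this); //creates this.offspring on the individual cat
    }
    Cat.prototype = Object.create(Animal.prototype);
    Cat.prototype.constructor = Cat;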
    

    Backbone.js runs into a similar trap. In Backbone.js, you build views by extending the base Backbone.View “class.” You then instantiate views using the constructor pattern. This model is very good at emulating class-based OOP in JavaScript:

    //Create a HideableView "sub-class" of Backbone.View
    var HideableView = Backbone.View.extend({
        el: '#hideable', //the view will bind to this selector
        events : {
            'click .hide': 'hide'
        },
        //this function was referenced in the click handler above
        hide: function() {
          //hide the entire view
        	$(this.el).hide();
        }
    });
    
    var hideable = new HideableView();

    This looks like simple class-based OOP. We inherited from the base Backbone.View class to create a HideableView child class. Next, we created an object of type HideableView.

    Since this looks like simple class-based OOP, we can use this functionality to conveniently build inheritance hierarchies, as shown in the following example:

    var HideableTableView = HideableView.extend({
        //Some view that is hideable and rendered as a table.
    });
    
    var HideableExpandableView = HideableView.extend({
        initialize: function() {
            //add an expand click handler. We didn’t create a separate
            //events object because we need to add to the
            //inherited events.
            this.events['click .expand'] = 'expand';
        },
        expand: function () {
        	//handle expand
        }
    });
    
    var table = new HideableTableView();
    var expandable = new HideableExpandableView();

    This all looks good while you’re thinking in class-based OOP. But if you try table.events['click .expand'] in the console, you will see “expand”! Somehow, HideableTableView has an expand click handler, even though it was never defined in this class.

    You can see the problem in action here: http://codepen.io/anon/pen/qbYJeZ

    The problem above occurred because of the same reason outlined in the earlier example. In Backbone.js, you need to work against the indirection created by trying to make it look like classes, to see the prototype chain hidden in the background. Once you comprehend how the prototype chain would be structured, you will be able to find a simple fix for the problem.
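
    For instance, one possible fix (a sketch, assuming Underscore’s _.extend, which Backbone already depends on) is to give each instance its own events object instead of mutating the one shared through the prototype:

    var HideableExpandableView = HideableView.extend({
        initialize: function() {
            //copy the inherited events into an own property, then add to the copy
            this.events = _.extend({}, this.events, {
                'click .expand': 'expand'
            });
            //re-bind handlers in case delegateEvents already ran in the constructor
            this.delegateEvents();
        },
        expand: function() {
        	//handle expand
        }
    });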

    In conclusion

    Despite prototypal OOP underpinning one of the most popular languages out there today, programmers are largely unfamiliar with what exactly prototype-based OOP is. JavaScript itself may be partly to blame because of its attempts to masquerade as a class-based language.

    This needs to change. To work effectively with JavaScript, developers need to understand the how and why of prototype-based programming—and there’s much more to it than this article. Beyond mastering JavaScript, learning about prototype-based programming also teaches you a lot about class-based programming, as you get to compare and contrast the two approaches.

    Further Reading

    Douglas Crockford’s note on prototypal programming was written before Object.create was added to the standard.

    An article on IBM’s developerWorks reinforces the same point on prototypal OOP. This article was the prototypal “aha” moment for me.

    The following three texts will be interesting reads if you’re willing to dive into the academic roots of prototype-based programming:

    Henry Lieberman of MIT Media Labs compares class-based inheritance with prototype-based delegation and argues that prototype-based delegation is the more flexible of the two concepts.

    Classes versus Prototypes in Object-Oriented Languages is a proposal to use prototypes instead of classes by the University of Washington’s Alan Borning.

    Lieberman’s and Borning’s work in the 1980s appears to have influenced the work that David Ungar and Randall Smith did to create the first prototype-based programming language: Self. Self went on to become the basis for the prototype-based system in JavaScript. This paper describes their language and how it omits classes in favor of prototypes.

     

    Footnotes

    • 1. The __proto__ property is used by some browsers to expose an object’s prototype, but it is not standard and is considered obsolete. Use Object.getPrototypeOf() as a standards-compliant way of obtaining an object’s prototype in modern browsers.
  • OOUX: A Foundation for Interaction Design 

    There’s a four-year story behind my current design process, something I introduced last year on A List Apart—“Object-Oriented UX.” The approach advocates designing objects before actions. Now it’s time to get into the deeper benefits of OOUX and the smooth transition it can set up while shifting from object-based system design to interaction design.

    The “metaphor,” once found, is a perfectly definite thing: a collection of objects, actions on objects, and relationships between objects.
    Dave Collins, Designing Object-Oriented User Interfaces (1995)

    Imagine you’re designing a social network that helps chefs trade recipes requiring exotic ingredients. With good ol’ fashioned research, you develop a solid persona (Pierre, the innovator-chef, working in a gourmet restaurant) and you confirm the space in the market. You understand the industry and the project goals. Now it’s time to put marker to whiteboard.

    Where would you start designing?

    Would you start by sketching out an engaging onboarding process for chefs? We do need chefs to make this thing successful—no chefs, no network! So maybe we start by making sure their first interaction is amazing.

    Or maybe you start with one of the most frequent activities: how a chef posts a new recipe. And that could easily lead you to sketching the browsing experience—how will other chefs find new recipes?

    Three or four years ago, I’d start by storyboarding a critical user path. I’d start with the doing.

    Pre-OOUX, my initial design-thinking would look something like this. I’d figure out the interaction design while figuring out what a recipe actually should be.

    I imagine many other user experience designers begin the same way, by designing how someone would use the thing. One interaction flow leads to the design of another interaction flow. Soon, you have a web of flows. Iterate on those flows, add some persistent navigation, and voilà!—you have a product design.

    But there is a problem with this action-first approach. We are designing our actions without a clear picture of what is being acted on. It’s like the sentence, “Sally kicked.” We’ve got our subject (the user) and we’ve got our verb (the action).  But where’s the object? Sally kicked what? The ball? Her brother? A brain-hungry zombie?

    When we jump right into actions, we run the risk of designing a product with a fuzzy reflection of the user’s mental model. By clearly defining the objects in our users’ real-world problem domain, we can create more tangible and relatable user experiences.

    These days, a lot happens before I begin sketching user flows (in this article, I use “user flow” and “interaction flow” interchangeably). I first define my user, asking, “Who’s Sally?” Next, I figure out her mental model, meaning all the things (objects) that the problem is made of, all the things she sees as part of the solution, and how they relate to one another. Finally, I design the interactions. Once I understand that Sally is a ninja armed with only a broomstick, and that she is faced with a team of zombies, I can better design the actions she’ll take.

    In retrospect, I feel like I was doing my job backwards for the first two-thirds of my career, putting interaction flows before building an object-oriented framework. Now, I would figure out the system of chefs, recipes, and ingredients before worrying about the chef onboarding process or how exactly a chef posts a recipe. How do the objects relate to one another? What content elements comprise each object? Which objects make up my MVP and which objects can I fold in later? Finally, what actions does a user take on each object?

    That’s what Object Oriented UX is all about—thinking in terms of objects before actions. In my previous article, we learned how to define objects and design a framework based on those objects. This time, we’re exploring how to smoothly transition from big-picture OOUX to interaction design by using a very simple tool: the CTA Inventory.

    What’s a CTA Inventory, and why is it important?

    Calls to action (CTAs) are the main entry points to interaction flows. If an interaction flow is a conversation between the system and the user, the CTA is a user’s opening line to start that conversation. Once you have an object framework, you can add possible CTAs to your objects, basically putting a stake in the ground that says, “Interaction design might go here.” These stakes in the ground—the CTAs—can be captured using a CTA Inventory.

    A CTA Inventory is a bridge from big-picture OOUX to detailed interaction design.

    A CTA Inventory is just a fancy list of potential CTAs organized around your objects. Since most (all?) interactions involve creating, manipulating, or finding an object, we create this inventory by thinking about what a user wants to do in our system—specifically, what a user wants to do to objects in our system.

    Creating a CTA Inventory does two things. First, it helps us shift gears from the holistic nature of system design to the more compartmentalized work of interaction design. Second, it helps us:

    1. think about interactions creatively;
    2. validate those interactions;
    3. and ultimately write project estimates with greater accuracy.

    Let’s explore these three benefits a little more before creating our own CTA Inventory.

    Creative constraints improve brainstorming

    Simply understanding your objects will help you determine the things that a user might do with them. We know that Sally wants to destroy zombies—but it’s only after we’ve figured out that these are the fast, smart, light-averting zombies that we can be prepared to design exactly how she’ll do it.

    When we think about interactions in the context of an object, we give ourselves a structure for brainstorming. When we apply the constraints of the object framework, we’re likely to be more creative—and more likely to cover all of our bases. Brainstorm your actions object by object so that innovative features are less likely to fall through the cracks.

    For example, let’s think about the object “ingredient” in our Chef Network app. What are all the things that Pierre might want to do to an ingredient?

    • Mark the ingredient as a favorite.
    • Claim he’s an expert on the ingredient.
    • Add the ingredient to a shopping list.
    • Check availability of the ingredient at local stores.
    • Follow the ingredient to see new recipes that are posted using this ingredient.
    • Add a tip for using this ingredient.

    By using the object framework, I might uncover functionality I wouldn’t otherwise have considered if my brainstorming was too broad and unconstrained; structure gives creative thinking more support than amorphous product goals and squishy user objectives.

    Validate actions early

    Good news. You can user-test your system of objects and the actions a user might take on them before spending long hours on interaction design. Create a prototype that simply lets users navigate from one object to another, exploring the framework (which is a significant user goal in itself). Through observation and interviews, see if your system resonates with their mental model. Do you have the right objects and do their relationships make sense? And are the right “buttons” on those objects?

    Armed with a simple prototype of your interconnected objects and their associated CTAs, you now have a platform to discuss functionality with users—without all the hard work of prototyping the actual interactions. In a nutshell: talk to your users about the button before designing what happens when they click it.

    Interaction design can be some of the most difficult, time-consuming, devil-in-the-details design work. I personally don’t want to sweat through designing a mechanism for following chefs, managing alerts from followed chefs, and determining how the dreaded unfollow will work…if it turns out users would rather follow ingredients.

    Estimate with interaction design in mind

    As we’ve established, interaction design is a time- and resources-devouring monster. We have to design a conversation between the system and the user—an unpredictable user who requires us to think about error prevention, error handling, edge cases, animated transitions, and delicate microinteractions. Basically, all the details that ensure they don’t feel dumb or think that the system is dumb.

    The amount and complexity of interaction design your product requires will critically impact your timeline, budget, and even staffing requirements, perhaps more than any other design factor. Armed with a CTA Inventory, you can feel confident knowing you have solid insight into the interaction design that will be handled by your team. You can forecast the coming storm and better prepare for it.

    So, do you love this idea of better brainstorming, early validation, and estimating with better accuracy? Awesome! Let’s look at how to create your amazing CTA Inventory. First, we will discuss the low-fidelity initial pass (which is great to do collaboratively with your team). Next, we will set up a more formal and robust spreadsheet version.

    CTA Inventory: low-fidelity

    If you haven’t read my primer on object mapping, now would be a great time to go and catch up! I walk you through my methodology for:

    • extracting objects from product goals;
    • defining object elements (like core content, metadata, and nested objects);
    • and prioritizing elements.

    The walk-through in the previous article results in an object map similar to this:

    An object map before layering on a CTA Inventory.

    I’ve used outlined blue stickies to represent objects; yellow stickies to represent core content; pink stickies to indicate metadata; and additional blue stickies to represent nested objects.

    A low-fidelity CTA Inventory is quite literally an extension of the object mapping exercise; once you’ve prioritized your elements, switch gears and begin thinking about the CTAs that will associate with each object. I use green stickies for my CTAs (green for go!) and stack them on top of their object.

    An object map with a quick, low-fidelity CTA Inventory tacked on. Potential CTAs are on green stickies placed next to each object.

    This initial CTA brainstorming is great to do while workshopping with a cross-functional team. Get everyone’s ideas on how a user might act on the objects. You might end up with dozens of potential CTAs! In essence, you and your team will have a conversation about the features of the product, but within the helpful framework of objects and their CTAs. Essentially, you are taking that big, hairy process of determining features, then disguising it as a simple, fun, and collaborative activity: “All we’re doing is brainstorming what buttons need to go on our objects! That’s all! It’s easy!” 

    Each object might need roughly 10–15 minutes, so block out an hour or two to discuss CTAs if your system has three to five objects. You’ll be surprised at the wealth of ideas that emerge! You and your team will gain clarity about what your product should actually do, not to mention where you disagree (which is valuable in its own right).

    In our chef example, something pretty interesting happened while the team was hashing out ideas. During the CTA conversation about “ingredient,” we thought that perhaps it would be useful if chefs could suggest a substitute ingredient (see circled green sticky below). “Fresh out of achiote paste? Try saffron instead!” But with that in mind, those “suggested substitute ingredients” need to become part of the ingredient object. So, we updated the object map to reflect that (circled blue sticky).

    After brainstorming CTAs, we needed to add a nested object on “ingredient” for “ingredients that could be substituted.”

    Although I always begin with my objects and their composition, CTA brainstorming tends to loop me back around to rethinking my objects. As always, be prepared to iterate!

    CTA Inventory: high-fidelity

    CTAs can get complicated; how and when they display might be conditional on permissions, user types, or states of your object. Even in our simple example above, some CTAs will only be available to certain users.

    For example, if I’m a chef on an instance of one of my own recipe objects, I will see “edit” and “delete” CTAs, but I might not be able to “favorite” my own recipe. Conversely, if I’m on another chef’s recipe, I won’t be able to edit or delete it, but I will definitely want the option to “favorite” it.

    In the next iteration of our CTA Inventory, we move into a format that allows us to capture more complexities and conditions. After a first pass of collaborative, analogue brainstorming about CTAs, you might want to get down to business with a more formal, digitized CTA Inventory.

    A detailed CTA Inventory for our chef network example. Dig in deeper on the actual Google Sheet.

    Using a Google spreadsheet, I create a matrix (see above) that lets me capture thoughts about each object-derived CTA and the inevitable interaction flows for each one:

    • Why do we even have this CTA? What’s the purpose, and what user or business goal does it ladder up to?
    • Who will trigger this CTA? A certain persona or user type? Someone with a special permission or role?
    • Where will the CTAs live? Where are the obvious places a user will trigger this interaction flow? And are there other creative places we should consider putting it, based on user needs?
    • How much complexity is inherent in the interaction flow triggered by this CTA? This can help us estimate level of effort.
    • What is the priority of this interaction flow? Is this critical to launch, slated for a later phase, or a concept that needs to be researched and validated?
    • What questions and discussion points does this CTA raise?

    Before you start designing the interactions associated with each of your CTAs, get comfortable with the answers to these questions. Build an object-oriented prototype and validate the mental model with users. Talk to them and make sure that you’ve included the right doorways to interaction. Then you will be perfectly positioned to start sketching and prototyping what happens when a user opens one of those doors.

    A solid foundation for designing functionality

    You’ve collaboratively mapped out an elegant object-oriented design system and you’ve created a thorough CTA Inventory. You built a rough, clickable prototype of your system. With real users, you validated that the system is a breeze to navigate. Users pivot gracefully from object to object and the CTAs on those objects make sense for their needs. Life is good.

    But OOUX and a CTA Inventory will not help you design the interactions themselves. You still have to do that hard work! Now, though, as you begin sketching out interaction flows, you can feel confident that the functionality you are designing is rooted in solid ground. Because your CTA Inventory is a prioritized, team-endorsed, IxD to-do list, you’ll be more proactive and organized than ever.

    Most important, users getting things done within your system will feel as if they are manipulating tangible things. Interacting will feel less abstract, less fuzzy. As users create, favorite, add, remove, edit, move, and save, they will know what they’re doing—and what they’re doing it to. When you leverage an object-based CTA Inventory, your product designs and your design process will become more elegant, more streamlined, and more user-friendly.

  • Looking for “Trouble” 

    I know a colleague who keeps a “wall of shame” for emails he gets from clients—moments of confusion on their end that (for better or worse) are also funny. The thing is, we know how to answer these questions because we’ve heard them all before: Why does this look different when I print it? How do people know to scroll? To a certain extent, making light of the usual “hard questions” is a way of blowing off steam—but it’s an attitude poisonous for an agency.

    So, why do we disregard these humans that we interact with daily? Why do we condescend?

    I think it’s because we’re “experts.”

    As director of user experience at a digital agency, I’m prey to a particular kind of cognitive dissonance: I’m paid for my opinion; therefore, it should be right. After all, I’m hired as a specialist and therefore “prized” for my particular knowledge. Clients expect me to be right, which leads me to expect it, too. And that makes it difficult to hear anything that says otherwise.

    As consultants, we tend to perceive feedback from a client as feedback on our turf—a non-designer giving direction on a design or a non-tech trying to speak tech. As humans, we tend to ignore information that challenges our beliefs.

    This deafness to clients is akin to deafness to users, and equally detrimental. With users, traffic goes down as they abandon the site. With clients, the relationship becomes strained, acrimonious, and ultimately can endanger your livelihood. We wouldn’t dream of ignoring evidence from users, but we so readily turn a deaf ear to clients who interject, who dare to disrupt our rightness.

    When a client hires us, they should come away with more than a website. They should gain a better understanding of how websites are designed, how they work, and what makes them succeed. We are the ones equipped to create this hospitable environment. For every touchpoint our clients have with us, we could be asking the same questions that we do of our users:

    • How do clients interact with our products, e.g., a wireframe, design, or staging site?
    • What knowledge do they have when they arrive, and what level must we help them reach?
    • What are the common stumbling blocks on the way there?

    Thinking back to our wall of shame, suddenly those cries of frustration from clients we’ve branded “difficult” are no longer so funny. Those are now kinks to change in our process; culture problems to address head-on; and product features that need an overhaul. In other words: from user experience, client experience. It means embracing “the uncomfortable luxury of changing your mind.”

    I now go out of my way to look for these moments of client confusion, searching my inbox and Basecamp threads for words like “confused,” “can’t,” and “trouble.”

    These examples are just a few pleas and complaints I’ve found along the way, plus the changes my agency has made as a result. It’s helped us revamp our workflow, team, and culture to enhance the “Blenderbox client experience.”

    Make deliverables easy to find

    “Hey guys…I’m having trouble figuring out which version of the white paper is the final version. Could someone attach it to this chain? Thanks.”

    This one was easy. When we asked our clients about the problem—always the first step—we learned that they had trouble finding recent files when they saved deliverables locally. We were naming our files inconsistently, and (surprise!) that inconsistency was coming back at us in the form of confused clients.

    I’ve seen this at every company I’ve been a part of, and it only gets worse outside the office; if you don’t believe me, go home tonight and look at your personal Documents folder. If I can’t keep my own filenames straight, how could we expect 20 of us to do it in unison? Clearly, we needed some rules.

    Our first step was to bring uniformity to our naming structure. We had a tendency to start with the client’s name, which is of little use to them. Now, all deliverables at Blenderbox use this style:

    Blenderbox.ClientName.DocName.filetype

    The other point of confusion was over which file was “final.” In the digital world, the label “final” is usually wishful thinking. Instead, the best bet is to append the date in the filename. (We found that more reliable than using the “last edited” date in a file’s metadata, which can be changed inadvertently when printing or opening a file.) Write dates in YMD format, so they sort chronologically.

    Next came version control—or do we call that rounds, or sprints? Unfortunately, there’s no single answer for this, as it depends on whether a contract stipulates a fixed number of rounds or a more iterative process. We gave ourselves some variations to use, as necessary:

    • Blenderbox.ClientName.DocName.Round#.filetype
    • Blenderbox.ClientName.DocName.YYYYMMDD.filetype
    • Blenderbox.ClientName.DocName.Consolidated.YYYYMMDD.filetype

    When a number of rounds is stipulated, the round number is appended. For Agile or other iterative projects, we use only the date. And when compiling months of iterative work (usually for documentation), we call it “Consolidated.” That’s as close to final as we can promise, and of course, that gets a date stamp as well.

    Show how details become the big picture

    “See the attached pic for a cut-and-paste layout”

    Here, the client cut-and-pasted from our design to create their own. Why? It’s not because they were feeling creative. They had a variety of content and they wanted to know that every page on their site was accommodated by the design. Of course, we had already planned for every page, but we needed to better explain how websites work.

    Websites are not magic, nor are they rocket science. We can teach clients at least the basics of how they work. When we step back and take the time to explain what we do, they better understand our role and the value we bring to their business, which results in happier clients and more work for us down the road.

    Prompted by this particular client, we incorporated an explanation of reusable templates and modules right into our wireframes. On page one, we describe how they work and introduce an icon for each template. These icons then appear on every wireframe, telling the client which template is assigned to the page shown.

    Visual example of a template legend in documentation

    Since implementing this technique, we’ve seen our clients start talking more like us—that is, using the language of how websites work. With improved communication, better ideas come out of both sides. They also give feedback that is usable and precise, which makes for more efficient projects, and our clients feel like they’ve learned something.

    Compromise on comfort zone

    “can u please send over the pdf of it so we can print them out and show a&a? tx”

    This is my favorite quote, and we hear this message over and over; clients want to print our deliverables. They want to hold them, pass them around, and write on them, and no iPad is going to take that away from them. Paper and pen are fun.

    It’s a frustrating trend to lay out work on 11″×17″ paper, which is massive, beautiful, and only useful for designers and landscape artists. Who has a printer that size? Certainly not the nonprofits, educators, and cultural institutions we work with. So, we set about making our wireframes printable, sized for trusty 8.5″×11″ paper.

    This was tougher than expected because popular OmniGraffle stencils such as Konigi tend to be large-format, which is a common complaint. (Other programs, like Axure, also face this problem.)

    Since no existing stencils would do, we made our own set (which you can download on our site).

    We also fixed a flaw with common wireframe design that was confusing our clients: the notes. Go do an image search for “annotated wireframes.” Does anyone want to play “find the number on the right”?

    Can you imagine assembling furniture this way? In our new layout, the notes point directly to what they mean. The screen is also smaller, deemphasizing distracting Latin text while giving primacy to the annotation. As a result, we find that our clients are more likely to read the notes themselves, which saves time we’d spend explaining functionality in meetings.

    Visual example of annotations in documentation

    Figure out the format

    “I know I am being dense, but I am finding myself still confused about the Arts Directory. How does that differ from the next two subsections?”

    Here, a client was struggling (and rightly so) with a large set of designs that showed some small differences in navigation over multiple screens. By the end of the design phase, we often rack up a dozen or more screens to illustrate minor differences between templates, on-states, rollovers, different lengths of text, and the other variations that we try to plan for as designers. We also illustrate complex, multistep interactions by presenting a series of screens—somewhat like a flip book. Regardless of whether you present designs as flat files or prototypes, there are usually a few ways to enhance clarity.

    If your designs are flat (that is, just image files), compile them into a PDF. This sounds obvious, but JPG designs require clients to scroll in their browser, and it’s easy to get lost that way. Because PDFs are paginated, it’s easier for clients to track their location and return to specific points. As a bonus, using the left and right arrows to flick through pages will keep repeated elements like the header visually in place. Another reason to use PDFs: some file types are less common than you’d think. For example, one government client of ours couldn’t even open PNG files on their work machine.

    More and more, we’re using prototypes as our default for presenting designs. There is an astounding number of prototyping tools today (and choosing one is a separate article), but we’ve found that prototypes are best for explaining microinteractions, like how a mobile nav works. Even if you don’t have the time or need to demonstrate interactions, putting your designs in a prototype ensures that clients will view them right in their browser, and at the proper zoom level.

    Make time to celebrate

    Clients shouldn’t be the “forgotten user.” We create great products for them by focusing on their end users—while forgetting that clients experience us twice over, meaning their user experience with the product and their user experience with us. Writing off a flustered client as out of touch means we’re disregarding our role as designers who think about real people. When these biases surface, they reveal things that we could be doing better. It’s shortsighted to think our roles make us infallible experts.

    Searching client communications for keywords like “trouble” and other forms of subtle distress can help us identify moments of confusion that passed us by. It forces us to address problems that we didn’t know existed (or didn’t want to see). At Blenderbox, the results have been good for everyone. Our clients are more confident, receptive, and better educated, which empowers them to provide sharp, insightful feedback—which in turn helps our team design and build more efficiently. They’re happier, too, which has helped us gain their trust and earn more work and referrals.

    We’re getting so desensitized to the word, but we all understand that there’s value in empathy. And, like any other ideal, we forget to practice it in the bustle of daily work. Because empathy is a formal part of UX, we don’t get to use the “busy” excuse. Even mundane design activities should be daily reminders to listen to the people around you, like a sticky note on your monitor to “Put yourself in their shoes.” In other words, we can’t overlook that our clients are people, too. When we stop and think about user experience, we might just be doing our job, but we’re also saying that we choose sensitivity to others as our primary professional mission. And that is the first step to making great things happen.

  • The User’s Journey 

    A note from the editors: We’re pleased to share an excerpt from Chapter 5 of Donna Lichaw’s new book, The User’s Journey: Storymapping Products That People Love, available now from Rosenfeld Media.

    Both analytics funnels and stories describe a series of steps that users take over the course of a set period of time. In fact, as many data scientists and product people will tell you, data tells a story, and it’s our job to look at data within a narrative structure to piece together, extrapolate, troubleshoot, and optimize that story.

    In the case of FitCounter, our gut-check analysis and further in-person testing with potential users uncovered that the reason our analytics showed a broken funnel with drop-off at key points was because people experienced a story that read something like this:

    • Exposition: The potential user is interested in getting fit or training others.
    • Inciting Incident: She sees the “start training” button and gets started.
    • Rising Action:
      • She enters her username and password. (A tiny percentage of people would drop off here, but most completed this step.)
      • She’s asked to “follow” some topics, like running and basketball. She’s not really sure what this means or what she gets out of doing this. She wants to train for a marathon, not follow things. (This is where the first drop-off happened.)
    • Crisis: This is where the cliffhanger happens. She’s asked to “follow” friends. She has to enter sensitive Gmail or Facebook log-in credentials to do this, which she doesn’t like to do unless she completely trusts the product or service and sees value in following her friends. Why would she follow them in this case? To see how they’re training? She’s not sure she totally understands what she’s getting into, and at this point, has spent so much brain energy on this step that she’s just going to bail on this sign-up flow.
    • Climax/Resolution: If she does continue on to the next step, there would be no climax.
    • Falling Action: Eh. There is no takeaway or value to having gotten this far.
    • End: If she does complete the sign-up flow, she ends up home. She’d be able to search for videos now or browse what’s new and popular. Searching and browsing is a lot of work for someone who can’t even remember why they’re there in the first place. Hmmm…in reality, if she got this far, maybe she would click on something and interact with the product. The data told us that this was unlikely. In the end, she didn’t meet her goal of getting fit, and the business doesn’t meet its goal of engaging a new user.

    Why was it so important for FitCounter to get people to complete this flow during their first session? Couldn’t the business employ the marketing team to get new users to come back later with a fancy email or promotion? In this case, marketing tried that. For months. It barely worked.

    With FitCounter, as with most products and services, the first session is your best and often only chance to engage new users. Once you grab them the first time and get them to see the value in using your product or service, it’s easier to get them to return in the future. While I anecdotally knew this to be true with consumer-facing products and services, I also saw it in our data.

    Those superfans I told you about earlier rarely became superfans without using the product within their first session. In fact, we found a sweet spot: most of our superfans performed at least three actions within their first session. These actions were things like watching or sharing videos, creating playlists, and adding videos to lists. These were high-quality interactions and didn’t include other things you might do on a website or app, such as search, browse, or generally click around.

    With all of our quantitative data in hand, we set out to fix our broken usage flow. It all, as you can imagine, started with some (more) data…oh, and a story. Of course.

    The Plan

    At this point, our goals with this project were two-fold:

    • To get new users to complete the sign-up flow;
    • To acquire more “high-quality” users who were more likely to return and use the product over time.

    As you can see, getting people to pay to upgrade to premium wasn’t in our immediate strategic roadmap or plan. We needed to get this product operational and making sense before we could figure out how to monetize. We did, however, feel confident that our strategy was headed in the right direction because the stories we were designing and planning were ones that we extrapolated from actual paying customers who loved the product. We had also been testing our concept and origin stories and knew that we were on the right track, because when we weren’t, we maneuvered and adapted to get back on track. So what, in this case, did the data tell us that we should do to transform this story of use from a cliffhanger, with drop-off at the crisis moment, to a more complete and successful story?

    Getting to “Why”

    While our quantitative analytics told us the “what” (that people were dropping off during our sign-up funnel), they couldn’t tell us the “why.” To better answer that question, we used story structure to figure out why people might drop off where they did. Doing so helped us better localize, diagnose, and troubleshoot the problem. Using narrative structure as our guide, we outlined a set of hypotheses that could explain the cliffhanger.

    For example, if people dropped off when we asked them to find their friends, did they not want to trust a new service with their login credentials? Or did they not want to add their friends? Was training not social? We thought it was. Once we had a better idea of what our questions were, we talked to existing and potential customers, first about our sign-up flow and then about how they trained (for example, alone or with others). We were pretty sure training was social, so we just needed to figure out why this step was a hurdle.

    What we found with our sign-up flow was similar to what we expected. Potential users were reluctant to follow friends partly because of trust, but more so because it broke their mental model of how they could use this product. “Start training” was a strong call to action that resonated with potential users; “follow friends” was not. Even something as seemingly minute as microcopy has to fit a user’s mental model of the narrative structure. Furthermore, they didn’t always think of training as social. A plethora of factors played into whether or not they trained alone or with others.

    What we found were two distinct behaviors: people tend to train alone half the time and with others the other half. Whether they trained alone or with others depended on a series of factors:

    • Activity (team versus solitary sport, for example)
    • Time (during the week versus weekend, for example)
    • Location (gym versus home, for example)
    • Goals (planning to run a 5k versus looking to lose pounds, for example).

    This was too complex a math equation for potential users to do when thinking about whether or not they wanted to “follow” people. Frankly, it was more math than anyone should have to do when signing up for something. That said, after our customer interviews, we were convinced of the value of keeping the product social and giving people the opportunity to train with others early on. Yes, the business wanted new users to invite their friends so that the product could acquire new users. And, yes, I could have convinced the business to remove this step in the sign-up process so that we could remove the crisis and more successfully convert new users. However, when people behave in a certain way 50% of the time, you typically want to build a product that helps them continue to behave that way, especially if it can help the business grow its user base.

    So instead of removing this troublesome cliffhanger-inducing step in the sign-up flow, we did what any good filmmaker or screenwriter would do: we used that crisis to our advantage and built a story with tension and conflict. A story that we hoped would be more compelling than what we had.

    The Story

    In order to determine how our new sign-up flow would work, we first mapped it out onto a narrative arc. Our lead designer and engineer wanted to jump straight into screen UI sketches and flow charts and our CEO wanted to see a fully clickable prototype yesterday, but we started the way I always make teams and students start: with a story diagram. As a team, we mapped out a redesigned sign-up flow on a whiteboard as a hypothesis, brick by brick (see Figure 5.20).

    Photo of a story map (sticky notes arranged on a board, with hand-drawn graphics surrounding them).

    Fig. 5.20 A story map from a similar project with the storyline on top and requirements below.

    This was the story, we posited, that a new user and potential customer should have during her first session with our product (see Figure 5.21). As you can see, we tried to keep it much the same as before so that we could localize and troubleshoot what parts were or weren’t working.

    • Exposition: She’s interested in getting fit or training others. (Same as before.)
    • Inciting Incident: She sees the “start training” button and gets started. (Same as before.)
    • Rising Action:
      • She enters her username and password. (This step performed surprisingly great, so we kept it.)
      • Build a training plan. Instead of “following” topics, she answers a series of questions so that the system can build her a customized training plan. Many questions—ultimately extending the on-boarding flow by 15 screens. 15! There is a method to this madness. Even though there are now many more questions, they get more engaging, and more relevant, question by question, screen by screen. The questions start broad and get more focused as they progress, feeling more and more relevant and personal. Designing the questionnaire for rising action prevents what could be two crises: boredom and lack of value.
    • Crisis: One of the last questions she answers is whether or not she wants to use this training plan to train with or help train anyone else. If so, she can add them to the plan right then and there. And if not, no problem—she can skip this step and always add people later.
    • Climax/Resolution: She gets a personalized training plan. This is also the point at which we want her to experience the value of her new training plan. She sees a graph of what her progress will look like if she sticks with the training plan she just got.
    • Falling Action: Then what? What happens after she gets her plan and sees how she might progress if she uses FitCounter? This story isn’t complete unless she actually starts training. So…
    • End: She’s home. Now she can start training. This initially involves watching a video, doing a quick exercise, and logging the results. She gets a taste of what it’s like to be asked to do something, to do it, and to get feedback in the on-boarding flow and now she can do it with her body and not just a click of the mouse. Instead of saying how many sit-ups she can do by answering a questionnaire, she watches a short video that shows her how to best do sit-ups, she does the exercise, and she logs her results. While humanly impossible to fully meet her goal of getting fit in one session, completing the story with this ending gets her that much closer to feeling like she will eventually meet her goal. Our hope was that this ending would function as a teaser for her next story with the product, when she continued to train. We wanted this story to be part of a string of stories, also known as a serial story, which continued and got better over time.

    Once we plotted out this usage story, we ran a series of planning sessions to brainstorm and prioritize requirements, as well as plan a strategic roadmap and project plan. After we had our requirements fleshed out, we then sketched out screens, comics, storyboards, and even role-played the flow internally and in person with potential customers. We did those activities to ideate, prototype, and test everything every step of the way so that we could minimize our risk and know if and when we were on the right path.

    We were quite proud of our newly crafted narrative sign-up flow. But before we could celebrate, we had to see how it performed.

    The Results

    On this project and every project since, we tested everything. We tested our concept story, origin story, and everything that came after and in between. While we were very confident about all of the work we did before we conceived of our new usage story for the sign-up flow, we still tested that. Constantly. We knew that we were on the right path during design and in-person testing because at the right point in the flow, we started getting reactions that sounded something like: “Oh, cool. I see how this could be useful.”

    Once we heard that from the third, fourth, and then fifth person during our in-person tests, we started to feel like we had an MVP that we were not only learning from, but also learning good things from. During our concept-testing phase, it seemed like we had a product that people might want to use. Our origin story phase and subsequent testing told us that the data supported that story. And now, with a usage story, we actually had a product that people not only could use, but wanted to use. Lots.

    Arc representing the progression of events in a usage story

    Fig. 5.21 The story of what we wanted new users to experience in their first session with FitCounter.

    As planned, that reaction came during our in-person tests, unprompted, near the end of the flow, right after people received their training plan. What we didn’t expect was that once people got the plan and went to their new home screen, they started to tap and click around. A lot. And they kept commenting on how they were surprised to learn something new. And they would not only watch videos, but then do things with them, like share them or add and remove them from plans.

    But this was all in person. What about when we launched the new sign-up flow and accompanying product? This new thing that existed behind the front door. The redesign we all dreaded, but that had to be done.

    I wish I could say that something went wrong. This would be a great time to insert a crisis moment into this story to keep you on the edge of your seat.

    But the relaunch was a success.

    The story resonated not just with our in-person testers, but also with a broader audience. So much so that the new sign-up flow now had almost double the completion rate of new users. This was amazing, and it was a number that we could and would improve on with further iterations down the line. Plus, we almost doubled our rate of new user engagement. We hoped that by creating a sign-up flow that functioned like a story, the result would be more engagement among new users, and it worked. We not only had a product that helped users meet their goals, but it also helped the business meet its goals of engaging new users. What we didn’t expect to happen so soon was the side effect of this increased, high-quality engagement: these new users were more likely to pay to use the product. Ten times more likely.

    We were ecstatic with the results. For now.

    A business cannot survive on first-time use and engagement alone. While we were proud of the product we built and the results it was getting, this was just one usage story: the first-time usage story. What about the rest? What might be the next inciting incident to kick off a new story? What would be the next beginning, middle, and end? Then what? What if someone did not return? Cliffhangers can happen during a flow that lasts a few minutes or over a period of days, months, or years. Over time, we developed stories big and small, one-offs and serials, improving the story for both customers and the business. Since we started building story-first, FitCounter has tripled in size and tripled its valuation. It is now a profitable business and recently closed yet another successful round of financing so that it can continue this growth.

  • Design for Real Life 

    A note from the editors: We’re pleased to share an excerpt from Chapter 7 of Eric A. Meyer and Sara Wachter-Boettcher’s new book, Design for Real Life, available now from A Book Apart.

    You’ve seen the fallout when digital products aren’t designed for real people. You understand the importance of compassion. And you’ve learned how to talk with users to uncover their deepest feelings and needs. But even with the best intentions, it’s still easy for thoughtful design teams to get lost along the way.

    What you and your team need is a design process that incorporates compassionate practices at every stage—a process where real people and their needs are reinforced and recentered from early explorations through design iterations through launch.

    Create Realistic Artifacts

    In Chapter 3, we talked about the importance of designing for worst-case scenarios, and how bringing stress cases into audience artifacts like personas and user-journey maps can help. Now let’s talk about creating those materials.

    Imperfect personas

    The more users have opened up to you in the research phase, the more likely you are to have a wealth of real, human emotion in your data to draw from: marriage difficulties or bad breakups, accidents, a friend who committed suicide, or a past of being bullied. The point isn’t to use your interviewees’ stories directly, but to allow them to get you thinking about the spectrum of touchy subjects and difficult experiences people have. This will help you include realistic details about your personas’ emotional states, triggers, and needs—and lend them far more depth than relying solely on typical stats like age, income, location, and education.

    These diverse inputs will also help you select better persona images. Look for, or shoot your own, images of people who don’t fit the mold of a cheerful stock photo.  Vary their expressions and clothing styles. If you can imagine these personas saying the kinds of things you heard in your user interviews, you’re on the right track. 

    More realistic personas make it much easier to imagine moments of crisis, and to test scenarios that might trigger a user’s stressors. Remember that “crisis” doesn’t have to mean a natural disaster or severe medical emergency. It can be a situation where an order has gone horribly wrong, or where a user needs information while rushing to the airport.

    As you write your personas and scenarios, don’t drain the life from them: be raw, bringing in snippets of users’ anecdotes, language, and emotion wherever you can. Whoever picks these personas up down the line should feel as compelled to help them as you do.

    User-journey maps

    In Chapter 3, we mentioned a technique Sara used with a home-improvement chain: user-journey mapping. Also referred to as customer-experience mapping, this technique is well established in many design practices, such as Adaptive Path, the San Francisco-based design consultancy (recently acquired by Capital One).

    In 2013, Adaptive Path turned its expertise into a detailed guide, available free at mappingexperiences.com. The guide focuses on how to research the customer experience, facilitate a mapping workshop, and apply your insights. The process includes documenting:

    • The lens: which persona(s) you’re mapping, and what their scenario is
    • Touchpoints: moments where your user interacts with your organization
    • Channels: where those interactions happen—online, over the phone, or elsewhere
    • Actions: what people are doing to meet their needs
    • Thoughts: how people frame their experience and define their expectations
    • Feelings: the emotions people have along their journey—including both highs and lows

    Constructing a journey map usually starts, as so many UX processes do, with sticky notes. Working as a team, you map out a user’s journey over time, with the steps extending horizontally. Below each step, use a different-colored sticky note to document touchpoints and channels, as well as what a user is doing, thinking, and feeling. The result will be a big (and messy) grid with bands of color, stretching across the wall (Fig 7.1).

    Photo of sticky notes organized on a wall

    Fig 7.1: A typical journey mapping activity, where participants use sticky notes to show a user’s progress through multiple stages and needs over time.

    Journey mapping brims with benefits. It helps a team to better think from a user’s point of view when evaluating content, identify gaps or disconnects across touchpoints or channels, and provide a framework for making iterative improvements to a major system over time. But we’ve found this technique can also be a powerful window into identifying previously unrealized, or unexamined, stress cases—if you think carefully about whose journey you’re mapping.

    Make sure you use personas and scenarios that are realistic, not idealized. For example, an airline might map out experiences for someone whose flight has been canceled, or who is traveling with a disabled relative, or who needs to book last-minute tickets to attend a funeral. A bank might map out a longtime customer who applies for a mortgage and is declined. A university might map out a user who’s a first-generation college student from a low-income family. The list goes on. 

    In our experience, it’s also important to do this work with as many people from your organization as possible—not only other web folk like developers or writers, but also groups like marketing, customer service, sales, and business or product units. This collaboration across departments brings diverse viewpoints to your journey, which will help you better understand all the different touchpoints a user might have and prevent any one group from making unrealistic assumptions. The hands-on nature of the activity—physically plotting out a user’s path—forces everyone to truly get into the user’s mindset, preventing participants from reverting back to organization-centric thinking, and increasing the odds you’ll get support for fixing the problems you find. 

    In addition to determining an ideal experience, also take time to document where the real-life experience doesn’t stack up. This might include:

    • Pain points: places where you know from research or analytics that users are currently getting hung up and have to ask questions, or are likely to abandon the site or app.
    • Broken flows: places where the transition between touchpoints, or through a specific interaction on a site (like a form), isn’t working correctly.
    • Content gaps: places where a user needs a specific piece of content, but you don’t have it—or it’s not in the right place at the right time.

    Just as you can map many things in your journey—channels, questions, feelings, actions, content needs and gaps, catalysts, and more—you can also visualize your journey in many different ways. Sometimes, you might need nothing more than sticky notes on a conference room wall (and a few photos to refer back to later). Other times, you’ll want to spend a couple of days collaborating, and create a more polished document after the fact. It all depends on the complexity of the experience you’re mapping, the fidelity you need in the final artifact, and, of course, how much time you can dedicate to the process.

    If journey maps are new to your team, a great way to introduce them is to spend an hour or two during a kickoff or brainstorm session working in small groups, with each group roughing out the path of a different user. If they’re already part of your UX process, you might just need to start working from a wider range of personas and scenarios. Either way, building journey maps that highlight stress cases will help you see:

    • How to prioritize content to meet the needs of urgent use cases, without weakening the experience for others. That’s what the home-improvement store did: walking through stress cases made it easier for the team to prioritize plain language and determine what should be included in visually prominent, at-a-glance sections.
    • Places where copy or imagery could feel alienating or out of sync with what a user might be thinking and feeling at that moment. For example, imagine if Glow, the period-tracking app, had mapped out a user journey for a single woman who simply has trouble remembering to buy tampons. The designers would have seen how, at each touchpoint, the app’s copy assumed something about this woman’s needs and feelings that wasn’t true—and they could have adjusted their messaging to fit a much broader range of potential users.
    • Whether any gaps exist in content for stress-case users. For example, if the Children’s Hospital of Philadelphia had created a journey map for a user in crisis, it might have prevented the content gap Eric experienced: no information about rushing to the hospital in an emergency existed online.

    Strengthen Your Process

    With more realistic representations of your audience in hand, it’s time to build checks and balances into your process that remind the team of these humans, and ward against accidentally awful outcomes. Here are some techniques to get you started.

    The WWAHD test

    In many cases, the easiest way to stress-test any design decision is to ask, “WWAHD?”—“What would a human do?” When you’re designing a form, try reading every question out loud to an imagined stranger, listening to how it sounds and imagining the questions they might have in response.

    Kate Kiefer Lee of MailChimp recommends this for all copy, regardless of where and how it’s used, because it can help you catch errors, improve your flow, and soften your sentences. She says:

    As you read aloud, pretend you’re talking to a real person and ask yourself “Would I say this to someone in real life?” Sometimes our writing makes us sound stodgier or colder than we’d like.

    Next time you publish something, take the time to read it out loud. It’s also helpful to hear someone else read your work out loud. You can ask a friend or coworker to read it to you, or even use a text-to-speech tool. (http://bkaprt.com/dfrl/07-01/)


    That last point is an excellent tip as well, because you’ll gain a better sense of how your content might sound to a user who doesn’t have the benefit of hearing you speak. If a synthesized voice makes the words fall flat or says something that makes you wince, you’ll know you have more work to do to make your content come to life on the screen.
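    If you want to automate that last step, many browsers expose a built-in speech API; here is a minimal sketch of reading a piece of UI copy aloud with it (the sample copy and function name are just for illustration):

      // Read a piece of UI copy aloud using the browser's built-in
      // SpeechSynthesis API, where it's supported.
      function readAloud(text) {
        if (!('speechSynthesis' in window)) {
          console.warn('Speech synthesis is not supported in this browser.');
          return;
        }
        var utterance = new SpeechSynthesisUtterance(text);
        utterance.rate = 1; // normal speaking speed
        window.speechSynthesis.speak(utterance);
      }

      // Hear an error message the way a user might hear it.
      readAloud('Your session has expired. Please log in again.');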

    The premortem

    In design, we create biases toward our imagined outcomes: increased registrations or sales, higher visit frequency, more engaged users. Because we have a specific goal in mind, we become invested in it. This makes us more likely to forget about, or at least minimize, the possibility of other outcomes.

    One way to outsmart those biases early on is to hold a project premortem. As the name suggests, a premortem evaluates the project before it happens—when it “can be improved rather than autopsied,” says Gary Klein, who first wrote about the technique in 2007 in Harvard Business Review:

    The leader starts the exercise by informing everyone that the project has failed spectacularly. Over the next few minutes those in the room independently write down every reason they can think of for the failure. (http://bkaprt.com/dfrl/07-02/)

    According to Klein, this process works because it creates “prospective hindsight”—a term researchers from Wharton, Cornell, and the University of Colorado used in a 1989 study, where they found that imagining “an event has already occurred increases a team’s ability to correctly identify reasons for future outcomes by 30%.” 

    For example, say you’re designing a signup process for an exercise- and activity-tracking app. During the premortem, you might ask: “Imagine that six months from now, our signup abandonment rates are up. Why is that?” Imagining answers that could explain the hypothetical—it’s too confusing, we’re asking for information that’s too personal, we accidentally created a dead end—will help guide your team away from those outcomes, and toward better solutions.

    The question protocol

    Another technique for your toolkit is Caroline Jarrett’s question protocol, which we introduced in Chapter 4. To recap, the question protocol ensures every piece of information you ask of a user is intentional and appropriate by asking:

    • Who within your organization uses the answer
    • What they use it for
    • Whether an answer is required or optional
    • If an answer is required, what happens if a user enters any old thing just to get through the form

    You can’t just create a protocol, though—you need to bring it to life within your organization. For example, Jarrett has worked the approach into the standard practices of the UK’s Government Digital Service. GDS then used its answers to create granular, tactical guidelines for designers and writers to use while embedded in a project—such as this advice for titles:

    We recommend against asking for people’s title.

    It’s extra work for users and you’re forcing them to potentially reveal their gender and marital status, which they may not want to do. There are appropriate ways of addressing people in correspondence without using titles.

    If you have to implement a title field, make it an optional free-text field, not a drop-down list. Predicting the range of titles your users will have is impossible, and you’ll always end up upsetting someone. (http://bkaprt.com/dfrl/07-03/)

    By making recommendations explicit—and explaining why GDS recommends against asking for titles—this guide puts teams on the right path from the start.

    If user profiles are a primary part of your product’s experience, you might also want to adapt and extend the question protocol to account not just for how a department uses the data collected, but for how your product itself uses it. For example, a restaurant recommendation service can justify asking for users’ locations; the service needs it to prioritize results based on proximity. But we’ve seen countless sites that have no reason to collect location information: business magazines, recipe curators, even municipal airports. If these organizations completed a question protocol, it might be difficult for them to justify their actions.

    You don’t even have to call it a “protocol”—in some organizations, that label sounds too formal, and trying to add it to an established design process will be challenging. Instead, you might roll these questions and tactics into your functional specs, or make them discussion points in meetings. However you do it, though, look for ways to make it a consistent, ingrained part of your process, not an ad hoc “nice to have.”

    The Designated Dissenter

    Working in teams is a powerful force multiplier, enabling a group to accomplish things each individual could never have managed alone. But any team is prone to “groupthink”: the tendency to converge on a consensus, often without meaning to. This can lead teams to leave their assumptions unchallenged until it’s far too late. Giving one person the explicit job of questioning assumptions is a way to avoid this.

    We call this the “Designated Dissenter”—assigning one person on every team the job of assessing every decision underlying the project, and asking how changes in context or assumptions might subvert those decisions. This becomes their primary role for the lifetime of the project. It is their duty to disagree, to point out unconsidered assumptions and possible failure states.

    For example, back in Chapter 1 we talked about the assumptions that went into Facebook’s first Year in Review product. If the project had had a Designated Dissenter, they would have gone through a process much like we did there. They would ask, “What is the ideal user for this project?” The answer would be, “Someone who had an awesome year and wants to share memories with their friends.” That answer could lead to the initial questions, “What about people who had a terrible year? Or who have no interest in sharing? Or both?”

    Beyond such high-level questions, the Designated Dissenter casts a critical eye on every aspect of the design. They look at copy and design elements and ask themselves, “In which contexts might this come off as ridiculous, insensitive, insulting, or just plain hurtful? What if the assumptions in this error message are wrong?” At every step, they find the assumptions and subvert them. (The tools we discussed in the previous sections can be very useful in this process.)

    For the next project, however, someone else must become the Designated Dissenter. There are two reasons for this:

    1. By having every member of the team take on the role, every member of the team has a chance to learn and develop that skill.
    2. If one person is the Designated Dissenter for every project, the rest of the team will likely start to tune them out as a killjoy.

    Every project gets a new Dissenter, until everyone’s had a turn at it. When a new member joins the team, make them the Designated Dissenter on their second or third project, so they can get used to the team dynamics first and see how things operate before taking on a more difficult role.

    The goal of all these techniques is to create what bias researchers Jack B. Soll, Katherine L. Milkman, and John W. Payne call an “outside view,” which has tremendous benefits:

    An outside view also prevents the “planning fallacy”—spinning a narrative of total success and managing for that, even though your odds of failure are actually pretty high. (http://bkaprt.com/dfrl/07-04/)

    Our narratives are usually about total success—indeed, that’s the whole point of a design process. But that very aim makes us more likely to fall victim to planning fallacies in which we only envision the ideal case, and thus disregard other possibilities.

    Stress-Test Your Work

    Usability testing is, of course, important, and testing usability in stress cases even more so. The problem is that in many cases, it’s impossible to find testers who are actually in the midst of a crisis or other stressful event—and, even if you could, it’s ethically questionable whether you should be taxing them with a usability test at that moment. So how do we test for such cases?

    We’ve identified two techniques others have employed that may be helpful here: creating more realistic contexts for your tests, and employing scenarios where users role-play dramatic situations.

    More realistic tests

    In Chapter 3, we shared an experiment where more difficult mental exercises left participants with reduced cognitive resources, which affected their willpower—so they were more likely to choose cake over fruit.

    Knowing this, we can make our usability tests more reflective of real-life cognitive drain by starting each test with an activity that expends cognitive resources—for example, asking participants to read an article, do some simple logic puzzles, play a few rounds of a casual video game like Bejeweled, or complete a routine task like replying to emails.

    After the tester engages in these activities, you can move on to the usability test itself. Between the mental toll of the initial task and the shift of context, the testers will have fewer cognitive resources available—more like they would in a “real-life” use of the product.

    In a sense, you’re moving a little bit of field testing into the lab. This can help identify potential problems earlier in the process—and, if you’re able to continue into actual field testing, make it that much more effective and useful.

    Before you start adding stressors to your tests, though, make sure your users are informed. This means:

    • Be clear and transparent about what they’ll be asked to do, and make sure participants give informed consent to participate.
    • Remember, and communicate to participants, that you’re not evaluating them personally, and that they can call off the test at any time if it gets too difficult or draining.

    After all, the goal is to test the product, not the person.

    Stress roleplays

    Bollywood films are known for spectacular plot lines and fantastical situations—and, according to researcher Apala Lahiri Chavan, they’re also excellent inspiration for stress-focused usability testing.

    In many Asian cultures, it’s culturally impolite to critique a design, and embarrassing to admit you can’t find something. To get valuable input despite these factors, Chavan replaced standard tasks in her tests with fantasy scenarios, such as asking participants to imagine they’d just found out their niece is about to marry a hit man who is already married. They need to book a plane ticket to stop the wedding immediately. These roleplays allowed participants to get out of their cultural norms and into the moment: they complained about button labels, confusing flows, and extra steps in the process. (For more on Chavan’s method and results, see Eric Schaffer’s 2004 book, Institutionalization of Usability: A Step-by-Step Guide, pages 129–130.)

    This method isn’t just useful for reaching Asian markets. It can also help you see what happens when people from any background try to use your site or product in a moment of stress. After all, you can’t very well ask people who are in the midst of a real-life crisis to sit down with your prototype. But you can ask people to roleplay a crisis situation: needing to interact with your product or service during a medical emergency, or after having their wallet stolen, or when they’ve just been in an accident.

    This process probably won’t address every possible crisis scenario, but it will help you identify places where your content is poorly prioritized, your user flows are unhelpful, or your messaging is too peppy—and if you’re already doing usability testing, adding in a crisis scenario or two won’t take much extra time.

    Compassion Takes Collaboration

    One thing you may have noticed about each of these techniques is that they’re fundamentally cross-discipline: design teams talking and critiquing one another’s work through the lens of compassion; content strategists and writers working with designers and developers to build better forms and interactions. Wherever we turn, we find that the best solutions come from situations where working together isn’t just encouraged, but is actively built into a team’s structure. Your organization might not be ready for that quite yet—but you can help them get there. Our next chapter will get you started.

  • Design for Real Life: An interview with Sara Wachter-Boettcher 

    A note from the editors: A List Apart’s managing editor Mica McPheeters speaks with Sara Wachter-Boettcher about getting to the heart of users’ deepest needs.

    Our users don’t live the tidy little lives we’ve concocted for our personas, with their limited set of problems. Life is messy and unpredictable; some days, terrible. When planning a project, it’s important not to let our excitement lull us into blithely ignoring life’s harsher realities.

    Discomfort with others’ burdens has no place in good design. We sat down with coauthor and content strategist Sara Wachter-Boettcher (a past editor-in-chief of ALA), to discuss why she and Eric Meyer became vocal proponents of taking users’ stress cases seriously. Their new book, Design for Real Life, goes to the root of insensitive design decisions that fail to support the very users we’re meant to understand and respect.

    First off, would you tell us a bit about how the book came to be? What was the tipping point that led you to take on this topic?

    SWB: In early 2015, I started writing about the way forms demand that users reveal themselves—and all the ways that can be alienating and unempathetic. In that article, I talk about a couple of personal experiences I had with forms: being asked to check a box about sexual assault, without knowing why or where that data would go, and being at the German consul’s office, filling out paperwork that required me to document a sibling who had died as an infant.

    It’d be easy to call that the tipping point, but to be honest, I didn’t actually feel that way. In fact, I had started writing that article the day I came home from the German consul’s office. But I wasn’t sure there was anything there—or at least, anything more than an emotional anecdote. I set it down for six months. The idea kept sitting in the back of my mind, though, so finally, during some winter downtime, I finished it off and posted it, unsure whether anyone would really care.

    Turns out they did. I got an endless stream of tweets, emails, and comments from people who told me how much the piece resonated with them. And I also started hearing other people’s stories—stories of ways that interfaces had triggered past trauma, or demanded that someone claim an identity that made them uncomfortable, or made assumptions that a user found alienating. Forms that couldn’t handle people who identified as biracial, product settings that assumed heterosexuality, pithy copy that failed if a user’s current emotional state was anything less than ideal. The examples went on and on.

    One of the people who reached out to me was Eric, whose work I had of course also been reading. And that’s really when it clicked for me—when I realized that this topic had touched a nerve across all kinds of groups. It wasn’t fringe. All of us deal with difficult pasts or current crises. Each scenario might be an edge case on its own, but taken together, they’re universal—they’re about being human. And now we’re all dealing with them online. The more Eric and I talked and compared stories others had shared with us, the more certain we were that we had something.

    We’ve been talking about user-centered design for decades. Shouldn’t this sort of “sensitivity blindness” have been dealt with by now?

    SWB: I wish, but historically, teams simply have not been trained to imagine their users as different from themselves—not really, not in any sort of deep and empathetic way.

    That’s not just an issue on the web, though—because it’s a lot bigger than “sensitivity.” It’s really about inclusion. For example, look at gender in product design: crash-test dummies are all sized to the “average male,” and as a result, car accidents are far more dangerous for women than men. Medical research subjects are nearly always men—despite the fact that women experience illnesses at different rates than men, and respond to treatment differently. Of course we’ve transferred these same biased practices to the web. In this context, it’s not surprising that, say, Apple’s Health app didn’t include a period tracker—one of the most normal bits of data in the world—for an entire year after launch.

    Identity issues—gender, race, sexuality, etc.—are huge here, but they’re just one way this lack of inclusivity plays out. Eric’s experience with Facebook’s Year in Review tells that story quite well: Facebook long imagined itself as a place where happy people share their happy updates. After all, it’s a platform that until just the other day literally only offered you one reaction to a post: to like it. The problem was that Facebook’s design mission stayed narrow, even as the reasons its users interacted with the platform became more and more varied.

    While the web didn’t create bias in the world, I do think it has the opportunity to start undoing it—and I am starting to see seeds of that sown around the web. Digital communication has made it so much easier for organizations to get close to their audiences—to see them, talk to them, and most importantly, listen to them. If our organizations can do that—look at their audiences as real, multifaceted, complex people, not just marketing segments—then I think we’ll start to see things truly change.

    Why do you think it’s hard for designers to keep real people in mind? Is it that we tend to be excited and optimistic about new projects, so we forget about the ways things can go wrong?

    SWB: Yeah, I think that is part of it—and I think the reason for that is largely because that’s what organizations have trained design teams to focus on. That is, when a business decides to spend money on a digital product, they do it with positive outcomes in mind. As a result, the team is trained on the positive: “how can we make this delight our users?” If that’s all you’re asking, though, it’s unlikely you’ll catch the scenarios where a product could be alienating or harmful, rather than delightful, because your brain will be focused on examples of the positive.

    For example, if you try to write a tweet that’s too long, Twitter has this little bit of UI copy that says, “Your Tweet was over 140 characters. You’ll have to be more clever.” Now, let’s say I just tweeted about the amazing tacos I just ate for lunch. In that scenario, the copy is light and funny. But what if I was trying to figure out how to tell the world that a friend just died—or even something more everyday, but still negative, like that I’d been rejected from a job? All of a sudden, that interface feels rather insulting. It’s alienating. Sure, it’s a small thing, but it’s hurtful and can even be degrading. And if you only ever test that feature with pithy sample tweets, it’s pretty likely you just wouldn’t notice.

    What Eric and I are really advocating for, then, is for design teams to build a deep breath into their process—to say, every time they make a decision, “who might be harmed by this? In which circumstances does this feature break down for a user? How can we strengthen our work to avoid that?” There’s even an activity we talk about in the book, the “premortem”—where, instead of sitting down after a project ends to discuss how it went, you sit down beforehand and imagine all the ways it could go wrong.

    At one point, you and Eric mention that “compassion isn’t coddling.” In the example with Twitter’s snarky copy, someone might say, “you’re overreacting—it’s just a joke.” How would you respond to that?

    SWB: I’ve definitely gotten plenty of feedback from people who say that this is all “too sensitive” and that we’ll all be “walking on eggshells.” Their answer is that people should just have a thicker skin. Frankly, that’s BS—that mentality says, “I don’t want to have to think about another person’s feelings.”

    Coddling someone means protecting them from the world—shielding them from difficult subjects. That’s not what we’re proposing at all. We’re saying, understand that your users are dealing with difficult subjects all the time, even when using your site or service. Being kind means being respectful of that fact, and avoiding making it worse. Think about the normal things you’d do in person—like if your friend were going through a divorce, you’d probably wait for them to open up to you, rather than ask prying questions, right? If you knew someone had just been traumatically assaulted at a specific bar, you’d probably not suggest meeting there for drinks. You’d be compassionate, and avoid making them feel even more uncomfortable or vulnerable.

    Humans learn to be good at this in person, but because we don’t know when or if a user is going to be in a difficult emotional state, we seem to forget about this online. And that’s why niceness isn’t enough. Being nice is easy to reduce to being friendly and welcoming. But compassion is deeper: it’s recognizing that people have all kinds of needs and emotional reactions, and our job is to help them, rather than expect them to fit our narrow ideals.

    If a team understands that, and wants to be compassionate, how much do they need to do to account for “edge cases”? Is there a cutoff point?

    SWB: This is something we talk about a lot in the book. “Edge case” is a really easy way to write something off—to say, “this is not important enough to care about.” Calling something or someone an edge case pushes them to the margins, quite literally. Instead of treating people who don’t quite fit whatever you thought of as “average” as fringe, though, we think it’s a lot more helpful to think of these as “stress cases”: the challenges that test the strength of your design. Because if your work can hold up against people at their worst, then you can be more confident it will hold up for everyone else, too.

    Just like in traditional products. Think about the brand Oxo, which makes ergonomic housewares. People love Oxo products. But they weren’t initially designed to suit the average user. They were initially designed with the founder’s wife, who had arthritis, in mind. But by making something that was better for people with more limited ranges of motion, Oxo ended up making something that was simply more comfortable to use for most people. We have the same opportunity in our interfaces.

    Our message, though, is that it takes a bit of a reframe to get there: it’s not about “how many edge cases do I have to support?” but rather, “how well have I vetted my work against the stress of real life?”

    But won’t that affect creativity, to constantly plan for limiting factors—many that we can’t anticipate?

    SWB: You know, no one complains that designing a car to be safer during an accident limits the engineers’ creativity. So why should we say that about digital products? Of course thinking about users’ varied identities and emotional states creates limiting factors. But that’s what design is: it is a creative solution to a set of problems. We’re redefining which problems are worth solving.

    Of course we can’t anticipate every single human issue that might arise. But I can’t imagine not trying to do better. After all, we’re designing for humans. Why wouldn’t we want to be as humane as possible? I don’t think we need to be perfect; humans never are. But our users deserve to have us try.

    Pick up your copy of Design for Real Life from A Book Apart.

  • Web Animation Past, Present, and Future 

    Web animation has been exploding during the past year or two—and the explosion has been nothing short of breathtaking. JavaScript animation libraries like GreenSock have become the weapon of choice among interaction developers, and web design galleries like Awwwards and CSS Design Awards abound with sites reminiscent of the Flash era. It seems like every front-end development conference includes a talk about web animation. Last year at motion design conference Blend, Justin Cone of Motionographer called web animation the future.

    Things are moving fast. So let’s recap.

    Web Animations API coverage increasing

    The Web Animations API is a spec created to unite CSS Animations, Transitions, and SMIL (native SVG animation) under one animation engine. JavaScript developers can tap into it to build more performant DOM animations and libraries. Browsers can also tap into the API to build more advanced animation developer tools—and they’ve been doing just that.
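    If you haven’t tapped into it yet, the heart of the API is a single animate() call on an element; here is a minimal sketch (the selector and timing values are purely illustrative):

      // Animate an element directly from JavaScript with the Web Animations API,
      // no CSS keyframes or third-party library required.
      var box = document.querySelector('.box');

      var player = box.animate(
        [
          { transform: 'translateX(0)', opacity: 1 },
          { transform: 'translateX(200px)', opacity: 0.5 }
        ],
        {
          duration: 1000,         // milliseconds
          iterations: Infinity,   // loop forever
          direction: 'alternate', // play back and forth
          easing: 'ease-in-out'
        }
      );

      // The returned Animation object can be controlled later, e.g.:
      // player.pause(); player.play(); player.reverse();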

    With solid support from Firefox and Chrome teams, the Edge team moved the Web Animations API from “under consideration” to “medium priority,” clearly a reaction to the web development community’s votes for the API via Edge’s User Voice campaign. And let’s not forget WebKit—a team at Canon might be taking up the Web Animation banner! Could this mean WAAPI in iOS Safari? Find out in 2016! Or 2017.

    Screenshot of the WAAPI Browser Support Test

    Caniuse.com is a surprisingly unreliable source for uncovering just how much of the Web Animations API is covered in a given browser. Dan Wilson’s browser support test and Are We Animated Yet for Firefox remain good references.
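    If you’d rather check coverage in your own code, a rough feature-detection sketch looks something like this (which members you test depends on which parts of the API you rely on):

      // Probe for individual pieces of the Web Animations API, since
      // support has been landing piecemeal across browsers.
      var el = document.createElement('div');

      var waapiSupport = {
        animate: typeof el.animate === 'function',             // element.animate()
        timeline: 'timeline' in document,                      // document.timeline
        getAnimations: typeof el.getAnimations === 'function'  // querying running animations
      };

      console.log(waapiSupport);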

    SMIL falls as SVG rises

    Ironically, just as Edge moved to support the Web Animations API (a prerequisite to Microsoft’s adoption of SMIL), Chrome announced it would be retiring SMIL! Even without SMIL, SVG remains synonymous with web animation. That said, due to varying implementations of the SVG spec, animating it reliably across browsers is often made easier with third-party JavaScript libraries like SnapSVG or GreenSock’s GSAP, a tweening library for Flash that was rewritten in JavaScript.
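    If you haven’t used it, part of GSAP’s appeal is how little code a cross-browser SVG tween takes; here is a minimal sketch (it assumes the GreenSock script is already loaded, and the selector and values are purely illustrative):

      // Tween an SVG element with GSAP, using the TweenMax syntax of the time
      // (newer releases express the same idea as gsap.to()).
      TweenMax.to('#logo-path', 1.5, {
        rotation: 360,
        transformOrigin: '50% 50%', // rotate around the shape's center
        fill: '#e64946',            // GSAP smooths over SVG attribute/CSS quirks
        ease: Power2.easeInOut,
        repeat: -1,                 // loop indefinitely
        yoyo: true                  // reverse on each repeat
      });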

    Fortunately, some of the bugs that got in the way of animating SVG components with CSS have been addressed, so library-independent SVG animation should become more common in the future. Down the line, we can expect SVG behavior to be normalized across more and more browsers, but because of its unreliable reputation, developers will probably continue to associate SVG animation with GSAP and use it regardless.

    In this regard, GSAP might become the next jQuery as SVG behavior normalizes and browsers expand their native abilities to match developer needs. That is to say, GSAP will remain the tool of choice for ambitious, complex, and/or backward-compatible projects, but could also suffer a reputation blow if it starts to look like a crutch for inexperienced or out-of-touch developers.

    Screenshot of GSAP result

    GreenSock will likely always stay ahead of the curve, providing stability and plugins for the things that browsers currently struggle with. We’re still at least a year or two away from a standalone SVG morph JavaScript library, and yet we can do this today with GSAP.

    Prototyping solutions fall short

    One of the greatest challenges facing web animation has been tooling. Animating the web today requires years of accumulated CSS and JavaScript knowledge to accomplish things that seem primitive in comparison to what a designer with Adobe After Effects can learn to do in a month. This means that either front-end developers become animators, or designers become coders. (Could animation be the thing that unites these two at long last?)

    Various tools and frameworks have emerged to try to meet this need. Frameworks like the aptly named Framer create “throwaway code” you can test with users; to work with it requires a basic knowledge of web development. Some apps, like Adobe After Effects, provide critical animation tooling (like a timeline UI) but only export videos, which makes iteration fast but user-testing impossible. Others, like InVision and much-lauded newcomer Principle, fall somewhere in between, providing a graphical interface that produces interactable prototypes without actually creating HTML in the process.

    Framer's animation development interface

    Framer: because “code isn’t just for engineers.”

    Principle's animation development interface

    It’s easy to imagine visual designers reaching for Principle’s animation-centric interface first.

    All of them have their pros and cons. For instance, the animation workflow may be right, but the web development workflow ends up wrong (and vice versa). This leaves an opening for differentiation. Animation tooling might be the winning feature during upcoming jostling for market share in this crowded arena.

    But right now, none is a clear winner. And some are already losing.

    The framework Famo.us once touted its 3D physics animation engine and excellent performance to prototypers and ad designers. In 2015, it pivoted abruptly out of the space. Similarly, Adobe retired its web animation racehorse, Edge Animate, while rebranding Flash as Animate CC. Flash will continue to export to WebGL and SVG, but the message seems clear: Flash’s future looks more cinematic than interactive.

    Browser tooling improves

    In December of 2014, Matt DesLauriers wrote, “I also feel the future of these tools does not lie in the document.body or in a native application, but in between, as part of the browser’s dev tools.”

    The Web Animations API’s increased adoption allowed Chrome Canary and Firefox Developer Edition (disclaimer: I helped build the demo site) to launch their own animation tools in 2015. In the future, we can hope to see these tools grow and change to accommodate the web animator’s process. As the Web Animations API becomes better known, we may even see third-party tooling options for CSS and SVG animation editing.

    In-browser animation timeline tools

    Firefox Developer Edition’s animation timeline was a first for browsers. While nowhere near as finished as Flash’s UI, this and Canary’s timeline tools are steps in the right direction.

    Motion guidelines adoption up

    Following the lead of Google’s Material Design system, IBM and Salesforce released their own design systems with motion guidelines. (Disclosure: I assisted Salesforce with the motion portion of their Lightning Design System.) Increasingly, large companies that can afford to spend more time finessing their design systems and branding have been investing in codifying their UI animations alongside microinteraction guidelines. We’ll most likely see more medium-to-large companies following in the example of these giants.

    How that documentation plays out largely depends on the audience. Persuasive and beautiful documentation of animation theory prevails at large companies recruiting internal animation evangelists, but when the product is designed for mix-and-match developers, animation rules become stricter and more codified alongside best practices and code.

    Documentation library for Salesforce’s Lightning Design System

    Motion design docs run the gamut from overarching design principles (IBM’s Design Language) to the atomic and modular descriptions seen here in Salesforce’s Lightning Design System.

    UX and accessibility

    We learned a lot in 2015 about vestibular disorders (I even did a special screencast on the topic with Greg Tarnoff), a concern that may be wholly new to the web development community. Unlike contrast and ARIA roles, which can be “accessible by default,” the only animations that are accessible to everyone by default are opacity-based.

    For those not willing to abandon animation or convert to an entirely fade-based UI, the challenge we face is how to give users choice in how to experience our sites. I’ve already written about our options going forward, and we are starting to see a proliferation of “reduce motion/turn animation off” UI experimentation ranging from discreet toggles to preference panels. Like the “mute audio” option, one or two of these will likely rise to the top in a few years as the most efficient and widely recognized.
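    One way to wire up that choice is to check an explicit site setting alongside the prefers-reduced-motion media query, which browsers have since begun to ship; here is a rough sketch, with the storage key and animation values purely illustrative:

      // Honor a user's motion preference before animating anything.
      function prefersReducedMotion() {
        var stored = localStorage.getItem('reduce-motion'); // hypothetical site toggle
        if (stored !== null) {
          return stored === 'true';
        }
        return window.matchMedia('(prefers-reduced-motion: reduce)').matches;
      }

      function animateIn(element) {
        if (prefersReducedMotion()) {
          // Fall back to an opacity-only change, which is accessible by default.
          element.animate([{ opacity: 0 }, { opacity: 1 }], 300);
          return;
        }
        element.animate(
          [
            { transform: 'translateY(40px)', opacity: 0 },
            { transform: 'translateY(0)', opacity: 1 }
          ],
          { duration: 600, easing: 'ease-out' }
        );
      }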

    As more edge cases reveal themselves, UX and accessibility concerns will deepen. If left unaddressed, a company’s front end will carry extra technical debt forward to be addressed “another day.”

    Animation matters

    Since animation’s return to the web development and design toolkit, we’ve been using it to tell stories and entertain; to increase the perceived speed of interactions; to further brand ourselves and our products; and to improve our users’ experiences. And we’re just getting started. New specs like scroll snap and motion paths build upon the foundation of web animation. New tools and libraries are coming out every day to help us further enrich the sites we create. And more and more job postings request familiarity with CSS animations and libraries like GSAP.

    As the field of web animation expands, it will be abused. The next parallax is always just around the corner; as new and unusual trends proliferate, clients and managers will want to see them reflected in their sites. Hopefully we learned something in those years without Flash; good design is about more than chasing after trends and trying to impress each other or a segment of our audience. We learned that building terrific web experiences means listening to users as well as pushing the web forward. And if we listen, we’ll hear when the bouncy buttons are too much.

  • Aligning Content Work with Agile Processes 

    As a content strategist, I work on teams with agile developers, user experience designers, user acceptance testers, product managers, and data scientists. A mere decade ago, I would have been called a journalist or an editor. I would have worked with copyeditors, proofreaders, graphic designers, and printers—but times, job titles, and platforms have changed. Content strategists have had to adjust to the rise of development-centric projects focused on products. Big changes challenge traditional content culture and processes.

    Agile has the potential to set content strategists free from dated ways of working. And developers and designers can help. How can they better understand content and align their objectives and outputs with content producers?

    I’ve identified four areas—iteration, product, people, and communication—where developers and designers can find common ground with their content colleagues and help them adapt to the agile world, while themselves gaining a greater understanding of content culture.

    Iteration

    Most content producers might think that the concept of iteration doesn’t apply to them—traditionally, at least, things aren’t usually published iteratively. Readers expect the finished article to be just that: finished.

    But if content teams take a step back and analyze their work in the context of agile, they will recognize that they are already regularly adapting and adjusting to changing circumstances. The content landscape already has many of the characteristics of an agile environment. Consider these scenarios:

    • A story is breaking on social media and you need to get your brand involved as soon as possible.
    • A new source emerges with vital information just as you’re about to publish.
    • A massive national breaking-news story consigns your lovingly crafted PR campaign to the scrap heap.
    • A deadline one month away is pulled forward by two weeks, throwing workflows into chaos.

    Requirements and constraints change just as readily in content as in agile development; deadlines can be viewed in the same terms as sprints.

    The key to content teams understanding agile iteration is helping them view content like building blocks, where each communication forms part of a larger message, and that message becomes more honed, focused, and optimized over time. Content folks readily use data to get information on audience, platforms, and engagement, so in a sense they are already in the business of iteration.

    How can developers encourage this? Well, for example, during a new build, don’t accept lorem ipsum text—politely ask content people to knuckle down and produce a first iteration of the actual content that will appear in the finished product. If you’re using a content-first strategy, this is self-explanatory; even if you’re not, though, early content iteration creates focus and sends a positive signal to stakeholders. It helps them better visualize the end product and, most importantly, gives your team a first draft to build on and something concrete to put in front of those stakeholders—marketing, sales, data scientists—who need to give feedback on the process. Their feedback may be as simple as a bunch of questions.

    On a critical conversion page, for example, the stakeholders’ notes might read, “Is this where our super-important CTA is? Have you A/B tested this? Have we benchmarked it against our competitors?!” Or, “Hey data science! Just wondering if you could check the tagging on this page when you get a moment… I’ll need solid tagging to measure its popularity! Thanks.”

    View each unit in the content-production process as a single step on the path to your final goal rather than an end in itself. This fosters early and continuous development, frequent delivery, iterative pieces of output, and sustainability, all of which are cornerstones of the agile approach.

    Additionally, using team collaboration or project management software to create, plan, review, edit, and publish content gives key stakeholders both oversight and readily accessible insight into the fluid, iterative process of content production.

    This puts content at the heart of development and UX. Even when other strategies are in play (such as when developers are working to a waterfall model), make sure that content milestones are agreed upon early and allow stakeholders to see exactly what’s going on. This open, documented approach also works great when content isn’t directly feeding into a new build, but is part of an ongoing, business-as-usual workflow. It sets a powerful precedent and showcases how iteration can be easily tracked on both dev and content sides, providing an early focus on regular milestones.

    Product

    Content strategists should easily be able to recognize how agile principles apply to their output: frequent delivery, sustainable development, attention to detail, good design. More difficult is determining how content should fit into Kristofer Layon’s product-development framework.

    My favorite strategy is a content-first approach because it’s bottom-up. The smallest unit of currency in any development or design project is a word, an image, a punctuation mark. Everything grows out of this. While other strategies can be convincing, readers generally don’t visit a website to swoon over a sublime user journey, admire the poetic code, or gaze in awe at the artistry of the design.

    They come for content.

    Even in waterfall-driven projects, though, a content-first “lite” approach can work very effectively when content output is addressed early and prominently in the requirements-gathering process.

    Whether agile, waterfall, or some hybrid thereof, the key is to synchronize UX and content early in the discovery phase and lock that collaboration in so it becomes a cornerstone of the project.

    Additionally, a content-first approach doesn’t have to apply solely to new stuff. Existing products can benefit from an overhaul where the content-production cycle is broken down to its smallest components, and then optimized and rebuilt to better effect with, perhaps, optimized calls to action, punchier copy, or more dynamic imagery.

    A content-first strategy also creates boundaries, ownership, and a sense of control. It places content at the heart of the agile process from the beginning rather than tacking it on as an afterthought once design and dev work is underway. This gives content managers a more insightful and impactful window into the agile world from which they can adapt their own processes and workflows. I’ve seen projects flounder spectacularly when various business departments go to battle with their own vested interests.

    Recently, I was asked to ship a new online product by the print arm of a department I had never worked with before. I was hesitant, but saying no wouldn’t have been a wise move politically. I foresaw problems because of the tight timeline, combined with the fact that the company was new to the product-management framework. But hey, this was a newspaper with a thriving digital business. What could possibly go wrong with a content-first approach?

    The problems quickly escalated as I tried to corral over 40 people (five percent of the entire workforce!) in a half dozen departments, all of whom wanted their say in the transition of the product from print to digital.

    In retrospect, this transition would have benefitted from a dedicated project manager who could have streamlined communications and better managed stakeholder expectations. If you’re struggling to pull all the strands together in a project involving complex content and development work, it’s never too late to pull back and take a bird’s-eye view with the product or project manager, head of content, or head of development to try to regain perspective and address any challenges.

    Whether your project employs an aggressive content-first strategy or a “lite” version, understanding and embracing content in the context of product instills a sense of ownership and investment on both the development and content side. Both teams will see early dividends from this mutually beneficial approach.

    People

    Years ago, I developed an admiration for dev teams. They seemed calm under pressure, thoroughly professional, deliberate, focused, and, above all, respectful—pretty much the antithesis of many newsrooms or communications teams I’d been part of. As my career developed, I was fortunate enough to be able to build my own teams. I looked to these qualities (outlined by Jonathan Kahn) as key characteristics I wanted in the people around me.

    Whether building a team from scratch or inheriting it, we apportion ownership—and that empowers. That’s the key to building strong, vibrant, successful teams. My preferred strategy for doing this is to confer end-to-end ownership on each piece of content so that its creator also oversees the review, optimization, and publishing process (including working with the developer or designer). Exposing content creators to agile practices through stand-up meetings, discovery and planning meetings, retrospectives and group communications will give them a more holistic, invested view of operations.

    Lifecycle ownership motivates and trusts individuals in an agile way. It also has the advantage of giving devs and designers more influence while broadening the skills of content producers through increased exposure. This will ultimately assist the move toward agile self-organization within the team. Self-organization allows teams to create their own opportunities, which builds individual and collective confidence and challenges people to regularly test their limits.

    Motivation and trust will blossom in this environment and help ensure the focus will always be on your people. If you get one thing right, make it your people; the rest will fall into place.

    Communication

    Given that communication is at the core of content, you’d be forgiven for thinking that it would be an obvious area for content producers to put their best foot forward. But unfamiliar language such as dev- or design-speak can be intimidating.

    Patience will be its own reward here. Encouraging daily communication between devs and content is an ideal way to immerse each in the challenges the other faces and a great opportunity to highlight how agile can be a positive force within content.

    And—even though content folks might not (yet) be accustomed to this—I’ve also found it beneficial to introduce morning stand-ups into my content routine. The opportunity to collectively address challenges for a few minutes each day amplifies both individual ownership and team responsibility.

    Even more beneficial is when developers invite relevant members of their content teams along to stand-ups. I’ve always been pleasantly surprised how receptive each side is to the challenges of the other. Encouraging content to attend for the duration of a sprint both allows them to see the beginning, middle, and end of the release cycle and helps align goals.

    As a developer, if you commune, share, and consume with your content team, the “us and them” divide soon crumbles. Before you know it, you’re fully exposing content folks to the agile environment.

    Good communication is vital, says Inayaili de Leon. The agile principles of daily cooperation and face-to-face communication create regular opportunities to surface and address problems.

    Prevention is always better than cure, of course, but in organizations with a multitude of moving parts, communication can break down, particularly in the heat of battle. What to do then?

    If things begin to go awry and face-to-face is not enough, you may need to make a more compelling case for communication change. The best route is through rigorously collecting specific examples of the problems your team is having with content. Solutions can lie in the precise documentation of blockers, resource challenges, and impact assessment, particularly if related to return on investment.

    Agile communications should always favor the personal over the process, and collaboration over confrontation. And while sometimes we have to revert to the latter when the former proves insufficient, we should always try to remain positive and strive to build on our failures as well as our successes.

    A Rallying Cry

    Adaptability is hard-coded into our genes. Not only is it essential for survival, but it helps us flourish. As business environments change and roles adjust to new technologies, platforms, tastes, and consumption habits, so we (developers, designers, content strategists) must look for ways to remain at the cutting edge of our disciplines.

    Traditionally, content was the message; developers provided the method of delivery. The two coexisted side-by-side, if occasionally a little uneasily. But our mutual dependencies have forced us closer together. I believe that the open, collaborative, people-focused, and change-embracing approach of modern agile development is a framework within which content work can refine itself, test, and learn.

    By taking this step on the path to helping content align itself with agile, you may find both your development and content teams energized—maybe even revolutionized.

  • Impulses and Outcomes 

    A couple of years ago while I was working on a project with Kevin M. Hoffman, he related a story to me about his consulting work with an agency on improving presentations to clients. The story centers around a designer who was asked to change his mode of dress. This designer was fond of the goth look, but the agency considered it inappropriate for some client meetings.

    Long story short, Kevin told the staff member that he could wear whatever he wanted to, but to consider the situation in terms of his desired outcomes. Clients’ opinions about clothing and personal expression aside, what was more important to the designer: dressing a certain way, or successfully pitching the design direction he was the most excited to work on? The “what to wear” decision then becomes less about the designer’s pride or interest in retaining control, and more about getting what he wants in the situation; acting in his own self-interest to the best of his ability.

    Recently, as I worked on an extended project for a client at Bearded, these ideas started percolating through my brain again, but this time with regard to design. Every designer (and really everyone involved in a design) has tendencies and predilections that, like it or not, will be guiding the design process.

    Of course we make our best efforts to ground the creative process in research, interviews, and egalitarian decision-making activities (sticky notes and card sorts, anyone?). But no matter what we do, our entire process is filtered through the very fallible, very subjective minds of people.

    Let’s talk about your childhood

    For a moment, we’ll zoom in from our mile-high view and consider a single person. Let’s imagine a child who grows up under the care of well-meaning but disorganized parents. The parents’ plans and decisions are unpredictable. The child, never knowing what will happen next, grows into a bright, talented young person with an unresolved longing for structure and order.

    We might even imagine that this need to bring order from chaos, to sort out the unpredictable, messy parts of the world, is what draws them into a career as a designer.

    Responsible adults

    As they find their professional legs, they mature into not so much the David Carson, Stefan Sagmeister, James Victore, anything-goes sort of designer — more of an Erik Spiekermann, Willi Kunz, or Sylvia Harris. They’re not out to tear the design world a new one. To the contrary, they’re focused on sorting out the garbled information before them, smoothing over the rough patches, and slowly rotating their subject’s gleaming, flawless, chrome surface towards the world.

    And there’s nothing wrong with this orderly sort of design. In fact, it’s very, very useful to the world. Perhaps even necessary.

    Likewise there’s nothing wrong with that wild-eyed sort of design, either. One might imagine (as many do) these types of design as poles on a spectrum of useful approaches. Poles between the qualities of Consistency and Variety.

    Non-binary systems

    As it turns out, every project demands to be somewhere on this scale. And it’s essential during the first part of every project (the research, the interviews, even the sticky notes) to figure out which spot on the spectrum is best suited to the project.

    Now the extremes of this range are rarely appropriate for any project. An unwaveringly consistent design approach will often seem generic and boring. On the other hand, a design that is entirely committed to variety will make the audience struggle at every turn to orient themselves and parse the information before them.

    So it seems fair to say that most designs benefit from being somewhere in the middle zone, balancing between the unhappy extremes of boredom and chaos.

    Advanced calibration

    But what happens when our imagined designer, the one who is drawn to systems of order and control, determines through their research that their project requires a design approach that is on the more varied side of center? When the organization, in fact, will benefit from a less rigidly ordered design? This organization, it turns out, needs an approach that may not sing with as much immediate and obvious clarity, but will bring more surprise and thrill to the audience.

    And so we find ourselves with a mismatch between impulses (bring order!) and outcomes (show us surprises!). The problem is that the designer’s approach is not in a conversation with the project and its goals; it’s stuck in a very old dialog with the designer’s childhood. If that mismatch goes unaddressed, a successful project outcome will depend on whether or not those old desires happen to match up with the project’s requirements.

    If we don’t want to leave things up to chance, this situation requires the identification of both the designer’s impulses and the project’s desired outcomes, and a conscious assessment of their overlap and contradictions.

    When I was in a critique at design school, one of my classmates commented on another’s work that they “really seemed to be developing a style.” My professor became suddenly incensed (a rare thing for her) and declared “you’re designers, you don’t have a style! Your style is whatever is appropriate to the project!” In that way, design is very different from art. Art stems primarily from the artist’s internal world. But design does not. Design aims to solve problems outside of the designer. Whereas the artist might be seen as a sort of oracle speaking to the world, the designer is more of a tool that is useful in solving problems.

    Which is why, in the cases where a designer’s internal impulses and the project’s desired outcomes are not in alignment, the designer must consider adjusting. To be a useful tool, we must recognize that some of our impulses work against the needs of the organization that is employing us, and compensate.

    This is no easy task. It’s a process that requires knowing oneself, and questioning our own nature and subjectivity. But it’s only through this sort of rigorous self-assessment and awareness that we can grow beyond our limitations, becoming better designers and perhaps, if we’re lucky, more sensitive and thoughtful people in the process.

  • Defeating Workplace Drama with Emotional Intelligence 

    I was on a client call and I couldn’t believe what I was hearing. The client contact had discovered that if she resized her desktop browser to mobile size, showed and hid the mobile form, and then resized back to desktop size, the previously-visible desktop form disappeared. “Do we anticipate a lot of people doing that?” I asked. “Well, you never know,” she responded.

    I muted the phone and sighed heartily. The bottom line was the client contact cared about it and needed it fixed—I knew that. I just didn’t understand why.

    Irrationality is one of the most frequent complaints of creatives and devs dealing with clients. “Clients just don’t get it,” I hear frequently. I’ve been there. We all have.

    But our coworkers aren’t much better. There’s that project manager who thinks that the only solution for a project behind schedule is more status meetings. There’s that account manager who thinks that even the most mundane detail needs to be clarified and confirmed ad nauseam. There’s that supervisor who feels the need to micromanage your every move. What’s up with those people?

    Doesn’t anyone get it? Isn’t irrationality just the worst?

    The anxiety problem

    A few weeks after the conversation I mentioned above, I was on a call again with the same client, but this time the client’s boss was also on the line. It was a much different conversation. The client’s boss berated all of us, client contact included, for a solid hour. It turns out the client had missed their budget goals for the last two quarters, and the blame fell squarely on the marketing team—whether deserved or not. Our client contact was under a tremendous amount of pressure, so even the slightest mess-up, if noticed, could have disastrous results.

    What I realized then was that the problem wasn’t irrationality—in fact, it rarely is. The problem was anxiety.

    We’re going to do some math with our emotions. Ready? Good. Here’s the formula:

    Anxiety + Time = Drama

    That’s right: when anxiety runs up against an approaching deadline, it grows, and that results in drama. And when there’s drama on a project, everyone feels it.

    I often hear people say, “I don’t do drama.” What this basically means is that they don’t deal with emotional issues in the people around them. Ironically, this results in drama surrounding these people everywhere they go. You wouldn’t hear a developer say, “I don’t do bugs.” You wouldn’t hear a designer say, “I don’t do revisions.” As web professionals, those things are your job. If you work with people, it’s your job to take care of drama, too.

    Taking care of drama means learning to recognize and deal with the roots of anxiety. Anxiety comes from a few different places, but it’s at the center of a number of problems in the workplace. Understanding it is the key to defusing a lot of those problems.

    Power and responsibility

    We’re going to do some more math with our emotions. Here’s a formula for anxiety:

    Responsibility − Power = Anxiety

    The more pressure someone is under, the greater their sense of responsibility. And our client contacts (as well as our accounts teams and project managers) have very little power to fix these problems themselves. This is a classic recipe for anxiety.

    It’s a concept we, as problem-solvers, may not be familiar with in a workplace setting. After all, people come to us to solve their problems. We rarely have to go to others to solve our problems.

    Remember those irrational coworkers I mentioned above? In all cases, they suffered workplace anxiety due to responsibility minus power. They were being held responsible for something they didn’t have the power to directly do. They may not state it. They may not even realize it. But anxiety is a way of life for the people you work for.

    Clients, too, suffer from this anxiety. In fact, the very act of a client coming to you means that they’ve realized that they can’t solve the problem on their own, even though they’re responsible for the outcome. Every client relationship is fundamentally built on this root of anxiety.

    If anxiety is caused by holding responsibility for something without having the power to fix it, we can alleviate it by either taking on some of the responsibility or giving away some of the power to fix it.

    “Not my problem” is a problem

    Early on in my career at my current agency, I noticed a bit of tension between Dev and Creative over the usage of pre-built creative assets in our front-end framework of choice. Designers were designing elements from scratch, which meant that many of the built-in modules in our front-end framework were wasted. This also meant additional time in dev to build those custom elements, which was bad for both dev and the client. Developers were complaining about it. And designers had no idea this was going on.

    Rather than complain some more about it, I created an in-depth presentation showcasing the creative capabilities of our front-end framework for our Creative department. When I showed it to my director, he said, “This is exactly what we need.” The problem had been on the back burner, boiling over, until I took it on myself.

    When people complain about something, they’re acknowledging that something should be done, but refusing the undertaking themselves. Essentially, they’re saying, “It’s not my problem.” This isn’t always strictly due to negligence, though.

    There was an experiment that placed participants in separate rooms with microphones and had them take turns talking about problems they were having and what they were doing to resolve them. The first participant would be connected with between one and five others, and partway through, one of those other participants would start having an epileptic seizure. Here’s the catch: there was only one real participant in each round of the experiment. The other voices, whether one or many, were recordings—including the person having the seizure. Want to guess how many of the real participants went to the experimenters to seek help? 100 percent? 75 percent?

    Would you believe only 31 percent of participants went to seek help for the (fake) other participant in distress? What’s more, the more participants the real participant thought were there, the less likely he or she was to do anything. Why is this?

    Researchers have studied the behavior of crowds surrounding emergency situations. If you have an emergency in public and you ask the crowd for help, you’re probably not going to get it because of what’s known as the bystander effect. For a variety of reasons (including believing that someone more qualified will jump in, and worrying about the consequences of jumping in), the more strangers are present around an emergency, the less likely any one person is to help. The way to actually get help in a crowded emergency is to pick one individual and ask that person to do something specific, like phone an ambulance or help with first aid.

    Bystander apathy is real. Understanding it can help you cope with emergencies, the zombie apocalypse, and even work situations.

    People who are complaining probably don’t know whose responsibility it is to fix the problem—they just know it’s not them. This is your opportunity to be a helpful individual rather than an apathetic bystander.

    Look for unidentified needs and projects that have been on the back burner so long that they’re boiling over. See about taking them on yourself. A word of caution: there’s a fine, fine line between stepping up and stepping on toes. If you’re going to step up, and the thing you’re taking on is someone’s direct responsibility, get their blessing first—especially if the person in question outranks you. And if stepping up would squash someone’s ego, that’s a good sign that you should focus your efforts elsewhere.

    Taking this a step further, take responsibility for the end product, not just your part in it. I work in dev, but I’m known to give creative feedback when it’s appropriate, as well as helping think through any aspect of a client project. I now get called into meetings not just to lend my dev expertise, but also to help other teams think through their problems.

    You don’t want to overstep your bounds, but simply caring about the end product and how each step is done is what responsibility-sharing is all about.

    The power is yours

    I have a kid. When he runs into situations where he has no control, no power, his anxiety builds and he panics. The quickest way to resolve that problem is to give him some choices to make within the bounds of his situation: do you want to go for lunch here or there? Do you want to wear the red shirt or the green one? Which punishment do you want?

    Adults are slightly more sophisticated about this, but we never really outgrow the fundamental human need to have some control over our situations. With some degree of power, we remain calm and collected; with a loss of power, we become anxious and irrational.

    What’s more, when people lose power in one area of their lives, they compensate by seizing power in other areas. If someone feels a situation is slipping out of their grasp, they will often work harder to exert power wherever they feel they still have some control. Those irrational coworkers at the beginning of this article were all compensating for losing control over the work of the project itself. The client’s extreme caution about site development was a reaction to their inability to keep their budget in check.

    Loss of power can take a lot of different forms. Not knowing what result is required of you can render power meaningless. The client at the beginning of this article was unsure how a minor bug would affect the outcome of the website, so they couldn’t gauge the level of risk in leaving the bug unresolved. Not having the right information is another scenario. This is often why clients come to us in the first place. And, of course, there’s the good, old-fashioned total loss of power due to lack of skills required to solve the problem.

    As a problem-solver, you hold a lot of the power that other people depend on for resolving their problems. Sharing that decision-making power is a surefire way to calm down the people involved in a project.

    When solving a problem, you make countless decisions: how to solve it, how thorough to be, how to integrate the solution into the existing product, and sometimes whether to solve the problem at all. Giving away power means sharing decision-making with others. The people responsible for the outcome usually appreciate being a part of the process.

    When I managed a team of designers and developers, I frequently encountered this kind of scenario: an account person came to me in a panic, asking for an emergency change to a website based on client feedback. I wasn’t handed a problem, but a solution. With a little reverse engineering, we arrived at the problem, which made it a lot easier to see what was being attempted.

    A better solution was available in this case. I explained the options to the account person, the pros and cons of each, and we settled on my solution. I typed up an email to aid the account person in explaining the solution to the client. In the end, everyone was happier because I took the time to share some of that decision-making power with the account team and client.

    As an architect for a front-end development team, sharing decision-making power often means explaining the options in terms of time and budget. The language is different, but the principle is the same: educate and empower the key stakeholders. You’d be surprised how quickly some seemingly irrational revisions get nixed after the options—and expenses—are discussed.

    Getting to the heart of the matter

    Anxiety’s causes run deep into human nature, but knowing how to calm it can go a long way in preventing workplace drama. Remember: irrationality is not the issue. People are a lot more complex than we often give them credit for, and their problems even more so. Dealing with them is complicated—but vital to getting ahead in the workplace.

  • Designing the Conversational UI 

    In the first part of this article, we discussed the basic principles of conversational interfaces, and why you should consider building one for your own product. Here, we’ll dive a little deeper into more specific patterns, and see how you can translate them into a conversational form.

    I want to present a few cases as examples to illustrate the challenges of designing a conversational interface, and go through some of the solutions we came up with at Meekan.

    Validating input

    With a typical GUI, when asking a user to supply more information (usually by filling out a form), you have lots of ways to make sure you’re getting a clean and useful response before moving on to process it. Is this a valid email address? Is this a phone number? Is this username already taken? You can restrict the input to be just numerals, or something that you pick from a predetermined list.

    In a conversation, this privilege doesn’t exist. The person you’re talking to is free to type (or say) anything, so it’s up to you to construct your questions properly and digest the answers in the smartest possible way.

    Mine the request for info

    Let’s say your robot is giving away t-shirts. He needs to ask the user for the size and color. If the user opens with “I want a medium size red shirt,” you already have everything you need right there.

    But if the robot opens the conversation, or the user just says “Can I have a shirt?,” you’ll need to put the missing pieces together.
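
    As a rough illustration (a hypothetical sketch in TypeScript, not Meekan’s code), mining the opening message and then asking only for whatever is still missing might look like this:

    ```ts
    // Hypothetical slot-filling sketch for the t-shirt robot.
    interface Order {
      size?: string;
      color?: string;
    }

    // More specific sizes first, so "extra-large" isn't matched as plain "large".
    const SIZES = ["extra-large", "large", "medium"];
    const COLORS = ["white", "gray", "brown", "red", "orange", "pink", "black"];

    // Pull any recognizable details out of whatever the user said.
    function mineRequest(text: string): Order {
      const lower = text.toLowerCase();
      return {
        size: SIZES.find((size) => lower.includes(size)),
        color: COLORS.find((color) => lower.includes(color)),
      };
    }

    // Ask about the pieces that are still missing, one at a time.
    function nextQuestion(order: Order): string | null {
      if (!order.size) return "What size t-shirt are you? We have medium, large, and extra-large.";
      if (!order.color) return "And what color would you like?";
      return null; // Nothing missing: everything was in the opening line.
    }

    mineRequest("I want a medium size red shirt"); // { size: "medium", color: "red" }
    mineRequest("Can I have a shirt?");            // { size: undefined, color: undefined }
    ```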

    Give hints

    Whenever possible, avoid open-ended questions and try to demonstrate the type of answer you’re looking for. If the pool of possible answers is small, just list them.

    What size t-shirt are you? We have medium, large, and extra-large

    As a general rule, you should handle every item separately. Ask about the size; when you have the answer, ask about the color. Mixing several details in one sentence will be much more difficult to parse correctly, so ask your questions in a way that encourages a specific answer.

    Acknowledge

    When the answer is valid, repeat it to make sure you understood it correctly, and move on.

    Got it. Size large. And what color would you like?

    Explain what went wrong

    If the input isn’t valid, explain again what you were expecting (versus what you received). If possible, be smart about distinguishing between answers you don’t understand and answers that make sense, but that you can’t accept.

    And what color would you like?

    purple

    I’m sorry, we don’t have purple. We have white, gray, brown, red, orange, pink, and black. What color would you like?

    brbrbl

    I’m sorry, “brbrbl”? Is that a color? We have white, gray, brown, red, orange, pink, and black. What color would you like?

    gray

    Cool! So a large gray t-shirt!
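
    Continuing the hypothetical t-shirt sketch, the distinction between “an answer I don’t understand” and “an answer that makes sense but can’t be accepted” might be coded like this (the helper and list names are invented):

    ```ts
    // Hypothetical color check that separates "unknown word" from "known color we don't stock".
    const IN_STOCK = ["white", "gray", "brown", "red", "orange", "pink", "black"];
    const OTHER_KNOWN_COLORS = ["purple", "blue", "green", "yellow"];

    function respondToColor(answer: string): string {
      const color = answer.trim().toLowerCase();
      const choices = IN_STOCK.join(", ");

      if (IN_STOCK.includes(color)) {
        return `Cool! A ${color} t-shirt it is!`;
      }
      if (OTHER_KNOWN_COLORS.includes(color)) {
        // A perfectly sensible answer that we just can't accept.
        return `I'm sorry, we don't have ${color}. We have ${choices}. What color would you like?`;
      }
      // An answer we don't understand at all.
      return `I'm sorry, "${answer}"? Is that a color? We have ${choices}. What color would you like?`;
    }

    respondToColor("purple"); // recognized, but unavailable
    respondToColor("brbrbl"); // not recognized at all
    respondToColor("gray");   // accepted
    ```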

    To forgive is divine

    Remember, users are talking with you, not pointing to things on a list. They have more than one way to say what they want. If you’re asking for a shirt size, “extra-large,” “XL,” or even “the largest size you have” can all mean the same thing. “Thursday,” “thu,” “thrusday” (yes, with a typo), and possibly “tomorrow” could all point to the same day.
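
    One simple way to build in that forgiveness, sketched here with a made-up lookup table (real input will need fuzzier matching than this), is to normalize the variations you expect onto a canonical value before validating:

    ```ts
    // Hypothetical normalization table: many ways of saying the same thing.
    const SIZE_SYNONYMS: Record<string, string> = {
      "xl": "extra-large",
      "extra large": "extra-large",
      "the largest size you have": "extra-large",
      "l": "large",
      "m": "medium",
      "med": "medium",
    };

    const DAY_SYNONYMS: Record<string, string> = {
      "thu": "thursday",
      "thurs": "thursday",
      "thrusday": "thursday", // catch the common typo, too
      // Relative words like "tomorrow" need date math rather than a lookup.
    };

    function normalize(answer: string, synonyms: Record<string, string>): string {
      const key = answer.trim().toLowerCase();
      return synonyms[key] ?? key;
    }

    normalize("XL", SIZE_SYNONYMS);      // "extra-large"
    normalize("thrusday", DAY_SYNONYMS); // "thursday"
    ```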

    Switching tasks

    Let’s go back to our good old GUI for a moment. A traditional app can perform different functions, which would usually be separated into discrete windows (or pages, or screens). If I have a calendar, it will perhaps show all of my monthly meetings laid out on the screen; when I want to edit one of them, I’ll switch to a different screen, and return to the previous screen when I’m done.

    But a conversation is just one long string of sentences. How do you switch between different functions? How do you know which task you’re working on right now? Let’s see how this plays out.

    The user starts a new task:

    Meekan, schedule a team meeting tomorrow

    The robot goes into his “schedule a new meeting” state and starts gathering the info he needs. But now something happens: the user is no longer interested in scheduling a new meeting, and wants to check her Thursday agenda instead.

    Meekan, how busy am I on Thursday?

    This is the equivalent of hitting the “close” button on the New Meeting window, or pressing Escape on the keyboard. We have a few ways to handle it.

    Resist the change

    When the robot detects that the user wants to switch to a different task, he asks the user to explicitly abort the current task first:

    We have an active meeting in progress. Say “cancel” if you want to abort the current meeting.

    You would typically take this route if the current task required a lot of investment to create, and aborting it would cause all this effort to be lost.

    Go with the flow

    Again, the user asks for a task switch. The current task at hand is not too important, so the robot swiftly moves into the new mission. When this happens, it’s important to tell the user what’s happening: you’re aborting the current task and moving to a new one.

    Here, the robot is expecting the user to RSVP for an upcoming meeting, but the user is instead starting a new, unrelated meeting:

    Chris invited you to the project launch meeting on Friday 10am. Would you like to Accept or Decline?

    Arrange a new meeting tomorrow with @jesse

    Okay, skipping the RSVP for now.

    Matching your calendars for a meeting tomorrow (…)

    Tracking back

    So we aborted task A in favor of new task B. Now that B is done, should we go back to doing A again?

    At Meekan, we found that going back to the previous task seemed like the correct and smart thing to do, but would often cause confusion (for example, the user didn’t realize that task A had resumed, and tried to start it anew). If you do decide to track back, the key is to communicate properly. That way, the user knows what just happened and expectations on both sides of the conversation are aligned.

    Looking again at our t-shirt example, this would make perfect sense:

    What size t-shirt are you? We have medium, large, and extra-large

    large

    Got it. Size large. And what color would you like?

    actually make it extra large

    Okay, extra-large. And what color would you like?
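
    Under some invented assumptions about how tasks are modeled (the field and function names here are hypothetical, not Meekan’s internals), the three options above—resist, go with the flow, track back—can be sketched as a small task stack:

    ```ts
    // Hypothetical task stack for handling mid-conversation task switches.
    interface Task {
      name: string;
      costly: boolean;      // has the user already invested a lot in this task?
      resumePrompt: string; // how to re-announce the task if we track back to it
    }

    const stack: Task[] = [];

    function startTask(next: Task, say: (line: string) => void): void {
      const current = stack[stack.length - 1];

      if (current && current.costly) {
        // Resist the change: ask the user to abort the expensive task explicitly.
        say(`We have ${current.name} in progress. Say "cancel" if you want to abort it first.`);
        return;
      }
      if (current) {
        // Go with the flow, but say what's happening; the old task stays parked underneath.
        say(`Okay, skipping ${current.name} for now.`);
      }
      stack.push(next);
      say(`Starting ${next.name}.`);
    }

    function finishTask(say: (line: string) => void): void {
      stack.pop();
      const previous = stack[stack.length - 1];
      if (previous) {
        // Tracking back: re-establish context so the user knows the old task has resumed.
        say(previous.resumePrompt);
      }
      // If resuming causes more confusion than it's worth, pop and discard instead.
    }
    ```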

    Timing out

    This happens frequently in a chat. You’re talking about something, and then you step out to lunch, or get called to a meeting. By the time you’re back, you forget what the conversation was about. Quite frequently, the channel is filled with other people talking about other things, and whatever you were saying has scrolled into oblivion.

    The robot should be aware of this. If the user starts something, disappears for a whole day, comes back, and starts something new, the robot can safely assume that the task from yesterday should be aborted.
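
    A rough sketch of that assumption, with an invented timeout and task shape: if the last exchange about a task happened long enough ago, a new request simply drops it.

    ```ts
    // Hypothetical staleness check for half-finished tasks.
    const TASK_TIMEOUT_MS = 8 * 60 * 60 * 1000; // treat roughly a working day of silence as "gone"

    interface PendingTask {
      name: string;
      lastMessageAt: number; // epoch milliseconds of the last exchange about this task
    }

    function isStale(task: PendingTask, now: number = Date.now()): boolean {
      return now - task.lastMessageAt > TASK_TIMEOUT_MS;
    }

    // Called whenever the user starts something new.
    function carryOver(pending: PendingTask | null, now: number = Date.now()): PendingTask | null {
      if (pending && isStale(pending, now)) {
        return null; // don't resurrect yesterday's half-finished task
      }
      return pending;
    }
    ```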

    Awaiting critical input

    Sometimes you need a piece of information that you absolutely cannot proceed without. Maybe the user’s email address is required for the robot’s basic operation. Maybe you need to delete something first, and you cannot go further unless the user confirms.

    In a GUI environment, you can pop up a modal window. It will block access to everything until the user answers this question: “Are you sure you want to delete everything?” The window will stay there until the user clicks either “Yes” or “Cancel.”

    Replicating this behavior in a conversation requires the robot to refuse to do anything until he gets that vital piece of info from the user. This starts a loop that can get very annoying very quickly. Make sure you explain exactly what you need and why you need it. Prepare a few versions of this explanation, so that if the user tries to avoid it again and again, the robot will not get repetitive (and make the situation even worse).

    To do my job, I need access to your schedule. Follow this link to connect your calendar.

    Schedule a new meeting tomorrow

    Seriously, you need to connect your calendar here to enjoy my scheduling superpowers.

    Am I busy tomorrow?

    I can’t wait to start working on your schedule! Please connect your calendar so I can do my magic.

    This is not an ideal situation. Think carefully about whether you really cannot proceed until you have this particular information. If at all possible, make an educated guess and proceed, or at least allow some limited functionality, and ask for the information again when you absolutely need it.
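
    The “prepare a few versions” advice is easy to sketch: keep the phrasings in a list (the lines below reuse the calendar examples from above) and cycle through them instead of repeating the same request verbatim. The helper name is hypothetical.

    ```ts
    // Cycle through prepared phrasings of the same required-input request.
    const CONNECT_CALENDAR_PROMPTS = [
      "To do my job, I need access to your schedule. Follow this link to connect your calendar.",
      "Seriously, you need to connect your calendar here to enjoy my scheduling superpowers.",
      "I can't wait to start working on your schedule! Please connect your calendar so I can do my magic.",
    ];

    let nagCount = 0;

    function askForCalendar(): string {
      const prompt = CONNECT_CALENDAR_PROMPTS[nagCount % CONNECT_CALENDAR_PROMPTS.length];
      nagCount += 1;
      return prompt;
    }
    ```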

    Picking from a list

    Depending on your robot’s job, you might eventually reach a point where the user has to choose from a list. Remember: no drop-down menus, combo boxes, radio buttons. If you have a list of items, and you need the user to select one or more, you have to give each item a unique name or number so that users can tell you which option they like.

    Meekan, cancel that meeting I have tomorrow

    You have 3 meetings tomorrow, which one would you like me to cancel?

    (1) Project launch at 10 am
    (2) Team stand-up at 11 am
    (3) 1-on-1 with Jake at 2:30 pm

    Cancel 3

    Okay, I’m canceling “1-on-1 with Jake” tomorrow. I’ll notify Jake about this.

    Numbering the meetings on the list gives users an easy way to reference the item they want to focus on.
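
    A minimal sketch of that pattern, with hypothetical helper names and nothing platform-specific: number the options on the way out, and accept a number on the way back.

    ```ts
    // Hypothetical numbered-choice helpers.
    function listChoices(question: string, options: string[]): string {
      const lines = options.map((option, i) => `(${i + 1}) ${option}`);
      return [question, ...lines].join("\n");
    }

    function parseChoice(reply: string, options: string[]): string | null {
      const match = reply.match(/\d+/); // "Cancel 3" -> "3"
      if (!match) return null;
      const index = parseInt(match[0], 10) - 1; // the list is 1-based for humans
      return options[index] ?? null;
    }

    const meetings = [
      "Project launch at 10 am",
      "Team stand-up at 11 am",
      "1-on-1 with Jake at 2:30 pm",
    ];

    listChoices("You have 3 meetings tomorrow, which one would you like me to cancel?", meetings);
    parseChoice("Cancel 3", meetings); // "1-on-1 with Jake at 2:30 pm"
    ```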

    What’s next?

    There are still no rules; we have to make up our own. Let’s tell everyone what works and what fails for us. Let’s share, talk, write about it. It’s an amazing time to be a pioneer.

    I hope my examples have stirred up some thoughts. Consider this article a launchpad for your own adventures in robotland. Now go build a robot! (And then tell everyone about it.)

  • All Talk and No Buttons: The Conversational UI 

    We’re witnessing an explosion of applications that no longer have a graphical user interface (GUI). They’ve actually been around for a while, but they’ve only recently started spreading into the mainstream. They are called bots, virtual assistants, invisible apps. They can run on Slack, WeChat, Facebook Messenger, plain SMS, or Amazon Echo. They can be entirely driven by artificial intelligence, or there can be a human behind the curtain.

    Still from the movie WarGames
    WarGames: David Lightman talking with Joshua.

    My own first encounter with a conversational interface was back in 1983. I was just a kid, and I went with some friends to see WarGames. Young hacker David Lightman (played by Matthew Broderick) dials every phone number in Sunnyvale, California, until he accidentally bumps into a military supercomputer designed to simulate World War III.

    We immediately realize that this computer is operating at a different level: it engages in conversation with Lightman, asks him how he feels, and offers to play some games. No specific commands to type—you just talk to this computer, and it gets you, and responds to you.

    Fast-forward 30 years. My teammates and I at Meekan set out to build a new tool for scheduling meetings. We thought, “It’s 2014! Why aren’t calendars working for us?” We wanted simply to be able to tell our calendar, “I need to meet Jan for coffee sometime next week,” and let the calendar worry about finding and booking the best possible time and place.

    First we sketched out a web page; then we built an Android app, then an iOS app, and finally an Outlook add-in. Each one was different from the next; each attacked the problem from a different angle. And, well, none of them was really very good.

    Screenshot from Meekan’s iOS app showing time-of-day options
    Time-of-day options on our iOS App.

    After building user interfaces for more than 15 years, for the first time I felt that the interface was seriously limiting what I was trying to do. Almost no one understood what we were attempting, and when they did, it seemed to be more difficult to do it our way than the old-school way. We could go on and crank out more and more versions, but it was time for a different approach. The range of possible actions, the innumerable ways users can describe what they need—it was just too big to depict with a set of buttons and controls. The interface was limiting us. We needed something with no interface. You could tell it about your meeting with Jan, and it would make it happen.

    And then it dawned on us: we’re going to build a robot!

    I’m going to tell you all about it, but before I do, know this. If you’re a designer or developer, you’ll need to adjust your thinking a bit. Some of the most common GUI patterns and flows will not work anymore; others will appear slightly different. According to Oxford University, robots will replace almost half of the jobs in the US over the next 20 years, so someone is going to have to build these machines (I’m looking at you) and make sure we can communicate properly with them. I hope that sharing some of the hurdles we already jumped over will help create a smoother transition for other designers. After all, a lot about design is telling a good story, and building a robot is an even purer version of that.

    Photoshop? Where we’re going, we don’t need Photoshop

    Think about it. You now have almost no control over the appearance of your application. You can’t pick a layout or style, can’t change the typography. You’re usually hitching a ride on someone else’s platform, so you have to respect their rules.

    Screenshot showing how the same message appears across Slack, HipChat, and WhatsApp
    The same message in Slack, HipChat, and WhatsApp.

    And it gets worse! What if your platform is voice-controlled? It doesn’t even have a visual side; your entire interface has to be perceived with the ears, not the eyes. On top of that, you could be competing for the same space with other conversations happening around you on the same channel.

    It’s not an easy situation, and you’re going to have to talk your way out of it: all of your features need to be reachable solely through words—so picking the right thing to say, and the tone of your dialogue with the user, is crucial. It’s now your only way to convey what your application does, and how it does it. Web standards mandate a separation of content and style. But here, the whole style side gets thrown out the window. Your content is your style now. Stripped of your Photoshop skills, you’ll need to reach down to the essence of the story you’re telling.

    And developers? Rejoice! Your work is going to be pure logic. If you’re the type of developer who hates fiddling with CSS, this might be the happiest day of your life.

    The first tool in your new toolbox is a text editor for writing the robot’s script and behavior. When things get more complicated, you can use tools like Twine to figure out the twists and turns. Tools and libraries for coding and scaling bots are cropping up by the dozens as we speak—things like Wit.ai for handling language understanding, Beep Boop for hosting, and Botkit for integrating with the popular Slack platform. (As I write this, there is still no all-encompassing tool to handle the entire process from beginning to end. Sounds like the voice of opportunity to me.)

    But, let me say it again. The entire field of visual interface design—everything we know about placing controls, handling mouse and touch interaction, even picking colors—will be affected by the switch to conversational form, or will go away altogether. Store that in your brain’s temp folder for a little while, then take a deep breath. Let’s move on.

    First impression: introduce yourself, and suggest a next step

    Imagine a new user just installed your iOS app and has launched it for the first time. The home screen appears. It’s probably rather empty, but it already has some familiar controls on it: an options menu, a settings button, a big button for starting something new. It’s like a fruit stand. Everything is laid out in front of you: we got melons, we got some nice apples, take your pick.

    Compared to that, your first encounter with a robot is more like a confession booth. You depend on the voice from the other side of the door to confirm that you’re not alone, and guide you toward what to do next.

    Your first contact with the user should be to introduce yourself. Remember, you’re in a chat. You only get one or two lines, so keep it short and to the point. We’ll talk more about this in a second, but remember that having no visible interface means one of two things to users:

    • This thing can do whatever I ask him, so I’m going to ask him to make me a sandwich.
    • I have no idea what I’m supposed to do now, so I’m just going to freeze and stare at the screen.

    When we did our first tests, our users did just that. They would either just stare, or type something like “Take me to the moon, Meekan.”

    We were upset. “Why aren’t you asking him to schedule stuff for you, user?”

    “Really? He can do that?”

    It’s not obvious. So use introductions to define some expectations about the new robot’s role on the team. Don’t be afraid to glorify his mission, either. This robot handles your calendar! That way, users will be less disappointed when they find out he doesn’t make sandwiches.

    Immediately follow this intro with a call to action. Avoid the deer-in-headlights part by suggesting something the user can try right now.

    Hi Matty! I’m Meekan, your team’s new scheduling assistant. I can schedule meetings in seconds, check your schedule, and even find flights! Try it now, say: Meekan, we want to meet for lunch next week.

    Try to find something with a short path to victory. Your users just type this one thing, and they immediately get a magical treasure in return. After this, they will never want to return to their old life, where they had to do things without a robot, and they’ll surely want to use the robot again and again! And tell all their friends about it! (And…there you go, you just covered retention and virality in one go. It’s probably not going to be that easy, but I hope you get my point about first impressions.)

    Revealing more features

    When designing GUIs, we often talk about discoverability. If you want the user to know your app is capable of doing something, you just slap it on the screen somewhere. So if I’m new to Twitter, and I see a tweet for the first time, my options are set in front of me like so:

    Twitter screenshot showing various UI elements like the Heart and Retweet icons, etc.

    Easy. I’ll just hover my mouse over these little icons. Some of them (like stars or hearts) are pretty obvious, others might require some more investigation, but I know they’re there. I look around the screen, I see my Notifications link, and it has a little red number there. I guess I received some notifications while I was away!

    Screenshot showing Twitter UI elements: the Home, Notifications, and Messages icons

    But when talking to a robot, you’re just staring into a void. It’s the robot’s job to seize every opportunity to suggest the next step and highlight less-familiar features.

    • Upon introduction: as we mentioned earlier, use your first contact with users to suggest a task they could ask the robot to perform.
    • Upon receiving your first command: start with a verbose description of what’s happening and what the robot is doing to accomplish his mission. Suggest the next possible steps and/or explain how to get help (e.g., link to a FAQ page or a whole manual).
    • Now gradually remove the training wheels. Once the first interactions are successful, the robot can be less verbose and more efficient.
    • Unlock more achievements: as the relationship progresses, keep revealing more options and advanced tips. Try to base them on the user’s action history. There’s no point explaining something they just did a few moments ago.
    Meeting synced! Did you know I can also find and book a conference room?
    • Proactively suggest things to do. For example, users know the robot reminds them about meetings, but don’t know the robot can also order food:
    Ping! There is a meeting coming up in one hour. Would you like me to order lunch for 3 people?

    If the robot is initiating conversation, make sure he gives relevant, useful suggestions. Otherwise, you’re just spamming. And of course, always make it easy for users to opt out.
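
    A sketch of that last point, under invented assumptions about what the robot tracks: skip any tip whose feature the user has already used or already been told about.

    ```ts
    // Hypothetical progressive-disclosure check based on the user's action history.
    interface Tip {
      feature: string; // the action the tip teaches, e.g. "book-conference-room"
      message: string;
    }

    const TIPS: Tip[] = [
      {
        feature: "book-conference-room",
        message: "Meeting synced! Did you know I can also find and book a conference room?",
      },
      {
        feature: "order-lunch",
        message: "There is a meeting coming up in one hour. Would you like me to order lunch for 3 people?",
      },
    ];

    function nextTip(actionsTaken: Set<string>, tipsShown: Set<string>): Tip | null {
      return TIPS.find((tip) => !actionsTaken.has(tip.feature) && !tipsShown.has(tip.feature)) ?? null;
    }
    ```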

    Cheat whenever you can

    It’s easy to assume our robot is operating inside a pure messaging or voice platform, but increasingly this is not the case: Amazon Echo is controlled by voice, but has a companion app. WeChat and Kik have built-in browsers. HipChat allows custom cards and a sidebar iframe. Facebook and Telegram have selection menus. Slackbot inserts deep links into messages (and I suspect this technology will soon be more widely available).

    Screenshot showing how Slack uses deep links
    Slackbot uses deep links to facilitate actions.

    With all the advantages of a conversational interface, some tasks (like multiple selections, document browsing, and map search) are better performed with a pointing device and buttons to click. There’s no need to insist on a purely conversational interface if your platform gives you a more diverse toolbox. When the flow you present to your user gets narrowed down to a specific action, a simple button can work better than typing a whole line of text.

    Screenshot showing Telegram’s interface, which uses pop-up buttons
    Telegram uses pop-up buttons for discovery and for shortcuts.

    These capabilities are changing rapidly, so be prepared to adapt quickly.

    And now, we ride

    As users become more familiar with chat robots, they will form expectations about how these things should work and behave. (By the way, you may have noticed that I’m referring to my robot as a “he”. We deliberately assigned a gender to our robot to make it seem more human, easier to relate to. But making our assistant robot male also allowed our team to subvert the common stereotype of giving female names to robots in support roles.)

    The definitive book about conversational design has yet to be written. We’ll see best practices for designing conversations form and break and form again. This is our chance as designers to influence what our relationship to these machines will look like. We shape our tools and thereafter they shape us.

    In the next part of this article, we’ll dive deeper into basic GUI patterns and discuss the best way to replicate them in conversational form.

  • Validating Product Ideas 

    A note from the editors: We are pleased to present an excerpt from Tomer Sharon's Validating Product Ideas Through Lean User Research published by Rosenfeld Media. Get 20% off your copy using code ALAVPI.

    Amazingly, 198 out of the 200 enterprise product managers and startup founders interviewed for this book said they were keeping a list of product ideas they wanted to make a reality some day. While keeping a wish list of solutions is a great thing to have, even more impressive is what only two startup founders were doing. These founders were keeping a list of problems they wanted to solve. They chose to first fall in love with a problem rather than a solution.

    Focusing on learning how people solve a problem, as IDEO did for Bank of America, can lead to innovative solutions or, in this specific case, a successful service offering. IDEO designers and Bank of America employees observed people in Atlanta, Baltimore, and San Francisco. They discovered that many people in both the bank’s audience and the general public often rounded up their financial transactions for speed and convenience. They also discovered that moms were not able to save money due to a lack of resources or willpower. The result married these two observations into “Keep the Change,” a Bank of America checking account (Figure 3.1). This account “rounds up purchases made with a Bank of America Visa debit card to the nearest dollar and transfers the difference from individuals’ checking accounts into their savings accounts.” In less than a year, the offering attracted 2.5 million customers, generating more than 700,000 new checking accounts and one million new savings accounts for Bank of America.

    Figure 3.1: Bank of America explains on its website how “Keep the Change” works.

    Why Is This Question Important?

    The question is important because it’s the biggest blind spot (Figure 3.2) of the Lean Startup approach and its Build-Measure-Learn feedback loop concept. Getting feedback on a product and iterating is generally a convergent process based on what you know, what you experience in the world, what your friends tell you, what you whiteboard with your team, and what analytics tell you about your product. But sometimes the solution lies outside that loop.

    Figure 3.2: The value of uncovering observable problems sometimes comes by discovering the blind spot rather than going through the cycles of product iteration (with permission from Benjamin Gadbaw).

    The question “How do people currently solve a problem?” is critical, because deeply understanding a problem can go a long way toward solving it with a product, feature, or service. Falling in love with a problem happens through observing it happen in a relevant context, where the problem is occurring to people in your target audience.

    The modern GPS network is a great example of how identifying this blind spot resulted in a solution.1 The GPS network was originally built in the 1970s for the U.S. Navy and Air Force as a tracking system for planes, boats, and missiles. The Army, however, had always had a problem with mobile ground forces losing their way during a battle. They needed a reliable tracking mechanism for ground navigation. Obviously, the army’s need came from real, life-threatening situations where fighting units found themselves in the wrong place at the wrong time or late to arrive to the right place due to mistakes in manual navigation. The personal navigation systems and apps developed for this need are now what we use today as GPS devices on our smartphones.

    When Should You Ask the Question?

    All. The. Time. Assuming you and your team have fallen in love with a problem to solve, constantly asking (and answering) the question, “How do people currently solve a problem?” is critical for achieving Product/Market Fit. Otherwise, after it’s too late, you’ll find that your audience is already satisfied with a different way of solving the same problem and that your company, startup, or product has become redundant. To be more specific, here are some great times to ask the question (Figure 3.3):

    • When you strategize: Exploring how people solve a problem today helps you come up with a great idea tomorrow, since the best predictor of future behavior is current behavior. Even if you have a product idea, figuring out the problem it solves might lead you to improve it significantly.
    • When you execute: Keeping your eyes open even during the development of your product idea can help validate it, fine-tune it, or pivot to a better strategy, if needed. Or perhaps even invalidate it if you find the idea is no longer relevant.
    • When you assess: Putting your product aside for a moment and bringing fresh eyes to the field to observe how people behave without your product can help prioritize features on your roadmap.

    Figure 3.3

    When is a good time to ask “How do people currently solve a problem?” The big circles represent the best times, while the smaller ones indicate other times recommended for asking the question.

    Answering the Question with Observation

    One of the most reliable ways to answer the question “How do people currently solve a problem?” is through observation. While not an easy technique to apply, observing people in their natural context of using products or services can take you a long way toward deeper learning about a real problem. Observation involves gathering data in the user’s environment, so it is the science of contextualization. Observation can be referred to by many names, including:

    • Field observation
    • Field study, fieldwork, field research
    • Contextual inquiry
    • Guided tour
    • Fly-on-the-wall
    • Shadowing
    • Ethnography

    The different names sometimes indicate how much interaction happens between the participant and the moderator. Fly-on-the-wall and shadowing hint at no interaction, while guided tour and contextual inquiry might indicate there’s more of a conversation going on. The only exception to the list of names is ethnography. In classic ethnography, the researcher (that’s you) immerses herself among the group she is studying and joins its activities. For example, if family cooking is of interest, the researcher joins a family and cooks with them rather than interviewing them, while simultaneously observing what family members do. The truth is that it doesn’t really matter what you call it. As long as you are observing a person in her natural environment, you are in the observation business.

    There are five important pillars for observation:

    1. Observing: Watching people as they go about their daily lives at home, work, in between, or wherever is relevant to what the product team is interested in. Observing will help you uncover not only what happened, but also why it happened.
    2. Listening: Learning the language and jargon people use in their own environments, as well as witnessing conversations they have with others. Listening to people’s jargon has an extra benefit of identifying words they use to describe things. For example, when using online banking, many people struggle to find mortgage information because banks use the word loan to describe a mortgage. Uncovering user jargon in observation can help you identify language to be used in your product.
    3. Noticing: Paying attention to a variety of behaviors and occurrences that might have significant implications on user needs. Just standing there watching what people do can be a challenging and overwhelming experience if you don’t know what to look for. Looking for and paying attention to behaviors such as routines, annoyances, interferences, habits, etc. turns “just being there” into an effective design tool.
    4. Gathering: Collecting different things (aka, artifacts) that people use or create to complete certain tasks might signal user needs or missing features or products. For example, an artifact you might find useful if you were conducting an observation in a grocery store would be a person’s grocery list.
    5. Interpreting: Figuring out what the observed behavior means and why the person is doing it that way.

    Why Observation Works

    Observation is an effective user research technique that carries the following benefits:

    • Identifying new features or products
    • Validating/invalidating team assumptions about users
    • Identifying problems people might have
    • Understanding user goals
    • Understanding people’s workflows

    Other Questions Observation Helps Answer

    Other than the “How do people currently solve a problem?” question, observation is a great method for answering the following questions as well. If you ask yourself any one of these questions, observation can help you get an answer:

    • Is there a need for the product?
    • Why are people signing up and then not using the product?
    • What are some feature ideas our customers have?
    • How do people choose what to use among similar options?
    • How do we make using this product a habit?
    • Does the product solve a problem people care enough about?
    • Which customer needs does the product satisfy?
    • How do we define the right requirements for the product?
    • How will the product solve people’s pain points?
    • Which features are most important?
    • Should we build [specific feature]?
    • Who are the product users?
    • What are the different lifestyles of potential product users?
    • What motivates people?
    • What jargon do people use to talk about a specific topic?

    How to Answer the Question

    The following is a how-to guide that takes you step-by-step through the process of using observation to answer the question “How do people currently solve a problem?”

    Step 1: Find eight research participants.

    Finding participants for observation raises a limitation of the method that you should be aware of. Naturally, when you want to observe people, you should be right next to them. This means that you and your participants should be in the same location. However, several situations might happen based on the location of your target audience:

    • Your target audience resides in your location. No problem. Carry on.
    • Your target audience resides in your location and in other locations. Make an effort to travel to other locations for observation. If traveling is not an option, observe people in your location and apply other research techniques with people in other locations (such as interviewing, experience sampling, or a diary study).
    • Your target audience resides in other locations, some (or all) very far from where you are located. If your most important audience is far away from you, make an effort to travel for observation. If traveling is not an option, you can either be creative with remote observation (ask your participants to broadcast live from their phone as they go about their lives) or apply other research techniques (such as interviewing, experience sampling, or a diary study).

    Recruiting participants is the greatest bottleneck of user research. Start as soon as you can. Chapter 9 guides you through how to find participants for research through social media. The following are the key steps in this process (previously shown in Chapter 1):

    1. List your assumptions about participant criteria (e.g., business traveler).
    2. Transform participant criteria into measurable benchmarks (e.g., travels for business at least three times a year).
    3. Transform the benchmark into a screening question or questions (e.g., How often do you go on an airplane?). If a person chooses the “right” answer, he’s in. If not, he’s out.
    4. Craft a screening questionnaire (also called a screener) that you can send people. (Here is a sample screener.)
    5. Pilot-test the screener with a couple of people and make improvements.
    6. Identify relevant social media groups, pages, communities, and hashtags where your audience is likely to linger and post calls to take your screener.

    Observation generates huge amounts of rich data, somewhat similar to the amounts you might collect in interviewing (see Chapter 2) or diary studies (see Chapter 4). These large amounts of collected data directly affect your choice for the number of participants you observe in the study. As in other qualitative methods, keep this number low and digestible. Eight participants is a good number. More than that requires more time or hands when it comes to analyzing data and coming up with results.

    Step 2: Prepare a field guide.

    Before you go to the field to observe people, prepare a field guide that will help you focus on your goals and support your data collection. The first thing on your field guide is a short list of research questions. Research questions are questions that will be answered during the study. Don’t confuse them with questions you might ask research participants. They are not the same. Research questions are questions you or the team has. They indicate a knowledge gap you want to fill in with insights. As a ballpark estimate, you should probably have around five research questions. Research questions will help you during observations and will guide you into what parts you need to pay most attention to. For example, here is a list of research questions you might have prior to observing someone during grocery shopping:

    • How do people choose which items to buy?
    • What are the items people have most difficulty in finding?
    • What is the primary challenge people have when grocery shopping?
    • In what situations do people use their smartphone to support their grocery shopping? What is the motivation behind it?

    The primary goal of a field guide is to help you capture the necessary data during observation sessions. The level of detail in a field guide depends on your level of experience conducting observations and how much structure you need in taking notes. Here are two approaches you can pick from:

    • Less structure: Create a list of things to pay attention to and look for while keeping an open mind about new, sometimes surprising, things that will reveal themselves to you during observation. If you are a person who trusts her intuition, do just that and allow yourself to add to the list or deviate from it as observation progresses. Here is a sample list for a grocery shopping observation:
      • Problems we want to observe: Challenges in grocery shopping such as deciding what to purchase, finding items, or wasted time.
      • Problem could occur when: Participant stalls, doesn’t know where to go, asks for help, looks repeatedly at grocery list, or calls spouse.
      • Details to be recorded: Full description of observable problem, time spent on solving problem, participant’s decision tree, participant’s motivation to solve the problem, external factors affecting problem or solution (technology, other people), or chosen solution.
      • Back-up strategies if the problem/behavior is not happening: Ask for a retrospective demonstration as close to the real thing as possible, ask what was challenging, ask participant if what happened was typical; if not, probe to explore past challenges.
    • More structure: Structure your note-taking and prepare as many placeholders as possible to save time during observations and make sure you pay attention to and document everything you need. Structure the field guide so that it begins with simple, specific questions your participant feels comfortable answering. Then list broader questions or behaviors to look for, and finally, after setting the context for more targeted questions, finish with deeper probes that will help you with your innovation challenge. Figure 3.4 is a screenshot from a sample field guide for grocery shopping that you can use as a reference.

    Figure 3.4

    A screenshot from a sample, structured field guide for grocery shopping.

    Step 3: Brief observers.

    Assign roles such as note-taker, photographer, and videographer to the people on your team who join observation sessions. Conduct a brief meeting during which you give observers a short background about the participant and about what’s going to happen. Dedicate enough time to discuss expected participant actions and your reactions. An observation might present uncomfortable situations for participants, moderators, and observers.

    In a typical observation, a small team, consisting of a moderator and two to three observers, visits a participant in her natural environment such as home or work. The challenge of the team is to overcome situations that might cause participants or other stakeholders (e.g., a participant’s spouse or manager) not to cooperate or to put a complete halt to the session. The observer brief is a great opportunity to discuss potential participant reactions during observations and ways to overcome them with appropriate reactions by the team. The more prepared the team is for these situations, the higher the chances are to defuse them.

    Following are several situations that might come up during observation sessions and how you should react. The purpose here is not to intimidate you, but to make you aware of and prepared for unexpected situations that might happen.

    1. The participant is reluctant to share what’s on his computer/phone. (“Let me go to my bedroom where the computer is, find the answer to your question, and quickly get back here.”)
      • Prevention: Ask more personal questions or requests when you are getting on well with the participant. Feel the energy. Spend time building rapport and try to make these requests later in the session.
      • Explain why you are so interested. Have a story prepared for how you are trying to help people, how important your research is, and how many people could benefit.
      • Don’t take photos of the participant’s screen.
      • In your notes, indicate “return to topic” next to what happened throughout the early parts of the session and try again later if you feel it is right.
    2. The participant refuses to hold the session at the most relevant location. (“My office is really small so I don’t think we can all fit.”)
      • Ask if you can pass by his desk or study room or bedroom (wherever the relevant location is) on the way out just to get a feeling of his home or workplace (“That’s totally fine. We don’t mind sitting on the floor.” Or you could ask “Can we pass by on our way out just to get a feel?”).
      • If appropriate, ask if you can take a picture of the relevant room or area.
      • Don’t push too hard on this.
    3. The participant thinks too many people showed up. (“Wow, a whole crew!”)
      • Apologize if it wasn’t communicated clearly enough in advance.
      • Make the participant feel like a superstar: “You have been specially selected because you are so interesting and important. The team wants to model its product on you because you represent the ideal customer so much, but, of course, it will only be worthwhile if you are really honest and act completely naturally.”
      • Explain that it is important for team members to attend since they want to learn from her knowledge and experience.
      • Explain that observers will not be very active during the session.
      • If worst comes to worst, ask one or two observers to leave. The best thing, of course, is to avoid this situation altogether: have a maximum of three people show up for observation, and if more people want to observe, create an observation shift schedule.
    4. The participant refuses to be photographed or recorded. This refers to a situation when the participant initially agreed, but when you actually take the camera out or even after you take a few pictures, he is clearly uncomfortable with it or directly asks you to stop.
      • Prevention: Mute the camera’s “click” sound and use a smartphone, which is less intimidating than a huge SLR camera.
      • Prevention: Let the participant know you will ask permission every time you want to take a photo, and he can say no at any time. You will respect his privacy and understand that certain situations should not be recorded.
      • Explain that photos are extremely important for this study since they show important decision-makers in your company that there are real people behind your recommendations.
      • Offer to show him all the photos you took when the session is over and allow him to delete ones he is not comfortable with.
    5. The participant does not answer the questions.
      • Prevention: Ask the question and stay silent. Your participant will feel an urge to fill in the vacuum with an answer.
      • Try to find out if there is anything that makes her uncomfortable. If you are in a team, you could take the participant aside for a one-to-one meeting. This could be to find out if she feels uncomfortable or to get more detailed data on your own.
      • Tell a story. It can either be about you (something relevant) or about another participant (anonymous, of course) who had similar opinions or experiences. It can show that there is nothing embarrassing about the situation, because you come across it all the time and you can relate to it personally. It also helps to make you more human, likeable, and friendly.
      • If the participant insists, move on.
    6. The participant invites others to join.
      • Find out who the others are and why they want to join. Having them there might even be an advantage: you gain additional perspectives and get to see how the person behaves in front of friends.
      • Explain this is an individual session and that you cannot have others join it.
      • Explain it is important for you to get his personal perspective.
      • Give up if he insists. Sometimes, it’s even the right thing to do.
    7. The participant says she doesn’t use X.
      • Explain that the products she is using are not the focus of the session.
      • Say, “Interesting. What do you use to solve for ____?”
    8. The participant’s manager/spouse is not aware of what’s happening.
      • Repeat the introduction sections dealing with goals, procedure, confidentiality, and documentation to the manager or spouse.
      • Apologize that the other person wasn’t informed of this in advance.
      • Ask for permission to continue.
      • If you are kicked out, leave quietly. Make no fuss about it.
    9. The participant is nervous, stressed, or physically uncomfortable.
      • Telling a story can help break the ice. Sometimes pointing out that he seems nervous can make it more awkward, but changing the energy with a story can be a stealthy way to deal with it. Talking to the participant like a friend will help make it seem less like research. The story could be about another participant and how she helped to develop a great solution that solved real-life problems others had.
      • Offer to take a short break.
      • Answer questions the participant has about the session.
      • Sneak in another mention of your confidentiality commitment.
      • If you feel his distress is too great, stop the session, politely apologize, and leave.
    10. The participant wants to finish earlier.
      • Try to figure out why and answer her questions.
      • If she mistakenly scheduled a conflicting event, ask how much time you have, promise to finish on time, and keep your promise.
      • Reschedule for another day. Split the session, but still get all of your data.
    11. You want to finish a few minutes later.
      • As soon as you realize you need more time, ask the participant if that’s okay. Don’t do it when time is up.
      • Tell the participant how much more time you will need. Keep your promise. Don’t ask for more.
      • If the participant doesn’t agree, thank her and finish on time even if you didn’t get to ask everything you wanted to.

    Step 4: Practice!

    It’s a short yet critical step. Gather the observer team, recruit a fake participant (a colleague perhaps), and practice observation for 15–20 minutes. This will help with setting expectations, getting used to paying attention to important things, note-taking, taking photos, recording video, invading personal space, not bumping into one another, and other small logistics; rehearsing them will keep you from wasting precious time or looking stupid and unprofessional.

    Step 5: Gather equipment.

    Whatever equipment you decide to take with you, keep the following in mind:

    • Small cameras: Large video cameras with tripods intimidate participants and make them change their behavior. It’s enough that three people are there to look at them and document their every move. Don’t add to that feeling. Consider using a GoPro (or a similarly small) camera. The size of the camera creates a better feeling than showing up with a four-foot tripod and a high-quality, full-size video camera. There’s no rule that you have to video record observation sessions. Consider video recording if there are team members who cannot attend sessions yet are interested in watching them, or if you feel you will need to come back to your team with stronger evidence.
    • Smartphones as cameras: Instead of large or dedicated video cameras, consider using your smartphone as a video and a still photo-recording device. Most people are more or less used to seeing a phone held in front of their face.
    • Quiet cameras: Silence your camera. Clicks and beeps intimidate participants and remind them they are being recorded, which will cause them to deviate from their natural behavior.
    • A shot list: This is a list of photos you hope to take—the person’s portrait, an artifact, a contextual shot of their space, etc.
    • Extra batteries: Batteries drain. Take a lot of spare ones for everything that requires charging.
    • Chargers: Take charging equipment in case you will have an opportunity to charge while in the field. Cables, plugs, dongles, and power splitters can become extremely handy. When you get to an observation location, immediately survey the area to identify power sockets. If there are any, ask for permission to use them. Do that after you establish rapport (see next step). If you don’t expect to have a power outlet in your observation area, consider taking with you a charging device or even a fully charged laptop from which you’ll charge your other devices.
    • Memory cards: Make sure that you have spare memory cards for phones, cameras, and videos. Remember to check them and take previous data off so that you have maximum space. Try to calculate how much you will need—for example, if one hour of video recording takes up 4GB, then take enough memory cards or change the recording quality. Don’t run out halfway. If you are recording in HD, then consider the implications of how you will store and transfer files. How much space do you have on your laptop if you’re in the field and need to transfer every day?
    • NDA: This is a non-disclosure agreement that describes the confidential aspects of your study as well as why and how you use the recordings and data you collect. There might not be a need to ask participants to sign an NDA if you are not showing them anything confidential. However, sometimes, you might want them to sign one to protect the method you are using and the questions you are asking, which might indicate a confidential aspect of your business.
    • Incentive: If you promised the participant money or a gift, make sure you take these with you.
    • Audio recorder and lavalier microphone: While a big bulky camera is intimidating, a recorder and lav are quickly forgotten and ignored by participants.
    • Save juice: Make sure that you limit your device usage during observation days to observation needs. No Twitter, Facebook, or texting with friends. Watch those cat videos when you’re back at home.
    • Office equipment: Take notebooks, pens, and pencils in case all hell breaks loose. I know it’s hard, but I guarantee you will quickly learn what to do with them in case all of your device batteries are drained.
    • Post-it notes: Taking notes during observation sessions on Post-it notes can save you precious analysis time later on. Get a pack of colored Post-it notes, one color per participant. Make sure that you have enough of them so that every observer can use a pack in each observation session. So, for example, if you plan to observe 6 people and you have a small team of 3 observers, you will need a minimum of 3 blocks of red Post-it notes, 3 blue blocks, 3 yellow, 3 green, 3 purple, and 3 orange blocks.
    • Mints and water.

    Step 6: Establish rapport.2

    When people agree to participate in “research,” they imagine meeting someone who spends all his time in a lab, wearing a white robe with a name tag, holding a writing pad, experimenting with rats all day long, and wearing rectangular glasses on half his nose.

    Then you show up.

    All the things you say in the first five minutes—every word, each of your voice intonations, the things you do, your gestures and body language—have a tremendous, sometimes underestimated, effect on your participant’s behavior throughout the entire session.

    The following things to say and do will help you create rapport with your participant. Your participants will perceive you, and this whole weird research situation, in a more positive way. Not completely positive, but more positive. Most of these things are also true for first dates. There’s a good reason for it.

    1. Smile. It’s our most powerful gesture as humans. Research shows that smiling reduces your stress levels, lifts your mood, and lifts the mood of others around you.3 It will also make you live a longer, happier life, but that’s for a different book.
    2. Look ’em in the eye. When you look someone in the eye, it shows you are interested. When you do it all the time, it’s creepy. Try to look your participants in the eye during the first 10 minutes of the session for 30% to 60% of the time—more when you are listening and less when you are talking.
    3. Avoid verbal vs. nonverbal contradictions. When your participants identify such contradictions, they will be five times more likely to believe the nonverbal signal than the verbal one.4 For example, if you say to participants you will not use the study’s video recording publicly while you wipe sweat from your forehead three times, they are going to think you are lying. When you are sending inconsistent messages, you are confusing participants and making them believe you are insincere.
    4. Listen. From the moment you first meet your research participants, listen carefully to every word they say. Show them you care about what they have to say.
    5. Say thank you. Keep in mind they volunteered to help you. Agreeing to have someone follow you, look at you, take notes about everything you say and do, sometimes in your own home is something you should appreciate and be grateful for. Don’t forget to thank your participants from the very first moment. They should know you really mean it.
    6. Dress to match: If you normally wear a suit and you’re meeting a customer in her home who may be wearing sweats and a t-shirt, you might come off as intimidating. Likewise, if you are a hoodie and sneakers type meeting someone in a professional setting, then dress to match that setting. You don’t want to be disrespectful. Always ask the participant if he would like you to take off your shoes in his home. You are his guest.
    7. Check your appearance: Make sure that you don’t have something stuck in your teeth. That piece of gunk will make you look unprofessional, which will not help with establishing rapport, as stated in Chapter 2.

    Step 7: Obtain consent.

    As stated in Chapter 2 (and it bears repeating), informed consent means that your research participants are aware of their rights during a research session. It is not about whether they sign a form or not. It’s about having people truly understand and agree to the following:

    1. They agree to participate in the research session.
    2. They understand what is going to happen during the research session.
    3. They understand how data collected during the research session will be used.
    4. They understand they are being recorded.
    5. They agree to being recorded.
    6. They understand how the recording will be used.
    7. They understand that their identity and privacy will be protected.
    8. They understand that participation is voluntary.
    9. They understand they have the right to stop the session at any point.
    10. They agree to raise any concerns they might have immediately.
    11. They have an opportunity to ask questions before the session starts.

    Why You Must Obtain Consent

    As I said in the last chapter, I can give you my spiel about how applying the Scientific Method5 is important and that obtaining consent from research participants is a key part of it. But I’m not going to do that. Instead, I’ll just say that obtaining consent is the right, ethical thing to do even if you are “just talking with people.” Half-assing your research ethics means half-assing your learning process, which means half-assing your product development. Although informed consent sounds like a term taken from a court of law, it is not. It is the fair thing to do and the best way to treat people who happen to be your research participants.


    Step 8: Collect data and pay attention.

    The hardest thing to do during field observation is to pay attention to everything that is going on in front of your eyes. You might not realize it, but observing how humans behave generates tons of rich data. It is sometimes challenging to notice when something important happens. To know what to look for, stay focused on the reason you are running this research in the first place and the goals you have set for it. Focus on things related to your goals. When you are observing a study participant, look for the following occurrences:

    • Routines: Routines are things that seem to be regular actions the participant is following. For example, each time a new work-related task comes up, the participant logs it on a spreadsheet that he has created. This routine can later turn into a feature in your product.
    • Interactions: Follow her interactions when a study participant uses a certain product, tool, or service, or when she converses with another person. For example, when a study participant doesn’t understand a certain word, she might use an online dictionary to figure it out.
    • Interruptions: An interruption might occur when a study participant stops a task or breaks its continuity either because he has decided to do so or because another person caused it. For example, when a phone call comes in and diverts the study participant from what he is doing. Note that it is intuitive for the researcher to ignore these interruptions, yet in many cases they can teach you a lot. Life is not always “clean” of interruptions so we must understand them.
    • Shortcuts/workarounds: When a study participant chooses a shorter alternative, it is sometimes an indication of a small problem to pay attention to. For example, when instead of writing something down, a participant takes a pen and marks an X on the back of her hand. What that means for your product or people’s needs is not clear when you observe it. Yet this behavior might relate to a different one you observe that might make sense later on.
    • Contexts: Context occurs when a certain action or behavior is demonstrated in a different manner because of the environment in which it happens. For example, when a participant does not take a note on his smartphone because of direct sunlight that makes it hard for him to see anything he types.
    • Habits: These are behaviors participants demonstrate that are almost automatic. For example, scribbling something with a pen to make sure it works, even though it is brand new.
    • Rituals: A ritual is an established sequence of behaviors in a particular context that cannot be explained purely in terms of being functionally necessary. It’s almost optional or voluntary; for example, buying a drink if it’s someone’s birthday and singing happy birthday to them.
    • Jargon: Paying attention to the language and jargon people use in their own environments, as well as witnessing conversations they have with others, is extremely helpful in empathizing with them and uncovering their needs. Using the unique language people use when they talk about different things will prevent you from using language your audience doesn’t understand in your product or service. For example, if a person you observe keeps referring to a mortgage, that’s a good signal for you to use this label in your online banking app rather than calling it a loan. You might learn that people interpret the term loan very differently than how you or your team does. It’s also a good cue to mimic their language in the observation session in order to appear less different and to build rapport.
    • Annoyances: Annoyances are obstacles that keep people from completing their tasks or achieving their goals. An annoyance would not necessarily prevent them from reaching their goals, but it would make them angry, frustrated, overwhelmed, or disappointed along the way. For example, a person might get annoyed while filling out an online form while dealing with noise from a nearby room.
    • Delights: The things people enjoy can teach you a lot about what they need. Many people perceive research as an activity that uncovers problems and frustrations. That’s partially true. Uncovering things that delight and work well for users can go a long way toward developing great products. For example, you might notice people who are satisfied by in-field form validation instead of validation done after submitting the form.
    • Transitions: When people move from place to place, it’s a great time for them to share things that might become invaluable—especially when they think the research session is over or on a temporary pause. For example, if you observe someone taking notes in a certain classroom, pay extra attention to what happens when the class is over and until you part ways with the participant.
    • Artifacts: Artifacts are tools, services, products, any other thing that people use to complete tasks, or seemingly useless yet meaningful objects (such as rubber duckies for developers). Your job is to pay attention to the usage of artifacts, and if possible, collect or document them. For example, if a person is taking notes while using a LiveScribe pen and notebook, that’s an important artifact to take note of, no pun intended.

    Here are some additional pointers to note in order to get the most out of observation:

    • Approach each observation session with an open mind. You’ll find that in many cases, you invalidate your initial assumptions about people and their problems and reach insights you never realized.
    • Have a conversation with the person, not an interview. Don’t just go through the motions of what you planned. If you feel there’s something to talk about that’s worth the time, make the time for it. Don’t feel you must stick to the script.
    • Let your participants be. Don’t interrupt or talk over them. If you do, they’ll avoid sharing additional things with you, and you might be missing key insights.
    • Pitch your level of knowledge to match the participants. Try not to make them feel like you’re more knowledgeable than they are.

    That’s a lot to track and digest as you observe. You need a lot of practice to get it right. Don’t worry, though. Even if you miss a few things, you’ll still learn many valuable lessons, and you’ll get better with time.

    Step 9: Debrief.

    A common mistake is to assume that every observer interpreted the same things you did or placed similar value on certain observations. Debriefs and syntheses create a shared understanding so that the team can move forward in a unified direction. Debriefs will help you capture your insights while they are still fresh in your mind and will decrease the load of analysis and synthesis that awaits you after all of the observations are completed. There are two types of debriefs: the quick debrief and the daily debrief.

    Quick Debrief

    Shortly after you are done with each field observation session, conduct a quick debrief with observers. Do it in the lobby of the building, in the train, cab, or bus, in a park, on a bench, wherever. The most important thing is to conduct the debriefing shortly after the session ends so that things are still fresh in your mind. This will also prevent you from getting confused if you run several sessions in one day. In addition, take five quiet minutes to yourself and write a short paragraph that summarizes the session.

    During the quick debrief, ask yourself and the observers the following four questions (inspired by IDEO’s human centered design kit6):

    1. What did the participant say or do that surprised you? Were there any memorable quotes?
    2. What mattered most to the participant?
    3. What stood out during this session? What are the big lessons?
    4. What should we do differently in future sessions?

    If you’d like to try this debriefing technique, run a quick debrief for the second field observation video you watched in the sidebar after Step 8.

    Daily Debrief

    When you conduct several observations per day for several days (e.g., two half-day observations every day for three consecutive days), gather the team at the end of each day in front of a large wall and run the following exercise (aka, affinity diagramming):

    1. Put all of your Post-it observations on the wall.
    2. Organize them into temporary, logical groups. The groups can change from daily brief to daily brief.
    3. If you used unique Post-it colors per participant, you’ll notice very quickly which groups of observations were popular among different participants and which ones were only observed with one or two participants (see Figure 3.5).
    4. Take photos of the wall (these Post-it notes tend to fly off).
    5. Log groups and items into a spreadsheet.
    6. Continue working on affinity diagramming until data collection is completed.

    Figure 3.5

    Affinity diagramming wall during a daily brief.

    What to Do with Photos

    Photos you took during observation sessions provide inspiration, visual context, and sometimes, supporting evidence for your findings. The idea is to fill your design space with inspiration from the field. Here are some ideas:

    • Organize photos based on groups of observations you identified during debriefs.
    • Curate topical photo galleries.
    • Print a photo of each participant to remember that person.
    • Cover a wall or board in photos from the field and tag them with observations.

    Step 10: Analyze and synthesize.

    There is no one way to make sense out of observation data you collect. That said, affinity diagramming combined with storytelling is a straightforward approach that seems to work for teams. Here are the steps to complete an affinity diagramming and storytelling exercise:

    1. Complete the affinity diagramming exercise you started in the daily debriefs. Sort all of the observations into groups and give each group a name. Alternatively, you can do that by implementing the KJ Technique.7
    2. As a team, select the most important and meaningful groups.
    3. Per group of observations, write a short story that describes a future scenario of a person using a product or feature that doesn’t exist yet. The story can be very short—about 150–200 words. Base the story on a problem or need you identified during observation.
    4. Share the stories with the team, gather feedback, and get agreement and shared understanding.

    Other Methods to Answer the Question

    While observation is a great, immersive way of answering the “How do people currently solve a problem?” question, the following are two additional methods for answering it. Ideally, if time is on your side, a combination of two to three methods is the best way to uncover insights that will help you answer this question.

    • Interviewing is a research activity in which you gather information through direct dialogue. It is a great way to uncover and understand people’s feelings, desires, struggles, delights, attitudes, and opinions. Interviewing people whom you know to be your target audience (and those you think are not) is a great way to get to know your users, segment them, design for them, solve their problems, and provide value. An interview can be held in person or remotely over a phone or some kind of a video conference. Chapter 2 guides you through conducting interviews for uncovering needs.
    • In a diary study, participants document their activities, thoughts, and opinions and share them with you over a period of time. A diary might be a record of their experience using a product or a means to gain understanding of ordinary life situations in which products might be usefully applied. Diary studies are best for learning about more complex processes. Chapter 4 walks you through conducting a useful diary study.

    Note On Resources

    Access the online resource page for observation on the book’s companion website at leanresearch.co. You’ll find templates, checklists, videos, slide decks, articles, and book recommendations.

    Observation Checklist

    • Find eight research participants.
    • Prepare a field guide.
    • Brief observers.
    • Practice!
    • Gather equipment.
    • Establish rapport.
    • Obtain consent.
    • Collect data and pay attention.
    • Debrief.
    • Analyze and synthesize.

    Footnotes

    • 1. Read Famous products invented for the military.
    • 2. Similar to Step 7 in Chapter 2.
    • 3. Seaward, B. L. Managing Stress: Principles and Strategies for Health and Well-Being. Sudbury, Mass.: Jones and Bartlett, 2009.
    • 4. Argyle, M., Alkema, F., and Gilmour, R. (1971). The communication of friendly and hostile attitudes by verbal and non-verbal signals. Eur. J. Soc. Psychol., 1.
    • 5. A method of inquiry based on measurable evidence subject to specific principles of reasoning (Isaac Newton, 1687).
    • 6. IDEO's human centered design kit.
    • 7. See Chapter 2, Steps 6 and 10.
  • Finessing `feColorMatrix` 

    Have you seen Spotify’s end-of-year campaign? They’ve created a compelling visual aesthetic through image-color manipulation.

    Screenshot of Spotify’s end-of-year campaign

    Image manipulation is a powerful mechanism for making a project stand out from the crowd, or just adding a little sparkle—and web filters offer a dynamic and cascadable way of doing it in the browser.

    CSS vs. SVG

    Earlier this year, I launched CSSgram, a pure CSS library that uses filters and blend modes to recreate Instagram filters.

    Image grid from Una Kravets’ CSSGram showing a variety of filters and blend modes that recreate Instagram filters

    Now, this could be done with tinkering and blend modes—but one key feature CSS filters lack is the ability to do per-channel manipulation. This is a huge downside. While CSS filters are convenient, they are merely shortcuts derived from SVG and therefore provide no control over the individual RGBA channels. SVG (particularly the feColorMatrix filter primitive) gives us much more power and lets us take CSS filters to the next level, granting significantly more control over image manipulation and special effects.
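    To see that “shortcut” relationship in practice, compare a CSS filter function with the matrix it roughly expands to. This is only an illustrative sketch: the class name .thumb is a placeholder, and the numbers are the approximate luminance weights the Filter Effects spec uses for grayscale():

    .thumb {
      filter: grayscale(100%);
    }

    At full strength, that shorthand behaves roughly like this SVG filter:

    <filter id="grayscale-matrix">
      <feColorMatrix
        type="matrix"
        values=".2126  .7152  .0722   0   0
                .2126  .7152  .0722   0   0
                .2126  .7152  .0722   0   0
                 0      0      0      1   0 "/>
    </filter>

    The shorthand exposes a single amount to tweak; the matrix form exposes all twenty values, which is what makes per-channel work possible.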

    SVG filters

    In the SVG world, filter effects are prefixed with fe-. (Get it? For “filter effect.”) They can produce a wide variety of effects, ranging from blurs to generated 3-D textures. The term fe- is a bit loose, though; see the end of this article for a summary of each of the SVG filter effects’ methods.

    SVG filters are currently supported in the following browsers:

    Screenshot from caniuse.com.

    So yeah, you should be good to go for the most part, unless you need to support IE9 or older. SVG filter support is relatively stable, and is more widespread than CSS filters and blend modes. There are also few major weird bugs, unlike with CSS blend modes (where only Chrome 46 has issues rendering the multiply, difference, and exclusion blend modes).

    Note: Some of the more complex filters, such as feConvolveMatrix, do have known bugs in certain browsers, though feColorMatrix, which this article focuses on, does not. Also, keep in mind that performance will inevitably take a tiny hit when it comes to applying any action in-browser (as opposed to rendering a pre-edited image on the page).

    Using filters

    The basic layout of an SVG filter goes like this:

    
    <svg>
      <filter id="filterName">
        <!-- the filter definition goes here; it can
             include one or more fe- filter primitives -->
      </filter>
    </svg>
    
    

    Within an SVG, you can declare a filter. Most of the time, you’ll want to declare filters within an SVG’s defs element and then apply them in CSS like so:

    
    .filter-me {
      filter: url('#filterName');
    }
    
    

    The filter URL is relative, so both filter: url('../img/filter.svg#filterName') and filter: url('http://una.im/filters.svg#filterName') are valid.
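    To make the wiring concrete, here’s a minimal sketch of the whole setup on one page. The filter id filterName, the class filter-me, and photo.jpg are placeholders, and the matrix shown is the identity matrix, which leaves the image untouched:

    <!-- an inline, zero-size SVG that exists only to hold the filter definition -->
    <svg width="0" height="0" style="position: absolute;">
      <defs>
        <filter id="filterName">
          <!-- identity matrix: passes every channel through unchanged -->
          <feColorMatrix
            type="matrix"
            values="1 0 0 0 0
                    0 1 0 0 0
                    0 0 1 0 0
                    0 0 0 1 0 "/>
        </filter>
      </defs>
    </svg>

    <!-- the image the .filter-me rule above points at -->
    <img class="filter-me" src="photo.jpg" alt="Photo run through the filter">

    Because the SVG has zero size, it doesn’t affect layout; it simply makes #filterName available for filter: url('#filterName') to reference.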

    feColorMatrix

    When it comes to color manipulation, feColorMatrix is your best option. feColorMatrix is a filter type that uses a matrix to affect color values on a per-channel (RGBA) basis. Think of it like editing the channels in Photoshop.

    This is what the feColorMatrix looks like (with each RGBA value as 1 by default in the original image):

    
    <filter id="linear">
        <feColorMatrix
          type="matrix"
          values="R 0 0 0 0
                  0 G 0 0 0
                  0 0 B 0 0
                  0 0 0 A 0 "/>
      </filter>
    </feColorMatrix>
    
    

    Each row of the matrix calculates one channel of the final RGBA value as a weighted mix of the input channels, and the last number in each row is a constant that gets added to the result. Reading across a row gives you the recipe for that output channel:

    
    /* R G B A 1 */
    1 0 0 0 0 // R = 1*R + 0*G + 0*B + 0*A + 0
    0 1 0 0 0 // G = 0*R + 1*G + 0*B + 0*A + 0
    0 0 1 0 0 // B = 0*R + 0*G + 1*B + 0*A + 0
    0 0 0 1 0 // A = 0*R + 0*G + 0*B + 1*A + 0
    
    

    Here’s a better visualization:

    Hand-drawn sketch showing a schematic visualization of the feColorMatrix
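    To make the arithmetic concrete, here’s a hand-worked sketch using made-up values for a single pixel. The input values are hypothetical; the matrices are the identity matrix from above and one row tweaked to show the weights and the constant at work:

    /* hypothetical input pixel: R=0.8, G=0.4, B=0.2, A=1 */

    /* identity matrix: every output channel is a copy of its input */
    R' = 1*0.8 + 0*0.4 + 0*0.2 + 0*1 + 0 = 0.8
    G' = 0*0.8 + 1*0.4 + 0*0.2 + 0*1 + 0 = 0.4
    B' = 0*0.8 + 0*0.4 + 1*0.2 + 0*1 + 0 = 0.2
    A' = 0*0.8 + 0*0.4 + 0*0.2 + 1*1 + 0 = 1

    /* change the green row to "0 .5 0 0 .1":
       half-strength green plus a constant boost of .1 */
    G' = 0*0.8 + .5*0.4 + 0*0.2 + 0*1 + .1 = 0.3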

    RGB values

    Colorizing

    You can colorize images by omitting and mixing color channels like so:

    
    <!-- lacking the B & G channels (only R at 1) -->
    <filter id="red">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   0   0   0   0
                0   0   0   0   0
                0   0   0   1   0 "/>
    </filter>
    
    <!-- lacking the R & G channels (only B at 1) -->
    <filter id="blue">
     <feColorMatrix
        type="matrix"
        values="0   0   0   0   0
                0   0   0   0   0
                0   0   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    <!-- lacking the R & B channels (only G at 1) -->
    <filter id="green">
      <feColorMatrix
        type="matrix"
        values="0   0   0   0   0
                0   1   0   0   0
                0   0   0   0   0
                0   0   0   1   0 "/>
    </filter>
    
    

    Here’s what adding the “green” filter to an image looks like:

    Photo showing what the addition of the “green” filter would look like

    Channel mixing

    You can also mix RGB channels to get solid colorizing results:

    
    <!-- lacking the B channel (mix of R & G)
    Red + Green = Yellow
    With no blue, the image takes on a yellow cast
    -->
    <filter id="yellow">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   1   0   0   0
                0   0   0   0   0
                0   0   0   1   0 "/>
    </filter>
    
    <!-- lacking the G channels (mix of R & B)
    Red + Blue = Magenta
    -->
    <filter id="magenta">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   0   0   0   0
                0   0   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    <!-- lacking the R channel (mix of G & B)
    Green + Blue = Cyan
    -->
    <filter id="cyan">
      <feColorMatrix
        type="matrix"
        values="0   0   0   0   0
                0   1   0   0   0
                0   0   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    

    In each of the previous examples, we kept two of the three RGB channels, so the results read like the secondary colors: removing the red channel means that green and blue remain, and when green and blue mix, they create cyan. Red and blue make magenta. We still retain some of the red and blue values where they are most prominent, but in areas that lack the two (light areas of white, where all colors are present in the RGB schema, or areas of green), the values of the remaining two channels take over.

    Justin McDowell has written an excellent article that explains HSL (hue, saturation, lightness) color theory. With SVG, the lightness value is the luminosity, which we also need to keep in mind. Here, each luminosity level is retained in each channel, so for magenta, we get an image that looks like this:

    Photo showing how a magenta effect is produced when each luminosity level is retained in each channel

    Why is there so much magenta in the clouds and lightest values? Consider the RGB chart:

    RGB chart

    When one value is missing, the other two take its place. So now, without the green channel, there is no white, cyan, or yellow. These colors don’t actually disappear, however, because their luminosity (or alpha) values have not yet been touched. Let’s see what happens when we manipulate those alpha channels next.

    Alpha values

    We can play with the shadow and highlight tones via the alpha channels (fourth column). The fourth row affects overall alpha channels, while the fourth column affects luminosity on a per-channel basis.

    
    <!-- Acts like an opacity filter at .5 -->
    <filter id="alpha">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   1   0   0   0
                0   0   1   0   0
                0   0   0   .5  0 "/>
    </filter>
    
    <!-- increases green opacity to be
         on the same level as overall opacity -->
    <filter id="hard-green">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   1   0   1   0
                0   0   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    <filter id="hard-yellow">
      <feColorMatrix
        type="matrix"
        values="1   0   0   1   0
                0   1   0   1   0
                0   0   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    

    In the following example, we’re reusing the matrix from the magenta example and adding a 100% alpha channel on the blue level. We retain the red values, yet override any red in the shadows so the shadow colors all become blue, while the lightest values that have red in them become a mix of blue and red (magenta).

    
    <filter id="blue-shadow-magenta-highlight">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   0   0   0   0
                0   0   1   1   0
                0   0   0   1   0 "/>
    </filter>
    
    
    Image showing what happens when we reuse the matrix from the magenta example and add a 100% alpha channel on the blue level

    If this last value were less than 0 (down to -1), the opposite would happen: the shadows would turn red instead of blue. At -1, these two filters create identical effects (a quick arithmetic check follows the example):

    
    <filter id="red-overlay">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   0   0   0   0
                0   0   1  -1   0
                0   0   0   1   0 "/>
    </filter>
    
    <filter id="identical-red-overlay">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   0   0   0   0
                0   0   0   0   0
                0   0   0   1   0 "/>
    </filter>
    
    
    Image showing a red overlay, making the shadows red instead of blue
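    Why identical? A quick arithmetic sketch, assuming a fully opaque source image (A = 1 for every pixel):

    /* blue row of #red-overlay: 0 0 1 -1 0 */
    B' = 0*R + 0*G + 1*B + (-1)*A + 0 = B - 1   /* B is at most 1, so this clamps to 0 */

    /* blue row of #identical-red-overlay: 0 0 0 0 0 */
    B' = 0                                      /* same result: no blue at all */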

    Making this value .5 instead of -1, however, allows us to see the mixture of color in the shadow:

    
    <filter id="blue-magenta-2">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   0   0   0   0
                0   0   1  .5   0
                0   0   0   1   0 "/>
    </filter>
    
    
    Image showing a mixture of colors in the shadows

    Blowing out channels

    We can affect the overall alpha of individual channels via the fourth row. Since our example has a blue sky, we can get rid of the sky and the blue values by converting blue values to white, like this:

    
    <filter id="elim-blue">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   1   0   0   0
                0   0   1   0   0
                0   0   -2   1   0 "/>
    </filter>
    
    
    Image showing an example of blowing out the blue channel: the blue sky converted to white

    Here are a few more examples of channel mixing:

    
    <!-- No G channel, Red is at 100% on the G Channel, so the G channel looks Red (luminosity of G channel lost) -->
    <filter id="no-g-red">
      <feColorMatrix
        type="matrix"
        values="1   1   0   0   0
                0   0   0   0   0
                0   0   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    <!-- No G channel, Red and Green is at 100% on the G Channel, so the G Channel looks Magenta (luminosity of G channel lost) -->
    <filter id="no-g-magenta">
      <feColorMatrix
        type="matrix"
        values="1   1   0   0   0
                0   0   0   0   0
                0   1   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    <!-- G channel being shared by red and blue values. This is a colorized magenta effect (luminosity maintained) -->
    <filter id="yes-g-colorized-magenta">
      <feColorMatrix
        type="matrix"
        values="1   1   0   0   0
                0   1   0   0   0
                0   1   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    

    Lighten and darken

    You can create a darken effect by setting the RGB values at each channel to a value less than 1 (which is the full natural strength). To lighten, increase the values to greater than 1. You can think of this as expanding or decreasing the RGB color circle shown earlier. The wider the radius of the circle, the lighter the tones created and the more white is “blown out”. The opposite happens when the radius is decreased.

    Diagram showing how you can create a darken effect by setting the RGB values at each channel to a value less than 1; to lighten, increase the values to greater than 1

    Here’s what the matrix looks like:

    
    <filter id="darken">
      <feColorMatrix
        type="matrix"
        values=".5   0   0   0   0
                 0  .5   0   0   0
                 0   0  .5   0   0
                 0   0   0   1   0 "/>
    </filter>
    
    
    Image with a darken filter applied
    
    <filter id="lighten">
      <feColorMatrix
        type="matrix"
        values="1.5   0   0   0   0
                0   1.5   0   0   0
                0   0   1.5   0   0
                0   0   0   1   0 "/>
    </filter>
    
    
    Image with a lighten filter applied

    Grayscale

    You can create a grayscale effect by feeding a single input channel into all three output channels, so that only one column of the matrix is active. Different grayscale effects result depending on which channel you choose, since each input channel carries different tonal information. Consider these examples:

    
    <filter id="gray-on-light">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                1   0   0   0   0
                1   0   0   0   0
                0   0   0   1   0 "/>
    </filter>
    
    
    Image showing a 'gray on light' effect
    
    <filter id="gray-on-mid">
      <feColorMatrix
        type="matrix"
        values="0   1   0   0   0
                0   1   0   0   0
                0   1   0   0   0
                0   0   0   1   0 "/>
    </filter>
    
    
    Image showing a 'gray on mid' effect
    
    <filter id="gray-on-dark">
      <feColorMatrix
        type="matrix"
        values="0   0   1   0   0
                0   0   1   0   0
                0   0   1   0   0
                0   0   0   1   0 "/>
    </filter>
    
    
    Image showing a 'gray on dark' effect
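
    Each of the grayscales above pulls its tones from a single input channel. Not covered in the examples above, but worth knowing as a common alternative: a perceptually balanced grayscale mixes all three inputs using standard luminance weights (roughly .2126 red, .7152 green, .0722 blue), which is essentially what feColorMatrix’s built-in saturate type produces at a value of 0. A minimal sketch (the ids are arbitrary):

    <!-- Luminance-weighted grayscale: every output channel gets the same
         weighted mix of R, G, and B -->
    <filter id="gray-luminance">
      <feColorMatrix
        type="matrix"
        values=".2126 .7152 .0722   0   0
                .2126 .7152 .0722   0   0
                .2126 .7152 .0722   0   0
                 0     0     0      1   0 "/>
    </filter>

    <!-- Shorthand: fully desaturate via the saturate type -->
    <filter id="gray-saturate">
      <feColorMatrix type="saturate" values="0"/>
    </filter>

    Both read as a more even gray than any single channel alone, because green contributes most of what we perceive as brightness.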

    Pulling it all together

    The real power of feColorMatrix lies in its ability to mix channels and combine many of these concepts into new image effects. Can you read what’s going on in this filter?

    
    <filter id="peachy">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0  .5   0   0   0
                0   0   0  .5   0
                0   0   0   1   0 "/>
    </filter>
    
    

    We’re using the red channel at its normal strength, applying green at half strength, and drawing blue from the alpha channel (at half strength) rather than from its original color location. The effect gives us dark blue in the shadows, and a mix of red and half-green for the highlights and midtones. If we recall that red + green = yellow, red + (green/2) comes out more of a coral color:

    Image showing what happens when we use the red channel at its normal strength, apply green at half strength, and draw blue from the alpha channel rather than its original color location

    Here’s another example:

    
    <filter id="lime">
      <feColorMatrix
        type="matrix"
        values="1   0   0   0   0
                0   2   0   0   0
                0   0   0  .5   0
                0   0   0   1   0 "/>
    </filter>
    
    

    In that segment, we’re using the normal pixel hue of red, a blown-out green, and blue devoid of its original hue pixels, but applied in the shadows. Again, we see that dark blue in the shadows, and since red + green = yellow, red + (green*2) would be more of a yellow-green in the highlights:

    Image showing what happens when we use the normal pixel hue of red, a blown-out green, and blue devoid of its original hue pixels, but applied in the shadows. Again, we see that dark blue in the shadows, and since red + green = yellow, red + (green*2) would be more of a yellow-green in the highlights

    So much can be explored by playing with these values. An excellent example of such exploration is Rachel Nabors’ Dev Tools Challenger, where she filters out the longer wavelengths (i.e., the red and orange channels) from the fish in the sea, explaining why “Orange Roughy” actually appears black in the water. (Note: requires Firefox.)

    How cool! Science! And color filters! Now that you have a basic grasp of the situation, you, too, have the tools you need to create your own effects.
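
    If you want to experiment, a filter only needs to be defined once in an inline SVG in the page, and it can then be referenced from SVG or CSS. A quick sketch reusing the peachy filter from above (the image path and class name are placeholders, and the SVG defining the filter must live in the same document):

    <!-- Inside the inline <svg> that defines the filter, reference it by id -->
    <image href="photo.jpg" width="400" height="300" filter="url(#peachy)"/>

    /* Or point an HTML element at the same filter from CSS */
    img.peachy {
      filter: url(#peachy);
    }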

    For some of those really rad Spotify duotone effects, I recommend you check out an article by Amelia Bellamy-Royds, who goes into even more detail about feColorMatrix. Sara Soueidan also wrote an excellent post on image effects where she recreates CSS blend modes with SVG.
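
    Those articles go into richer techniques, but as a rough sketch of the duotone idea using only feColorMatrix: compute a luminance value from the input, then map it linearly between a shadow color and a highlight color using the offset column. The colors and the filter id below are arbitrary placeholders; each coefficient is a luminance weight scaled by the difference between the highlight and shadow values for that channel:

    <!-- Rough duotone approximation: each output channel becomes
         shadow + luminance * (highlight - shadow); the fifth column
         supplies the shadow color as a constant offset.
         Shadow: deep blue (.05, .10, .30); highlight: peach (1.0, .78, .62) -->
    <filter id="rough-duotone">
      <feColorMatrix
        type="matrix"
        values=".202 .679 .069   0  .05
                .145 .486 .049   0  .10
                .068 .229 .023   0  .30
                 0    0    0     1   0 "/>
    </filter>

    Keep in mind that SVG filters operate in linearRGB by default, so the result will differ a bit from the same math done in an image editor; adding color-interpolation-filters="sRGB" to the filter element brings it closer.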

    Filter effects reference

    Once you understand what’s going on with the feColorMatrix, you have the basic tools to create detailed filters within a single contained filter definition, but there are other options out there that will let you take it even further. Here’s a handy guide to all of the fe-* options currently out there for further exploration:

    • feBlend: similar to CSS blend modes, this function describes how images interact via a blend mode
    • feComponentTransfer: an umbrella primitive for functions that alter individual RGBA channels (e.g., feFuncG)
    • feComposite: a filter primitive that defines pixel-level image interactions
    • feConvolveMatrix: this filter dictates how pixels interact with their close neighbors (e.g., blurring or sharpening)
    • feDiffuseLighting: lights an image with a diffuse lighting model, using its alpha channel as a bump map
    • feDisplacementMap: displaces an image (in) using the pixel values of another input (in2)
    • feFlood: complete fill of the filter subregion with a specified color and alpha level
    • feGaussianBlur: blurs input pixels using an input standard deviation
    • feImage: fetches an external image (or referenced element) so it can be used as input to other filters (like feBlend or feComposite)
    • feMerge: layers the results of several filter primitives on top of one another at the same time, instead of chaining them one after another
    • feMorphology: erodes or dilates lines of source graphic (think strokes on text)
    • feOffset: shifts its input; most often used for creating drop shadows (see the sketch after this list)
    • feSpecularLighting: lights an image using its alpha channel as a bump map, a.k.a. the "specular" portion of the Phong reflection model
    • feTile: refers to how an image is repeated to fill a space
    • feTurbulence: allows the creation of synthetic textures using Perlin Noise
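
    To give a sense of how these primitives chain together, here is a minimal sketch of the classic drop-shadow recipe combining feGaussianBlur, feOffset, and feMerge (the id and the blur/offset values are arbitrary):

    <!-- Classic drop shadow: blur the source's alpha, offset the blur,
         then draw the original graphic on top of it -->
    <filter id="drop-shadow-sketch" x="-20%" y="-20%" width="140%" height="140%">
      <feGaussianBlur in="SourceAlpha" stdDeviation="3" result="blur"/>
      <feOffset in="blur" dx="4" dy="4" result="offsetBlur"/>
      <feMerge>
        <feMergeNode in="offsetBlur"/>
        <feMergeNode in="SourceGraphic"/>
      </feMerge>
    </filter>

    The enlarged filter region (x, y, width, height) simply keeps the offset blur from being clipped at the edges.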


  • The Pain With No Name 

    Twenty-five years into designing and developing for the web and we still collectively suck at information architecture.

    We are taught to be innovative, creative, agile, and iterative, but where and when are we taught how to make complex things clear? In my opinion, the most important thing we can do to make the world a clearer place is teach people how to think critically about structure and language.

    We need to teach people that information architecture (IA) decisions are just as important as look-and-feel or technology stack choices. We need to teach people the importance of semantics and meaning. We need to teach people to look past the way the rest of the web is structured and consider instead how their corner of the web can be structured to support their own unique intentions.

    The web was born to be a democratized building site, and it’s grown into a place that most people visit multiple times per day.

    The role of IA is democratizing as well. The tools and resources we use to structure, design, and develop the web are becoming easier to use, and so we need to know the impact that our structural and linguistic choices have on the integrity, efficacy, and accessibility of the places we’re making.

    The choices we make about structure and language so that things make sense are the essence of IA. It’s a responsibility unevenly distributed across job titles, ranging from user experience design and interaction design to content strategy, instructional design, environmental wayfinding, and database architecture. It’s also practiced widely outside the technology and design sector by people like teachers, business owners, policy makers, and others who make things make sense to other people.

    Fact: Most people practicing information architecture have never heard the term before. I believe that this is why we aren’t collectively getting better at this important practice.

    Without a label, a common nomenclature, IA can seem like an insurmountable mountain to climb. Let’s say you’re working on how to arrange and label the parts of your marketing website, as well as improve the categorization of your online product catalog. To help with these tasks, what do you use as keywords to find your way?

    “How to organize a website?”

    “What are e-commerce catalog best practices?”

    “How to choose categories for product catalogs?”

    This is like googling symptoms of a disease you’re suffering from. It is a long, hard, frustrating road to take. Without knowing the words “information architecture,” you are only likely to find the ways other people have already solved specific problems.

    Don’t get me wrong, this is a fine first step, but without understanding the conceptual underpinnings of IA, people are more likely to end up propagating patterns they see on the parts of the web they experience. This trend is making too much of the web look and act the same, as if everyone is working from a single floor plan and the entire world is slowly becoming one big suburban subdivision.

    In 2013, I was preparing to interview Lou Rosenfeld onstage at World Information Architecture Day in New York City. While doing my homework for the interview, I had the chance to speak with Peter Morville about the rise of IA as a field of practice. He told me that before the term “information architecture” was popularized, people referred to something called “the pain with no name.”

    Users couldn’t find things. Sites couldn’t accommodate new content. It wasn’t a technology problem. It wasn’t a graphic design problem. It was an information architecture problem.
    Peter Morville, A Brief History of Information Architecture

    The phraseology of “the pain with no name” is powerful because it properly captures the anxiety involved in making structural and linguistic decisions. It is messy, painful, brain-melting work that takes a commitment to clarity and coherence.

    These pains did not die with the birth of web 2.0. Every single person working on the web today has dealt with a situation where the pain with no name has reared its ugly head, leaving disinformation and misinformation in its wake. Consider:

    “Our marketing team has a different language than the technology team.”

    “Our users don’t understand the language of our business.”

    “The way this is labeled or classified is keeping users from finding or understanding it.”

    “We have several labels for the same thing and it gets in the way when discussing things.”

    These pains persist on every project; disagreements about language and structure often go unresolved due to a lack of clear ownership. Since they’re owned and influenced by everything from business strategy to technical development, it’s hard to fit these conversations onto a Gantt chart or project plan. IA discussions seem to pop up over the course of a project like a game of whack-a-mole.

    When I worked on an agency team, it was quite common for copywriters to want responsibility for coming up with the final labels for any navigation system I proposed. They rightly saw these labels as important brand assets. But it was also quite common for us to learn through testing and analytic reports that these branded labels were not performing as expected with users. In meeting after meeting, we struggled and argued over the fact that my proposed labels—while more to the point than theirs—were dry, boring or not “on brand.” Sometimes I won these arguments, but I was usually overpowered by the creative team’s preference for pithy, cute, metaphoric, or irreverent labels that “better matched the brand.”

    In the worst incident, the label I proposed made sense to 9 of 10 users in a lab usability test of wireframes. The same content was tested again following development, but was now hidden behind a cute, branded label that made sense to 0 of 10 users. Ultimately, the client was convinced by the creative team that the lab setting had biased those results. Once we had a few months of analytics captured from the live site, we saw the problem was, in fact, real. It was the first time I had ever seen 0% of users click on a main navigation item.

    Seven years later, that label is still on the site and no users have ever clicked on it. The client hasn’t been able to prioritize the budget to fix it since they need to pay for campaign-based work (much of which is ironically hidden behind that cute but confusing label). This was the first time I fully understood how much of my job is to teach others to consider IA and not just listen to my recommendations around it.

    I fear that we have become lost in a war of dividing responsibility. Clarity is the victim in these battles. It doesn’t matter who comes up with the label or who decides how something is arranged. What matters is that someone thinks about it and decides a way forward that upholds clarity and intention.

    The web is too new—heck, software design is too new—for us to say there is a clear and easy answer when we design. Every time we make something, we are leaping out of an airplane and all the research in the world is just us packing our parachute carefully. The landing will still be felt.
    Christina Wodtke, Fear of Design (2002)

    There is more information swirling around in the world than ever before. There are more channels through which we disseminate content. There has never been such a pressing need for critical thinking about structure to ensure things make sense. Yet, I believe that the pain with no name is experiencing a second coming.

    In too many cases, educational programs in design and technology have stopped teaching or even talking about IA. Professionals in the web industry have stopped teaching their clients about its importance. Reasons for this include “navigation is dead,” “the web is bottom up, not top down,” and “search overthrew structure”—but these all frame IA as a pattern or fad that went out with tree controls being used as navigation.

    These misconceptions need to be addressed if we are going to deal with the reality of the impending “tsunami of information” approaching our shores. The need for clarity will never go out of style, and neither will the importance of language and structure. We will always need to have semantic and structural arguments to get good work done.

    I have worked with too many businesses with inherited “lacksonomies” that emerged from the sense that there’s only one way to organize an e-commerce site, mobile app, or marketing site. We forget that most of the interfaces out there are more experiment than proven pattern. In other words, be careful when copying from others.

    Many people believe that a large or popular brand has “probably” tested their architectural decisions, when in reality, that’s often not the case. The truth is that we never know if we’re looking at something being A/B tested or redesigned behind the scenes because it’s not working.

    How can we be sure that the patterns we’re copying are well-founded?

    The truth is that we can’t. Something that works for Amazon might not work for your business. Something Google did might be a terrible decision when applied to your context. I once had a client who wanted their product to be structured like iTunes, because Apple is so great at design.

    Changing requirements means changing IA, and that means the entire downstream process will need to be adjusted.
    Keith LaFerriere, Educating the Client on IA

    Only you can help the world to give this pain a name.

    When a structural or linguistic decision is being discussed, call it out as information architecture. Give people the label they’re searching for to describe the pain and anxiety being faced. If there is a semantic argument to be had, have it and make sure those you’re arguing with know the impact of leaving such things unresolved.

    Teach others about the ramifications of IA decision-making. Warn your coworkers and clients that IA is not a phase or process that can be set once and forgotten. It’s an ongoing discussion that can be impacted during any stage of the work.

    Share your IA struggles with colleagues and peers so our community can grow from collective experiences. If you want a venue for sharing and learning more about the global conversation happening around information architecture, find a World IA Day location near you.