EW Resource


There's a huge range of newsfeeds on the Net bringing up-to-date information and content on a wide variety of subjects.

Here are just a few relating to web development.

A List Apart: The Full Feed
  • The Foundation of Technical Leadership 

    I’m a front-end architect, but I’m also known as a technical leader, subject matter expert, and a number of other things. I came into my current agency with five years of design and development management experience; yet when it came time to choose a path for my career with the company, I went the technical route.

    I have to confess I had no idea what a technical leader really does. I figured it out, eventually.

    Technical experts are not necessarily technical leaders. Both have outstanding technical skills; the difference is in how others relate to you. Are you a person that others want to follow? That’s the question that really matters. Here are some of the soft skills that set a technical leader apart from a technical expert.

    Help like it’s your job

    Your authority in a technical leadership position—or any leadership position—is going to arise from what you can do for (or to) other people. Healthy authority here stems from you being known as a tried-and-true problem-solver for everyone. The goal is for other people to seek you out, not for you to be chasing down people for code reviews. For this to happen, intelligence and skill are not enough—you need to make a point of being helpful.

    For the technical leader, if you’re too busy to help, you’re not doing your job—and I don’t just mean help someone when they come by and ask for help. You may have to set an expectation with your supervisor that helping others is a vital part of a technical leader’s job. But guess what? It might be billable time—check with your boss. Even if it’s not, try to estimate how much time it’s saving your coworkers. Numbers speak volumes.

    The true measure of how helpful you are is the technical know-how of the entire team. If you’re awesome but your team can’t produce excellent work, you’re not a technical leader—you’re a high-level developer. There is a difference. Every bit of code you write, every bit of documentation you put together should be suitable to use as training for others on your team. When making a decision about how to solve a problem or what technologies to use, think about what will help future developers.

    My job as front-end architect frequently involves not only writing clean code, but cleaning up others’ code to aid in reusability and comprehension by other developers. That large collection of functions might work better as an object, and it’ll probably be up to you to make that happen, whether through training or just doing it.

    Speaking of training, it needs to be a passion. Experience with and aptitude for training were probably the biggest factors in me landing the position as front-end architect. Public speaking is a must. Writing documentation will probably fall on you. Every technical problem that comes your way should be viewed as an opportunity to train the person who brought it to you.

    Helping others, whether they’re other developers, project managers, or clients, needs to become a passion for you if you’re an aspiring technical leader. This can take a lot of forms, but it should permeate into everything you do. That’s why this is rule number one.

    Don’t throw a mattress into a swimming pool

    An infamous prank can teach us something about being a technical leader. Mattresses are easy to get into swimming pools; but once they’re in there, they become almost impossible to get out. Really, I worked the math on this: a queen-sized mattress, once waterlogged, will weigh over 2000 pounds.

    A lot of things are easy to work into a codebase: frameworks, underlying code philosophies, even choices on what technology to use. But once a codebase is built on a foundation, it becomes nearly impossible to get that foundation out of there without rebuilding the entire codebase.

    Shiny new framework seems like a good idea? You’d better hope everyone on your team knows how to use that framework, and that it’s still around in six months. Don’t have time to go back and clean up that complex object you wrote to handle all the AJAX functionality? Don’t be surprised when people start writing unneeded workarounds because they don’t understand your code. Did you leave your code in a state that’s hard to read and modify? I want you to imagine a mattress being thrown into a swimming pool…

    Failure to heed this command frequently results in you being the only person who can work on a particular project. That is never a good situation to be in.

    Here is one of the big differences between a technical expert and a technical leader: a technical expert could easily overlook that consideration. A technical leader would take steps to ensure that it never happens.

    As a technical expert, you’re an A player, and that expertise is needed everywhere; and as a technical leader, it’s your job to make sure you can supply it, whether that means training other developers, writing and documenting code to get other developers up to speed, or intentionally choosing frameworks and methodologies your team is already familiar with.

    Jerry Weinberg, in The Psychology of Computer Programming, said, “If a programmer is indispensable, get rid of him as quickly as possible!” If you’re in a position where you’re indispensable to a long-term project, fixing that needs to be a top priority. You should never be tied down to one project, because your expertise is needed across the team.

    Before building a codebase on anything, ask yourself what happens when you’re no longer working on the project. If the answer is that they’d have to hire someone smarter than you, or that the project falls apart, don’t include that technology in the project.

    And as a leader, you should be watching others to make sure they don’t make the same mistake. Remember, technology decisions usually fall on the technical leader, no matter who makes them.

    You’re not the only expert in the room

    “Because the new program is written for OS 8 and can function twice as fast. Is that enough of a reason, Nancy Drew?”

    That’s the opening line of Nick Burns, Your Company’s Computer Guy, from the Saturday Night Live sketch with the same name. He’s a technical expert who shows up, verbally abuses you, fixes your computer, and then insults you some more before shouting, “Uh, you’re welcome!” It’s one of those funny-because-it’s-true things.

    The stereotype of the tech expert who treats everyone else as inferiors is so prevalent that it’s worked its way into comedy skits, television shows, and watercooler conversations in businesses across the nation.

    I’ve dealt with the guy (or gal). We all have. You know the guy, the one who won’t admit fault, who gets extremely defensive whenever others suggest their own ideas, who views his intellect as superior to others and lets others know it. In fact, everyone who works with developers has dealt with this person at some point.

    It takes a lot more courage and self-awareness to admit that I’ve been that guy on more than one occasion. As a smart guy, I’ve built my self-esteem on that intellect. So when my ideas are challenged, when my intellect is called into question, it feels like a direct assault on my self-esteem. And it’s even worse when it comes from someone less knowledgeable than me. How dare they question my knowledge! Don’t they know that I’m the technical expert?

    Instead of viewing teammates as people who know less than you, try to view them as people who know more than you in different areas. Treat others as experts in other fields that you can learn from. That project manager may not know much about your object-oriented approach to the solution, but she’s probably an expert in how the project is going and how the client is feeling about things.

    Once again, in The Psychology of Computer Programming, Weinberg said, “Treat people who know less than you with respect, deference, and patience.” Take it a step further. Don’t just treat them that way—think of them that way. You’d be amazed how much easier it is to work with equals rather than intellectually inferior minions—and a change in mindset might be all that’s required to make that difference.

    Intelligence requires clarity

    It can be tempting to protect our expertise by making things appear more complicated than they are. But in reality, it doesn’t take a lot of intelligence to make something more complicated than it needs to be. It does, however, take a great deal of intelligence to take something complicated and make it easy to understand.

    If other developers, and non-technical people, can’t understand your solution when you explain it in basic terms, you’ve got a problem. Please don’t hear that as “All good solutions should be simple,” because that’s not the case at all—but your explanations should be. Learn to think like a non-technical person so you can explain things in their terms. This will make you much more valuable as a technical leader.

    And don’t take for granted that you’ll be around to explain your solutions. Sometimes you’ll never meet the person implementing your solution, but the email you sent three weeks ago will be there in your place. Work on your writing skills. Pick up a copy of Steven Pinker’s The Sense of Style and read up on persuasive writing. Start a blog and write a few articles on your coding philosophies.

    The same principle extends to your code. If code is really hard to read, it’s usually not a sign that a really smart person wrote it; in fact, it usually means the opposite. Speaker and software engineer Martin Fowler once said, “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

    Remember: clarity is key. The perception of your intelligence is going to define the reality of your work experience, whether you like it or not.

    You set the tone

    Imagine going to the doctor to explain some weird symptoms you’re having. You sit down on the examination bed, a bit nervous and a bit confused as to what’s actually going on. As you explain your condition, the doctor listens with widening eyes and shaking hands. And the more you explain, the worse it gets. This doctor is freaking out. When you finally finish, the doctor stammers, “I don’t know how to handle that!”

    How would you feel? What would you do? If it were me, I’d start saying goodbye to loved ones, because that’s a bad, bad sign. I’d be in a full-blown panic based on the doctor’s reaction.

    Now imagine a project manager comes to you and starts explaining the weird functionality needed for a particularly tricky project. As you listen, it becomes clear that this is completely new territory for you, as well as for the company. You’re not even sure if what they’re asking is possible.

    How do you respond? Are you going to be the crazy doctor above? If you are, I can assure you the project manager will be just as scared as you are, if not more so.

    I’m not saying you should lie and make something up, because that’s even worse. But learning to say “I don’t know” without a hint of panic in your voice is an art that will calm down project teams, clients, supervisors, and anyone else involved in a project. (Hint: it usually involves immediately following up with, “but I’ll check it out.”)

    As a technical leader, people will follow your emotional lead as well as your technical lead. They’ll look to you not only for the answers, but for the appropriate level of concern. If people leave meetings with you more worried than they were before, it’s probably time to take a look at how your reactions are influencing them.

    Real technical leadership

    Technical leadership is just as people-centric as other types of leadership, and knowing how your actions impact others can make all the difference in the world in moving from technical expert to technical leader. Remember: getting people to follow your lead can be even more important than knowing how to solve technical problems. Ignoring people can be career suicide for a technical leader—influencing them is where magic really happens.


  • This week's sponsor: Skillshare 

    ​SKILLSHARE. Explore 1000’s of online classes in design, business, and more! Get 3 months of unlimited access for $0.99.

  • The Future of the Web 

    Recently the web—via Twitter—erupted in short-form statements that soon made it clear that buttons had been pushed, sides taken, and feelings felt. How many feels? All the feels. Some rash words may have been said.

    But that’s Twitter for you.

    It began somewhat innocuously off-Twitter, with a very reasonable X-Men-themed post by Brian Kardell (one of the authors of the Extensible Web Manifesto). Brian suggests that the way forward is by opening up (via JavaScript) some low-level features that have traditionally been welded shut in the browser. This gives web developers and designers—authors, in the parlance of web standards—the ability to prototype future native browser features (for example, by creating custom elements).

    If you’ve been following all the talk about web components and the shadow DOM of late, this will sound familiar. The idea is to make standards-making a more rapid, iterative, bottom-up process; if authors have the tools to prototype their own solutions or features (poly- and prolly-fills), then the best of these solutions will ultimately rise to the top and make their way into the native browser environments.

    This sounds empowering, collaborative—very much in the spirit of the web.

    And, in fact, everything seemed well on the World Wide Web until this string of tweets by Alex Russell, and then this other string of tweets. At which point everyone on the web sort of went bananas.

    Doomsday scenarios were proclaimed; shadowy plots implied; curt, sweeping ideological statements made. In short, it was the kind of shit-show you might expect from a touchy, nuanced subject being introduced on Twitter.

    But why is it even touchy? Doesn’t it just sound kind of great?

    Oh wait JavaScript

    Whenever you talk about JavaScript as anything other than an optional interaction layer, folks seem to gather into two big groups.

    On the Extensible Web side, we can see the people who think JavaScript is the way forward for the web. And there’s some historical precedent for that. When Brendan Eich created JavaScript, he was aware that he was putting it all together in a hurry, and that he would get things wrong. He wanted JavaScript to be the escape hatch by which others could improve his work (and fix what he got wrong). Taken one step further, JavaScript gives us the ability to extend the web beyond where it currently is. And that, really, is what the Extensible Web Manifesto folks are looking to do.

    The web needs to compete with native apps, they assert. And until we get what we need natively in the browser, we can fake it with JavaScript. Much of this approach is encapsulated in the idea of progressive web apps (offline access, tab access, file system access, a spot on the home screen)—giving the web, as Alex Russell puts it, a fair fight.

    On the other side of things, in the progressive enhancement camp, we get folks that are worried these approaches will leave some users in the dust. This is epitomized by the “what about users with no JavaScript” argument. This polarizing question—though not the entire issue by far—gets at the heart of the disagreement.

    For the Extensible Web folks, it feels like we’re holding the whole web back for a tiny minority of users. For the Progressive Enhancement folks, it’s akin to throwing out accessibility—cruelly denying access to a subset of (quite possibly disadvantaged) users.

    During all this hubbub, Jeremy Keith, one of the most prominent torchbearers for progressive enhancement, reminded us that nothing is absolute. He suggests that—as always—the answer is “it depends.” Now this should be pretty obvious to anyone who’s spent a few minutes in the real world doing just about anything. And yet, at the drop of a tweet, we all seem to forget it.

    So if we can all take a breath and rein in our feelings for a second, how might we better frame this whole concept of moving the web forward? Because from where I’m sitting, we’re all actually on the same side.

    History and repetition

    To better understand the bigger picture about the future of the web, it’s useful (as usual) to look back at its past. Since the very beginning of the web, there have been disagreements about how best to proceed. Marc Andreessen and Tim Berners-Lee famously disagreed about the IMG tag. Tim didn’t get his way, Marc implemented IMG in Mosaic as he saw fit, and we all know how things spun out from there. It wasn’t perfect, but a choice had to be made, and history suggests that IMG did its job fairly well.

    A pattern of hacking our way to the better solution becomes evident when you follow the trajectory of the web’s development.

    In the 1990s, webmasters and designers wanted layout like they were used to in print. They wanted columns, dammit. David Siegel formalized the whole tables-and-spacer-GIFs approach in his wildly popular book Creating Killer Web Sites. And thus, the web was flooded with both design innovation and loads of un-semantic markup. Which we now know is bad. But those were the tools that were available, and they allowed us to express our needs at the time. Life, as they say…finds a way.

    And when CSS layout came along, guess what it used as a model for the kinds of layout techniques we needed? That’s right: tables.

    While we’re at it, how about Flash? As with tables, I’m imagining resounding “boos” from the audience. “Boo, Flash!” But if Flash was so terrible, why did we end up with a web full of Flash sites? I’ll tell you why: video, audio, animation, and cross-browser consistency.

    In 1999? Damn straight I want a Flash site. Once authors got their hands on a tool that let them do all those incredible things, they brought the world of web design into a new era of innovation and experimentation.

    But again with the lack of semantics, linkability, and interoperability. And while we were at it, with the tossing out of an open, copyright-free platform. Whoops.

    It wasn’t long, though, before the native web had to sit up and take notice. Largely because of what authors expressed through Flash, we ended up with things like HTML5, Ajax, SVGs, and CSS3 animations. We knew the outcomes we wanted, and the web just needed to evolve to give us a better solution than Flash.

    In short: to get where we need to go, we have to do it wrong first.

    Making it up as we go along

    We authors express our needs with the tools available to help model what we really need at that moment. Best practices and healthy debate are a part of that. But please, don’t let the sort of emotions we attach to politics and religion stop you from moving forward, however messily. Talk about it? Yes. But at a certain point we all need to shut our traps and go build some stuff. Build it the way you think it should be built. And if it’s good—really good—everyone will see your point.

    If I said to you, “I want you to become a really great developer—but you’re not allowed to be a bad developer first,” you’d say I was crazy. So why would we say the same thing about building the web?

    We need to try building things. Probably, at first, bad things. But the lessons learned while building those “bad” projects point the way to the better version that comes next. Together we can shuffle toward a better way, taking steps forward, back, and sometimes sideways. But history tells us that we do get there.

    The web is a mess. It is, like its creators, imperfect. It’s the most human of mediums. And that messiness, that fluidly shifting imperfection, is why it’s survived this long. It makes it adaptable to our quickly-shifting times.

    As we try to extend the web, we may move backward at the same time. And that’s OK. That imperfect sort of progress is how the web ever got anywhere at all. And it’s how it will get where we’re headed next.

    Context is everything

    One thing that needs to be considered when we’re experimenting (and building things that will likely be kind of bad) is who the audience is for that thing. Will everyone be able to use it? Not if it’s, say, a tool confined to a corporate intranet. Do we then need to worry about sub-3G network users? No, probably not. What about if we’re building on the open web but we’re building a product that is expressly for transferring or manipulating HD video files? Do we need to worry about slow networks then? The file sizes inherent in the product pretty much exclude slow networks already, so maybe that condition can go out the window there, too.

    Context, as usual, is everything. There needs to be realistic assessment of the risk of exclusion against the potential gains of trying new technologies and approaches. We’re already doing this, anyway. Show me a perfectly progressively enhanced, perfectly accessible, perfectly performant project and I’ll show you a company that never ships. We do our best within the constraints we have. We weigh potential risks and benefits. And then we build stuff and assess how well it went; we learn and improve.

    When a new approach we’re trying might have aspects that are harmful to some users, it’s good to raise a red flag. So when we see issues with one another’s approaches, let’s talk about how we can fix those problems without throwing out the progress that’s been made. Let’s see how we can bring greater experiences to the web without leaving users in the dust.

    If we can continue to work together and consciously balance these dual impulses—pushing the boundaries of the web while keeping it open and accessible to everyone—we’ll know we’re on the right track, even if it’s sometimes a circuitous or befuddling one. Even if sometimes it’s kind of bad. Because that’s the only way I know to get to good.

  • Help One of Our Own: Carolyn Wood 

    One of the nicest people we’ve ever known and worked with is in a desperate fight to survive. Many of you remember her—she is a gifted, passionate, and tireless worker who has never sought the spotlight and has never asked anything for herself.

    Carolyn Wood spent three brilliant years at A List Apart, creating the position of acquisitions editor and bringing in articles that most of us in the web industry consider essential reading—not to mention more than 100 others that are equally vital to what we do today. Writers loved her. Since 1999, she has also worked on great web projects like DigitalWeb, The Manual, and Codex: The Journal of Typography.

    Think about it. What would the web look like if she hadn’t been a force behind articles like these:

    Three years ago, Carolyn was confined to a wheelchair. Then it got worse. From the YouCaring page:

    This April, after a week-long illness, she developed acute injuries to the tendons in her feet and the nerves in her right hand and arm. She couldn’t get out of her wheelchair, even to go to the bathroom. At the hospital, they discovered Carolyn had acute kidney failure. After a month in a hospital and a care facility she has bounced back from the kidney failure, but she cannot take painkillers to help her hands and feet.

    Carolyn cannot stand or walk or dress herself or take a shower. She is dependent on a lift, manned by two people, to transfer her. Without it she cannot leave her bed.

    She’s now warehoused in a home that does not provide therapy—and her insurance does not cover the cost. Her bills are skyrocketing. (She even pays $200 a month in rent for her bed!)

    Perhaps worst of all—yes, this gets worse—is that her husband has leukemia. He’s dealing with his own intense pain and fatigue and side effects from twice-monthly infusions. They are each other’s only support, and have been living apart since April. They have no income other than his disability, and are burning through their life savings.

    This is absolutely a crisis situation. We’re pulling the community together to help Carolyn—doing anything we possibly can. Her bills are truly staggering. She has no way to cover basic life expenses, much less raise the huge sums required to get the physical and occupational therapy she needs to be independent again.

    Please help by donating anything you can, and by sharing Carolyn’s support page with anyone in your network who is compassionate and will listen.


  • This week's sponsor: Bitbucket 

    BITBUCKET: Over 450,000 teams and 3 million developers love Bitbucket - it’s built for teams! Try it free.

  • Promoting a Design System Across Your Products 

    The scene: day one of a consulting gig with a new client to build a design and code library for a web app. As luck would have it, the client invited me to sit in on a summit of 25 design leaders from across their enterprise planning across platforms and lines of business. The company had just exploded from 30 to over 100 designers. Hundreds more were coming. Divergent product design was everywhere. They dug in to align efforts.

    From a corner, I listened quietly. I was the new guy, minding my own business, comfortable with my well-defined task and soaking up strategy. Then, after lunch, the VP of Digital Design pulled me into an empty conference room.

    “Can you refresh me on your scope?” she asked. So I drew an account hub on the whiteboard.

    Diagram showing an account hub

    “See, the thing is…” she responded, standing up and taking my pen. “We’re redesigning our web marketing homepage now.” She added a circle. “We’re also reinventing online account setup.” Another circle, then arrows connecting the three areas. “We’ve just launched some iOS apps, and more—plus Android—are coming.” She added more circles, arrows, more circles.

    Diagram showing an interconnected enterprise ecosystem: marketing, account setup, account hub, plus iOS apps

    “I want it all cohesive. Everything.” She drew a circle around the entire ecosystem. “Our design system should cover all of this. You can do that, right?”

    A long pause, then a deep breath. Our design system—the parts focused on, the people involved, the products reached—had just grown way more complicated.

    Our industry is getting really good at surfacing reusable parts in a living style guide: visual language like color and typography, components like buttons and forms, sophisticated layouts, editorial voice and tone, and so on. We’ve also awoken to the challenges of balancing the centralized and federated influence of the people involved. But there’s a third consideration: identifying and prioritizing the market of products our enterprise creates that our system will reach.

    As a systems team, we need to ask: what products will use our system and how will we involve them?

    Produce a product inventory

    While some enterprises may have an authoritative and up-to-date master list of products, I’ve yet to work with one. There’s usually no more than a loose appreciation of a constantly evolving product portfolio.

    Start with a simple product list

    A simple list is easy enough. Any whiteboard or text file will do. Produce the list quickly by freelisting as many products as you can think of with teammates involved in starting the system. List actual products (“Investor Relations” and “Careers”), not types of products (such as “Corporate Subsites”).

    Some simple product lists
    Large Corporate Web Site (5–15 products)
    • Homepage
    • Products
    • Support
    • About
    • Careers

    Small Product Company (10–25 products)
    • Web marketing site
    • Web support site
    • Web corporate site
    • Community site 1
    • Community site 2
    • Web app basic
    • Web app premium
    • Web app 3
    • Web app 4
    • Windows flagship client
    • Windows app 2

    Large Enterprise (20–100 products)
    • Web home
    • Web product pages
    • Web product search
    • Web checkout
    • Web support
    • Web rewards program
    • iOS apps (10+)
    • Android apps (10+)
    • Web account mgmt (5+)
    • Web apps (10+)

    Note that because every enterprise is unique, the longer the lists get, the more specific they become.

    For broader portfolios, gather more details

    If your portfolio is more extensive, you’ll need more deliberate planning and coordination of teams spanning an organization. This calls for a more structured, detailed inventory. It’s spreadsheet time, with products as rows and columns for the following:

    • Name, such as Gmail
    • Type / platform: web site, web app, iOS, Android, kiosk, etc.
    • Product owner, if that person even exists
    • Description (optional)
    • People (optional), like a product manager, lead designer or developer, or others involved in the product
    • Other metadata (optional): line of business, last redesigned, upcoming redesign, tech platform, etc.

    Screenshot showing a detailed product inventory
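    For teams that would rather keep this inventory in code than in a spreadsheet, the same columns translate naturally into a typed record. This is a minimal sketch in TypeScript; the field names and sample products are hypothetical, not drawn from any real portfolio.

```typescript
// A hypothetical inventory row mirroring the spreadsheet columns above.
interface ProductRecord {
  name: string;                       // e.g., "Gmail"
  platform: "web site" | "web app" | "iOS" | "Android" | "kiosk";
  owner?: string;                     // product owner, if that person even exists
  description?: string;               // optional
  people?: string[];                  // PM, lead designer or developer, etc.
  metadata?: Record<string, string>;  // line of business, last redesigned, etc.
}

// A tiny sample inventory (illustrative names, not real products).
const inventory: ProductRecord[] = [
  { name: "Web home", platform: "web site", owner: "A. Smith" },
  { name: "Checkout", platform: "web app" },
  { name: "Rewards app", platform: "iOS", owner: "B. Jones" },
];

// Group the inventory by platform to see the portfolio's shape at a glance.
function groupByPlatform(rows: ProductRecord[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const row of rows) {
    const names = groups.get(row.platform) ?? [];
    names.push(row.name);
    groups.set(row.platform, names);
  }
  return groups;
}
```

    Even a sketch like this makes the gaps visible: every row missing an owner or description is a conversation you still need to have.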

    Creating such an inventory can feel draining for a designer. Some modern digital organizations struggle to fill out an inventory like this. I’m talking deer-in-headlights kind of struggling. Completely locked up. Can’t do it. But consider life without it: if you don’t know the possible players, you may set yourself up for failure, or at least a slower road to success. Therefore, take the time to understand the landscape, because the next step is choosing the right products to work with.

    Prioritize products into tiers

    A system effort is never equally influenced by every product it serves. Instead, the system must know which products matter—and which don’t—and then varyingly engage each in the effort. You can quickly gather input on product priorities from your systems team and/or leaders using techniques like cumulative voting.

    Your objective is to classify products into tiers, such as Flagship (the few, essential core products), Secondary (additional influential products), and The Rest, in order to orient strategy and clarify objectives.
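    To make the tiering concrete: suppose cumulative voting has produced a point total for each product. A short sketch (the products, vote counts, and tier cut-offs here are all hypothetical) might classify them like this:

```typescript
type Tier = "Flagship" | "Secondary" | "The Rest";

// Hypothetical point totals from cumulative voting (each voter spreads a
// fixed budget of points across the products they care about most).
const voteTotals: Record<string, number> = {
  "Web home": 18,
  "Checkout": 14,
  "iOS flagship app": 11,
  "Support site": 6,
  "Careers": 2,
  "Community site": 1,
};

// Rank products by votes; the top few become Flagship, the next few
// Secondary, and everything else falls into The Rest.
function tierProducts(
  totals: Record<string, number>,
  flagshipCount = 3,
  secondaryCount = 2
): Map<string, Tier> {
  const ranked = Object.entries(totals).sort((a, b) => b[1] - a[1]);
  const tiers = new Map<string, Tier>();
  ranked.forEach(([name], i) => {
    if (i < flagshipCount) tiers.set(name, "Flagship");
    else if (i < flagshipCount + secondaryCount) tiers.set(name, "Secondary");
    else tiers.set(name, "The Rest");
  });
  return tiers;
}
```

    The voting only orders the list; where to draw the cut-offs remains the systems team's judgment call.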

    1—Organize around flagships

    Flagship products are the limited number of core products that a system team deeply and regularly engages with. These products reflect a business’ core essence and values, and their adoption of a system signals the system’s legitimacy.

    Getting flagship products to participate is essential, but challenging. Each usually has a lot of individual power and operates autonomously. Getting flagships to share and realize a cohesive objective requires effort.

    Choose flagships that’ll commit to you, too

    When naming flagships, you must believe they’ll play nice and deliver using the system. Expect to work to align flagships: they can be established, complicated, and well aware of their flagship status. Nevertheless, if all flagships deliver using the system, the system is an unassailable standard. If any avoid or obstruct the system, the system lacks legitimacy.

    Takeaway: obtain firm commitments, such as “We will ship with the system by such and such a date” or “Our product MVP must use this design system.” A looser “Yes, we’ll probably adopt what we can” lacks specificity and fidelity.

    Latch onto a milestone, or make your own

    Flagship commitment can surface as a part of a massive redesign, corporate rebranding, or executive decree. Those are easy events to organize around. Without one, you’ll need to work harder bottom-up to align product managers individually.

    Takeaway: establish a reasonable adoption milestone you can broadcast, after which all flagships have shipped with the system.

    Choose wisely (between three and five)

    For a system to succeed, flagships must ship with it. So choose just enough. One flagship makes the system’s goals indistinguishable from its own self-interest. Two products don’t offer enough variety of voices and contexts to matter. Forming a foundation with six or more “equally influential voices” can become chaotic.

    Takeaway: three flagships is the magic minimum, offering sufficient range and incorporating an influential and sometimes decisive third perspective. Allowing for four or five flagships is feasible but will test a group’s ability to work together fluidly.

    A system for many must be designed by many

    Enterprises place top talent on flagship products. It would be naive to think that your best and brightest will absorb a system that they don’t influence or create themselves. It’s a team game, and getting all-stars working well together is part of your challenge.

    Takeaway: integrate flagship designers from the beginning, as you design the system, to inject the right blend of individual styles and shared beliefs.

    2—Blend in a secondary set

    More products—a secondary set—are also important to a system’s success. Such products may not be flagships because they are between major releases (making adoption difficult), not under active development, or even just slightly less valuable.

    Include secondary products in reference designs

    Early systems efforts can explore concept mockups—also known as reference designs—to assess a new visual language across many products. Reference designs reveal an emerging direction and serve as “before and after” roadshow material.

    Takeaway: include secondary products in early design concepts to acknowledge the value of those products, align the system with their needs, and invite their teams to adopt the system early.

    Welcome participation (but moderate contribution)

    Systems benefit from an inclusive environment, so bias behaviors toward welcoming input. Encourage divergent ideas, but know that it’s simply not practical to give everyone a voice in everything. Jon Wiley, an early core contributor to Google’s Material Design, shared some wisdom with me during a conversation: “The more a secondary product’s designer participated and injected value, the more latitude they got to interpret and extend the system for their context.”

    Takeaway: be open to—but carefully moderate—the involvement of designers on secondary products.

    3—Serve the rest at a greater distance

    The bigger the enterprise, the longer and more heterogeneous the long tail of other products that could ultimately adopt the system. A system’s success is all about how you define and message it. For example, adopting the core visual style might be expected, but perhaps rigorous navigational integration and ironclad component consistency aren’t goals.

    Documentation may be your primary—or only—channel to communicate how to use the system. Beyond that, your budding system team may not have the time for face-to-face meetings or lengthy discussions.

    Takeaway: early on, limit focus on and engagement with remaining products. As a system matures, gradually invest in lightweight support activities like getting-started sessions, audits, and triaging office-hour clinics.

    Adjust approach depending on context

    Every product portfolio is different, and thus so is every design system. Let’s consider the themes and dynamics from some archetypal contexts we face repeatedly in our work.

    Example 1: large corporate website, made of “properties”

    You know: the homepage-as-gateway-to-products hegemon (owned by Marketing) integrated with Training, Services, and About Us content (owned by less powerful fiefdoms) straddling a vast ocean of transactional features like Support/Account Management and Communities. All of these “properties” have drifted apart, and some trigger—the decision to go responsive, a rebranding, or an annoyed-enough-to-care executive—dictates that it’s “time to unify!”

    Typical web marketing sitemap, overlaid with a product section team’s choices on spreading a system beyond its own section.

    The get? Support

    System influence usually radiates from Marketing and Brand through to selling Products. But Support is where customers spend most of their time: billing, admin, downloading, troubleshooting. Support’s features are complicated, with intricate UI and longer release cycles across multiple platforms. It may be the most difficult section to integrate, but it’s essential.

    Takeaway: if your gets—in this case Home, Products, and Support—deliver, you win. Everyone else will either follow or look bad. That’s your flagship set.

    Minimize homepage distraction

    Achieving cohesive design is about suffusing an entire experience with it. Yet a homepage is often the part of a site that is most exposed to, and justifiably distinct from, otherwise reusable componentry. It has tons of cooks, unique and often complex parts, and changes frequently. Such qualities—indecisiveness, complexity, and instability—corrode systems efforts.

    Takeaway: don’t fall prey to the homepage distraction. Focus on stable fundamentals that you can confidently spread.

    Exploit navigational change to integrate a system hook

    As branding or navigation changes, so does a header. It appears everywhere, and changes to it can be propagated centrally. Get those properties—particularly those lacking full-time design support—to sync with a shared navigation service, and use that hook to open access to the greater goodies your system has to offer.

    Takeaway: exploit the connection! Adopters may not embrace all your parts, but since you are injecting your code into their environment, they could.

    Example 2: a modest product portfolio

    A smaller company’s strategic shifts can be chaotic, lending themselves to an unstable environment in which to apply a system. Nevertheless, a smaller community of designers—often a community of practice dispersed across a portfolio—can provide an opportunity to be more cohesive.

    Radiate influence from web apps

    Many small companies assemble portfolios of websites, web apps, and their iOS, Android, and Windows counterparts. Websites and native apps share little beyond visual style and editorial tone. However, web apps provide a pivot: they can share a far deeper overlap of components and tooling with websites, and their experiences often mirror what’s found on native apps.

    Takeaway: look for important products whose interests overlap many other products, and radiate influence from there.

    Diagram of product relationships within a portfolio, with web apps relating to both web sites and native apps.

    Demo value across the whole journey

    A small company’s flagship products should be the backbone of a customer’s journey, from reach and acquisition through service and loyalty. Design activities that express the system’s value from the broader user journey tend to reveal gaps, identify clunky handoffs, and trigger real discussions around cohesiveness.

    Takeaway: evoke system aspirations by creating before/after concepts and demoing cohesiveness across the journey, such as with a stitched prototype.

    For Marriott.com, disparate design artifacts across products (left) were stitched together into an interactive, interconnected prototype (right).

    Bridge collaboration beyond digital

    Because of their areas of focus, “non-digital” designers (working on products like trade-show booths, print, TV, and retail) tend to be less savvy than their digital counterparts when it comes to interaction. Nonetheless, you’ll share the essence of your visual language with them, such as making sure the system’s primary button doesn’t run afoul of the brand’s blue, and yet provides sufficient contrast for accessibility.

    Takeaway: encourage non-digital designers to do digital things. Be patient and inclusive, even if their concerns sometimes drift away from what you care about most.

    Example 3: a massive multiplatform enterprise

    For an enterprise as huge as Google, prioritizing apps was essential to Material Design’s success. The Verge’s “Redesigning Google: How Larry Page Engineered a Beautiful Revolution” suggests strong prioritization, with Search, Maps, Gmail, and later Android central to the emerging system. Not as much in the conversation, perhaps early on? Docs, Drive, Books, Finance. Definitely not SantaTracker.

    Broaden representation across platforms & businesses

    With coverage across a far broader swath of products, ensure flagship product selection spans a few platforms and lines of business. If you want it to apply everywhere, then the system—how it’s designed, developed, and maintained—will benefit from diverse influences.

    Takeaway: strive for diverse system contribution and participation in a manner consistent with the products it serves.

    Mix doers & delegators

    Massive enterprise systems trigger influence from many visionaries. Yet you can’t rely on senior directors to produce meticulous, thoughtful concepts. Such leaders already direct and manage work across many products. Save them from themselves! Work with them to identify design talent with pockets of time. Even better, ask them to lend a doer they recommend for a month- or weeklong burst.

    Takeaway: defer to creative leaders on strategy, but redirect their instincts from doing everything to identifying and providing talent.

    Right the fundamentals before digging deep

    I confess that in the past, I’ve brought a too-lofty ambition to bear on quickly building huge libraries for organizations of many, many designers. Months later, I wondered why our team was still refining the “big three” (color, typography, and iconography) or the “big five” (the big three, plus buttons and forms). Um, what? Given the system’s broad reach, I had to adjust my expectations to be satisfied with what was still a very consequential shift toward cohesiveness.

    Takeaway: balance ambition for depth with spreading fundamentals wide across a large enterprise, so that everyone shares a core visual language.

    The long game

    Approach a design system as you would a marathon, not a sprint. You’re laying the groundwork for an extensive effort. By understanding your organization through its product portfolio, you’ll strengthen a cornerstone—the design system—that will help you achieve a stronger and more cohesive experience.

  • Making your JavaScript Pure 

    Once your website or application goes past a small number of lines, it will inevitably contain bugs of some sort. This isn’t specific to JavaScript but is shared by nearly all languages—it’s very tricky, if not impossible, to thoroughly rule out the chance of any bugs in your application. However, that doesn’t mean we can’t take precautions by coding in a way that lessens our vulnerability to bugs.

    Pure and impure functions

    A pure function is defined as one that doesn’t depend on or modify variables outside of its scope. That’s a bit of a mouthful, so let’s dive into some code for a more practical example.

    Take this function that calculates whether a user’s mouse is on the left-hand side of a page, and logs true if it is and false otherwise. In reality your function would probably be more complex and do more work, but this example does a great job of demonstrating:

    function mouseOnLeftSide(mouseX) {
        return mouseX < window.innerWidth / 2;
    }

    document.onmousemove = function(e) {
        console.log(mouseOnLeftSide(e.pageX));
    };
    mouseOnLeftSide() takes an X coordinate and checks to see if it’s less than half the window width—which would place it on the left side. However, mouseOnLeftSide() is not a pure function. We know this because within the body of the function, it refers to a value that it wasn’t explicitly given:

    return mouseX < window.innerWidth / 2;

    The function is given mouseX, but not window.innerWidth. This means the function is reaching out to access data it wasn’t given, and hence it’s not pure.

    The problem with impure functions

    You might ask why this is an issue—this piece of code works just fine and does the job expected of it. Imagine that you get a bug report from a user: when the window is less than 500 pixels wide, the function is incorrect. How do you test this? You’ve got two options:

    • You could manually test by loading up your browser and moving your mouse around until you’ve found the problem.
    • You could write some unit tests (Rebecca Murphey’s Writing Testable JavaScript is a great introduction) to not only track down the bug, but also ensure that it doesn’t happen again.

    Keen to have a test in place to avoid this bug recurring, we pick the second option and get writing. Now we face a new problem, though: how do we set up our test correctly? We know we need to set up our test with the window width set to less than 500 pixels, but how? The function relies on window.innerWidth, and making sure that’s at a particular value is going to be a pain.

    Benefits of pure functions

    Simpler testing

    With that issue of how to test in mind, imagine we’d instead written the code like so:

    function mouseOnLeftSide(mouseX, windowWidth) {
        return mouseX < windowWidth / 2;
    }

    document.onmousemove = function(e) {
        console.log(mouseOnLeftSide(e.pageX, window.innerWidth));
    };
    The key difference here is that mouseOnLeftSide() now takes two arguments: the mouse X position and the window width. This means that mouseOnLeftSide() is now a pure function: all the data it needs is explicitly given as inputs, and it never has to reach out to access any data.

    In terms of functionality, it’s identical to our previous example, but we’ve dramatically improved its maintainability and testability. Now we don’t have to hack around to fake window.innerWidth for any tests, but instead just call mouseOnLeftSide() with the exact arguments we need:

    mouseOnLeftSide(5, 499) // ensure it works with width < 500


    Self-documenting

    Besides being easier to test, pure functions have other characteristics that make them worth using whenever possible. By their very nature, pure functions are self-documenting. If you know that a function doesn’t reach out of its scope to get data, you know the only data it can possibly touch is passed in as arguments. Consider the following function definition:

    function mouseOnLeftSide(mouseX, windowWidth)

    You know that this function deals with two pieces of data, and if the arguments are well named it should be clear what they are. We all have to deal with the pain of revisiting code that’s lain untouched for six months, and being able to regain familiarity with it quickly is a key skill.

    Avoiding globals in functions

    The problem of global variables is well documented in JavaScript—the language makes it trivial to store data globally where all functions can access it. This is a common source of bugs, too, because anything could have changed the value of a global variable, and hence the function could now behave differently.

    An additional property of pure functions is referential transparency. This is a rather complex term with a simple meaning: given the same inputs, the output is always the same. Going back to mouseOnLeftSide, let’s look at the first definition we had:

    function mouseOnLeftSide(mouseX) {
        return mouseX < window.innerWidth / 2;
    }
    This function is not referentially transparent. I could call it with the input 5 multiple times, resize the window between calls, and the result would be different every time. This is a slightly contrived example, but functions that return different values even when their inputs are the same are always harder to work with. Reasoning about them is harder because you can’t guarantee their behavior. For the same reason, testing is trickier, because you don’t have full control over the data the function needs.

    On the other hand, our improved mouseOnLeftSide function is referentially transparent because all its data comes from inputs and it never reaches outside itself:

    function mouseOnLeftSide(mouseX, windowWidth) {
        return mouseX < windowWidth / 2;
    }
    You get referential transparency for free when following the rule of declaring all your data as inputs, and by doing this you eliminate an entire class of bugs around side effects and functions acting unexpectedly. If you have full control over the data, you can hunt down and replicate bugs much more quickly and reliably without chancing the lottery of global variables that could interfere.
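
    Referential transparency also has a practical payoff beyond testing: because the result is fully determined by the inputs, results can safely be cached. Here is a small memoization sketch (illustrative only, not a production-grade cache):

```javascript
// Memoization is safe only because the function is referentially
// transparent: same inputs always produce the same output.
function memoize(fn) {
    var cache = {};
    return function () {
        var key = JSON.stringify(Array.prototype.slice.call(arguments));
        if (!(key in cache)) {
            cache[key] = fn.apply(null, arguments);
        }
        return cache[key];
    };
}

var calls = 0;
var mouseOnLeftSide = memoize(function (mouseX, windowWidth) {
    calls += 1; // counts how often the real function actually runs
    return mouseX < windowWidth / 2;
});

mouseOnLeftSide(5, 499); // computed: true
mouseOnLeftSide(5, 499); // same inputs, served from cache; calls is still 1
```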

    Choosing which functions to make pure

    It’s impossible to have pure functions consistently—there will always be a time when you need to reach out and fetch data, the most common example of which is reaching into the DOM to grab a specific element to interact with. It’s a fact of JavaScript that you’ll have to do this, and you shouldn’t feel bad about reaching outside of your function. Instead, carefully consider if there is a way to structure your code so that impure functions can be isolated. Prevent them from having broad effects throughout your codebase, and try to use pure functions whenever appropriate.

    Let’s take a look at the code below, which grabs an element from the DOM and changes its background color to red:

    function changeElementToRed() {
        var foo = document.getElementById('foo');
        foo.style.backgroundColor = "red";
    }
    There are two problems with this piece of code, both solvable by transitioning to a pure function:

    1. This function is not reusable at all; it’s directly tied to a specific DOM element. If we wanted to reuse it to change a different element, we couldn’t.
    2. This function is hard to test because it’s not pure. To test it, we would have to create an element with a specific ID rather than any generic element.

    Given the two points above, I would rewrite this function to:

    function changeElementToRed(elem) {
        elem.style.backgroundColor = "red";
    }

    function changeFooToRed() {
        var foo = document.getElementById('foo');
        changeElementToRed(foo);
    }
    We’ve now changed changeElementToRed() to not be tied to a specific DOM element and to be more generic. At the same time, we’ve made it pure, bringing us all the benefits discussed previously.

    It’s important to note, though, that I’ve still got some impure code—changeFooToRed() is impure. You can never avoid this, but it’s about spotting opportunities where turning a function pure would increase its readability, reusability, and testability. By keeping the places where you’re impure to a minimum and creating as many pure, reusable functions as you can, you’ll save yourself a huge amount of pain in the future and write better code.
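
    One way to see the testing benefit: since changeElementToRed() only touches what it’s given, any object with a style property can stand in for a DOM node. A quick sketch:

```javascript
// The generic version of the function: no DOM lookup, no specific ID.
function changeElementToRed(elem) {
    elem.style.backgroundColor = "red";
}

// A plain object stands in for a real DOM node, so this runs anywhere,
// including in a test environment with no document at all.
var fakeElement = { style: {} };
changeElementToRed(fakeElement);
// fakeElement.style.backgroundColor is now "red"
```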


    “Pure functions,” “side effects,” and “referential transparency” are terms usually associated with purely functional languages, but that doesn’t mean we can’t take the principles and apply them to our JavaScript, too. By being mindful of these principles and applying them wisely when your code could benefit from them, you’ll gain more reliable, self-documenting codebases that are easier to work with and that break less often. I encourage you to keep this in mind next time you’re writing new code, or even revisiting some existing code. It will take some time to get used to these ideas, but soon you’ll find yourself applying them without even thinking about it. Your fellow developers and your future self will thank you.

  • Commit to Contribute 

    One morning I found a little time to work on nodemon and saw a new pull request that fixed a small bug. The only problem with the pull request was that it didn’t have tests and didn’t follow the contributing guidelines, which meant the automated deploy wouldn’t run.

    The contributor was obviously extremely new to Git and GitHub, and even this small change was well out of their comfort zone, so when I asked for the changes to adhere to the way the project works, it all kind of fell apart.

    How do I change this? How do I make it easier and more welcoming for outside developers to contribute? How do I make sure contributors don’t feel like they’re being asked to do more than necessary?

    This last point is important.

    The real cost of a one-line change

    Many times in my own code, I’ve made a single-line change that could be a matter of a few characters, and this alone fixes an issue. Except that’s never enough. (In fact, there’s usually a correlation between the maturity and/or age of the project and the amount of additional work to complete the change due to the growing complexity of systems over time.)

    A recent issue in my Snyk work was fixed with this single line change:

    Screenshot of the single-line code change that fixed the issue.

    In this particular example, I had solved the problem in my head very quickly and realized that this was the fix. Except that I had to then write the test to support the change, not only to prove that it works but to prevent regression in the future.

    My projects (and Snyk’s) all use semantic release to automate releases by commit message. In this particular case, I had to bump the dependencies in the Snyk command line and then commit that with the right message format to ensure a release would inherit the fix.

    All in all, the one-line fix turned into this: one line, one new test, tested across four versions of node, bump dependencies in a secondary project, ensure commit messages were right, and then wait for the secondary project’s tests to all pass before it was automatically published.

    Put simply: it’s never just a one-line fix.

    Helping those first pull requests

    Doing a pull request (PR) into another project can be pretty daunting. I’ve got a fair amount of experience and even I’ve started and aborted pull requests because I found the chain of events leading up to a complete PR too complex.

    So how can I change my projects and GitHub repositories to be more welcoming to new contributors and, most important, how can I make that first PR easy and safe?

    Issue and pull request templates

    GitHub recently announced support for issue and PR templates. These are a great start because now I can specifically ask for items to be checked off, or information to be filled out to help diagnose issues.

    Here’s what the PR template looks like for Snyk’s command line interface (CLI):

    - [ ] Ready for review
    - [ ] Follows CONTRIBUTING rules
    - [ ] Reviewed by @remy (Snyk internal team)
     #### What does this PR do?
     #### Where should the reviewer start?
     #### How should this be manually tested?
     #### Any background context you want to provide?
     #### What are the relevant tickets?
     #### Screenshots
     #### Additional questions

    This is partly based on QuickLeft’s PR template. These items are not hard prerequisites on the actual PR, but they do help in getting full information. I’m slowly adding these to all my repos.

    In addition, having a CONTRIBUTING.md file in the root of the repo (or in .github) means new issues and PRs include the notice in the header:

    GitHub contributing notice

    Automated checks

    For context: semantic release will read the commits in a push to master, and if there’s a feat: commit, it’ll do a minor version bump. If there’s a fix: it’ll do a patch version bump. If the text BREAKING CHANGE: appears in the body of a commit, it’ll do a major version bump.

    I’ve been using semantic release in all of my projects. As long as the commit message format is right, there’s no work involved in creating a release, and no work in deciding what the version is going to be.
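
    The version-bump rules above can be sketched in a few lines. This is a simplified illustration of the mapping, not semantic release’s actual implementation:

```javascript
// Simplified sketch: decide a version bump from a list of commit messages,
// following the feat/fix/BREAKING CHANGE convention described above.
function bumpFor(commits) {
    var bump = null; // null means "no release"
    commits.forEach(function (msg) {
        if (/BREAKING CHANGE:/.test(msg)) {
            bump = 'major';
        } else if (/^feat(\(.+\))?:/.test(msg) && bump !== 'major') {
            bump = 'minor';
        } else if (/^fix(\(.+\))?:/.test(msg) && bump === null) {
            bump = 'patch';
        }
    });
    return bump;
}

bumpFor(['fix: trim whitespace']);               // 'patch'
bumpFor(['feat: add JSON output', 'fix: typo']); // 'minor'
bumpFor(['chore: update docs']);                 // null (no release)
```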

    Something that none of my repos historically had was the ability to validate contributed commits for formatting. In reality, semantic release doesn’t mind if you don’t follow the commit format; they’re simply ignored and don’t drive releases (to npm).

    I’ve since come across ghooks, which will run commands on Git hooks—in particular, running validate-commit-msg on the commit-msg hook. The installation is relatively straightforward, and the feedback to the user is really good: if the commit needs tweaking to follow the commit format, I can include examples and links.

    Here’s what it looks like on the command line:

    Git commit validation

    ...and in the GitHub desktop app (for comparison):

    Git commit validation

    This is work that I can load on myself to make contributing easier, which in turn makes my job easier when it comes to managing and merging contributions into the project. In addition, for my projects, I’m also adding a pre-push hook that runs all the tests before the push to GitHub is allowed. That way if new code has broken the tests, the author is aware.

    To see the changes required to get the output above, see this commit in my current tinker project.

    There are two further areas worth investigating. The first is the commitizen project. The second is that I’d really like to see a GitHub bot that could automatically comment on pull requests to say whether the commits are okay (and if not, direct the contributor on how to fix the problem), and also show how the PR would affect the release (i.e., whether it would trigger a release, either as a bug patch or a minor version change).

    Including example tests

    I think this might be the crux of the problem: the lack of example tests in any project. A test can be a minefield of challenges, such as these:

    • knowing the test framework
    • knowing the application code
    • knowing about testing methodology (unit tests, integration, something else)
    • replicating the test environment

    Another project of mine, inliner, has a disproportionately high rate of PRs that include tests. I put that down to the ease with which users can add tests.

    The contributing guide makes it clear that contributing doesn’t even require that you write test code. Authors just create a source HTML file and the expected output, and the test automatically includes the file and checks that the output is as expected.

    Adding specific examples of how to write tests will, I believe, lower the barrier of entry. I might link to some sort of sample test in the contributing doc, or create some kind of harness (like inliner does) to make it easy to add input and expected output.
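
    That style of harness can be sketched in a few lines. The names and shape here are assumptions for illustration, not inliner’s actual code:

```javascript
// A fixture-driven harness: contributors add data, the harness supplies
// all the test code. Each fixture pairs an input with its expected output.
function runFixtures(transform, fixtures) {
    return Object.keys(fixtures).map(function (name) {
        var pair = fixtures[name];
        return {
            name: name,
            passed: transform(pair.input) === pair.expected
        };
    });
}

// A contributor's entire "test" is just an input/expected pair:
var fixtures = {
    'uppercases text': { input: 'hello', expected: 'HELLO' }
};

var results = runFixtures(function (s) { return s.toUpperCase(); }, fixtures);
// results[0].passed === true
```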

    Fixing common mistakes

    Something I’ve also come to accept is that developers don’t read contributing docs. It’s okay, we’re all busy, we don’t always have time to pore over documentation. Heck, contributing to open source isn’t easy.

    I’m going to start including a short document on how to fix common problems in pull requests. Often it’s amending a commit message or rebasing the commits. This is easy for me to document, and will allow me to point new users to a walkthrough of how to fix their commits.

    What’s next?

    In truth, most of these items are straightforward and not much work to implement. Sure, I wouldn’t drop everything I’m doing and add them to all my projects at once, but certainly I’d include them in each active project as I work on it.

    1. Add issue and pull request templates.
    2. Add ghooks and validate-commit-msg with standard language (most if not all of my projects are node-based).
    3. Either make adding a test super easy, or at least include sample tests (for unit testing and potentially for integration testing).
    4. Add a contributing document that includes notes about commit format, tests, and anything that can make the contributing process smoother.

    Finally, I (and we) always need to keep in mind that when someone has taken time out of their day to contribute code to our projects—whatever the state of the pull request—it’s a big deal.

    It takes commitment to contribute. Let’s show some love for that.


  • Once Upon a Time 

    Once upon a time, I had a coworker named Bob who, when he needed help, would start the conversation in the middle and work to both ends. My phone would ring, and the first thing I heard was: “Hey, so, we need the spreadsheets on Tuesday so that Information Security can have them back to us in time for the estimates.”

    Spreadsheets? Estimates? Bob and I had never discussed either. As I had been “discouraged” from responding with “What the hell are you talking about now?” I spent the next 10 minutes of every Bob call trying to tease out the context of his proclamations.

    Clearly, Bob needed help—and not just with spreadsheets.

    Then there was Susan. When Susan wanted help, she gave me the entire life story of a project in the most polite, professional language possible. An email from Susan might go like this:

    Good morning,

    I’m working on the Super Bananas project, which we started three weeks ago and have been slowly working on since. We began with persona writing, then did some scenarios, and discussed a survey.

    [Insert two more paragraphs of the history of the project]

    I’m hoping—if you have the opportunity (due to your previous experience with [insert four of my last projects in chronological order])—you may be able to share a content-inventory template that would be appropriate for this project. If it isn’t too much trouble, when you get a chance, could you forward me the template at your earliest convenience?

    Thank you in advance for your cooperation,


    An email that said, “Hey do you have a content-inventory template I could use on the Super Bananas Project?” would have sufficed, but Susan wanted to be professional. She believed that if I had to ask a question, she had failed to communicate properly. And, of course, that failure would weigh heavy on all our heads.

    Bob and Susan were as opposite as the tortoise and the hare, but they shared a common problem. Neither could get over the river and through the woods effectively. Specifically, they were both lousy at establishing context and getting to the point.

    We all need the help of others to build effective tools and applications. Communication skills are so critical to that endeavor that we’ve seen article after article after article—not to mention books, training classes, and job postings—stressing the importance of communication skills. Without the ability to communicate, we can neither build things right, nor build the right things, for our clients and our users.

    Still, context-setting is a tricky skill to learn. Stray too far toward Bob, and no one knows what we’re talking about. Follow Susan’s example, and people get bored and wander off before we get to the point.

    Whether we’re asking a colleague for help or nudging an end user to take action, we want them to respond a certain way. And whether we’re writing a radio ad, publishing a blog post, writing an email, or calling a colleague, we have to set the proper level of context to get the result we want.

    The most effective technique I’ve found for beginners is a process I call “Once Upon a Time.”

    Fairy tales? Seriously?

    Fairy tales are one of our oldest forms of folklore, with evidence indicating that they may stretch back to the Roman Empire. The prelude “Once upon a time” dates to 1380, according to the Oxford English Dictionary. Wikipedia lists over 75 language variations of the stock story opener. It’s safe to say that the vast majority of us, regardless of language or culture, have heard our share of fairy tales, from the 1800s-era Brothers Grimm stories to the 1987 musical Into the Woods.

    We know how they go:

    Once upon a time, there was a [main character] living in [this situation] who [had this problem]. [Some person] knows of this need and sends the [main character] out to [complete these steps]. They [do things] but it’s really hard because [insert challenges]. They overcome [list of challenges], and everyone lives happily ever after.

    Fairy tales are effective oral storytelling techniques precisely because they follow a standard structure that always provides enough context to understand the story. Almost everything we do can be described with this structure.

    Once upon a time Anne lacked an ice cream sandwich. This forced her to get off the couch and go to the freezer, where food stayed amazingly cold. She was forced to put her hands in the icy freezer to dig the ice cream sandwich box out of the back. She overcame the cold and was rewarded with a tasty ice cream sandwich! And they all lived happily ever after.

    The structure of a fairy tale’s beginning has a lot of similarities to the journalistic Five Ws of basic information gathering: Who? What? When? Where? Why? How?

    In our communication construct, we are the main character whose situation and problem need to be succinctly described. We’ve been sent out to do a thing, we’ve hit a challenge, and now we need specific help to overcome the challenge.

    How does this help me if I’m a Bob or a Susan?

    When Bob wanted to tell his story, he didn’t start with “Once upon a time…” He started halfway through the story. If Bob was Little Red Riding Hood, he would have started by saying, “We need scissors and some rocks.” (Side note: the general lack of knowledge about how surgery works in that particular tale gives me chills.)

    When Susan wanted to tell her story, she started before “Once upon a time…” If she was Little Red Riding Hood, she started by telling you how her parents met, how long they dated, and so on, before finally getting around to mentioning that she was trapped in a wolf’s stomach.

    When we tell our stories, we have to start at the beginning—not too early, not too late. If we’re Bob, that means making sure we’ve relayed the basic facts: who we are, what our goal is, possibly who sent us, and what our challenge is. If we’re Susan, we need to make sure we limit ourselves to the facts we actually need.

    This is where we take the fairy-tale format and put it into the first person. Susan might write:

    Once upon a time, the Bananas team asked me to do the content strategy for their project. We made good progress until we had this problem: we don’t have a template for content inventories. Bob suggested I contact you. Do you have a template you can send us?

    Bob might say:

    Once upon a time, you and I were working on the data mapping of the new Information Security application. Then Information Security asked us to send the mapping to them so they could validate it. This is a problem because we only have until Tuesday to give them the unfinished spreadsheets. Otherwise we’ll hit an even bigger problem: we won’t be able to estimate the project size on Friday without the spreadsheet. Can you help me get the spreadsheet to them on time?

    Notice the parallels between the fairy tales and these drafts: we know the main character, their situation, who sent them or triggered their move, and what they need to solve their problem. In Bob’s case, this is much more information than he usually provides. In Susan’s, it’s probably much less. In both cases, we’ve distilled the situation and the request down to the basics. In both cases, the only edit needed is to remove “Once upon a time…” from the first sentence, and it’s ready to go.

    But what about…?

    Both the Bobs and the Susans I’ve worked with have had questions about this technique, especially since in both cases they thought they were already doing a pretty good job of providing context.

    The original Susan had two big concerns that led her to giving out too much information. The first was that she’d sound unprofessional if she didn’t include every last detail and nuance of business etiquette. The second was that if her recipient had questions, they’d consider her amateurish for not providing every bit of information up front.

    Susans of the world, let me assure you: clear, concise communication is professional. I’m not saying to drop “please” and “thank you”; I’m saying that “If it isn’t too much trouble, when you get a chance, could you please consider…” is probably overkill.

    Beyond that, no one can anticipate every question another person might have. Clear communication starts a dialogue by covering the basics and inviting questions. It also saves time; you only have to answer the questions your colleague or reader actually has. If you’re not sure whether to keep a piece of information in your story, take it out and see if the tale still makes sense.

    Bob was a tougher nut to crack, in part because he frequently didn’t realize he was starting in the middle. Bob was genuinely baffled that colleagues hadn’t read his mind to know what he was talking about. He thought he just needed the answer to one “quick” question. Once he was made aware that he was confusing—and sometimes annoying—coworkers, he could be brought back on track with gentle suggestions. “Okay Bob, let’s start over. Once upon a time you were…?”

    Begin at the beginning and stop at the end

    Using the age-old format of “Once upon a time…” gives us an incredibly sturdy framework to use for requesting action from people. We provide all of the context they need to understand our request, as well as a clear and concise description of that request.

    Clear, concise, contextual communication is professional, efficient, and much less frustrating to everyone involved, so it pays to build good habits, even if the basis of those habits seems a bit corny.

    Do you really need to start with “Once upon a time…” to tell a story or communicate a request? Well, it doesn’t hurt. The phrase is really a marker that you’re changing the way you think about your writing, for whom you’re writing it, and what you expect to gain. Soup doesn’t require stones, and business communication doesn’t require “Once upon a time…”

    But it does lead to more satisfying endings.

    And they all lived happily ever after.

  • The Rich (Typefaces) Get Richer 

    There are over 1,200 font families available on Typekit. Anyone with a Typekit plan can freely use any of those typefaces, and yet we see the same small selection used absolutely everywhere on the web. Ever wonder why?

    The same phenomenon happens with other font services like Google Fonts and MyFonts. Google Fonts offers 708 font families, but we can’t browse the web for 15 minutes without encountering Open Sans and Lato. MyFonts has over 20,000 families available as web fonts, yet designers consistently reach for only a narrow selection of those.

    On my side project Typewolf, I curate daily examples of nice type in the wild. Here are the ten most popular fonts from 2015:

    1. Futura
    2. Aperçu
    3. Proxima Nova
    4. Gotham
    5. Brown
    6. Avenir
    7. Caslon
    8. Brandon Grotesque
    9. GT Walsheim
    10. Circular

    And here are the ten most popular from 2014:

    1. Brandon Grotesque
    2. Futura
    3. Avenir
    4. Aperçu
    5. Proxima Nova
    6. Franklin Gothic
    7. GT Walsheim
    8. Gotham
    9. Circular
    10. Caslon

    Notice any similarities? Nine out of the ten fonts from 2014 made the top ten again in 2015. Admittedly, Typewolf is a curated showcase, so there is bound to be some bias in the site selection process. But with 365 sites featured in a year, I think Typewolf is a solid representation of what is popular in the design community.

    Other lists of popular fonts show similar results. Or simply look around the web and take a peek at the CSS—Proxima Nova, Futura, and Brandon Grotesque dominate sites today. And these fonts aren’t just a little more popular than other fonts—they are orders of magnitude more popular.

    When it comes to typefaces, the rich get richer

    I don’t mean to imply that type designers are getting rich like Fortune 500 CEOs and flying around to type conferences in their private Learjets (although some type designers are certainly doing quite well). I’m just pointing out that a tiny percentage of fonts get the lion’s share of usage and that these “chosen few” continue to become even more popular.

    The rich get richer phenomenon (also known as the Matthew Effect) refers to something that grows in popularity due to a positive feedback loop. An app that reaches number one in the App Store will receive press because it is number one, which in turn will give it even more downloads and even more press. Popularity breeds popularity. For a cogent book that discusses this topic much more eloquently than I ever could, check out Nassim Nicholas Taleb’s The Black Swan.

    But back to typefaces.

    Designers tend to copy other designers. There’s nothing wrong with that—designers should certainly try to build upon the best practices of others. And they shouldn’t be culturally isolated and unaware of current trends. But designers also shouldn’t just mimic everything they see without putting thought into what they are doing. Unfortunately, I think this is what often happens with typeface selection.

    How does a typeface first become popular, anyway?

    I think it all begins with a forward-thinking designer who takes a chance on a new typeface. She uses it in a design that goes on to garner a lot of attention. Maybe it wins an award and is featured prominently in the design community. Another designer sees it and thinks, “Wow, I’ve never seen that typeface before—I should try using it for something.” From there it just cascades into more and more designers using this “new” typeface. But with each use, less and less thought goes into why they are choosing that particular typeface. In the end, it’s just copying.

    Or, a typeface initially becomes popular simply from being in the right place at the right time. When you hear stories about famous YouTubers, there is one thing almost all of them have in common: they got in early. Before the market is saturated, there’s a much greater chance of standing out; your popularity is much more likely to snowball. A few of the most popular typefaces on the web, such as Proxima Nova and Brandon Grotesque, tell a similar story.

    The typeface Gotham skyrocketed in popularity after its use in Obama’s 2008 presidential campaign. But although it gained enormous steam in the print world, it wasn’t available as a web font until 2013, when the company then known as Hoefler & Frere-Jones launched its subscription web font service. Proxima Nova, a typeface with a similar look, became available as a web font early, when Typekit launched in 2009. Proxima Nova is far from a Gotham knockoff—an early version, Proxima Sans, was developed before Gotham—but the two typefaces share a related, geometric aesthetic. Many corporate identities used Gotham, so when it came time to bring that identity to the web, Proxima Nova was the closest available option. This pushed Proxima Nova to the top of the bestseller charts, where it remains to this day.

    Brandon Grotesque probably gained traction for similar reasons. It has quite a bit in common with Neutraface, a typeface that is ubiquitous in the offline world—walk into any bookstore and you’ll see it everywhere. Brandon Grotesque was available early on as a web font with simple licensing, whereas Neutraface was not. If you wanted an art-deco-inspired geometric sans serif with a small x-height for your website, Brandon Grotesque was the obvious choice. It beat Neutraface to market on the web and is now one of the most sought-after web fonts.

    Once a typeface reaches a certain level of popularity, it seems likely that a psychological phenomenon known as the availability heuristic kicks in. According to the availability heuristic, people place much more importance on things that they are easily able to recall. So if a certain typeface immediately comes to mind, people assume it must be the best option.

    For example, Proxima Nova is often thought of as incredibly readable for a sans serif due to its large x-height, low stroke contrast, open apertures, and large counters. And indeed, it works very well for setting body copy. However, there are many other sans serifs that fit that description—Avenir, FF Mark, Gibson, Texta, Averta, Museo Sans, Sofia, Lasiver, and Filson, to name a few. There’s nothing magical about Proxima Nova that makes it more readable than similar typefaces; it’s simply the first one that comes to mind for many designers, so they can’t help but assume it must be the best.

    On top of that, the mere-exposure effect suggests that people tend to prefer things simply because they are more familiar with them—the more someone encounters Proxima Nova, the more appealing they tend to find it.

    So if we are stuck in a positive feedback loop where popular fonts keep becoming even more popular, how do we break the cycle? There are a few things designers can do.

    Strive to make your brand identifiable by just your body text

    Even if it’s just something subtle, aim to make the type on your site unique in some way. If a reader can tell they are interacting with your brand solely by looking at the body of an article, then you are doing it right. This doesn’t mean that you should lose control and use type for the sole purpose of standing out. Good type, some say, should be invisible. (Some say otherwise.) Show restraint and discernment. There are many small things you can do to make your type distinctive.

    Besides going with a lesser-used typeface for your body text, you can try combining two typefaces (or perhaps three, if you’re feeling frisky) in a unique way. Headlines, dates, bylines, intros, subheads, captions, pull quotes, and block quotes all offer ample opportunity for experimentation. Try using heavier and lighter weights, italics and all-caps. Using color is another option. A subtle background color or a contrasting subhead color can go a long way in making your type memorable.
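    The experiments above are cheap to try in CSS. As a sketch only—class names and color values here are hypothetical, stand-ins for your own brand palette—two small touches of this kind might look like:

    ```css
    /* Illustrative only: selectors and colors are placeholders. */

    /* All-caps subheads in a lighter weight, with a contrasting accent color */
    .article h2 {
      font-weight: 300;
      text-transform: uppercase;
      letter-spacing: 0.08em;
      color: #b4533a; /* swap in a brand accent */
    }

    /* A subtle background tint sets pull quotes apart from body copy */
    .article blockquote {
      background: #f7f4ef;
      padding: 1em 1.5em;
      font-style: italic;
    }
    ```

    A couple of rules like these cost nothing in page weight, yet they are often enough to make an article recognizably yours.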

    Don’t make your site look like a generic website template. Be a brand.

    Dig deeper on Typekit

    There are many other high-quality typefaces available on Typekit besides Proxima Nova and Brandon Grotesque. Spend some time browsing through their library and try experimenting with different options in your mockups. The free plan that comes with your Adobe Creative Cloud subscription gives you access to every single font in their library, so you have no excuse not to at least try to discover something that not everyone else is using.

    A good tip is to start with a designer or foundry you like and then explore other typefaces in their catalog. For example, if you’re a fan of the popular slab serif Adelle from TypeTogether, simply click the name of their foundry and you’ll discover gems like Maiola and Karmina Sans. Don’t be afraid to try something that you haven’t seen used before.

    Dig deeper on Google Fonts (but not too deep)

    As of this writing, there are 708 font families available for free on Google Fonts. There are a few dozen or so really great choices. And then there are many, many more not-so-great choices that lack italics and additional weights and that are plagued by poor kerning. So, while you should be wary of digging too deep on Google Fonts, there are definitely some less frequently used options, such as Alegreya and Fira Sans, that can hold their own against any commercial font.

    I fully support the open-source nature of Google Fonts and think that making good type accessible to the world for free is a noble mission. As time goes by, though, the good fonts available on Google Fonts will simply become the next Times New Romans and Arials—fonts that have become so overused that they feel like mindless defaults. So if you rely on Google Fonts, there will always be a limit to how unique and distinctive your brand can be.

    Try another web font service such as Fonts.com, Cloud.typography, or Webtype

    Typekit may have a great selection, but it certainly doesn’t have everything. The Fonts.com library dwarfs the Typekit library, with over 40,000 fonts available. Hoefler & Co.’s high-quality collection of typefaces is only available through their Cloud.typography service. And Webtype offers selections not available on other services.

    Self-host fonts from MyFonts, FontShop, or Fontspring

    Don’t be afraid to self-host web fonts. Serving fonts from your own website really isn’t that difficult and it’s still possible to have a fast-loading website if you self-host. I self-host fonts on Typewolf and my Google PageSpeed Insights scores are 90/100 for mobile and 97/100 for desktop—not bad for an image-heavy site.

    MyFonts, FontShop, and Fontspring all offer self-hosting kits that are surprisingly easy to set up. Self-hosting also offers the added benefit of not having to rely on a third-party service that could potentially go down (and take your beautiful typography with it).
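    A self-hosting kit usually boils down to a few lines of CSS. A minimal sketch, with a hypothetical family name and file paths (the kit you download will supply the real ones):

    ```css
    /* Hypothetical font name and paths; a purchased kit provides the files */
    @font-face {
      font-family: "Body Face";
      src: url("/fonts/body-face.woff2") format("woff2"),
           url("/fonts/body-face.woff") format("woff");
      font-weight: 400;
      font-style: normal;
      font-display: swap; /* show fallback text while the font loads */
    }

    body {
      font-family: "Body Face", Georgia, serif; /* system serif as fallback */
    }
    ```

    One `@font-face` rule per weight and style, a fallback stack, and you’re done—no third-party script required.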

    Explore indie foundries

    Many small and/or independent foundries don’t make their fonts available through the major distributors, instead choosing to offer licensing directly through their own sites. In most cases, self-hosting is the only available option. But again, self-hosting isn’t difficult and most foundries will provide you with all the sample code you need to get up and running.

    Here are some great places to start, in no particular order:

    What about Massimo Vignelli?

    Before I wrap this up, I think it’s worth briefly discussing famed designer Massimo Vignelli’s infamous handful-of-basic-typefaces advice (PDF). John Boardley of I Love Typography has written an excellent critique of Vignelli’s dogma. The main points are that humans have a constant desire for improvement and refinement; we will always need new typefaces, not just so that brands can differentiate themselves from competitors, but to meet the ever-shifting demands of new technologies. And a limited variety of type would create a very bland world.

    No doubt there were those in the 16th century who shared Vignelli’s views. Every age is populated by those who think we’ve reached the apogee of progress… Vignelli’s beloved Helvetica … would never have existed but for our desire to do better, to progress, to create.
    John Boardley, “The Vignelli Twelve”

    Are web fonts the best choice for every website?

    Not necessarily. There are some instances where accessibility and site speed considerations may trump branding—in that case, it may be best just to go with system fonts. Georgia is still a pretty great typeface, and so are newer system UI fonts like San Francisco, Roboto/Noto, and Segoe.
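    Opting for system fonts is a one-line decision in CSS. A common stack (exact fallbacks vary by project) that picks up San Francisco on Apple platforms, Segoe UI on Windows, and Roboto on Android:

    ```css
    body {
      font-family: -apple-system, BlinkMacSystemFont, /* San Francisco */
                   "Segoe UI",                        /* Windows */
                   Roboto,                            /* Android/Chrome OS */
                   "Helvetica Neue", Arial, sans-serif;
    }
    ```

    Because every face in the stack ships with the operating system, there is nothing to download and nothing to flash or reflow.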

    But if you’re working on a project where branding is important, don’t ignore the importance of type. We’re bombarded by more content now than at any other time in history; having a distinctive brand is more critical than ever.

    90 percent of design is typography. And the other 90 percent is whitespace.
    Jeffrey Zeldman, “The Year in Design”

    As designers, ask yourselves: “Is this truly the best typeface for my project? Or am I just using it to be safe, or out of laziness? Will it make my brand memorable, or will my site blend in with every other site out there?” The choice is yours. Dig deep, push your boundaries, and experiment. There are thousands of beautiful and functional typefaces out there—go use them!

  • Never Show A Design You Haven’t Tested On Users 

    It isn’t hard to find a UX designer to nag you about testing your designs with actual users. The problem is, we’re not very good at explaining why you should do user testing (or how to find the time). We say it like it’s some accepted, self-explanatory truth that deep down, any decent human knows is the right thing to do. Like “be a good person” or “be kind to animals.” Of course, if it was that self-evident, there would be a lot more user testing in this world.

    Let me be very specific about why user testing is essential. As long as you’re in the web business, your work will be exposed to users.

    If you’re already a user-testing advocate, that may seem obvious, but we often miss something that’s not as clear: how user testing impacts stakeholder communication and how we can ensure testing is built into projects, even when it seems impossible.

    The most devilish usability issues are those that haven’t even occurred to you as potential problems; you won’t find all the usability issues just by looking at your design. User testing is a way to be there when it happens, to make sure the stuff you created actually works as you intended, because best practices and common sense will get you only so far. You need to test if you want to innovate; otherwise, it’s difficult to know whether people will get it. Or want it. It’s how you find out whether you’ve created something truly intuitive.

    How testing up front saves the day

    Last fall, I was going to meet with one of our longtime clients, the charity and NGO Plan International Norway. We had an idea for a very different sign-up form than the one they were using. What they already had worked quite well, so any reasonable client would be a little skeptical. Why fix it if it isn’t broken, right? Preparing for the meeting, we realized our idea could be voted down before we had the chance to try it out.

    We decided to quickly put together a usability test before we showed the design.

    At the meeting, we began by presenting the results of the user test rather than the design itself.

    We discussed what worked well, and what needed further improvement. The conversation that followed was rational and constructive. Together, we and our partners at Plan discussed different ways of improving the first design, rather than nitpicking details that weren’t an issue in the test. It turned out to be one of the best client meetings I’ve ever had.

    Panels of photos depicting the transition from hand-drawn sketch to digital mockup

    We went from paper sketch to Illustrator sketch to InVision in a day in order to get ready for the test.

    User testing gives focus to stakeholder feedback

    Naturally, stakeholders in any project feel responsible for the end result and want to discuss suggestions, solutions, and any concerns about your design. By testing the design beforehand, you can focus on the real issues at hand.

    Don’t worry about walking into your client meeting with a few unsolved problems. You don’t need to have a solution for every user-identified issue. The goal is to show your design, make clear what you think needs fixing, and ideally, bring a new test of the improved design to the next meeting.

    When you test first and explain the problems you’ve found, stakeholders can be included in suggesting solutions, rather than hypothesizing about what might be problems. This also means that they can focus on what they know and are good at. How will this work with our CRM system? Will we be able to combine this approach with our annual campaign?

    Since last fall, I’ve been applying this dogma in all the work that I do: never show a design you haven’t tested. We’ve reversed the agenda to present results first, then a detailed walkthrough of the design. So far, our conversations about design and UX have become a lot more productive.

    Making room for user testing: sell it like you mean it

    Okay, so it’s a good idea to test. But what if the client won’t buy it or the project owner won’t give you the resources? User testing can be a hard sell—I know this from experience. Here are four ways to move past objections.

    Don’t make it optional

    It’s not unusual to look at the total sum in a proposal and go, “Uhm, this might be a little too much.” So what typically happens? Things that don’t seem essential get trimmed. That usability lab test becomes optional, and we convince ourselves that we’ll somehow persuade the client later that the usability test is actually important.

    But how do you convince them that something you made optional a couple of months ago is now really important? The client will likely feel that we’re trying to sell them something they don’t really need.

    Describe the objective, not the procedure

    A usability lab test with five people often produces valuable—but costly—insight. It also requires resources that don’t go into the test itself: e.g., recruiting and rewarding test subjects, rigging your lab and observation room, making sure the observers from the client are well taken care of (you can’t do that if you’re the one moderating the test), and so on.

    Today, rather than putting “usability lab test with five people” in the proposal, I’ll dedicate a few days to: “Quality assurance and testing: We’ll use the methods we deem most suitable at different stages of the process (e.g., usability lab test, guerilla testing, click tests, pluralistic walkthroughs, etc.) to make sure we get it right.”

    I have never had a client ask me to scale down the “get it right” part. And even if they do ask you to scale it down, you can still pull it off if you follow the next steps.

    Scale down documentation—not the testing

    If you think testing takes too much time, it might be because you spend too much time documenting the test. In a lab test, it’s a good idea to have 20 to 30 minutes between each test subject. This gives you time to summarize (and maybe even fix) the things you found in each test before you move on to the next subject. By the end of the day, you have a to-do list. No need to document it any more than that.

    List of update notifications in the Slack channel

    When user testing the Norwegian Labour party’s new crowdsourcing site, we all contributed our observations straight into our shared Slack channel.

    I’ve also found InVision’s comment mode useful for documenting issues discovered in the tests. If we have an HTML and CSS prototype, screenshots of the relevant pages can be added to InVision, with comments placed on top of the specific issues. This also makes it easy for the client to contribute to the discussion.

    Screen capture of InVision mockup, with comments from team members attached to various parts of the design

    After the test is done, we’ve already fixed some of the problems. The rest ends up in InVision as a to-do on the relevant page. The prototype is actually in HTML, CSS, and JavaScript, but the visual aspect of InVision’s comment feature makes it much easier to avoid misunderstandings.

    Scale down the prototype—not the testing

    You don’t need a full-featured website or a polished prototype to begin testing.

    • If you’re testing text, you really just need text.
    • If you’re testing a form, you just need to prototype the form.
    • If you wonder if something looks clickable, a flat Photoshop sketch will do.
    • Even a paper sketch will work to see if you’re on the right track.

    And if you test at this early stage, you’ll waste much less time later on.

    Low-cost, low-effort techniques to get you started

    You can do this. Now, I’m going to show you some very specific ways you can test, and some examples from projects I’ve worked on.

    Pluralistic walkthrough

    • Time: 15 minutes and up
    • Costs: Free

    A pluralistic walkthrough is UX jargon for asking experts to go through the design and point out potential usability issues. But putting five experts in a room for an hour is expensive (and takes time to schedule). Fortunately, getting them in the same room isn’t always necessary.

    At the start of a project, I put sketches or screenshots into InVision and post the link in our Slack channels and other internal social media. I then ask my colleagues to spend a couple of minutes critiquing the design. As easy as that, you’ll be able to weed out (or create hypotheses about) the biggest issues in your design.

    Team member comments posted on InVision mockup

    Before the usability test, we asked colleagues to comment (using InVision) on what they thought would work or not.

    Hit the streets

    • Time: 1–3 hours
    • Costs: Snacks

    This is a technique that works well if there’s something specific you want to test. If you’re shy, take a deep breath and get over it. This is by far the most effective way of usability testing if you’re short on resources. In the Labour Party project, we were able to test with seven people and summarize our findings within two hours. Here’s how:

    1. Get a device that’s easy to bring along. In my experience, an iPad is most approachable.
    2. Bring candy and snacks. A basket of snacks works great; you can rest the iPad on the basket, too.
    3. Go to a public place with lots of people, preferably a place where people might be waiting (e.g., a station of some sort).
    4. Approach people who look like they are bored and waiting; have your snacks (and iPad) in front of you, and say: “Excuse me, I’m from [company]. Could I borrow a couple of minutes from you? I promise it won’t take more than five minutes. And I have candy!” (This works in Norway, and I’m pretty sure food is a universal language). If you’re working in teams of two, one of you should stay in the background during the approach.
    5. If you’re alone, take notes in between each test. If there are two of you, one person can focus on taking notes while the other is moderating, but it’s still a good idea to summarize between each test.

    Two people standing in a public transportation hub, holding a large basket and an iPad

    Morten and Ida are about to go to the Central Station in Oslo, Norway, to test the Norwegian Labour Party’s new site for crowdsourcing ideas. Don’t forget snacks!

    Online testing tools

    • Time: 30 minutes and up
    • Costs: Most tools have limited free versions. Optimal Workshop charges $149 for one survey and has a yearly plan for $1,990.

    There isn’t any digital testing tool that can provide the kind of insight you get from meeting real users face-to-face. Nevertheless, digital tools are a great way of going deeper into specific themes to see if you can corroborate and triangulate the data from your usability test.

    There are many tools out there, but my two favorites are Treejack and Chalkmark from Optimal Workshop. With Treejack, it rarely takes more than an hour to figure out whether your menus and information architecture are completely off or not. With click tests like Chalkmark, you can quickly get a feel for whether people understand what’s clickable or not.

    Screencapture of Illustrator mockup

    A Chalkmark test of an early Illustrator mockup of Plan’s new home page. The survey asks: “Where would you click to send a letter to your sponsored child?” The heatmap shows where users clicked.

    Diagram combining pie charts and paths

    Nothing kills arguments over menus like this baby. With Treejack, you recreate the information architecture within the survey and give users a task to solve. Here we’ve asked: “You wonder how Plan spends its funds. Where would you search for that?” The results are presented as a tree of the paths the users took.

    Using existing audience for experiments

    • Time: 30 minutes and up
    • Costs: Free (e.g., using Hotjar and Google Analytics).

    One of the things we designed for Plan was longform article pages, binding together a compelling story of text, images, and video. It struck us that these wouldn’t really fit in a usability test. What would the task be? Read the article? And what were the relevant criteria? Time spent? How far he or she scrolled? But what if the person recruited to the test wasn’t interested in the subject? How would we know if it was the design or the story that was the problem, if the person didn’t act as we hoped?

    Since we had used actual content and photos (no lorem ipsum!), we figured that users wouldn’t notice the difference between a prototype and the actual website. What if we could somehow see whether people actually read the article when they stumbled upon it in its natural context?

    The solution was for Plan to share the link to the prototyped article as if it were a regular link to their website, not mentioning that it was a prototype.

    The prototype was set up with Hotjar and Google Analytics. In addition, we had the stats from Facebook Insights. This allowed us to see whether people clicked the link, how much time they spent on the page, how far they scrolled, what they clicked, and even what they did on Plan’s main site if they came from the prototyped article. From this we could surmise that there was no indication of visual barriers (e.g., a big photo making the user think the page was finished), and that the real challenge was actually getting people to click the link in the first place.

    Side-by-side images showing the design and the heatmap resulting from user testing

    On the left is the Facebook update from Plan. On the right is the heat map from Hotjar, showing how far people scrolled, with no clear drop-out point.

    Did you get it done? Was this useful?

    • Time: A few days or a week to set up, but basically no time spent after that
    • Costs: No cost if you build your own; Task Analytics from $950 a month

    Sometimes you need harder, bigger numbers to be convincing. This often leads people to A/B testing or Google Analytics, but unless what you’re looking for is increasing a very specific conversion, even these tools can come up short. Often you’d gain more insight looking for something of a middle ground between the pure quantitative data provided by tools like Google Analytics, and the qualitative data of usability tests.

    “Was it helpful?” modules are one of those middle-ground options I try to implement in almost all of my projects. Using tools like Google Tag Manager, you can even combine the data, letting you see the pages that have the most “yes” and “no” votes on different parts of your website (content governance dream come true, right?). But the qualitative feedback is also incredibly valuable for suggesting specific things your design is lacking.
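    As a rough illustration, the per-page tallying such a module feeds on can be sketched in a few lines of JavaScript (createFeedbackLog, recordVote, and tally are illustrative names here, not a real analytics API):

```javascript
// A minimal sketch of the tallying logic behind a "Was this helpful?"
// module. The names are illustrative, not a real analytics API.
function createFeedbackLog() {
	var votes = [];
	return {
		recordVote: function (page, helpful) {
			votes.push({ page: page, helpful: !!helpful });
		},
		// Summarize yes/no counts per page: the kind of combined view
		// you might assemble with Google Tag Manager and a spreadsheet.
		tally: function () {
			return votes.reduce(function (acc, vote) {
				acc[vote.page] = acc[vote.page] || { yes: 0, no: 0 };
				acc[vote.page][vote.helpful ? 'yes' : 'no'] += 1;
				return acc;
			}, {});
		}
	};
}

var log = createFeedbackLog();
log.recordVote('/articles/usability', true);
log.recordVote('/articles/usability', true);
log.recordVote('/articles/usability', false);
log.tally(); // { '/articles/usability': { yes: 2, no: 1 } }
```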

    Feedback submission buttons

    “Was this article helpful?” or “Did you find what you were looking for?” are simple questions that can give valuable insight.

    This technique falls short if your users weren’t able to find a relevant article. Those folks aren’t going to leave feedback—they’re going to leave. Google Analytics isn’t of much help there, either. That high bounce rate? In most cases you can only guess why. Did they come and go because they found their answer straight away, or because the page was a total miss? Did they spend a lot of time on the page because it was interesting, or because it was impossible to understand?

    My clever colleagues made a tool to answer those kinds of questions. When we do a redesign, we run a Task Analytics survey both before and after launch to figure out not only what the top tasks are, but whether or not people were able to complete their task.

    When the user arrives, they’re asked if they want to help out. Then they’re asked to do whatever they came for and let us know when they’re done. When they’re done, we ask a) “What task did you come to do?” and b) “Did you complete the task?”

    This gives us data that is actionable and easily understood by stakeholders. At our own website, the most common task people arrive for is to contact an employee, and we learned that one in five will fail. We can fix that. And afterward, we can measure whether or not our fix really worked.
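    The arithmetic behind such a measurement is simple; here is a hypothetical sketch (completionRates is an illustrative name, not Task Analytics’ actual API):

```javascript
// Hypothetical sketch: pair each survey response's task with whether
// it was completed, then compute a completion rate per task.
function completionRates(responses) {
	var byTask = {};
	responses.forEach(function (r) {
		var t = byTask[r.task] || (byTask[r.task] = { total: 0, completed: 0 });
		t.total += 1;
		if (r.completed) t.completed += 1;
	});
	Object.keys(byTask).forEach(function (task) {
		byTask[task].rate = byTask[task].completed / byTask[task].total;
	});
	return byTask;
}

var rates = completionRates([
	{ task: 'contact an employee', completed: true },
	{ task: 'contact an employee', completed: true },
	{ task: 'contact an employee', completed: true },
	{ task: 'contact an employee', completed: true },
	{ task: 'contact an employee', completed: false }
]);
rates['contact an employee'].rate; // 0.8, i.e., one in five fail
```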

    Desktop and mobile screenshots from Task Analytics dashboard

    Why do people come to Netlife Research’s website, and do they complete their task? Screenshot from Task Analytics dashboard.

    Set up a usability lab and have a weekly drop-in test day

    • Time: 6 hours per project tested + time spent observing the test
    • Costs: rewarding subjects + the minimal costs of setting up a lab

    Setting up a usability lab is basically free in 2016:

    • A modern laptop has a microphone and camera built in. No need to buy those.
    • Want to test on mobile? Get a webcam and a flexible tripod, or just turn your laptop around.
    • Numerous screensharing and video conference tools like Skype, Google Hangouts, and GoToMeeting mean there’s no need for hefty audiovisual equipment or mirror windows.
    • Even eyetracking is becoming affordable.

    Other than that, you just need a room that’s big enough for you and a user. So even as a UX team of one, you can afford your own usability lab. Setting up a weekly drop-in test makes sense for bigger teams. If you’re at twenty people or more, I’d bet it would be a positive return on investment.

    My ingenious colleague Are Halland is responsible for the test each week. He does the recruiting, the lab setup, and the moderating. Each test day consists of tests with four different people, and each person typically gets tasks from two to three different projects that Netlife is currently working on. (Read up on why it makes sense to test with so few people.)

    By testing two to three projects at a time and having the same person organize it, we can cut down on the time spent preparing and executing the test without cutting out the actual testing.

    As a consultant, all I have to do is let Are know a few days in advance that I need to test something. Usually, I will send clients a link to the live stream of the test to let them know we’re testing and that they’re welcome to pop in and take a look. A bonus is that clients find it surprisingly rewarding to see other clients’ tests and to get other clients’ views on their own design (we don’t put competitors in the same test).

    This has made it a lot easier to test work on short notice, and it has also reduced the time we have to spend on planning and executing tests.

    Two men sitting at a table and working on laptops, with a large screen in the background to display what they are collaborating on

    From a drop-in usability test with the Norwegian Labour Party. Eyetracking data on the screen, Morten (Labour Party) and Jørgen (front-end designer) taking notes (and instantly fixing stuff!) on the right.

    Testing is designing

    As I hope I’ve demonstrated, user testing doesn’t have to be expensive or time-consuming. So what stops us? Personally, I’ve met two big hurdles: building testing into projects to begin with and making a habit out of doing the work.

    The critical first step is to make sure that some sort of user testing is part of the approved project plan. A project manager will look at the proposal and make sure we tick that off the list. Eventually, maybe your clients will come asking for it: “But wasn’t there supposed to be some testing in this project?”

    Second, you don’t have to ask for anyone’s permission to test. User testing improves not only the quality of our work, but also the communication within teams and with stakeholders. If you’re tasked with designing something, even if you have just a few days to do it, treat testing as a part of that design task. I’ve suggested a couple of ways to do that, even with limited time and funds, and I hope you’ll share even more tips, tricks, and tools in the comments.

  • Meaningful CSS: Style Like You Mean It 

    These days, we have a world of meaningful markup at our fingertips. HTML5 introduced a lavish new set of semantically meaningful elements and attributes, ARIA defined an entire additional platform to describe a rich internet, and microformats stepped in to provide still more standardized, nuanced concepts. It’s a golden age for rich, meaningful markup.

    Yet our markup too often remains a tangle of divs, and our CSS is a morass of classes that bear little relationship to those divs. We nest div inside div inside div, and we give every div a stack of classes—but when we look in the CSS, our classes provide little insight into what we’re actually trying to define. Even when we do have semantic and meaningful markup, we end up redefining it with CSS classes that are inherently arbitrary. They have no intrinsic meaning.

    We were warned about these patterns years ago:

    In a site afflicted by classitis, every blessed tag breaks out in its own swollen, blotchy class. Classitis is the measles of markup, obscuring meaning as it adds needless weight to every page.
    Jeffrey Zeldman, Designing with Web Standards, 1st ed.

    Along the same lines, the W3C weighed in with:

    CSS gives so much power to the “class” attribute, that authors could conceivably design their own “document language” based on elements with almost no associated presentation (such as DIV and SPAN in HTML) and assigning style information through the “class” attribute… Authors should avoid this practice since the structural elements of a document language often have recognized and accepted meanings and author-defined classes may not. (emphasis mine)

    So why, exactly, does our CSS abuse classes so mercilessly, and why do we litter our markup with author-defined classes? Why can’t our CSS be as semantic and meaningful as our markup? Why can’t both be more semantic and meaningful, moving forward in tandem?

    Building better objects

    A long time ago, as we emerged from the early days of CSS and began building increasingly larger sites and systems, we struggled to develop some sound conventions to wrangle our ever-growing CSS files. Out of that mess came object-oriented CSS.

    Our systems for safely building complex, reusable components created a metastasizing classitis problem—to the point where our markup today is too often written in the service of our CSS, instead of the other way around. If we try to write semantic, accessible markup, we’re still forced to tack on author-defined meanings to satisfy our CSS. Both our markup and our CSS reflect a time when we could only define objects with what we had: divs and classes. When in doubt, add more of both. It was safer, especially for older browsers, so we oriented around the most generic objects we could find.

    Today, we can move beyond that. We can define better objects. We can create semantic, descriptive, and meaningful CSS that understands what it is describing and is as rich and accessible as the best modern markup. We can define the elephant instead of saying things like .pillar and .waterspout.

    Clearing a few things up

    But before we turn to defining better objects, let’s back up a bit and talk about what’s wrong with our objects today, with a little help from cartoonist Gary Larson.

    Larson once drew a Far Side cartoon in which a man carries around paint and marks everything he sees. “Door” drips across his front door, “Tree” marks his tree, and his cat is clearly labeled “Cat.” Satisfied, the man says, “That should clear a few things up.”

    We are all Larson’s label-happy man. We write <table class="table"> and <form class="form"> without a moment’s hesitation. Looking at GitHub, one can find plenty of examples of <main class="main">. But why? You can’t have more than one main element, so you already know how to reference it directly. The new elements in HTML5 are nearly a decade old now. We have no excuse for not using them well. We have no excuse for not expecting our fellow developers to know and understand them.

    Why reinvent the semantic meanings already defined in the spec in our own classes? Why duplicate them, or muddy them?

    An end-user may not notice or care if you stick a form class on your form element, but you should. You should care about bloating your markup and slowing down the user experience. You should care about readability. And if you’re getting paid to do this stuff, you should care about being the sort of professional who doesn’t write redundant slop. “Why should I care” was the death rattle of those advocating for table-based layouts, too.

    Start semantic

    The first step to semantic, meaningful CSS is to start with semantic, meaningful markup. Classes are arbitrary, but HTML is not. In HTML, every element has a very specific, agreed-upon meaning, and so do its attributes. Good markup is inherently expressive, descriptive, semantic, and meaningful.

    If and when the semantics of HTML5 fall short, we have ARIA, specifically designed to fill in the gaps. ARIA is too often dismissed as “just accessibility,” but really—true to its name—it’s about Accessible Rich Internet Applications. Which means it’s chock-full of expanded semantics.

    For example, if you want to define a top-of-page header, you could create your own .page-header class, which would carry no real meaning. You could use a header element, but since you can have more than one header element, that’s probably not going to work. But ARIA’s [role=banner] is already there in the spec, definitively saying, “This is a top-of-page header.”

    Once you have <header role="banner">, adding an extra class is simply redundant and messy. In our CSS, we know exactly what we’re talking about, with no possible ambiguity.

    And it’s not just about those big top-level landmark elements, either. ARIA provides a way to semantically note small, atomic-level elements like alerts, too.

    A word of caution: don’t throw ARIA roles on elements that already have the same semantics. So for example, don’t write <button role="button">, because the semantics are already present in the element itself. Instead, use [role=button] on elements that should look and behave like buttons, and style accordingly:

    button,
    [role=button] {
    	/* shared styles for native buttons and anything
    	   semantically marked as a button */
    }

    Anything marked as semantically matching a button will also get the same styles. By leveraging semantic markup, our CSS clearly incorporates elements based on their intended usage, not arbitrary groupings. By leveraging semantic markup, our components remain reusable. Good markup does not change from project to project.

    Okay, but why?


    • If you’re writing semantic, accessible markup already, then you dramatically reduce bloat and get cleaner, leaner, and more lightweight markup. It becomes easier for humans to read and will—in most cases—be faster to load and parse. You remove your author-defined detritus and leave the browser with known elements. Every element is there for a reason and provides meaning.
    • On the other hand, if you’re currently wrangling div-and-class soup, then you score a major improvement in accessibility, because you’re now leveraging roles and markup that help assistive technologies. In addition, you standardize markup patterns, making repeating them easier and more consistent.
    • You’re strongly encouraging a consistent visual language of reusable elements. A consistent visual language is key to a satisfactory user experience, and you’ll make your designers happy as you avoid uncanny-valley situations in which elements look mostly but not completely alike, or work slightly differently. Instead, if it looks like a duck and quacks like a duck, you’re ensuring it is, in fact, a duck, rather than a rabbit.duck.
    • There’s no context-switching between CSS and HTML, because each is clearly describing what it’s doing according to a standards-based language.
    • You’ll have more consistent markup patterns, because the right way is clear and simple, and the wrong way is harder.
    • You don’t have to think of names nearly as much. Let the specs be your guide.
    • It allows you to decouple from the CSS framework du jour.

    Here’s another, more interesting scenario. Typical form markup might look something like this (or worse):

    <form class="form" method="POST" action=".">
    	<div class="form-group">
    		<label for="id-name-field">What’s Your Name</label>
    		<input type="text" class="form-control text-input" name="name-field" id="id-name-field" />
    	</div>
    	<div class="form-group">
    		<input type="submit" class="btn btn-primary" value="Enter" />
    	</div>
    </form>

    And then in the CSS, you’d see styles attached to all those classes. So we have a stack of classes describing that this is a form and that it has a couple of inputs in it. Then we add two classes to say that the button that submits this form is a button, and represents the primary action one can take with this form.

    Common vs. optimal form markup
    What you’ve been using | What you could use instead | Why
    .form | form | Most of your forms will—or at least should—follow consistent design patterns. Save additional identifiers for those that don’t. Have faith in your design patterns.
    .form-group | form > p or fieldset > p | The W3C recommends paragraph tags for wrapping form elements.
    .form-control or .text-input | [type=text] | You already know it’s a text input.
    .btn and .btn-primary | [type=submit] | Submitting the form is inherently the primary action.

    Some common vs. more optimal form markup patterns

    In light of all that, here’s the new, improved markup.

    <form method="POST" action=".">
    	<p>
    		<label for="id-name-field">What’s Your Name</label>
    		<input type="text" name="name-field" id="id-name-field" />
    	</p>
    	<p>
    		<button type="submit">Enter</button>
    	</p>
    </form>

    The functionality is exactly the same.

    Or consider this CSS. You should be able to see exactly what it’s describing and exactly what it’s doing:

    [role=tab] {
    	display: inline-block;
    }
    [role=tab][aria-selected=true] {
    	background: tomato;
    }
    [role=tabpanel] {
    	display: none;
    }
    [role=tabpanel][aria-expanded=true] {
    	display: block;
    }

    Note that [aria-hidden] is more semantic than a utility .hide class, and could also be used here, but aria-expanded seems more appropriate. Neither necessarily needs to be tied to tabpanels, either.

    In some cases, you’ll find no element or attribute in the spec that suits your needs. This is the exact problem that microformats and microdata were designed to solve, so you can often press them into service. Again, you’re retaining a standardized, semantic markup and having your CSS reflect that.

    At first glance, it might seem like this would fail in the exact scenario that CSS naming structures were built to suit best: large projects, large teams. This is not necessarily the case. CSS class-naming patterns place rigid demands on the markup that must be followed. In other words, the CSS dictates the final HTML. The significant difference is that with a meaningful CSS technique, the styles reflect the markup rather than the other way around. One is not inherently more or less scalable. Both come with expectations.

    One possible argument might be that ensuring all team members understand the correct markup patterns will be too hard. On the other hand, if there is any baseline level of knowledge we should expect of all web developers, surely that should be a solid working knowledge of HTML itself, not memorizing arcane class-naming rules. If nothing else, the patterns a team follows will be clear, established, well documented by the spec itself, and repeatable. Good markup and good CSS, reinforcing each other.

    To suggest we shouldn’t write good markup and good CSS because some team members can’t understand basic HTML structures and semantics is a cop-out. Our industry can—and should—expect better. Otherwise, we’d still be building sites in tables because CSS layout is supposedly hard for inexperienced developers to understand. It’s an embarrassing argument.

    Probably the hardest part of meaningful CSS is understanding when classes remain helpful and desirable. The goal is to use classes as they were intended to be used: as arbitrary groupings of elements. You’d want to create custom classes most often for a few cases:

    • When there are not existing elements, attributes, or standardized data structures you can use. In some cases, you might truly have an object that the HTML spec, ARIA, and microformats all never accounted for. It shouldn’t happen often, but it is possible. Just be sure you’re not sticking a horn on a horse when you’re defining .unicorn.
    • When you wish to arbitrarily group differing markup into one visual style. In this example, you want objects that are not the same to look like they are. In most cases, they should probably be the same, semantically, but you may have valid reasons for wanting to differentiate them.
    • You’re building it as a utility mixin.

    Another concern might be building up giant stacks of selectors. In some cases, building a wrapper class might be helpful, but generally speaking, you shouldn’t have a big stack of selectors because the elements themselves are semantically different elements and should not be sharing all that many styles. The point of meaningful CSS is that you know from your CSS that that button or [role=button] applies to all buttons, but [type=submit] is always the primary action item on the form.

    We have so many more powerful attributes at our disposal today that we shouldn’t need big stacks of selectors. To have them would indicate sloppy thinking about what things truly are and how they are intended to be used within the overall system.

    It’s time to up our CSS game. We can remain dogmatically attached to patterns developed in a time and place we have left behind, or we can move forward with CSS and markup that correspond to defined specs and standards. We can use real objects now, instead of creating abstract representations of them. The browser support is there. The standards and references are in place. We can start today. Only habit is stopping us.

  • Prototypal Object-Oriented Programming using JavaScript 

    Douglas Crockford accurately described JavaScript as the world’s most misunderstood language. A lot of programmers tend to think of it as not a “proper” language because it seems to lack the familiar class-based object-oriented programming concepts. I developed the same opinion myself after my first JavaScript project ended up a hodgepodge, as I couldn’t find a way to organize code into classes. But as we will see, JavaScript comes packed with a rich system of object-oriented programming that many programmers don’t know about.

    Back in the time of the First Browser War, executives at Netscape hired a smart guy called Brendan Eich to put together a language that would run in the browser. Unlike class-based languages like C++ and Java, this language, which was at some point called LiveScript, was designed to implement a prototype-based inheritance model. Prototypal OOP, which is conceptually different from the class-based systems, had been invented just a few years before to solve some problems that class-based OOP presented and it fit very well with LiveScript’s dynamic nature.

    Unfortunately, this new language had to “look like Java” for marketing reasons. Java was the cool new thing in the tech world and Netscape’s executives wanted to market their shiny new language as “Java’s little brother.” This seems to be why its name was changed to JavaScript. The prototype-based OOP system, however, didn’t look anything like Java’s classes. To make this prototype-based system look like a class-based system, JavaScript’s designers came up with the keyword new and a novel way to use constructor functions. The existence of this pattern and the ability to write “pseudo class-based” code has led to a lot of confusion among developers.

    Understanding the rationale behind prototype-based programming was my “aha” moment with JavaScript and resolved most of the gripes I had with the language. I hope learning about prototype-based OOP brings you the same peace of mind it brought me. And I hope that exploring a technique few have fully explored excites you as much as it excites me.

    Prototype-based OOP

    Conceptually, in class-based OOP, we first create a class to serve as a “blueprint” for objects, and then create objects based on this blueprint. To build more specific types of objects, we create “child” classes; i.e., we make some changes to the blueprint and use the resulting new blueprint to construct the more specific objects.

    For a real-world analogy, if you were to build a chair, you would first create a blueprint on paper and then manufacture chairs based on this blueprint. The blueprint here is the class, and chairs are the objects. If you wanted to build a rocking chair, you would take the blueprint, make some modifications, and manufacture rocking chairs using the new blueprint.

    Now take this example into the world of prototypes: you don’t create blueprints or classes here, you just create the object. You take some wood and hack together a chair. This chair, an actual object, can function fully as a chair and also serve as a prototype for future chairs. In the world of prototypes, you build a chair and simply create “clones” of it. If you want to build a rocking chair, all you have to do is pick a chair you’ve manufactured earlier, attach two rockers to it, and voilà! You have a rocking chair. You didn’t really need a blueprint for that. Now you can just use this rocking chair for yourself, or perhaps use it as a prototype to create more rocking chairs.

    JavaScript and prototype-based OOP

    Following is an example that demonstrates this kind of OOP in JavaScript. We start by creating an animal object:

    var genericAnimal = Object.create(null);

    Object.create(null) creates a new empty object. (We will discuss Object.create() in further detail later.) Next, we add some properties and functions to our new object:

    genericAnimal.name = 'Animal';
    genericAnimal.gender = 'female';
    genericAnimal.description = function() {
    	return 'Gender: ' + this.gender + '; Name: ' + this.name;
    };

    genericAnimal is a proper object and can be used like one:

    genericAnimal.description();
    //Gender: female; Name: Animal

    We can create other, more specific animals by using our sample object as a prototype. Think of this as cloning the object, just like we took a chair and created a clone in the real world.

    var cat = Object.create(genericAnimal);

    We just created a cat as a clone of the generic animal. We can add properties and functions to this:

    cat.purr = function() {
    	return 'Purrrr!';
    };

    We can use our cat as a prototype and create a few more cats:

    var colonel = Object.create(cat);
    colonel.name = 'Colonel Meow';
    var puff = Object.create(cat);
    puff.name = 'Puffy';

    You can also observe that properties/methods from parents were properly carried over:

    puff.description();
    //Gender: female; Name: Puffy

    The new keyword and the constructor function

    JavaScript has the concept of a new keyword used in conjunction with constructor functions. This feature was built into JavaScript to make it look familiar to people trained in class-based programming. You may have seen JavaScript OOP code that looks like this:

    function Person(name) {
    	this.name = name;
    	this.sayName = function() {
    		return "Hi, I'm " + this.name;
    	};
    }
    var adam = new Person('Adam');

    Implementing inheritance using JavaScript’s default method looks more complicated. We define Ninja as a subclass of Person. Ninjas have a name, since they are people, and they can also have a primary weapon, such as a shuriken.

    function Ninja(name, weapon) {
    	Person.call(this, name);
    	this.weapon = weapon;
    }
    Ninja.prototype = Object.create(Person.prototype);
    Ninja.prototype.constructor = Ninja;

    While the constructor pattern might look more attractive to an eye that’s familiar with class-based OOP, it is considered problematic by many. What’s happening behind the scenes is prototypal OOP, and the constructor function obfuscates the language’s natural implementation of OOP. This just looks like an odd way of doing class-based OOP without real classes, and leaves the programmer wondering why they didn’t implement proper class-based OOP.

    Since it’s not really a class, it’s important to understand what a call to a constructor does. It first creates an empty object, then sets the prototype of this object to the prototype property of the constructor, then calls the constructor function with this pointing to the newly-created object, and finally returns the object. It’s an indirect way of doing prototype-based OOP that looks like class-based OOP.
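    That sequence can be sketched in plain JavaScript. Here, construct is a hypothetical helper for illustration; it is not part of the language:

```javascript
// A rough sketch of what `new Constructor(args)` does behind the scenes.
// `construct` is a hypothetical helper, written for illustration only.
function construct(Constructor) {
	var args = Array.prototype.slice.call(arguments, 1);
	// 1. Create an empty object whose prototype is Constructor.prototype
	var obj = Object.create(Constructor.prototype);
	// 2. Call the constructor with `this` pointing at the new object
	var result = Constructor.apply(obj, args);
	// 3. Return the new object (or the constructor's own return value,
	//    if it explicitly returned an object)
	return (typeof result === 'object' && result !== null) ? result : obj;
}

function Person(name) {
	this.name = name;
}
var adam = construct(Person, 'Adam');
adam.name;              // 'Adam'
adam instanceof Person; // true
```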

    The problem with JavaScript’s constructor pattern is succinctly summed up by Douglas Crockford:

    JavaScript’s constructor pattern did not appeal to the classical crowd. It also obscured JavaScript’s true prototypal nature. As a result, there are very few programmers who know how to use the language effectively.

    The most effective way to work with OOP in JavaScript is to understand prototypal OOP, whether the constructor pattern is used or not.

    Understanding delegation and the implementation of prototypes

    So far, we’ve seen how prototypal OOP differs from traditional OOP in that there are no classes—only objects that can inherit from other objects.

    Every object in JavaScript holds a reference to its parent (prototype) object. When an object is created through Object.create, the passed object—meant to be the prototype for the new object—is set as the new object’s prototype. For the purpose of understanding, let’s assume that this reference is called __proto__. Some examples from the previous code can illustrate this point:

    The line below creates a new empty object with __proto__ as null.

    var genericAnimal = Object.create(null); 

    The code below then creates a new empty object with __proto__ set to the genericAnimal object, i.e. rodent.__proto__ points to genericAnimal.

    var rodent = Object.create(genericAnimal);
    rodent.size = 'S';

    The following line will create an empty object with __proto__ pointing to rodent.

    var capybara = Object.create(rodent);
    //capybara.__proto__ points to rodent
    //capybara.__proto__.__proto__ points to genericAnimal
    //capybara.__proto__.__proto__.__proto__ is null

    As we can see, every object holds a reference to its prototype. Looking at Object.create without knowing what exactly it does, it might look like the function actually “clones” from the parent object, and that properties of the parent are copied over to the child, but this is not true. When capybara is created from rodent, capybara is an empty object with only a reference to rodent.

    But then—if we were to call capybara.size right after creation, we would get S, which was the size we had set in the parent object. What blood-magic is that? capybara doesn’t have a size property yet. But still, when we write capybara.size, we somehow manage to see the prototype’s size property.

    The answer is in JavaScript’s method of implementing inheritance: delegation. When we call capybara.size, JavaScript first looks for that property in the capybara object. If not found, it looks for the property in capybara.__proto__. If it didn’t find it in capybara.__proto__, it would look in capybara.__proto__.__proto__. This is known as the prototype chain.
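    A quick way to see delegation at work is to check where the property actually lives. This sketch reuses the rodent/capybara shape from above (with a plain object literal, so Object.prototype methods are available):

```javascript
// The property is found through the chain, but lives on the prototype.
var rodent = { size: 'S' };
var capybara = Object.create(rodent);

capybara.size;                                          // 'S', found via the prototype chain
Object.prototype.hasOwnProperty.call(capybara, 'size'); // false: capybara itself is empty
Object.getPrototypeOf(capybara) === rodent;             // true
```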

    If we called capybara.description(), the JavaScript engine would start searching up the prototype chain for the description function and finally discover it in capybara.__proto__.__proto__ as it was defined in genericAnimal. The function would then be called with this pointing to capybara.

    Setting a property is a little different. When we set capybara.size = 'XXL', a new property called size is created in the capybara object. Next time we try to access capybara.size, we find it directly in the object, set to 'XXL'.
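    This shadowing behavior is easy to verify in a sketch:

```javascript
// Setting a property creates an *own* property on the object;
// the prototype's property is shadowed, not overwritten.
var rodent = { size: 'S' };
var capybara = Object.create(rodent);

capybara.size = 'XXL';
capybara.size; // 'XXL', the new own property
rodent.size;   // still 'S'
```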

    Since the prototype property is a reference, changing the prototype object’s properties at runtime will affect all objects using the prototype. For example, if we rewrote the description function or added a new function in genericAnimal after creating rodent and capybara, they would be immediately available for use in rodent and capybara, thanks to delegation.
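    A small sketch (rebuilding the chain from the earlier examples) shows this live delegation:

```javascript
// Changes to a prototype are visible through delegation immediately,
// even for objects created before the change.
var genericAnimal = Object.create(null);
var rodent = Object.create(genericAnimal);
var capybara = Object.create(rodent);

// Add a method to the top of the chain after the children exist...
genericAnimal.greet = function () { return 'hello'; };

capybara.greet(); // 'hello': delegated up the chain at call time
```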

    Creating Object.create

    When JavaScript was developed, its default way of creating objects was the keyword new. Then many notable JavaScript developers campaigned for Object.create, and eventually it was included in the standard. However, some browsers don’t support Object.create (you know the one I mean). For that reason, Douglas Crockford recommends including the following code in your JavaScript applications to ensure that Object.create is created if it is not there:

    if (typeof Object.create !== 'function') {
    	Object.create = function (o) {
    		function F() {}
    		F.prototype = o;
    		return new F();
    	};
    }

    Object.create in action

    If you wanted to extend JavaScript’s Math object, how would you do it? Suppose that we would like to redefine the random function without modifying the original Math object, as other scripts might be using it. JavaScript’s flexibility provides many options. But I find using Object.create a breeze:

    var myMath = Object.create(Math);

    Couldn’t possibly get any simpler than that. You could, if you prefer, write a new constructor, set its prototype to a clone of Math, augment the prototype with the functions you like, and then construct the actual object. But why go through all that pain to make it look like a class, when prototypes are so simple?

    We can now redefine the random function in our myMath object. In this case, I wrote a function that returns random whole numbers within a range if the user specifies one. Otherwise, it just calls the parent’s random function.

    myMath.random = function() {
    	var uber = Object.getPrototypeOf(this);
    	if (typeof(arguments[0]) === 'number' && typeof(arguments[1]) === 'number' && arguments[0] < arguments[1]) {
    		var rand = uber.random();
    		var min = Math.floor(arguments[0]);
    		var max = Math.ceil(arguments[1]);
    		return this.round(rand * (max - min)) + min;
    	}
    	return uber.random();
    };

    There! Now myMath.random(-5,5) gets you a random whole number between −5 and 5, while myMath.random() gets the usual. And since myMath has Math as its prototype, it has all the functionality of the Math object built into it.
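    As a quick sanity check, the ranged call stays within bounds while the inherited Math functionality remains available (the function is repeated here so the snippet runs on its own):

```javascript
var myMath = Object.create(Math);
myMath.random = function () {
    var uber = Object.getPrototypeOf(this);
    if (typeof arguments[0] === 'number' && typeof arguments[1] === 'number' &&
        arguments[0] < arguments[1]) {
        var rand = uber.random();
        var min = Math.floor(arguments[0]);
        var max = Math.ceil(arguments[1]);
        return this.round(rand * (max - min)) + min;
    }
    return uber.random();
};

var n = myMath.random(-5, 5);
console.log(n >= -5 && n <= 5 && n === myMath.round(n));  // true: whole number in range
console.log(myMath.floor(2.7));                           // 2, inherited from Math
```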

    Class-based OOP vs. prototype-based OOP

    Prototype-based OOP and class-based OOP are both great ways of doing OOP; both approaches have pros and cons. Both have been researched and debated in the academic world since before I was born. Is one better than the other? There is no consensus on that. But the key points everyone can agree on are that prototypal OOP is simpler to understand, more flexible, and more dynamic.

    To get a glimpse of its dynamic nature, take the following example: you write code that extensively uses the indexOf function in arrays. After writing it all down and testing in a good browser, you grudgingly test it out in Internet Explorer 8. As expected, you face problems. This time it’s because indexOf is not defined in IE8.

    So what do you do? In the class-based world, you could solve this by defining the function, perhaps in another “helper” class which takes an array or List or ArrayList or whatever as input, and replacing all the calls in your code. Or perhaps you could sub-class the List or ArrayList and define the function in the sub-class, and use your new sub-class instead of the ArrayList.

    But JavaScript and prototype-based OOP’s dynamic nature makes it simple. Every array is an object and points to a parent prototype object. If we can define the function in the prototype, then our code will work as is without any modification!

    if (!Array.prototype.indexOf) {
    	Array.prototype.indexOf = function(elem) {
    		//Your magical fix code goes here.
    	};
    }

    You can do many cool things once you ditch classes and objects for JavaScript’s prototypes and dynamic objects. You can extend existing prototypes to add new functionality—extending prototypes like we did above is how the well known and aptly named library Prototype.js adds its magic to JavaScript’s built-in objects. You can create all sorts of interesting inheritance schemes, such as one that inherits selectively from multiple objects. Its dynamic nature means you don’t even run into the problems with inheritance that the Gang of Four book famously warns about. (In fact, solving these problems with inheritance was what prompted researchers to invent prototype-based OOP—but all that is beyond our scope for this article.)

    Class-based OOP emulation can go wrong

    Consider the following very simple example written with pseudo-classes:

    function Animal() {
    	this.offspring = [];
    }

    Animal.prototype.makeBaby = function() {
    	var baby = new Animal();
    	this.offspring.push(baby);
    	return baby;
    };

    //create Cat as a sub-class of Animal
    function Cat() {
    }

    //Inherit from Animal
    Cat.prototype = new Animal();

    var puff = new Cat();
    puff.makeBaby();
    var colonel = new Cat();
    colonel.makeBaby();

    The example looks innocent enough. This is an inheritance pattern that you will see in many places all over the internet. However, something funny is going on here—if you check colonel.offspring and puff.offspring, you will notice that each of them contains the same two babies! That’s probably not what you intended—unless you are coding a quantum physics thought experiment.

    JavaScript tried to make our lives easier by making it look like we have good old class-based OOP going on. But it turns out it’s not that simple. Simulating class-based OOP without completely understanding prototype-based OOP can lead to unexpected results. To understand why this problem occurred, you must understand prototypes and how constructors are just one way to build objects from other objects.

    What happened in the above code is very clear if you think in terms of prototypes. The variable offspring is created when the Animal constructor is called—and it is created in the Cat.prototype object. All individual objects created with the Cat constructor use Cat.prototype as their prototype, and Cat.prototype is where offspring resides. When we call makeBaby, the JavaScript engine searches for the offspring property in the Cat object and fails to find it. It then finds the property in Cat.prototype—and adds the new baby in the shared object that both individual Cat objects inherit from.
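    You can confirm where offspring actually lives with hasOwnProperty. The following compact sketch rebuilds the buggy pattern so it runs on its own:

```javascript
function Animal() { this.offspring = []; }
Animal.prototype.makeBaby = function () {
    this.offspring.push(new Animal());
};
function Cat() {}
Cat.prototype = new Animal();  // offspring lands on the shared prototype

var puff = new Cat(), colonel = new Cat();
puff.makeBaby();
colonel.makeBaby();

console.log(puff.hasOwnProperty('offspring'));           // false
console.log(Cat.prototype.hasOwnProperty('offspring'));  // true
console.log(puff.offspring === colonel.offspring);       // true, one shared array
console.log(puff.offspring.length);                      // 2
```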

    So now that we understand what the problem is, thanks to our knowledge of the prototype-based system, how do we solve it? The solution is that the offspring property needs to be created in the object itself rather than somewhere in the prototype chain. There are many ways to solve it. One way is that makeBaby ensures that the object on which the function is called has its own offspring property:

    Animal.prototype.makeBaby = function() {
    	var baby = new Animal();
    	if (!this.hasOwnProperty('offspring')) {
    		this.offspring = [];
    	}
    	this.offspring.push(baby);
    	return baby;
    };

    Backbone.js runs into a similar trap. In Backbone.js, you build views by extending the base Backbone.View “class.” You then instantiate views using the constructor pattern. This model is very good at emulating class-based OOP in JavaScript:

    //Create a HideableView "sub-class" of Backbone.View
    var HideableView = Backbone.View.extend({
        el: '#hideable', //the view will bind to this selector
        events : {
            'click .hide': 'hide'
        },
        //this function was referenced in the click handler above
        hide: function() {
            //hide the entire view
            this.$el.hide();
        }
    });

    var hideable = new HideableView();

    This looks like simple class-based OOP. We inherited from the base Backbone.View class to create a HideableView child class. Next, we created an object of type HideableView.

    Since this looks like simple class-based OOP, we can use this functionality to conveniently build inheritance hierarchies, as shown in the following example:

    var HideableTableView = HideableView.extend({
        //Some view that is hideable and rendered as a table.
    });

    var HideableExpandableView = HideableView.extend({
        initialize: function() {
            //add an expand click handler. We didn’t create a separate
            //events object because we need to add to the
            //inherited events.
            this.events['click .expand'] = 'expand';
        },
        expand: function () {
        	//handle expand
        }
    });

    var table = new HideableTableView();
    var expandable = new HideableExpandableView();

    This all looks good while you’re thinking in class-based OOP. But if you try table.events['click .expand'] in the console, you will see “expand”! Somehow, HideableTableView has an expand click handler, even though it was never defined in this class.

    You can see the problem in action here: http://codepen.io/anon/pen/qbYJeZ

    The problem above occurred because of the same reason outlined in the earlier example. In Backbone.js, you need to work against the indirection created by trying to make it look like classes, to see the prototype chain hidden in the background. Once you comprehend how the prototype chain would be structured, you will be able to find a simple fix for the problem.
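    A minimal plain-JavaScript sketch of that fix, no Backbone required (the object names here are illustrative): copy the inherited events hash into an own object before adding to it.

```javascript
// The events hash shared via the prototype, like Backbone puts it on
// the view "class" prototype.
var proto = { events: { 'click .hide': 'hide' } };
var table = Object.create(proto);
var expandable = Object.create(proto);

// The bug: mutating expandable.events in place would leak into table,
// because both delegate to the same shared object.

// The fix: give the instance its own copy before extending (ES5-style clone).
var own = {};
for (var key in expandable.events) {
    own[key] = expandable.events[key];
}
own['click .expand'] = 'expand';
expandable.events = own;

console.log('click .expand' in expandable.events);  // true
console.log('click .expand' in table.events);       // false, no leak
```

In the Backbone example itself, this clone-then-extend would go in initialize, for instance with Underscore’s _.extend({}, this.events, {…}).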

    In conclusion

    Despite prototypal OOP underpinning one of the most popular languages out there today, programmers are largely unfamiliar with what exactly prototype-based OOP is. JavaScript itself may be partly to blame because of its attempts to masquerade as a class-based language.

    This needs to change. To work effectively with JavaScript, developers need to understand the how and why of prototype-based programming—and there’s much more to it than this article covers. Beyond mastering JavaScript, learning about prototype-based programming also teaches you a lot about class-based programming, as you get to compare and contrast the two approaches.

    Further Reading

    Douglas Crockford’s note on prototypal programming was written before Object.create was added to the standard.

    An article on IBM’s developerWorks reinforces the same point on prototypal OOP. This article was the prototypal “aha” moment for me.

    The following three texts will be interesting reads if you’re willing to dive into the academic roots of prototype-based programming:

    Henry Lieberman of MIT Media Labs compares class-based inheritance with prototype-based delegation and argues that prototype-based delegation is the more flexible of the two concepts.

    Classes versus Prototypes in Object-Oriented Languages is a proposal to use prototypes instead of classes by the University of Washington’s Alan Borning.

    Lieberman’s and Borning’s work in the 1980s appears to have influenced the work that David Ungar and Randall Smith did to create the first prototype-based programming language: Self. Self went on to become the basis for the prototype-based system in JavaScript. This paper describes their language and how it omits classes in favor of prototypes.



    • 1. The __proto__ property is used by some browsers to expose an object’s prototype, but it is not standard and is considered obsolete. Use Object.getPrototypeOf() as a standards-compliant way of obtaining an object’s prototype in modern browsers.
  • OOUX: A Foundation for Interaction Design 

    There’s a four-year story behind my current design process, something I introduced last year on A List Apart—“Object-Oriented UX.” The approach advocates designing objects before actions. Now it’s time to get into the deeper benefits of OOUX and the smooth transition it can set up while shifting from object-based system design to interaction design.

    The “metaphor,” once found, is a perfectly definite thing: a collection of objects, actions on objects, and relationships between objects.
    Dave Collins, Designing Object-Oriented User Interfaces (1995)

    Imagine you’re designing a social network that helps chefs trade recipes requiring exotic ingredients. With good ol’ fashioned research, you develop a solid persona (Pierre, the innovator-chef, working in a gourmet restaurant) and you confirm the space in the market. You understand the industry and the project goals. Now it’s time to put marker to whiteboard.

    Where would you start designing?

    Would you start by sketching out an engaging onboarding process for chefs? We do need chefs to make this thing successful—no chefs, no network! So maybe we start by making sure their first interaction is amazing.

    Or maybe you start with one of the most frequent activities: how a chef posts a new recipe. And that could easily lead you to sketching the browsing experience—how will other chefs find new recipes?

    Three or four years ago, I’d start by storyboarding a critical user path. I’d start with the doing.

    Pre-OOUX, my initial design-thinking would look something like this. I’d figure out the interaction design while figuring out what a recipe actually should be.


    I imagine many other user experience designers begin the same way, by designing how someone would use the thing. One interaction flow leads to the design of another interaction flow. Soon, you have a web of flows. Iterate on those flows, add some persistent navigation, and voilà!—you have a product design.

    But there is a problem with this action-first approach. We are designing our actions without a clear picture of what is being acted on. It’s like the sentence, “Sally kicked.” We’ve got our subject (the user) and we’ve got our verb (the action).  But where’s the object? Sally kicked what? The ball? Her brother? A brain-hungry zombie?

    When we jump right into actions, we run the risk of designing a product with a fuzzy reflection of the user’s mental model. By clearly defining the objects in our users’ real-world problem domain, we can create more tangible and relatable user experiences.

    These days, a lot happens before I begin sketching user flows (in this article, I use “user flow” and “interaction flow” interchangeably). I first define my user, asking, “Who’s Sally?” Next, I figure out her mental model, meaning all the things (objects) that the problem is made of, all the things she sees as part of the solution, and how they relate to one another. Finally, I design the interactions. Once I understand that Sally is a ninja armed with only a broomstick, and that she is faced with a team of zombies, I can better design the actions she’ll take.

    In retrospect, I feel like I was doing my job backwards for the first two-thirds of my career, putting interaction flows before building an object-oriented framework. Now, I would figure out the system of chefs, recipes, and ingredients before worrying about the chef onboarding process or how exactly a chef posts a recipe. How do the objects relate to one another? What content elements comprise each object? Which objects make up my MVP and which objects can I fold in later? Finally, what actions does a user take on each object?

    That’s what Object Oriented UX is all about—thinking in terms of objects before actions. In my previous article, we learned how to define objects and design a framework based on those objects. This time, we’re exploring how to smoothly transition from big-picture OOUX to interaction design by using a very simple tool: the CTA Inventory.

    What’s a CTA Inventory, and why is it important?

    Calls to action (CTAs) are the main entry points to interaction flows. If an interaction flow is a conversation between the system and the user, the CTA is a user’s opening line to start that conversation. Once you have an object framework, you can add possible CTAs to your objects, basically putting a stake in the ground that says, “Interaction design might go here.” These stakes in the ground—the CTAs—can be captured using a CTA Inventory.

    A CTA Inventory is a bridge from big-picture OOUX to detailed interaction design.


    A CTA Inventory is just a fancy list of potential CTAs organized around your objects. Since most (all?) interactions involve creating, manipulating, or finding an object, we create this inventory by thinking about what a user wants to do in our system—specifically, what a user wants to do to objects in our system.

    Creating a CTA Inventory does two things. First, it helps us shift gears from the holistic nature of system design to the more compartmentalized work of interaction design. Second, it helps us:

    1. think about interactions creatively;
    2. validate those interactions;
    3. and ultimately write project estimates with greater accuracy.

    Let’s explore these three benefits a little more before creating our own CTA Inventory.

    Creative constraints improve brainstorming

    Simply understanding your objects will help you determine the things that a user might do with them. We know that Sally wants to destroy zombies—but it’s only after we’ve figured out that these are the fast, smart, light-averting zombies that we can be prepared to design exactly how she’ll do it.

    When we think about interactions in the context of an object, we give ourselves a structure for brainstorming. When we apply the constraints of the object framework, we’re likely to be more creative—and more likely to cover all of our bases. Brainstorm your actions object by object so that innovative features are less likely to fall through the cracks.

    For example, let’s think about the object “ingredient” in our Chef Network app. What are all the things that Pierre might want to do to an ingredient?

    • Mark the ingredient as a favorite.
    • Claim he’s an expert on the ingredient.
    • Add the ingredient to a shopping list.
    • Check availability of the ingredient at local stores.
    • Follow the ingredient to see new recipes that are posted using this ingredient.
    • Add a tip for using this ingredient.

    By using the object framework, I might uncover functionality I wouldn’t otherwise have considered if my brainstorming was too broad and unconstrained; structure gives creative thinking more support than amorphous product goals and squishy user objectives.

    Validate actions early

    Good news. You can user-test your system of objects and the actions a user might take on them before spending long hours on interaction design. Create a prototype that simply lets users navigate from one object to another, exploring the framework (which is a significant user goal in itself). Through observation and interviews, see if your system resonates with their mental model. Do you have the right objects and do their relationships make sense? And are the right “buttons” on those objects?

    Armed with a simple prototype of your interconnected objects and their associated CTAs, you now have a platform to discuss functionality with users—without all the hard work of prototyping the actual interactions. In a nutshell: talk to your users about the button before designing what happens when they click it.

    Interaction design can be some of the most difficult, time-consuming, devil-in-the-details design work. I personally don’t want to sweat through designing a mechanism for following chefs, managing alerts from followed chefs, and determining how the dreaded unfollow will work…if it turns out users would rather follow ingredients.

    Estimate with interaction design in mind

    As we’ve established, interaction design is a time- and resources-devouring monster. We have to design a conversation between the system and the user—an unpredictable user who requires us to think about error prevention, error handling, edge cases, animated transitions, and delicate microinteractions. Basically, all the details that ensure they don’t feel dumb or think that the system is dumb.

    The amount and complexity of interaction design your product requires will critically impact your timeline, budget, and even staffing requirements, perhaps more than any other design factor. Armed with a CTA Inventory, you can feel confident knowing you have solid insight into the interaction design that will be handled by your team. You can forecast the coming storm and better prepare for it.

    So, do you love this idea of better brainstorming, early validation, and estimating with better accuracy? Awesome! Let’s look at how to create your amazing CTA Inventory. First, we will discuss the low-fidelity initial pass (which is great to do collaboratively with your team). Next, we will set up a more formal and robust spreadsheet version.

    CTA Inventory: low-fidelity

    If you haven’t read my primer on object mapping, now would be a great time to go and catch up! I walk you through my methodology for:

    • extracting objects from product goals;
    • defining object elements (like core content, metadata, and nested objects);
    • and prioritizing elements.

    The walk-through in the previous article results in an object map similar to this:

    An object map before layering on a CTA Inventory.


    I’ve used outlined blue stickies to represent objects; yellow stickies to represent core content; pink stickies to indicate metadata; and additional blue stickies to represent nested objects.

    A low-fidelity CTA Inventory is quite literally an extension of the object mapping exercise; once you’ve prioritized your elements, switch gears and begin thinking about the CTAs that will associate with each object. I use green stickies for my CTAs (green for go!) and stack them on top of their object.

    An object map with a quick, low-fidelity CTA Inventory tacked on. Potential CTAs are on green stickies placed next to each object.


    This initial CTA brainstorming is great to do while workshopping with a cross-functional team. Get everyone’s ideas on how a user might act on the objects. You might end up with dozens of potential CTAs! In essence, you and your team will have a conversation about the features of the product, but within the helpful framework of objects and their CTAs. You are taking that big, hairy process of determining features and disguising it as a simple, fun, and collaborative activity: “All we’re doing is brainstorming what buttons need to go on our objects! That’s all! It’s easy!”

    Each object might need roughly 10–15 minutes, so block out an hour or two to discuss CTAs if your system has three to five objects. You’ll be surprised at the wealth of ideas that emerge! You and your team will gain clarity about what your product should actually do, not to mention where you disagree (which is valuable in its own right).

    In our chef example, something pretty interesting happened while the team was hashing out ideas. During the CTA conversation about “ingredient,” we thought that perhaps it would be useful if chefs could suggest a substitute ingredient (see circled green sticky below). “Fresh out of achiote paste? Try saffron instead!” But with that in mind, those “suggested substitute ingredients” need to become part of the ingredient object. So, we updated the object map to reflect that (circled blue sticky).


    After brainstorming CTAs, we needed to add a nested object on “ingredient” for “ingredients that could be substituted.”

    Although I always begin with my objects and their composition, CTA brainstorming tends to loop me back around to rethinking my objects. As always, be prepared to iterate!

    CTA Inventory: high-fidelity

    CTAs can get complicated; how and when they display might be conditional on permissions, user types, or states of your object. Even in our simple example above, some CTAs will only be available to certain users.

    For example, if I’m a chef on an instance of one of my own recipe objects, I will see “edit” and “delete” CTAs, but I might not be able to “favorite” my own recipe. Conversely, if I’m on another chef’s recipe, I won’t be able to edit or delete it, but I will definitely want the option to “favorite” it.

    In the next iteration of our CTA Inventory, we move into a format that allows us to capture more complexities and conditions. After a first pass of collaborative, analogue brainstorming about CTAs, you might want to get down to business with a more formal, digitized CTA Inventory.


    A detailed CTA Inventory for our chef network example. Dig in deeper on the actual Google Sheet.

    Using a Google spreadsheet, I create a matrix (see above) that lets me capture thoughts about each object-derived CTA and the inevitable interaction flows for each one:

    • Why do we even have this CTA? What’s the purpose, and what user or business goal does it ladder up to?
    • Who will trigger this CTA? A certain persona or user type? Someone with a special permission or role?
    • Where will the CTAs live? Where are the obvious places a user will trigger this interaction flow? And are there other creative places we should consider putting it, based on user needs?
    • How much complexity is inherent in the interaction flow triggered by this CTA? This can help us estimate level of effort.
    • What is the priority of this interaction flow? Is this critical to launch, slated for a later phase, or a concept that needs to be researched and validated?
    • What questions and discussion points does this CTA raise?

    Before you start designing the interactions associated with each of your CTAs, get comfortable with the answers to these questions. Build an object-oriented prototype and validate the mental model with users. Talk to them and make sure that you’ve included the right doorways to interaction. Then you will be perfectly positioned to start sketching and prototyping what happens when a user opens one of those doors.

    A solid foundation for designing functionality

    You’ve collaboratively mapped out an elegant object-oriented design system and you’ve created a thorough CTA Inventory. You built a rough, clickable prototype of your system. With real users, you validated that the system is a breeze to navigate. Users pivot gracefully from object to object and the CTAs on those objects make sense for their needs. Life is good.

    But OOUX and a CTA Inventory will not help you design the interactions themselves. You still have to do that hard work! Now, though, as you begin sketching out interaction flows, you can feel confident that the functionality you are designing is rooted in solid ground. Because your CTA Inventory is a prioritized, team-endorsed, IxD to-do list, you’ll be more proactive and organized than ever.

    Most important, users getting things done within your system will feel as if they are manipulating tangible things. Interacting will feel less abstract, less fuzzy. As users create, favorite, add, remove, edit, move, and save, they will know what they’re doing—and what they’re doing it to. When you leverage an object-based CTA Inventory, your product designs and your design process will become more elegant, more streamlined, and more user-friendly.

  • Looking for “Trouble” 

    I know a colleague who keeps a “wall of shame” for emails he gets from clients—moments of confusion on their end that (for better or worse) are also funny. The thing is, we know how to answer these questions because we’ve heard them all before: Why does this look different when I print it? How do people know to scroll? To a certain extent, making light of the usual “hard questions” is a way of blowing off steam—but it’s an attitude poisonous for an agency.

    So, why do we disregard these humans that we interact with daily? Why do we condescend?

    I think it’s because we’re “experts.”

    As director of user experience at a digital agency, I’m prey to a particular kind of cognitive dissonance: I’m paid for my opinion; therefore, it should be right. After all, I’m hired as a specialist and therefore “prized” for my particular knowledge. Clients expect me to be right, which leads me to expect it, too. And that makes it difficult to hear anything that says otherwise.

    As consultants, we tend to perceive feedback from a client as feedback on our turf—a non-designer giving direction on a design or a non-tech trying to speak tech. As humans, we tend to ignore information that challenges our beliefs.

    This deafness to clients is akin to deafness to users, and equally detrimental. With users, traffic goes down as they abandon the site. With clients, the relationship becomes strained, acrimonious, and ultimately can endanger your livelihood. We wouldn’t dream of ignoring evidence from users, but we so readily turn a deaf ear to clients who interject, who dare to disrupt our rightness.

    When a client hires us, they should come away with more than a website. They should gain a better understanding of how websites are designed, how they work, and what makes them succeed. We are the ones equipped to create this hospitable environment. For every touchpoint our clients have with us, we could be asking the same questions that we do of our users:

    • How do clients interact with our products, e.g., a wireframe, design, or staging site?
    • What knowledge do they have when they arrive, and what level must we help them reach?
    • What are the common stumbling blocks on the way there?

    Thinking back to our wall of shame, suddenly those cries of frustration from clients we’ve branded “difficult” are no longer so funny. Those are now kinks to change in our process; culture problems to address head-on; and product features that need an overhaul. In other words: from user experience, client experience. It means embracing “the uncomfortable luxury of changing your mind.”

    I now go out of my way to look for these moments of client confusion, searching my inbox and Basecamp threads for words like “confused,” “can’t,” and “trouble.”

    These examples are just a few pleas and complaints I’ve found along the way, plus the changes my agency has made as a result. It’s helped us revamp our workflow, team, and culture to enhance the “Blenderbox client experience.”

    Make deliverables easy to find

    “Hey guys…I’m having trouble figuring out which version of the white paper is the final version. Could someone attach it to this chain? Thanks.”

    This one was easy. When we asked our clients about the problem—always the first step—we learned that they had trouble finding recent files when they saved deliverables locally. We were naming our files inconsistently, and (surprise!) that inconsistency was coming back at us in the form of confused clients.

    I’ve seen this at every company I’ve been a part of, and it only gets worse outside the office; if you don’t believe me, go home tonight and look at your personal Documents folder. If I can’t keep my own filenames straight, how could we expect 20 of us to do it in unison? Clearly, we needed some rules.

    Our first step was to bring uniformity to our naming structure. We had a tendency to start with the client’s name‚ which is of little use to them. Now, all deliverables at Blenderbox use this style:

    Blenderbox.ClientName.DocName.filetype
    The other point of confusion was over which file was “final.” In the digital world, the label “final” is usually wishful thinking. Instead, the best bet is to append the date in the filename. (We found that more reliable than using the “last edited” date in a file’s metadata, which can be changed inadvertently when printing or opening a file.) Write dates in YMD format, so they sort chronologically.

    Next came version control—or do we call that rounds, or sprints? Unfortunately, there’s no single answer for this, as it depends on whether a contract stipulates a fixed number of rounds or a more iterative process. We gave ourselves some variations to use, as necessary:

    • Blenderbox.ClientName.DocName.Round#.filetype
    • Blenderbox.ClientName.DocName.YYYYMMDD.filetype
    • Blenderbox.ClientName.DocName.Consolidated.YYYYMMDD.filetype

    When a number of rounds is stipulated, the round number is appended. For Agile or other iterative projects, we use only the date. And when compiling months of iterative work (usually for documentation), we call it “Consolidated.” That’s as close to final as we can promise, and of course, that gets a date stamp as well.
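Conventions like these are mechanical enough to automate. Here is a minimal sketch of a filename builder following the patterns above; the helper name and argument shapes are our own invention for illustration, not part of any Blenderbox tooling:

```python
from datetime import date

def deliverable_name(client, doc, filetype, round_num=None,
                     consolidated=False, on=None):
    """Build a filename like Blenderbox.ClientName.DocName.YYYYMMDD.filetype.

    Hypothetical helper: round_num is used for fixed-round contracts,
    a YYYYMMDD date stamp (so names sort chronologically) for iterative
    work, and consolidated=True marks a compiled set of iterations.
    """
    parts = ["Blenderbox", client, doc]
    if round_num is not None:
        parts.append(f"Round{round_num}")
    else:
        if consolidated:
            parts.append("Consolidated")
        parts.append((on or date.today()).strftime("%Y%m%d"))
    parts.append(filetype)
    return ".".join(parts)

print(deliverable_name("Acme", "Wireframes", "pdf", on=date(2016, 3, 1)))
# Blenderbox.Acme.Wireframes.20160301.pdf
```

Wiring something like this into a save-as macro or export script removes the naming decision from the moment of delivery, which is exactly when people improvise.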

    Show how details become the big picture

    “See the attached pic for a cut-and-paste layout”

    Here, the client cut-and-pasted from our design to create their own. Why? It’s not because they were feeling creative. They had a variety of content and they wanted to know that every page on their site was accommodated by the design. Of course, we had already planned for every page, but we needed to better explain how websites work.

    Websites are not magic, nor are they rocket science. We can teach clients at least the basics of how they work. When we step back and take the time to explain what we do, they better understand our role and the value we bring to their business, which results in happier clients and more work for us down the road.

    Prompted by this particular client, we incorporated an explanation of reusable templates and modules right into our wireframes. On page one, we describe how they work and introduce an icon for each template. These icons then appear on every wireframe, telling the client which template is assigned to the page shown.

    Visual example of a template legend in documentation

    Since implementing this technique, we’ve seen our clients start talking more like us—that is, using the language of how websites work. With improved communication, better ideas come out of both sides. They also give feedback that is usable and precise, which makes for more efficient projects, and our clients feel like they’ve learned something.

    Compromise on comfort zone

    “can u please send over the pdf of it so we can print them out and show a&a? tx”

    This is my favorite quote, and we hear this message over and over; clients want to print our deliverables. They want to hold them, pass them around, and write on them, and no iPad is going to take that away from them. Paper and pen are fun.

    It’s a frustrating trend to lay out work on 11″×17″ paper, which is massive, beautiful, and only useful for designers and landscape artists. Who has a printer that size? Certainly not the nonprofits, educators, and cultural institutions we work with. So, we set about making our wireframes printable, designed for trusty 8.5″×11″.

    This was tougher than expected because popular OmniGraffle stencils such as Konigi tend to be large-format, which is a common complaint. (Other programs, like Axure, also face this problem.)

    Since no existing stencils would do, we made our own set (which you can download on our site).

    We also fixed a flaw with common wireframe design that was confusing our clients: the notes. Go do an image search for “annotated wireframes.” Does anyone want to play “find the number on the right”?

    Can you imagine assembling furniture this way? In our new layout, the notes point directly to what they mean. The screen is also smaller, deemphasizing distracting Latin text while giving primacy to the annotation. As a result, we find that our clients are more likely to read the notes themselves, which saves time we’d spend explaining functionality in meetings.

    Visual example of annotations in documentation

    Figure out the format

    “I know I am being dense, but I am finding myself still confused about the Arts Directory. How does that differ from the next two subsections?”

    Here, a client was struggling (and rightly so) with a large set of designs that showed some small differences in navigation over multiple screens. By the end of the design phase, we often rack up a dozen or more screens to illustrate minor differences between templates, on-states, rollovers, different lengths of text, and the other variations that we try to plan for as designers. We also illustrate complex, multistep interactions by presenting a series of screens—somewhat like a flip book. Regardless of whether you present designs as flat files or prototypes, there are usually a few ways to enhance clarity.

    If your designs are flat (that is, just image files) compile them into a PDF. This sounds obvious, but JPG designs require clients to scroll in their browser, and it’s easy to get lost that way. Because PDFs are paginated, it’s easier for clients to track their location and return to specific points. As a bonus, using the left and right arrows to flick through pages will keep repeated elements like the header visually in place. Another reason to use PDFs: some file types are less common than you’d think. For example, one government client of ours couldn’t even open PNG files on their work machine.

    More and more, we’re using prototypes as our default for presenting designs. There is an astounding number of prototyping tools today (and choosing one is a separate article), but we’ve found that prototypes are best for explaining microinteractions, like how a mobile nav works. Even if you don’t have the time or need to demonstrate interactions, putting your designs in a prototype ensures that clients will view them right in their browser, and at the proper zoom level.

    Make time to celebrate

    Clients shouldn’t be the “forgotten user.” We create great products for them by focusing on their end users—while forgetting that clients experience us twice over: their user experience with the product and their user experience with us. Writing off a flustered client as out of touch means we’re disregarding our role as designers who think about real people. When these biases surface, they reveal things that we could be doing better. It’s shortsighted to think our roles make us infallible experts.

    Searching client communications for keywords like “trouble” and other forms of subtle distress can help us identify moments of confusion that passed us by. It forces us to address problems that we didn’t know existed (or didn’t want to see). At Blenderbox, the results have been good for everyone. Our clients are more confident, receptive, and better educated, which empowers them to provide sharp, insightful feedback—which in turn helps our team design and build more efficiently. They’re happier, too, which has helped us gain their trust and earn more work and referrals.

    We’re getting desensitized to the word “empathy,” but we all understand that there’s value in it. And, like any other ideal, we forget to practice it in the bustle of daily work. Because empathy is a formal part of UX, we don’t get to use the “busy” excuse. Even mundane design activities should be daily reminders to listen to the people around you, like a sticky note on your monitor to “Put yourself in their shoes.” In other words, we can’t overlook that our clients are people, too. When we stop and think about user experience, we might just be doing our job, but we’re also saying that we choose sensitivity to others as our primary professional mission. And that is the first step to making great things happen.

  • The User’s Journey 

    A note from the editors: We’re pleased to share an excerpt from Chapter 5 of Donna Lichaw’s new book, The User’s Journey: Storymapping Products That People Love, available now from Rosenfeld Media.

    Both analytics funnels and stories describe a series of steps that users take over the course of a set period of time. In fact, as many data scientists and product people will tell you, data tells a story, and it’s our job to look at data within a narrative structure to piece together, extrapolate, troubleshoot, and optimize that story.

    In the case of FitCounter, our gut-check analysis and further in-person testing with potential users uncovered that the reason our analytics showed a broken funnel with drop-off at key points was that people experienced a story that read something like this:

    • Exposition: The potential user is interested in getting fit or training others.
    • Inciting Incident: She sees the “start training” button and gets started.
    • Rising Action:
      • She enters her username and password. (A tiny percentage of people would drop off here, but most completed this step.)
      • She’s asked to “follow” some topics, like running and basketball. She’s not really sure what this means or what she gets out of doing this. She wants to train for a marathon, not follow things. (This is where the first drop-off happened.)
    • Crisis: This is where the cliffhanger happens. She’s asked to “follow” friends. She has to enter sensitive Gmail or Facebook log-in credentials to do this, which she doesn’t like to do unless she completely trusts the product or service and sees value in following her friends. Why would she follow them in this case? To see how they’re training? She’s not sure she totally understands what she’s getting into, and at this point, has spent so much brain energy on this step that she’s just going to bail on this sign-up flow.
    • Climax/Resolution: If she does continue on to the next step, there is no climax.
    • Falling Action: Eh. There is no takeaway or value to having gotten this far.
    • End: If she does complete the sign-up flow, she ends up home. She’d be able to search for videos now or browse what’s new and popular. Searching and browsing is a lot of work for someone who can’t even remember why they’re there in the first place. Hmmm…in reality, if she got this far, maybe she would click on something and interact with the product. The data told us that this was unlikely. In the end, she didn’t meet her goal of getting fit, and the business didn’t meet its goal of engaging a new user.

    Why was it so important for FitCounter to get people to complete this flow during their first session? Couldn’t the business employ the marketing team to get new users to come back later with a fancy email or promotion? In this case, marketing tried that. For months. It barely worked.

    With FitCounter, as with most products and services, the first session is your best and often only chance to engage new users. Once you grab them the first time and get them to see the value in using your product or service, it’s easier to get them to return in the future. While I anecdotally knew this to be true with consumer-facing products and services, I also saw it in our data.

    Those superfans I told you about earlier rarely became superfans without using the product within their first session. In fact, we found a sweet spot: most of our superfans performed at least three actions within their first session. These actions were things like watching or sharing videos, creating playlists, and adding videos to lists. These were high-quality interactions and didn’t include other things you might do on a website or app, such as search, browse, or generally click around.
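A back-of-the-envelope version of that analysis can be sketched as a pass over an event log. The log shape, action names, and threshold below are illustrative assumptions, not FitCounter’s actual schema:

```python
from collections import Counter

# Hypothetical set of "high-quality" actions; search and browse are excluded.
HIGH_QUALITY = {"watch", "share", "create_playlist", "add_to_list"}

def first_session_counts(events):
    """Count high-quality actions each user took in their first session.

    events: iterable of (user, session_number, action) tuples --
    an assumed log shape for illustration.
    """
    counts = Counter()
    for user, session, action in events:
        if session == 1 and action in HIGH_QUALITY:
            counts[user] += 1
    return counts

log = [
    ("ann", 1, "watch"), ("ann", 1, "share"), ("ann", 1, "add_to_list"),
    ("bob", 1, "browse"), ("bob", 1, "watch"),
]
counts = first_session_counts(log)
likely_superfans = {u for u, n in counts.items() if n >= 3}
print(likely_superfans)  # {'ann'}
```

The point of a sketch like this is the segmentation itself: once you can flag users who hit the sweet spot, you can compare their retention against everyone else’s.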

    With all of our quantitative data in hand, we set out to fix our broken usage flow. It all, as you can imagine, started with some (more) data…oh, and a story. Of course.

    The Plan

    At this point, our goals with this project were two-fold:

    • To get new users to complete the sign-up flow;
    • To acquire more “high-quality” users who were more likely to return and use the product over time.

    As you can see, getting people to pay to upgrade to premium wasn’t in our immediate strategic roadmap or plan. We needed to get this product operational and making sense before we could figure out how to monetize. We did, however, feel confident that our strategy was headed in the right direction because the stories we were designing and planning were ones that we extrapolated from actual paying customers who loved the product. We had also been testing our concept and origin stories and knew that we were on the right track, because when we weren’t, we maneuvered and adapted to get back on track. So what, in this case, did the data tell us that we should do to transform this story of use from a cliffhanger, with drop-off at the crisis moment, to a more complete and successful story?

    Getting to “Why”

    While our quantitative analytics told us a “what” (that people were dropping off during our sign-up funnel), it couldn’t tell us the “why.” To better answer that question, we used story structure to figure out why people might drop off when they dropped off. Doing so helped us better localize, diagnose, and troubleshoot the problem. Using narrative structure as our guide, we outlined a set of hypotheses that could explain why there was this cliffhanger.

    For example, if people dropped off when we asked them to find their friends, did people not want to trust a new service with their login credentials? Or did they not want to add their friends? Was training not social? We thought it was. To figure this out better, once we had a better idea of what our questions were, we talked to existing and potential customers first about our sign-up flow and then about how they trained (for example, alone or with others). We were pretty sure training was social, so we just needed to figure out why this step was a hurdle.

    What we found with our sign-up flow was similar to what we expected. Potential users didn’t want to follow friends because of trust, but more so because it broke their mental model of how they could use this product. “Start training” was a strong call to action that resonated with potential users. In contrast, “follow friends” was not. Even something as seemingly minute as microcopy has to fit a user’s mental model of what the narrative structure is. Furthermore, they didn’t always think of training as social. There was a plethora of factors that played into whether or not they trained alone or with others.

    What we found were two distinct behaviors: people tended to train alone half the time and with others half the time. Whether they trained alone or with others depended on a series of factors:

    • Activity (team versus solitary sport, for example)
    • Time (during the week versus weekend, for example)
    • Location (gym versus home, for example)
    • Goals (planning to run a 5k versus looking to lose pounds, for example)

    This was too complex a math equation for potential users to do when thinking about whether or not they wanted to “follow” people. Frankly, it was more math than anyone should have to do when signing up for something. That said, after our customer interviews, we were convinced of the value of keeping the product social and giving people the opportunity to train with others early on. Yes, the business wanted new users to invite their friends so that the product could acquire new users. And, yes, I could have convinced the business to remove this step in the sign-up process so that we could remove the crisis and more successfully convert new users. However, when people behave in a certain way 50% of the time, you typically want to build a product that helps them continue to behave that way, especially if it can help the business grow its user base.

    So instead of removing this troublesome cliffhanger-inducing step in the sign-up flow, we did what any good filmmaker or screenwriter would do: we used that crisis to our advantage and built a story with tension and conflict. A story that we hoped would be more compelling than what we had.

    The Story

    In order to determine how our new sign-up flow would work, we first mapped it out onto a narrative arc. Our lead designer and engineer wanted to jump straight into screen UI sketches and flow charts and our CEO wanted to see a fully clickable prototype yesterday, but we started the way I always make teams and students start: with a story diagram. As a team, we mapped out a redesigned sign-up flow on a whiteboard as a hypothesis, brick by brick (see Figure 5.20).

    Photo of a story map (sticky notes arranged on a board, with hand-drawn graphics surrounding them).

    Fig. 5.20 A story map from a similar project with the storyline on top and requirements below.

    This was the story, we posited, that a new user and potential customer should have during her first session with our product (see Figure 5.21). As you can see, we tried to keep it much the same as before so that we could localize and troubleshoot what parts were or weren’t working.

    • Exposition: She’s interested in getting fit or training others. (Same as before.)
    • Inciting Incident: She sees the “start training” button and gets started. (Same as before.)
    • Rising Action:
      • She enters her username and password. (This step performed surprisingly great, so we kept it.)
      • Build a training plan. Instead of “following” topics, she answers a series of questions so that the system can build her a customized training plan. Many questions—ultimately extending the on-boarding flow by 15 screens. 15! There is a method to this madness. Even though there are now many more questions, they get more engaging, and more relevant, question by question, screen by screen. The questions start broad and get more focused as they progress, feeling more and more relevant and personal. Designing the questionnaire for rising action prevents what could be two crises: boredom and lack of value.
    • Crisis: One of the last questions she answers is whether or not she wants to use this training plan to train with or help train anyone else. If so, she can add them to the plan right then and there. And if not, no problem—she can skip this step and always add people later.
    • Climax/Resolution: She gets a personalized training plan. This is also the point at which we want her to experience the value of her new training plan. She sees a graph of what her progress will look like if she sticks with the training plan she just got.
    • Falling Action: Then what? What happens after she gets her plan and sees how she might progress if she uses FitCounter? This story isn’t complete unless she actually starts training. So…
    • End: She’s home. Now she can start training. This initially involves watching a video, doing a quick exercise, and logging the results. She gets a taste of what it’s like to be asked to do something, to do it, and to get feedback in the on-boarding flow and now she can do it with her body and not just a click of the mouse. Instead of saying how many sit-ups she can do by answering a questionnaire, she watches a short video that shows her how to best do sit-ups, she does the exercise, and she logs her results. While humanly impossible to fully meet her goal of getting fit in one session, completing the story with this ending gets her that much closer to feeling like she will eventually meet her goal. Our hope was that this ending would function as a teaser for her next story with the product, when she continued to train. We wanted this story to be part of a string of stories, also known as a serial story, which continued and got better over time.

    Once we plotted out this usage story, we ran a series of planning sessions to brainstorm and prioritize requirements, as well as plan a strategic roadmap and project plan. After we had our requirements fleshed out, we then sketched out screens, comics, storyboards, and even role-played the flow internally and in person with potential customers. We did those activities to ideate, prototype, and test everything every step of the way so that we could minimize our risk and know if and when we were on the right path.

    We were quite proud of our newly crafted narrative sign-up flow. But before we could celebrate, we had to see how it performed.

    The Results

    On this project and every project since, we tested everything. We tested our concept story, origin story, and everything that came after and in between. While we were very confident about all of the work we did before we conceived of our new usage story for the sign-up flow, we still tested that. Constantly. We knew that we were on the right path during design and in-person testing because at the right point in the flow, we started getting reactions that sounded something like: “Oh, cool. I see how this could be useful.”

    Once we heard that from the third, fourth, and then fifth person during our in-person tests, we started to feel like we had an MVP that we were not only learning from, but also learning good things from. During our concept-testing phase, it seemed like we had a product that people might want to use. Our origin story phase and subsequent testing told us that the data supported that story. And now, with a usage story, we actually had a product that people not only could use, but wanted to use. Lots.

    Arc representing the progression of events in a usage story

    Fig. 5.21 The story of what we wanted new users to experience in their first session with FitCounter.

    As planned, that reaction came during our in-person tests, unprompted, near the end of the flow, right after people received their training plan. What we didn’t expect was that once people got the plan and went to their new home screen, they started to tap and click around. A lot. And they kept commenting on how they were surprised to learn something new. And they would not only watch videos, but then do things with them, like share them or add and remove them from plans.

    But this was all in person. What about when we launched the new sign-up flow and accompanying product? This new thing that existed behind the front door. The redesign we all dreaded, but that had to be done.

    I wish I could say that something went wrong. This would be a great time to insert a crisis moment into this story to keep you on the edge of your seat.

    But the relaunch was a success.

    The story resonated not just with our in-person testers, but also with a broader audience. So much so that the new sign-up flow now had almost double the completion rate of new users. This was amazing, and it was a number that we could and would improve on with further iterations down the line. Plus, we almost doubled our rate of new user engagement. We hoped that by creating a sign-up flow that functioned like a story, the result would be more engagement among new users, and it worked. We not only had a product that helped users meet their goals, but it also helped the business meet its goals of engaging new users. What we didn’t expect to happen so soon was the side effect of this increased, high-quality engagement: these new users were more likely to pay to use the product. Ten times more likely.

    We were ecstatic with the results. For now.

    A business cannot survive on first-time use and engagement alone. While we were proud of the product we built and the results it was getting, this was just one usage story: the first-time usage story. What about the rest? What might be the next inciting incident to kick off a new story? What would be the next beginning, middle, and end? Then what? What if someone did not return? Cliffhangers can happen during a flow that lasts a few minutes or over a period of days, months, or years. Over time, we developed stories big and small, one-offs and serials, improving the story for both customers and the business. Since we started building story-first, FitCounter has tripled in size and tripled its valuation. It is now a profitable business and recently closed yet another successful round of financing so that it can continue this growth.

  • Design for Real Life 

    A note from the editors: We’re pleased to share an excerpt from Chapter 7 of Eric A. Meyer and Sara Wachter-Boettcher’s new book, Design for Real Life, available now from A Book Apart.

    You’ve seen the fallout when digital products aren’t designed for real people. You understand the importance of compassion. And you’ve learned how to talk with users to uncover their deepest feelings and needs. But even with the best intentions, it’s still easy for thoughtful design teams to get lost along the way.

    What you and your team need is a design process that incorporates compassionate practices at every stage—a process where real people and their needs are reinforced and recentered from early explorations through design iterations through launch.

    Create Realistic Artifacts

    In Chapter 3, we talked about the importance of designing for worst-case scenarios, and how bringing stress cases into audience artifacts like personas and user-journey maps can help. Now let’s talk about creating those materials.

    Imperfect personas

    The more users have opened up to you in the research phase, the more likely you are to have a wealth of real, human emotion in your data to draw from: marriage difficulties or bad breakups, accidents, a friend who committed suicide, or a past of being bullied. The point isn’t to use your interviewees’ stories directly, but to allow them to get you thinking about the spectrum of touchy subjects and difficult experiences people have. This will help you include realistic details about your personas’ emotional states, triggers, and needs—and lend them far more depth than relying solely on typical stats like age, income, location, and education.

    These diverse inputs will also help you select better persona images. Look for, or shoot your own, images of people who don’t fit the mold of a cheerful stock photo.  Vary their expressions and clothing styles. If you can imagine these personas saying the kinds of things you heard in your user interviews, you’re on the right track. 

    More realistic personas make it much easier to imagine moments of crisis, and to test scenarios that might trigger a user’s stressors. Remember that “crisis” doesn’t have to mean a natural disaster or severe medical emergency. It can be a situation where an order has gone horribly wrong, or where a user needs information while rushing to the airport.

    As you write your personas and scenarios, don’t drain the life from them: be raw, bringing in snippets of users’ anecdotes, language, and emotion wherever you can. Whoever picks these personas up down the line should feel as compelled to help them as you do.

    User-journey maps

    In Chapter 3, we mentioned a technique Sara used with a home-improvement chain: user-journey mapping. Also referred to as customer-experience mapping, this technique is well established in many design practices, and was championed by Adaptive Path, the San Francisco-based design consultancy (recently acquired by Capital One).

    In 2013, Adaptive Path turned its expertise into a detailed guide, available free at mappingexperiences.com. The guide focuses on how to research the customer experience, facilitate a mapping workshop, and apply your insights. The process includes documenting:

    • The lens: which persona(s) you’re mapping, and what their scenario is
    • Touchpoints: moments where your user interacts with your organization
    • Channels: where those interactions happen—online, over the phone, or elsewhere
    • Actions: what people are doing to meet their needs
    • Thoughts: how people frame their experience and define their expectations
    • Feelings: the emotions people have along their journey—including both highs and lows

    Constructing a journey map usually starts, as so many UX processes do, with sticky notes. Working as a team, you map out a user’s journey over time, with the steps extending horizontally. Below each step, use a different-colored sticky note to document touchpoints and channels, as well as what a user is doing, thinking, and feeling. The result will be a big (and messy) grid with bands of color, stretching across the wall (Fig 7.1).

    Photo of sticky notes organized on a wall

    Fig 7.1: A typical journey mapping activity, where participants use sticky notes to show a user’s progress through multiple stages and needs over time.

    Journey mapping brims with benefits. It helps a team to better think from a user’s point of view when evaluating content, identify gaps or disconnects across touchpoints or channels, and provide a framework for making iterative improvements to a major system over time. But we’ve found this technique can also be a powerful window into identifying previously unrealized, or unexamined, stress cases—if you think carefully about whose journey you’re mapping.

    Make sure you use personas and scenarios that are realistic, not idealized. For example, an airline might map out experiences for someone whose flight has been canceled, or who is traveling with a disabled relative, or who needs to book last-minute tickets to attend a funeral. A bank might map out a longtime customer who applies for a mortgage and is declined. A university might map out a user who’s a first-generation college student from a low-income family. The list goes on. 

    In our experience, it’s also important to do this work with as many people from your organization as possible—not only other web folk like developers or writers, but also groups like marketing, customer service, sales, and business or product units. This collaboration across departments brings diverse viewpoints to your journey, which will help you better understand all the different touchpoints a user might have and prevent any one group from making unrealistic assumptions. The hands-on nature of the activity—physically plotting out a user’s path—forces everyone to truly get into the user’s mindset, preventing participants from reverting back to organization-centric thinking, and increasing the odds you’ll get support for fixing the problems you find. 

    In addition to determining an ideal experience, also take time to document where the real-life experience doesn’t stack up. This might include:

    • Pain points: places where you know from research or analytics that users are currently getting hung up and have to ask questions, or are likely to abandon the site or app.
    • Broken flows: places where the transition between touchpoints, or through a specific interaction on a site (like a form), isn’t working correctly.
    • Content gaps: places where a user needs a specific piece of content, but you don’t have it—or it’s not in the right place at the right time.

    Just as you can map many things in your journey—channels, questions, feelings, actions, content needs and gaps, catalysts, and more—you can also visualize your journey in many different ways. Sometimes, you might need nothing more than sticky notes on a conference room wall (and a few photos to refer back to later). Other times, you’ll want to spend a couple of days collaborating, and create a more polished document after the fact. It all depends on the complexity of the experience you’re mapping, the fidelity you need in the final artifact, and, of course, how much time you can dedicate to the process.

    If journey maps are new to your team, a great way to introduce them is to spend an hour or two during a kickoff or brainstorm session working in small groups, with each group roughing out the path of a different user. If they’re already part of your UX process, you might just need to start working from a wider range of personas and scenarios. Either way, building journey maps that highlight stress cases will help you see:

    • How to prioritize content to meet the needs of urgent use cases, without weakening the experience for others. That’s what the home-improvement store did: walking through stress cases made it easier for the team to prioritize plain language and determine what should be included in visually prominent, at-a-glance sections.
    • Places where copy or imagery could feel alienating or out of sync with what a user might be thinking and feeling at that moment. For example, imagine if Glow, the period-tracking app, had mapped out a user journey for a single woman who simply has trouble remembering to buy tampons. The designers would have seen how, at each touchpoint, the app’s copy assumed something about this woman’s needs and feelings that wasn’t true—and they could have adjusted their messaging to fit a much broader range of potential users.
    • Whether any gaps exist in content for stress-case users. For example, if the Children’s Hospital of Philadelphia had created a journey map for a user in crisis, it might have prevented the content gap Eric experienced: no information about rushing to the hospital in an emergency existed online.

    Strengthen Your Process

    With more realistic representations of your audience in hand, it’s time to build checks and balances into your process that remind the team of these humans, and guard against accidentally awful outcomes. Here are some techniques to get you started.

    The WWAHD test

    In many cases, the easiest way to stress-test any design decision is to ask, “WWAHD?”—“What would a human do?” When you’re designing a form, try reading every question out loud to an imagined stranger, listening to how it sounds and imagining the questions they might have in response.

    Kate Kiefer Lee of MailChimp recommends this for all copy, regardless of where and how it’s used, because it can help you catch errors, improve your flow, and soften your sentences. She says:

    As you read aloud, pretend you’re talking to a real person and ask yourself “Would I say this to someone in real life?” Sometimes our writing makes us sound stodgier or colder than we’d like.

    Next time you publish something, take the time to read it out loud. It’s also helpful to hear someone else read your work out loud. You can ask a friend or coworker to read it to you, or even use a text-to-speech tool. (http://bkaprt.com/dfrl/07-01/)

    That last point is an excellent tip as well, because you’ll gain a better sense of how your content might sound to a user who doesn’t have the benefit of hearing you speak. If a synthesized voice makes the words fall flat or says something that makes you wince, you’ll know you have more work to do to make your content come to life on the screen.

    The premortem

    In design, we create biases toward our imagined outcomes: increased registrations or sales, higher visit frequency, more engaged users. Because we have a specific goal in mind, we become invested in it. This makes us more likely to forget about, or at least minimize, the possibility of other outcomes.

    One way to outsmart those biases early on is to hold a project premortem. As the name suggests, a premortem evaluates the project before it happens—when it “can be improved rather than autopsied,” says Gary Klein, who first wrote about them in 2007 in Harvard Business Review:

    The leader starts the exercise by informing everyone that the project has failed spectacularly. Over the next few minutes those in the room independently write down every reason they can think of for the failure. (http://bkaprt.com/dfrl/07-02/)

    According to Klein, this process works because it creates “prospective hindsight”—a term researchers from Wharton, Cornell, and the University of Colorado used in a 1989 study, where they found that imagining an event has already occurred “increases the ability to correctly identify reasons for future outcomes by 30%.”

    For example, say you’re designing a signup process for an exercise- and activity-tracking app. During the premortem, you might ask: “Imagine that six months from now, our signup abandonment rates are up. Why is that?” Imagining answers that could explain the hypothetical—it’s too confusing, we’re asking for information that’s too personal, we accidentally created a dead end—will help guide your team away from those outcomes, and toward better solutions.

    The question protocol

    Another technique for your toolkit is Caroline Jarrett’s question protocol, which we introduced in Chapter 4. To recap, the question protocol ensures every piece of information you ask of a user is intentional and appropriate by asking:

    • Who within your organization uses the answer
    • What they use it for
    • Whether an answer is required or optional
    • If an answer is required, what happens if a user enters any old thing just to get through the form

    You can’t just create a protocol, though—you need to bring it to life within your organization. For example, Jarrett has worked the approach into the standard practices of the UK’s Government Digital Service. GDS then used its answers to create granular, tactical guidelines for designers and writers to use while embedded in a project—such as this advice for titles:

    We recommend against asking for people’s title.

    It’s extra work for users and you’re forcing them to potentially reveal their gender and marital status, which they may not want to do. There are appropriate ways of addressing people in correspondence without using titles.

    If you have to implement a title field, make it an optional free-text field, not a drop-down list. Predicting the range of titles your users will have is impossible, and you’ll always end up upsetting someone. (http://bkaprt.com/dfrl/07-03/)

    By making recommendations explicit—and explaining why GDS recommends against asking for titles—this guide puts teams on the right path from the start.
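
    If it helps to make the protocol concrete, a team could capture it as a lightweight checklist in code. The sketch below is purely illustrative: the Python structure, field names, and review function are our own assumptions, not part of Jarrett’s protocol or the GDS guidance.

    ```python
    from dataclasses import dataclass

    # Illustrative sketch of the question protocol as a checklist.
    # The structure and names here are assumptions for demonstration only.

    @dataclass
    class FormField:
        label: str            # the question the form asks the user
        who_uses_answer: str  # who within the organization uses the answer
        used_for: str         # what they use it for
        required: bool        # required or optional?
        junk_risk: str        # if required: what happens when a user enters junk

    def unjustified(fields):
        """Flag fields nobody can justify collecting: candidates to cut."""
        return [f.label for f in fields
                if not (f.who_uses_answer and f.used_for)]

    fields = [
        FormField("Email address", "support team", "account recovery",
                  required=True, junk_risk="user can't recover their account"),
        FormField("Title", "", "",  # no one could say who uses it, or why
                  required=False, junk_risk=""),
    ]

    print(unjustified(fields))  # → ['Title']
    ```

    Running a review like this before a form ships forces the team to answer the protocol’s questions for every field, and surfaces fields, like the title field GDS warns about, that no one can justify.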

    If user profiles are a primary part of your product’s experience, you might also want to adapt and extend the question protocol to account not just for how a department uses the data collected, but for how your product itself uses it. For example, a restaurant recommendation service can justify asking for users’ locations; the service needs it to prioritize results based on proximity. But we’ve seen countless sites that have no reason to collect location information: business magazines, recipe curators, even municipal airports. If these organizations completed a question protocol, it might be difficult for them to justify their actions.

    You don’t even have to call it a “protocol”—in some organizations, that label sounds too formal, and trying to add it to an established design process will be challenging. Instead, you might roll these questions and tactics into your functional specs, or make them discussion points in meetings. However you do it, though, look for ways to make it a consistent, ingrained part of your process, not an ad hoc “nice to have.”

    The Designated Dissenter

    Working in teams is a powerful force multiplier, enabling a group to accomplish things each individual could never have managed alone. But any team is prone to “groupthink”: the tendency to converge on a consensus, often without meaning to. This can lead teams to leave their assumptions unchallenged until it’s far too late. Giving one person the explicit job of questioning assumptions is a way to avoid this.

    We call this the “Designated Dissenter”—assigning one person on every team the job of assessing every decision underlying the project, and asking how changes in context or assumptions might subvert those decisions. This becomes their primary role for the lifetime of the project. It is their duty to disagree, to point out unconsidered assumptions and possible failure states.

    For example, back in Chapter 1 we talked about the assumptions that went into Facebook’s first Year in Review product. If the project had had a Designated Dissenter, they would have gone through a process much like we did there. They would ask, “What is the ideal user for this project?” The answer would be, “Someone who had an awesome year and wants to share memories with their friends.” That answer naturally prompts the follow-up questions: “What about people who had a terrible year? Or who have no interest in sharing? Or both?”

    Beyond such high-level questions, the Designated Dissenter casts a critical eye on every aspect of the design. They look at copy and design elements and ask themselves, “In which contexts might this come off as ridiculous, insensitive, insulting, or just plain hurtful? What if the assumptions in this error message are wrong?” At every step, they find the assumptions and subvert them. (The tools we discussed in the previous sections can be very useful in this process.)

    For the next project, however, someone else must become the Designated Dissenter. There are two reasons for this:

    1. By having every member of the team take on the role, every member of the team has a chance to learn and develop that skill.
    2. If one person is the Designated Dissenter for every project, the rest of the team will likely start to tune them out as a killjoy.

    Every project gets a new Dissenter, until everyone’s had a turn at it. When a new member joins the team, make them the Designated Dissenter on their second or third project, so they can get used to the team dynamics first and see how things operate before taking on a more difficult role.

    The goal of all these techniques is to create what bias researchers Jack B. Soll, Katherine L. Milkman, and John W. Payne call an “outside view,” which has tremendous benefits:

    An outside view also prevents the “planning fallacy”—spinning a narrative of total success and managing for that, even though your odds of failure are actually pretty high. (http://bkaprt.com/dfrl/07-04/)

    Our narratives are usually about total success—indeed, that’s the whole point of a design process. But that very aim makes us more likely to fall victim to the planning fallacy, envisioning only the ideal case and disregarding other possibilities.

    Stress-Test Your Work

    Usability testing is, of course, important, and testing usability in stress cases even more so. The problem is that in many cases, it’s impossible to find testers who are actually in the midst of a crisis or other stressful event—and, even if you could, it’s ethically questionable whether you should be taxing them with a usability test at that moment. So how do we test for such cases?

    We’ve identified two techniques others have employed that may be helpful here: creating more realistic contexts for your tests, and employing scenarios where users role-play dramatic situations.

    More realistic tests

    In Chapter 3, we shared an experiment where more difficult mental exercises left participants with reduced cognitive resources, which affected their willpower—so they were more likely to choose cake over fruit.

    Knowing this, we can make our usability tests more reflective of real-life cognitive drain by starting each test with an activity that expends cognitive resources—for example, asking participants to read an article, do some simple logic puzzles, play a few rounds of a casual video game like Bejeweled, or complete a routine task like replying to emails.

    After the tester engages in these activities, you can move on to the usability test itself. Between the mental toll of the initial task and the shift of context, the testers will have fewer cognitive resources available—more like they would in a “real-life” use of the product.

    In a sense, you’re moving a little bit of field testing into the lab. This can help identify potential problems earlier in the process—and, if you’re able to continue into actual field testing, make it that much more effective and useful.

    Before you start adding stressors to your tests, though, make sure your users are informed. This means:

    • Be clear and transparent about what they’ll be asked to do, and make sure participants give informed consent to participate.
    • Remember, and communicate to participants, that you’re not evaluating them personally, and that they can call off the test at any time if it gets too difficult or draining.

    After all, the goal is to test the product, not the person.

    Stress roleplays

    Bollywood films are known for spectacular plot lines and fantastical situations—and, according to researcher Apala Lahiri Chavan, they’re also excellent inspiration for stress-focused usability testing.

    In many Asian cultures, it’s considered impolite to critique a design, and embarrassing to admit you can’t find something. To get valuable input despite these norms, Chavan replaced standard tasks in her tests with fantasy scenarios, such as asking participants to imagine they’d just found out their niece is about to marry a hit man who is already married. They need to book a plane ticket to stop the wedding immediately. These roleplays allowed participants to step outside their cultural norms and into the moment: they complained about button labels, confusing flows, and extra steps in the process. (For more on Chavan’s method and results, see Eric Schaffer’s 2004 book, Institutionalization of Usability: A Step-by-Step Guide, pages 129–130.)

    This method isn’t just useful for reaching Asian markets. It can also help you see what happens when people from any background try to use your site or product in a moment of stress. After all, you can’t very well ask people who are in the midst of a real-life crisis to sit down with your prototype. But you can ask people to roleplay a crisis situation: needing to interact with your product or service during a medical emergency, or after having their wallet stolen, or when they’ve just been in an accident.

    This process probably won’t address every possible crisis scenario, but it will help you identify places where your content is poorly prioritized, your user flows are unhelpful, or your messaging is too peppy—and if you’re already doing usability testing, adding in a crisis scenario or two won’t take much extra time.

    Compassion Takes Collaboration

    One thing you may have noticed about each of these techniques is that they’re fundamentally cross-discipline: design teams talking and critiquing one another’s work through the lens of compassion; content strategists and writers working with designers and developers to build better forms and interactions. Wherever we turn, we find that the best solutions come from situations where working together isn’t just encouraged, but is actively built into a team’s structure. Your organization might not be ready for that quite yet—but you can help them get there. Our next chapter will get you started.