Newsfeed Resource


There is a wealth of newsfeeds on the Net bringing up-to-date information and content on a wide range of subjects.

Here are just a few relating to web development.

A List Apart: The Full Feed
  • Resurrecting Dead Personas 

    Being a user-centered designer means that you deliberately seek out the stories, data, and rationale behind your users’ motivations. You endeavor to keep user concerns at the forefront of every design decision, and regularly conduct research and collect data.

    But collecting facts about users isn’t the same as knowing your users. Research and data need to be regularly aggregated, analyzed, and synthesized into a format that is both understandable and accessible at critical moments. You need to turn user facts into user wisdom, and one of the most common methods for doing this is to develop user personas.

    Type “how to build user personas” into your favorite search engine and you will get thousands of results outlining different templates and examples of personas. Across the tech industry, personas “put a human face on aggregated data,” and help design and product teams focus on the details that really matter. Studies have shown that companies can see 4x the return on investment in personas, which explains why some firms spend upwards of $120,000 on these design tools.

    However, while it is common for design teams to spend considerable amounts of time and money developing personas, it is almost as common to see those personas abandoned and unused after a while. Everett McKay, Principal at UX Design Edge, has pointed out that user personas can fail for a number of reasons, such as:

    • They do not reflect real target users.
    • They are not developed with product goals in mind.
    • They are not embedded into team processes.

    I agree with everything McKay suggests, but I would add that personas fail largely because of one common misconception: the false idea that once you build a persona, you’re done. As designers, we know that the first version of a product is never perfect, but with multiple rounds of design and research it can be made better. Personas are no different.

    To recover personas that have become lifeless, here’s how you can iterate on them with periodic research and use them to achieve tangible goals. The following steps will help ensure you see value from the investment you made developing them in the first place. Let’s put your personas (back) to work and incorporate them into your design and development process.

    How a persona dies

    Let’s imagine you work at a company called Amazing Childcare that creates tools to help parents find childcare options for their children. Let’s also say you have the following data and statistics for AmazingChildcare.com:

    • 82% of customers are between the ages of 30 and 35, and 73% of those are female.
    • The most common concerns around finding childcare (as reported in user interviews) are cost and quality of care received.
    • AmazingChildcare.com has a homepage bounce rate of 40%.
    • Customer satisfaction survey shows an average satisfaction rating of 6.5 (out of 10).

    While this data is interesting, it is hard to process and assimilate into your design practice. You still need to go through the arduous work of understanding why the majority of users are who they are, what problems they are trying to solve, and how you can better meet their needs. So, you decide to create a persona.

    The persona you create is Susan, a 34-year-old working mother of a two-year-old. She is interested in finding a qualified nanny who has passed a background check. Susan, like all freshly made personas, is a much more thought-provoking platform for crafting design solutions than a spreadsheet of numbers. She is someone we can imagine, remember, and empathize with.

    This is the point in the story when Susan dies.

    At first, the design team enjoys thinking about and designing for Susan. Having her “in the room” is thought-provoking and interesting, but over time, Susan is talked about less and less. She starts to feel irrelevant to the products you’re building. You realize that Susan has “died,” leaving a lifeless, zombie Susan sitting in her place. You consider all the research and work your team put into creating Susan and wonder, “what went wrong?”

    The problem is that your personas remained static while the company, Amazing Childcare, grew and changed.

    Review, research, repeat

    As your product and marketing strategies change over time, so do your target users. In our example, Amazing Childcare may have started with a large user base of parents looking for full-time childcare options for their toddlers, but over time, the demographic changed. Now, it’s most frequently used by parents of school-age children looking for one-time, “date night” babysitters. When this happens, your original personas—like Susan—are no longer useful for thinking through design problems. Unless you periodically validate your personas, you’ll be responding to old assumptions (based on your outdated personas) rather than who your customers really are. In other words, your real-world users changed, but Susan didn’t.

    To remedy this, you should regularly conduct persona research, using a variety of methods to evaluate whether your personas still reflect:

    • The most common demographic, budget, and purchase scenarios of your users
    • The main behavior patterns of your users
    • The motivations and goals of your users

    You can conduct your persona research on a schedule, such as once a quarter, or you can opportunistically work it into the usability research you already do. Either way, you need to make a commitment to keeping your personas relevant.

    If we go back to our example at Amazing Childcare, your personas would change based on the new research. Susan may still be a valid persona for your company, but your research would show that she no longer represents its core users, and should therefore no longer be your primary persona. Based on the updated research, you could develop a new persona named Beverly. Beverly is a 42-year-old mother of a 10-year-old boy and 7-year-old girl. Unlike Susan, Beverly is interested in finding an inexpensive babysitter for occasional date nights with her husband. You would use Beverly to think about the needs of the core user base, but only use Susan when you’re designing tools that directly cater to the demographic she represents.

    It is natural and necessary for personas to evolve and change; personas like Susan can drift out of the limelight of “primary persona” and make room for new friends like Beverly. Your ecosystem of personas should be as dynamic as your ecosystem of users, and regular persona research will ensure that they evolve in sync.

    Set goals

    Personas can help you do more than think about and design for target users. They can (and should) be used to help you reach real, tangible goals. Goals that reflect ways of increasing business, creating better user experiences, or both, will help you update your personas and develop your product. Even if you are not sure what is possible to achieve with personas, you should make an attempt at setting goals. Goals (even unachievable ones) provide a means for tracking the return on investment of your efforts.

    To get started, try using this format from Tamara Adlin and John Pruitt.

    The Persona Lifecycle
      • Goal or issue: A problem you would like your personas to solve.
      • How things are today: A description of the current state of affairs.
      • How we want things to be tomorrow: A description of the “first step” in achieving your goal.
      • Ways to measure change: A description of analytics, research, or other methods you can use to measure progress.

    Figure 1: Tamara Adlin and John Pruitt’s Essential Persona Lifecycle format

    For each goal, you will need to identify how you’ll measure progress toward that objective. You may need to create surveys and interview scripts for some, while for others, you may need analytics tools.

    Here is an example of a persona goal we could set at Amazing Childcare.

    Amazing Childcare Persona Goal
      • Goal or issue: Use our primary persona to drive feature development.
      • How things are today: We have just started our business and believe users like “Susan” (our primary persona) will want certain features (like nanny search and background checks) to be truly satisfied. However, the Susan persona needs to be validated and tested.
      • How we want things to be tomorrow: We want to thoroughly research and validate our Susan persona and better understand how Amazing Childcare can meet our primary users’ needs.
      • Ways to measure change: We can validate the Susan persona and measure customer satisfaction through a series of surveys and interviews. We will know we’ve succeeded when the next feature release increases customer satisfaction with Amazing Childcare.

    Figure 2: Example persona goal for Amazing Childcare

    Once you have created a set of goals for your personas, you can evaluate them as part of your regular research plan. If you find that you’re falling behind on any of your goals, you can research and recalibrate your personas based on the metrics you care about.

    For instance, if we evaluated the Susan persona in the ways we’ve outlined above, the data we uncovered would indicate that Susan doesn’t actually represent the majority of our users. We would then reevaluate our personas and ultimately develop our new primary persona, Beverly.

    Putting personas (back) to work

    While research and goal setting are good practices, in and of themselves, the real benefit of personas can be seen when you put them to use. Here are some suggestions for how to incorporate personas into your design practice:

    • Start putting the face of your target persona at the top of every sketch, wireframe, and prototype. Encourage others to do the same.
    • Put a comment in every product story or ticket that states the target persona for that feature.
    • Shake up regular design meetings by asking a few people to roleplay as your personas. Throughout the rest of the meeting, have them look at every new design through the lens of their assigned persona.
    • Conduct a workshop. Activities such as Persona Empathy Mapping reinvigorate and add detail to personas.

    One of my favorite ways to utilize personas is to write scenarios in which they are the main character, then use them to explain research results. For example, let’s say we’re evaluating a new interface for the sign-up and login process on our website. Instead of presenting raw numbers (e.g., “10% of new users couldn’t find the sign-up interface”), we can present the data in a scenario, providing a way to understand a design problem that goes beyond statistics. Here is an example:

    Beverly came to the Amazing Childcare website to evaluate whether the company would actually be useful in helping her find reliable babysitters for her family. She decides that she would like to try the product and wonders if there is a free trial available. She searches the content of the web page for the words “free trial” or “sign-up,” but is unsuccessful. She does not think the “login” button applies to her, since she is a new user and does not yet have an account. She does not think to click on the “login” button, so she fails to find the new-member sign-up interface.

    In the example above, we’re using Beverly to describe feature requirements, usage statistics, and study results. The benefit of using personas to explain these components is that you are simultaneously making messy and complex details easier to understand, and forcing yourself to deeply consider who you’re really designing for. According to Alan Cooper, you should “[d]esign each interface for a single, primary persona.” Focusing on a persona like Beverly forces us to define the parameters of what our design should accomplish and helps us ultimately evaluate its success.

    Keeping personas alive

    Developing personas and keeping them alive can be difficult. Without regular care and feeding, they can waste away and your investment in them will be lost. In The User Is Always Right, Steve Mulder described it best:

    “It’s very easy to create personas, then think your work is done. But just having personas doesn’t mean people will accept them. Just accepting the personas doesn’t mean people will remember them. Just remembering the personas doesn’t mean people will actually use them. Your job is to keep the personas alive so they show their worth.”

    To ensure your personas are accepted, remembered, and used, you need to be the persona advocate on your team. As the persona advocate, you need to:

    • Regularly conduct persona research.
    • Set goals.
    • Make sure there is always a place for your personas at the design table.

    With creativity and persistence, you can cultivate a suite of well-researched, battle-tested user personas.

    While being a persona’s advocate may seem like a lot of work, it’s worth doing. Personas are more than just a document; they are an experience. Taking the time to draft a set of user personas, use them, evaluate them, research them, and refresh them forces you to consider who your users are, what their goals are, and how your product fits into their lives.

    If you’re ready to become the persona advocate on your team, plenty of additional resources are available to help you along the way.



  • Adapting to Input 

    Jeremy Keith once observed that our fixed-width, non-responsive designs were built on top of a consensual hallucination. We knew the web didn’t have a fixed viewport size, but we willfully ignored that reality because it made our jobs easier.

    The proliferation of mobile devices forced us into the light. Responsive web design gave us the techniques to design for the rediscovered reality that the web comes in many sizes.

    And yet there is another consensual hallucination—the idea that desktop equals keyboard and mouse, while phones equal touch.

    It’s time to break free of our assumptions about input and form factors. It’s time to reveal the truth about input.

    Four truths about input

    1. Input is exploding — The last decade has seen everything from accelerometers to GPS to 3D touch.
    2. Input is a continuum — Phones have keyboards and cursors; desktop computers have touchscreens.
    3. Input is undetectable — Browser detection of touch, and nearly every other input type, is unreliable.
    4. Input is transient — Knowing what input someone uses one moment tells you little about what will be used next.

    Being adaptable

    In the early days of mobile web we created pitfalls for ourselves such as “mobile context.” We’ve since learned that mobile context is a myth. People use their phones everywhere and for any task, “especially when it’s their only or most convenient option.”

    When it comes to input, there is a danger of making a similar mistake. We think of a physical keyboard as being better suited to complex tasks than an onscreen keyboard.

    But there are many people whose primary access to the internet is via mobile devices. Those same people are comfortable with virtual keyboards, and we shouldn’t ask them to switch to a physical keyboard to get the best experience.

    Even for those of us who spend our days on computers, sometimes a virtual keyboard is better. Perhaps we’re on a plane that has started to descend. In that moment, being able to detach a keyboard and work on a touchscreen is the difference between continuing our task or stowing our laptop for landing.

    So who are we to judge what input is better? We have no more control over the input someone uses than we do the size of their screen.

    Becoming flexible

    Confronting the truth about input can be overwhelming at first. But we’ve been here before. We’ve learned how to design for a continuum of screen sizes; we can learn how to adapt to input—starting with these seven design principles.

    Design for multiple concurrent inputs

    The idea that we’re either designing for desktop-with-a-mouse or touch-on-mobile is a false dichotomy. People often have access to multiple inputs at the same time. Someone using a Windows 10 laptop or a Chromebook Pixel may be able to use the trackpad and touchscreen concurrently.

    There are many web pages that detect touch events and then make incorrect assumptions. Some see the touch events and decide to deliver a mobile experience regardless of form factor. Others maintain separate code paths for touch and mouse, and once you’re in one path, you cannot switch to the other.

    At minimum, we need to ensure that our web pages don’t prevent people from using multiple types of input.

    Ideally, we would look for ways to take advantage of multiple inputs used together to create better experiences and enable behavior that otherwise wouldn’t be possible.
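    One way to keep every input available is to treat input type as per-interaction state rather than a decision made once at page load. Here is a minimal sketch of that idea (the createInputTracker helper is illustrative, not an existing API):

```javascript
// Track the most recent input type per event instead of deciding once at load.
// This lets a laptop user alternate freely between trackpad and touchscreen.
function createInputTracker() {
  let lastInput = "mouse"; // default assumption until we see an event

  return {
    // Call this from your event listeners, e.g. touchstart/mousedown.
    record(eventType) {
      if (eventType.startsWith("touch")) lastInput = "touch";
      else if (eventType.startsWith("mouse")) lastInput = "mouse";
      else if (eventType.startsWith("pen")) lastInput = "pen";
      return lastInput;
    },
    // Ask "what was used most recently?" at interaction time, never at load time.
    current() {
      return lastInput;
    },
  };
}

// A user switches back and forth; the tracker simply follows along:
const tracker = createInputTracker();
tracker.record("mousedown");
tracker.record("touchstart");
console.log(tracker.current()); // "touch"
tracker.record("mousemove");
console.log(tracker.current()); // "mouse"
```

    Because nothing here locks the page into a branch, a touch event never disables mouse handling, and vice versa.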

    Make web pages that are accessible

    When someone uses a remote control’s directional pad to interact with a web page on a TV, the browser sends arrow key events behind the scenes. This is a pattern that new forms of input use repeatedly—they build on top of the existing forms of input.

    Because of this, one of the best ways to ensure that your web application will be able to support new forms of input is to make sure that it is accessible.

    The information provided to help assistive devices navigate web pages is also used by new types of input. In fact, many of the new forms of input had their beginnings as assistive technology. Using Cortana to navigate the web on an Xbox One is not so different from using voice to control Safari on a Mac.
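    Because a remote’s directional pad arrives as ordinary arrow-key events, supporting it can be as simple as handling keydown. A small sketch of that pattern (the intentForKey helper and the intent names are mine, not from any library):

```javascript
// Map key events to navigation intents. A TV remote's D-pad arrives as the
// same ArrowUp/ArrowDown/ArrowLeft/ArrowRight keys a keyboard sends.
function intentForKey(key) {
  const intents = {
    ArrowUp: "move-up",
    ArrowDown: "move-down",
    ArrowLeft: "move-left",
    ArrowRight: "move-right",
    Enter: "activate",
  };
  return intents[key] || null; // ignore keys we don't handle
}

// In the browser you would wire this up to keydown, something like:
// document.addEventListener("keydown", (e) => {
//   const intent = intentForKey(e.key);
//   if (intent) { e.preventDefault(); /* move focus or activate */ }
// });

console.log(intentForKey("ArrowDown")); // "move-down"
```

    Pages that already support keyboard navigation this way get remote-control support essentially for free.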

    Design for the largest target size by default

    A mouse is more precise than our fingers for selecting items on a screen. Buttons and other controls designed for a mouse can be smaller than those designed for touch. That means something designed for a mouse may be unusable by someone using a touchscreen.

    However, something designed for touch is not only usable by mouse, but is often easier to select due to Fitts’s Law, which says that “the time to acquire a target is a function of the distance to and size of the target.”

    Plus, larger targets are easier for users with lower dexterity, whether that is a permanent condition or a temporary one caused by the environment. At the moment, touch demands the largest target size, which means designing for touch first.

    As Josh Clark once said, “when any desktop machine could have a touch interface, we have to proceed as if they all do.”
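    To see why size matters, Fitts’s Law in its common Shannon form gives an index of difficulty of log2(D/W + 1), where D is the distance to the target and W is its width. A quick sketch with illustrative numbers (a typical touch-friendly 44px target versus a small mouse-sized one):

```javascript
// Index of difficulty from Fitts's Law (Shannon formulation):
// a larger target (width) at the same distance is easier to acquire.
function indexOfDifficulty(distance, width) {
  return Math.log2(distance / width + 1);
}

// Same 200px pointer travel; a 44px touch target vs. a 16px mouse target:
const touchTarget = indexOfDifficulty(200, 44); // ~2.47 bits
const smallTarget = indexOfDifficulty(200, 16); // ~3.75 bits
console.log(touchTarget < smallTarget); // true: the bigger target is easier
```

    The touch-sized target scores a lower index of difficulty, which is exactly why it is also faster for mouse users.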

    Design for modes of interaction instead of input types

    Gmail’s display density settings illustrate the benefit of designing for user interaction instead of input types.

    Gmail Interface

    By default, Gmail uses a comfortable display density setting. If someone wants to fit more information on the screen, they can switch to the compact display density setting.

    It so happens that these two settings map well to different types of input. The comfortable setting is touch-friendly. And compact is well suited for a mouse.

    But Gmail doesn’t confine these options to a particular input. Someone using a touchscreen laptop could choose to use the compact settings. Doing so sacrifices the utility of the laptop’s touchscreen, but the laptop owner gets to make that choice instead of the developer making it for her.

    Vimeo made a similar choice with their discontinued feature called Couch Mode. Couch Mode was optimized for the 10ft viewing experience and supported remote controls. But there was nothing that prevented someone from using it on their desktop computer. Or for that matter, using the standard Vimeo experience on their TV.

    In both cases, the companies designed for use cases rather than for a specific form factor or input. Or worse, for a specific input inferred from a form factor.

    Abstract baseline input

    When we’re working on responsive web designs at Cloud Four, we’ve found that the labels “mobile,” “tablet,” and “desktop” are problematic. Those labels create images in people’s minds that are often not true. Instead, we prefer “narrow,” “wide,” “tall,” and “short” to talk about the screens we’re designing for.

    Similarly, words like “click” and “tap” betray assumptions about what type of input someone might use. Using more general terms such as “point” and “select” helps prevent us from inadvertently designing for a particular input.

    We should also abstract baseline input in our code. Mouse and touch events are entirely different JavaScript APIs, which makes it difficult to write applications that support both without duplicating a lot of code.

    The Pointer Events specification normalizes mouse, touch, and stylus events into a single API. This means for basic input, you only have to write your logic once.

    Pointer events map well to existing mouse events. Instead of mousedown, use pointerdown. And if you need to tailor an interaction to a specific type of input, you can check the pointerType property and provide alternate logic—for example, to support gestures for touchscreens.

    Pointer Events are a W3C standard and the jQuery team maintains a Pointer Events Polyfill for browsers that don’t yet support the standard.
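    A minimal sketch of the single-API pattern (the handler and its return values are illustrative): one pointerdown listener covers mouse, touch, and pen, with pointerType available when an interaction genuinely needs to be specialized.

```javascript
// One handler for mouse, touch, and pen via Pointer Events.
// It returns which branch ran, so the logic is easy to test outside a browser.
function handlePointerDown(event) {
  // event.pointerType is "mouse", "touch", or "pen"
  switch (event.pointerType) {
    case "touch":
      return "begin-gesture"; // e.g. watch for a swipe
    case "pen":
      return "begin-stroke";  // e.g. pressure-sensitive drawing
    default:
      return "begin-drag";    // mouse, and anything we don't recognize
  }
}

// In the browser:
// element.addEventListener("pointerdown", (e) => handlePointerDown(e));

console.log(handlePointerDown({ pointerType: "touch" })); // "begin-gesture"
```

    Note that the default branch handles unknown pointer types too, so future input devices degrade gracefully to the mouse behavior.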

    Progressively enhance input

    After baseline input has been wrangled, the fun begins. We need to start exploring what can be done with all the new input types available to us.

    Perhaps you can find some innovative uses for the gyroscope like Warby Parker’s product page, which uses the gyroscope to turn the model’s head. And because the feature is built using progressive enhancement, it also works with mouse or touch.

    Warby Parker UI

    The camera can be used to scan credit cards on iOS or create a photo booth in browsers that support getUserMedia. Normal input forms can be enhanced with the accept attribute to capture images or video via the HTML Media Capture specification:

    <input type="file" accept="image/*">
    <input type="file" accept="video/*;capture=camcorder">
    <input type="file" accept="audio/*;capture=microphone">

    Make your forms easier to complete by ensuring they work with autofill. Google has found that users complete forms up to 30 percent faster when using autofill. And keep an eye on the Payment Request API, which will make collecting payment simple for customers.

    Or if you really want to push the new boundaries of input, the Web Speech API can be used to enhance form fields in browsers that support it. And Physical Web beacons can be combined with Web Bluetooth to create experiences that are better than native.

    Make input part of your test plans

    Over the last few years, test plans have evolved to include mobile and tablet devices. But I have yet to see a test plan that includes testing for stylus support.

    It makes intuitive sense that people check out faster when using autofill, but none of the ecommerce projects that I’ve worked on have verified that their checkout forms support autofill.

    We need to incorporate input in our test plans. If you have a device testing lab, make input one of the criteria you use to determine what new devices to purchase. And if you don’t have a device testing lab, look for an open device testing lab near you and consider contributing to the effort.

    The way of the web

    Now is the time to experiment with new forms of web input. The key is to build a baseline input experience that works everywhere and then progressively enhance to take advantage of new capabilities of devices if they are available.

    With input, as with viewport size, we must be adaptable. It is the way of the web.

  • The Itinerant Geek 

    This spring I spent almost a month on the road, and last year I delivered 26 presentations in eight different countries, spending almost four months traveling. While doing all of this I am also running a business. I work every day that I am on the road, most days putting in at least six hours in addition to my commitments for whichever event I am at. I can only keep up this pace because travel is not a huge stressor in my life. Here are some things I have learned about making that possible, in the hope they are useful to anyone setting off on their first long trip. Add your own travel tips in the comments.

    Before you go

    During the run-up to going away, I stay as organized as possible. Otherwise I would lose a lot of time just preparing for the trips. I have a Trello board set up with packing list templates. I copy a list and remove or add anything specific to that trip. Then I can just grab things without thinking about it and check them off. I also use Trello to log the status of plans for each trip; for example, do I have a hotel room and flights booked? Is the slide deck ready? Do I know how I am getting from the airport to the hotel? This way I have instant access to the state of my plans and can also share this information if needed.

    It is easy to think you will always have access to your information in its original form. However, it is worth printing a copy of your itinerary to keep with you just in case you can’t get online or your phone battery runs out. For times when you don’t have physical access to something at the moment, take photos of your passport and car insurance (if it covers rentals), and upload them somewhere secure.

    Your travel may require a visa. If your passport is expiring within six months of your trip, you may want to get a new one — some countries won’t issue a visa on a passport that is due to expire soon. You can in some cases obtain pre-authorization, such as through the American ESTA form for participating in its Visa Waiver Program. This might have changed since your last trip. For example, Canada has introduced an eTA system as of March 2016. I’ve traveled to Canada for ConFoo for the last four years - if I attend next year, I’ll need to remember to apply for this beforehand.

    Tell your bank and credit card company that you are traveling, to avoid having your card blocked as soon as you make a purchase at your destination.

    Make sure you have travel insurance that covers not only your possessions but yourself as well. Be aware that travel insurance will not pay out if you become sick or injured due to an existing condition that you didn’t tell them about first. You will have to pay an increased premium for cover of an existing issue, but finding yourself with no cover and far from home is something you want to avoid.

    Make sure that you have a sufficient supply of any medicine that you need. Include some extra in case of an unscheduled delay in returning home. I also usually pack a few common remedies - especially if I am going somewhere that is not English speaking. I have a vivid memory of acting out an allergic reaction to a Polish pharmacist to remind me of this!

    I also prepare for the work I’ll be doing on the road. In addition to preparing for the talks or workshops I might be giving, I prepare for work on Perch or for the business. I organize my to-do list to prioritize tasks that are difficult to do on the road, and make sure they are done before I go. I push tasks into the travel period that I find easier on the small screen of my laptop, or that I can complete even in a distracting environment.

    When booking travel, give yourself plenty of time. If you are short of time then every delay becomes stressful, and stress is tiring. Get to the airport early. Plan longer layovers than the 70 minutes your airline believes it will take you to deplane from the first flight and make it round a labyrinthine nightmare from the 1980s to find the next one. On the way home from Nashville, my first plane was delayed due to the inbound flight having to change equipment. The three-hour layover I had chosen meant that even with almost two hours of delay I still made my transatlantic leg home in time. Travel is a lot less stressful if you allow enough time for things to go wrong.

    Air travel tips

    Try to fly with the same airline or group in order to build up your frequent flyer status. Even a little bit of “status” in an airline miles program will give you some perks, and often priority for upgrades and standby tickets.

    If you want to take anything of significant size onto the aircraft as hand luggage, the large roller bags are often picked out to be gate-checked on busy flights. I travel with a Tom Bihn Aeronaut bag, which I can carry as a backpack. It is huge, but the gate staff never spot it and due to being soft-sided, it can squash into the overhead compartments on the smaller planes that are used for internal U.S. flights.

    Have in your carry-on an overnight kit in case your checked luggage does not make it to your destination at the same time as you do. Most of the time you’ll find your bag comes in on the next flight and will be sent to your hotel, but if you need to get straight to an event it adds stress to be unable to change or brush your teeth.

    If you plan to work on the flight, charge your laptop and devices whenever you can. More and more planes come with power these days - even in economy - but it can’t be relied on. I have a BatteryBox, a large external battery. It’s a bit heavy but means I can work throughout a 10-hour flight without needing to plug in.

    On the subject of batteries, airlines are becoming increasingly and understandably concerned about the fire risk posed by lithium ion batteries. Make sure you keep any spare batteries in your hand luggage and remove them if your bag is gate-checked. Here is the guide issued by British Airways on the subject.

    A small flat cool bag, even without an icepack, works for a good amount of time to cool food you are bringing from airside as an alternative to the strange offerings onboard. I usually pop a cold water bottle in with it. London Heathrow T5 has a Gordon Ramsay “Plane Food” restaurant that will make you a packed lunch in a small cool bag to take on the plane!

    Get lounging

    Airport lounges are an oasis. Something I didn’t realize when I started traveling is that many airport lounges are pay on entry rather than being reserved for people with higher class tickets or airline status. If you have a long layover then the free drinks, wifi, power, and snacks will be worth the price - and if it means you can get work done you can be making money. The LoungeBuddy app can help you locate lounges that you can access whether you have airline status or not.

    There is another secret to airline lounges: they often have a hotline to the airline and can sort out your travel issues if your flight is delayed or canceled. With the delayed flight in my last trip I checked myself into the American Airlines lounge, mentioning my delay and concern for the ongoing leg of the flight. The member of staff on the desk had the flight status checked and put me on standby for another flight “just in case.” She then came to let me know - while I happily sat working in the lounge - that it all looked as if it would resolve in time for me to make my flight. Once again, far less stressful than trying to work this out myself or standing in a long line at the desk in the airport.

    Looking after yourself

    If you do one or two trips a year then you should just relax and enjoy them - eat all the food, drink the drinks, go to the parties and forget about your regular exercise routine. If you go to more than 20, you won’t be able to do that and also do anything else. I quickly learned how to pace myself and create routines wherever I am that help to bring a sense of normal life to hotel living.

    I try as much as possible to eat the same sort of food I usually eat for the majority of the time - even if it does mean I’m eating alone rather than going out for another dinner. Hotel restaurants are used to the fussiest of international travelers and will usually be able to accommodate reasonable requests. I do a quick recce of possible food options when I arrive in a location, including places I can cobble together a healthy packed lunch if the conference food is not my thing. I’ll grab a sparkling water from the free bar rather than another beer, and I’ll make use of the hotel gym or go for a run to try and keep as much as possible to the training routine I have at home. I do enjoy some great meals and drinks with friends - I just try not to make that something that happens every night, then I really enjoy those I do get to.

    I’m fortunate to not need a lot of sleep; however, I try to get the same amount I would at home. I’ve also learned not to stress about time differences. If I am doing trips that involve both the East and West Coasts of America, I will often just remain on East Coast time, getting up at 4 a.m. rather than time-shifting back and forth. If you are time-shifting, eating at the right time for where you are and getting outside into the light can really help. The latter is not always easy given the hotel-basement nature of many conference venues. I tend to run in the morning to remind myself it is daytime, but just getting out for a short walk in the daylight before heading into the event can make a huge difference.

    I take care to wash my hands after greeting all those conference-goers and spending time in airports and other places, and am a liberal user of wet wipes to clean everything from my plane tray table to the hotel remote control. Yes, I look like a germaphobe, but I would hate to have to cancel a talk because I got sick. Taking a bit of care with these things does seem to make a huge difference in terms of the number of minor illnesses I pick up.

    Many of us in this industry are introverts and find the constant expectation to socialize and be available tiring. I’m no exception, and I have learned to build alone time into my day, which helps me to be more fully present when I am spending time with other speakers and attendees. This is possible even as a speaker, when I believe it is very important to be available to chat with attendees and not just vanish. Because I attend a large number of events, I have often already seen the talks given by other speakers, or know I can catch them at the next event. So I will take some time to work or relax during a few sessions in order to make myself available to chat during the breaks.

    If you are taking extended trips of two weeks or more, these can be hugely disruptive to elements of your life that are important to your wellbeing. That might be in terms of being unable to attend your place of worship, meet with a therapist, or attend a support group meeting. With some thought and planning you may be able to avoid this becoming an additional source of stress - can you find a congregation in your location, use Skype to meet with your therapist, or touch base with someone from your group?

    Working on the road

    Once at your destination, getting set up to work comfortably makes a huge difference to how much you can get done. Being hunched over a laptop for days will leave you tired and in pain. My last trip was my first with the new and improved Roost Stand, along with an external Apple keyboard and trackpad. The Roost is amazing; it is incredibly light and allowed me to get the laptop to a really great position to work properly.

    Plan your work periods in advance and be aware of what you can do with no, or limited internet connectivity. In OmniFocus I have a Context to flag up good candidates for offline work, and I also note what I need to have in order to do that work. I might need to ensure I have a copy of some documentation, or to have done a git pull on a repository before I head into the land of no wifi. I use Dash for technical documentation data sets when offline. On a ten-hour flight with no wifi you soon realize just how much stuff you look up every day!
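    Before a long offline stretch, I try to script as much of this preparation as possible. As a rough sketch - the directory layout and the fast-forward-only policy are my own assumptions, not part of any tool mentioned above - a small shell function can pull every Git repository found under one directory before you lose connectivity:

```shell
# pull_all: run "git pull" in every Git repository found directly under
# the given directory, so the latest code is on disk before going offline.
# Hypothetical helper -- adjust the directory layout to match your own.
pull_all() {
  for dir in "$1"/*/; do
    if [ -d "$dir/.git" ]; then
      echo "Updating $dir"
      # Fast-forward only: no surprise merge commits while traveling.
      # Warn on failure rather than aborting the rest of the loop.
      git -C "$dir" pull --ff-only 2>/dev/null || echo "warning: could not update $dir"
    fi
  done
}
```

    Running something like pull_all "$HOME/work" before leaving for the airport means one command covers every project, instead of trying to remember which repositories you touched that week.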

    If traveling to somewhere that is going to be horribly expensive for phone data, do some research in advance and find out how to get a local pay-as-you-go sim card. If you want to swap that into your phone, you need an unlocked phone (and also the tools to open it). My preferred method is to put the card into a mobile broadband modem and connect my phone to that over wifi. This means I can still receive calls on my usual number.

    The chance of breaking or losing your laptop, or having it stolen, increases when it isn’t safely on your desk in the office. Have good insurance, but also good backups. During conferences, we often switch off things like Dropbox or our backup service in order to preserve the wifi for everyone - don’t forget you have done this! As soon as you are able, make sure your backups run. My aim is always to be in a position where, if I lost my laptop, I could walk into a store, buy a new one, and be up and running within a few hours without losing my work - especially the things I need to present.

    Enjoy the world!

    Don’t forget to also plan a little sightseeing in the places you go. I would hate to feel that all I ever saw of these places was the airport, hotel, and conference room. I love to book myself on a walking tour. You can discover a lot about a city in a few hours the morning before your flight out, and there are always other lone business travelers on these tours. I check Trip Advisor for reviews to find a good tour. Lonely Planet has “Top things to do in…” guides for many cities, including Paris. I’ll pick off one item that fits into the time I have available and head out for some rapid tourism. As a runner I’m also able to see many of the sights by planning my runs around them!

    Those of us who get to travel, who have the privilege of doing a job that can truly be done from anywhere, are very lucky. With a bit of planning you can enjoy travel, be part of events, and still get work done and remain healthy. By reducing the stressful events you do have control over, you will be in better shape to deal with the inevitable ones you do not.

  • Strategies for Healthier Dev 

    Not too long ago, I was part of a panel at the launch event for TechLadies, an initiative that encourages women to learn to code. Along the way, I mentioned a bit about my background as an athlete. As we were leaving to go home, the woman next to me jokingly asked if I was a better basketball player or a better developer. Without missing a beat, I said I was a better basketball player. After all, I’ve been playing basketball for over half my life; I’ve only been coding for two and a half years.

    We’ve probably all come across the stereotype of the nerdy programmer who is all brains and no brawn. I’m a counterexample of that cliché, and I personally know developers who are avid cyclists or marathon runners—even a mountain climber (the kind who scales Mount Everest). And yet a stereotype, “a widely held but fixed and oversimplified image,” often comes into existence for a reason. Think of Douglas Coupland’s Microserfs. Think of any number of mainstream dramas featuring wan (usually white, usually male) programmers staring at screens. Many so-called knowledge workers are too sedentary. Our lives and work stand to benefit if we become less so.

    Now, no one likes to suffer. And yet when it comes to exercise or training, it’s too easy for us to think that fitness is all about self-discipline—that we just need to have the willpower to persevere through the agony. But that’s not a good strategy for most people. Unless you genuinely find pleasure in pain and suffering, you have to want something badly enough to endure pain and suffering. Ask any athlete if they enjoy running extra sprints or lifting extra weights. Even Olympic medalists will tell you they don’t. They do it because they want to be the best.

    My point is this: forcing yourself to do something you don’t enjoy is not sustainable. I’ll be the first to admit that I’m not a big fan of running. A little ironic coming from someone who used to play basketball full-time, maybe, but the only reason I did any running at all, ever, was because competitive basketball required me to. When I stopped training full-time, I simply couldn’t muster the energy or motivation to get up and run every day (or even every week, for that matter).

    So I had to come up with a different game plan: one organized around minimal effort, near-zero effort, and minor effort. You can do it, too. No excuses. Ready?

    Minimal effort

    I’m lazy.

    I’m pretty good at talking myself out of doing things that require extra effort to get ready for. For example, going swimming requires that I pack toiletries, a fresh set of clothes, and goggles. Then I actually need to make it to the pool after work before it closes, which means I have to plan to leave the office earlier than I usually might, and so on. Guess what? Eight out of ten times, I end up telling myself to go swimming next time.

    By contrast, I commute to work on my bicycle. Yes, it helps that I love to ride. I thoroughly enjoy swimming, too—just not enough to overcome my laziness. But because cycling is my main mode of transportation, I don’t even think about it as exercise. It’s just something I do as part of my day, like brushing my teeth.

    The “while-you’re-at-it” technique works very well for me, and maybe it’ll work for you, too. In a nutshell: build healthy habits into things you already do. Kind of how parents hide vegetables in more palatable stuff to get their kids to eat them.

    Near-zero effort

    Let me list some simple activities that involve minimal effort, but have significant returns on investment. Consider these the minimum viable products (MVPs) of healthy habits.

    Drink more water

    Most of us have been told to drink eight glasses of water a day, but how many of us actually drink that much? The real amount of water people need on a daily basis seems debatable, but I’m going to make the bold assumption that most of us don’t drink more than one liter (or around four glasses) of water a day. And no, coffee doesn’t count.

    This means that most of us operate in a mildly dehydrated state throughout the day. Studies done on both men and women have shown that mild dehydration negatively impacts one’s mood and cognitive function. Given that our work requires significant mental acuity, upping our water intake is a minimal-effort lifehack with significant benefits.

    Note that people often mistake thirst for hunger. Studies have shown that we’re notoriously bad at distinguishing the two. Assuming that most of us probably don’t drink enough water throughout the day, odds are that you’re not really hungry when you reach for a snack. In fact, you’re probably thirsty. Don’t grab a can of soda, though—drink water.

    Move more

    A study done on the effects of sedentary behavior revealed that long periods of inactivity increase one’s risk of diabetes and heart disease. The study also mentioned that encouraging individuals simply to sit less and move more, regardless of intensity level, may improve the effectiveness of diabetes-prevention programs.

    Think about how you can incorporate more movement into your routine. Try drinking water throughout the day. Not only will this reinforce the “drink more water” habit, but you’ll also find that you need to get up to go to the bathroom more often. And going to the bathroom is…movement. Note: do not refuse to go to the bathroom because you think you’re “on the brink” of solving a bug. That’s a lie you tell yourself.

    Since you’re getting up and sitting down more often, you might as well sneak some exercise in while you’re at it. Instead of plonking down in your seat when you get back, lower yourself slowly over the course of five seconds until your butt touches your chair. You’re building leg muscles! Who needs a gym? The point is, all the little things you do to increase movement add up.

    Don’t eat while you work

    It might surprise you to know that being aware of what you put in your mouth—and when you put it there—makes a difference. I know many people, not only developers, who eat lunch at their desks, balancing a spoonful of food in one hand while continuing to type with the other. Lunch becomes something that’s shoveled into our mouths and (maybe, if we have time) swallowed. That’s no way to appreciate a meal. Make lunchtime a logical break between your coding sessions. Some folks may protest that there’s just no time to eat: we have to code 20 hours a day!

    First of all, it’s impossible to be efficient that way. A study (PDF) from the University of Illinois at Urbana-Champaign has shown that taking a deliberate break can reboot focus on the task at hand. It offsets our brain’s tendency to fall into autopilot, which explains why we can’t come up with good solutions after continuously staring at a bug for hours. Tom Gibson wrote a beautiful post explaining how human beings are not linear processes. We are still operating on an industrial model where emphasis is placed on hours worked, not output achieved.

    We need to aim for a healthy “Work Rate Variability” and develop models of working that stop making us ill, and instead let us do our best.
    Tom Gibson

    Also, by actually bothering to chew your food before swallowing, you eat more slowly. Research has shown that eating slowly leads to lower hunger ratings and increased fullness ratings. Chances are you’ll feel healthier overall and gain a fresh sense of perspective, too, by giving yourself a proper lunch break. Such is the power of minimal effort.

    Use a blue-light filter at night

    Personally, I’m a morning person, but most of my developer friends are night owls. Everybody functions best at different times of the day, but if you’re someone who operates better at night, I recommend installing f.lux on your desktop and mobile devices. It’s a tiny application that makes the color of your computer’s display adapt to ambient light and time of day.

    Melatonin is a hormone that helps maintain the body’s circadian rhythms, which determine when we sleep and wake up. Normally, our bodies produce more melatonin when it gets dark. Scientists have found that exposure to room light in the evening suppresses melatonin during normal sleep hours. Research on the effects of blue light has shown that blue light suppresses sleep-associated delta brainwaves while stimulating alertness. Because it doesn’t make sense, given socioeconomic realities, to ask people to stop working at night, the best alternative is to reduce exposure to blue light.

    Minor effort required

    If you’ve already started incorporating zero-effort health habits into your life, and feel like putting in a bit more effort, this section outlines tactics that take a little more than zero effort.


    Take walking breaks

    When I started writing code, I found myself glued to my chair for hours on end. You know that feeling when you’re debugging something and obstinately refuse to let that bug get the better of you? But I realized that my efficiency decreased the longer I worked on something without stopping. I can’t tell you how many times I worked on a bug till I threw my hands up in frustration and went for a walk, only to have the solution come to me as I strolled outside enjoying the breeze and a change of scenery.

    Walking doesn’t require any additional planning or equipment. Most of us, if we’re lucky, can do it without thinking. The health benefits accrued include a reduction of chronic diseases like stroke and heart disease. Try this: as part of your attempt to have a better lunch break, take a walk after you’ve properly chewed and swallowed your lunch. It limits the increase of blood sugar levels immediately after a meal. You’ll get fitter while you’re at it.


    Stretch at your desk

    I don’t know about you, but sitting for long periods of time makes my hips feel tight and my back tense up. The scientific research on the exact effects of sitting on the structural integrity of your hip flexors seems to be inconclusive, but I know how I feel. A lot of us tend to slouch in our chairs, too, which can’t be good for our overall posture.

    If you find yourself craning your neck forward at your desk, with your shoulders up near your ears and back rounded forward, news flash! You have terrible posture. So what can you do about it? Well, for starters, you can refer to a handy infographic from the Washington Post that summarizes the ills of bad posture. The TL;DR: bad posture negatively affects your shoulders, neck, hips, and especially your back.

    Slouching for prolonged periods causes the soft discs between our vertebrae to compress unevenly. If you take a sponge and place a weight on one side of it and leave it there for hours, the sponge will warp. And that’s exactly what happens to our discs. As someone who has suffered from a prolapsed disc, I can tell you that back trouble is no fun at all.

    Here’s another thing you can do: stretch at your desk. You don’t have to do all of these exercises at once—just sprinkle them throughout your work day. The improved blood circulation will be a boon for your brain, too.


    Get more sleep

    Most of us don’t get enough sleep. I hardly know anyone over the age of 12 who goes to bed before 11 p.m. Maybe that’s just the company I keep, but there are lots of reasons for not getting enough sleep these days. Some of us work late into the night; some of us game late into the night. Some of us care for children or aging parents, or have other responsibilities that keep us up late. I live in Singapore, which ranks third on the list of cities clocking the fewest hours of sleep: six hours and 32 minutes.

    Sleep deprivation means more than just yawning all the time at work. Research has shown that the effects of sleep deprivation are equivalent to being drunk. Insufficient sleep affects not only your motor skills, but also your decision-making abilities (PDF) and emotional sensitivity (PDF). You become a dumb, angry troll when sleep-deprived.

    Changing your sleep habits takes some effort. The general advice is to sleep and wake up at the same time each day, and to aim for seven and a half hours of sleep. According to Richard Wiseman, a psychology professor at the University of Hertfordshire, our sleep cycles run in 90-minute intervals. Waking up in the middle of those cycles makes us groggy. Wiseman offers tips on how to sleep better.

    Resistance training

    By “resistance training,” I don’t mean hefting iron plates and bars at the gym (though if you like to do that, more power to you). If you enjoy the privilege of able-bodiedness, try to make vigorous physical movement part and parcel of your daily life. Ideally, you’ll have the basic strength and coordination to run and jump. And to be able to get right up without much effort after falling down. You don’t have to be an elite athlete—that’s a genetic thing—but with luck, you’ll be able to perform at least some basic movements.

    Our own body weight is plenty for some rudimentary exercises. And it doesn’t matter if the heaviest weight you’re willing to lift is your laptop and you couldn’t do a push-up if your life depended on it. There are progressions for everyone. Can’t do a push-up on the ground? Do a wall push-up instead. Can’t do a basic squat? Practice sitting down on your chair very slowly. Can’t run? Take a walk. (Yes, walking is a form of resistance training.) And so on.

    There are two websites I recommend checking out if you’re interested in learning more. The first is Nerd Fitness by Steve Kamb. He and I share a similar philosophy: small changes add up to big results. He covers topics ranging from diet to exercise and offers lots of resources to help you on your journey. Another site I really love is GMB fitness. It teaches people how to move better, and to better understand and connect with their bodies.

    Wrapping up: slow & steady

    There is only one way to build new habits: consistency over time. That’s why it’s so important to do things that take minimal effort. The less effort an action requires, the more likely you are to do it consistently. Also: try not to make drastic changes to all aspects of your life at once (though that may be effective for some). Regardless of whether you mind change in your life or not, almost any change introduces stress to your system. And even constant low-grade stress is detrimental. It’s better to start small, with minor changes that you barely feel; once that becomes a habit, move on to the next change.

    We spend hours maintaining our code and refactoring to make it better and more efficient. We do the same for our computers, optimizing our workflows and installing tweaks to eke out those extra seconds of performance. So it’s only right that we put a little effort into keeping our bodies reasonably healthy. Fixing health problems usually costs more than fixing bugs or machines—and often the damage is irreversible. If we want to continue to write great code and build cool products, then we should take responsibility for our health so that we can continue to do what we love for decades to come.

  • Create an Evolutionary Web Strategy with a Digital MRO Plan 

    Many organizations, large and small, approach creating their web presence as if it’s a one-time project. They invest an enormous amount of time and money in a great web design, content strategy, and technical implementation; and then they let the website sit there for months and even years without meaningful updates or enhancements. When the web presence becomes so out of date it’s barely functional, it becomes clear to them that the site needs a refresh (or more likely another full redesign).

    Redesigns are great. But there’s a better way: ensure your client has a website that continually adapts to their needs.

    Equip your client with a framework that helps them with ongoing management of their web presence. This plan also ensures you continue to build a strong relationship over the long term. It’s called an MRO plan.

    MRO stands for Maintenance, Repair, and Overhaul. It’s a term most often used with building facilities or machinery.

    A house is a machine for living in.
    Le Corbusier

    Everyone knows that a building or a piece of heavy machinery needs a regular maintenance plan. Buildings and machines are complex systems that need tuning and maintenance. Websites are also complex systems. You could say, “A website is a machine for engagement.” To keep that engagement running smoothly, your client needs a plan that includes regular maintenance along with content and feature updates.

    The problem with the curve

    Typically, websites undergo waves of full redesign, neglect, failure, full redesign. Think of it as a series of bell curves dipping into the negative between revolutionary overhauls.

    The revolution approach to managing your web presence.


    Your client comes to you with an initial big push to deliver a new web design and content strategy, something that they will be able to manage without your assistance. And you provide that. But once you walk away, the website stops evolving.

    During this time, the client’s products or services may evolve, and they may adapt their product-based content to changes in their market—but they don’t touch the website. Like old bread, their website gets stale until the day comes when it’s clear that it needs to be fixed ASAP. That’s when you get the call. There’s a huge drive to do a website redesign, and a big new project is kicked off.

    You finish the project and walk away. Again.

    But this is a mistake. It’s smarter to show your client how to implement a plan that protects their investment in their website. It’s smarter for the client, and it’s smarter for you too because it allows you to develop an ongoing relationship that ensures you have recurring revenue over a longer period.

    Convince your client to break this endless cycle of big, expensive redesign projects every few years. Show them that they need to manage their website the same way they manage product development–by consistently and regularly monitoring and managing their web experience, focusing on ongoing maintenance, interim updates, and major overhauls when needed.

    Think evolution not revolution

    A digital MRO plan provides continual investment so websites can evolve in a more consistent manner over time–evolution versus revolution. The evolutionary approach requires your client to regularly update their website based on how their company, the industry, and their customer data are changing.


    An MRO program for a web presence–the evolution approach.

    Define an MRO framework for your client with three phases:

    1. Maintenance: This is the phase that occurs over a long period, with regular monitoring of web pages, content assets, and other resources in addition to functionality. The maintenance phase is about fixing small things, making small changes or updates that don’t require major work on the website. How you can help: Outline a regular maintenance plan where issues are documented and then packaged together into maintenance updates. In some cases, these fixes are content-based, in other cases they are functionality bugs or small updates that need to be applied. You can work on these maintenance updates monthly or more often depending on the situation, delivering regular changes to the website to keep it up to date.
    2. Repair: Repairs are like interim updates. They may require a fair amount of changes to the website to fix a problem or implement a new concept or idea, but they don’t require a full redesign. Some examples include updating or removing a section of the website not visited often, rewriting an outdated key whitepaper, or improving the resources section. They could also include rewrites to web pages for a new version of a product, or the addition of a set of new web pages. How you can help: Whether it’s a set of web pages for a new product, or a redesign of the resources section of the website, recommend quarterly reviews of the website where you can discuss new content or functionality that can be added to the site to improve it for customers and prospects. This requires that you follow trends in both content marketing and design/development, as well as trends in the industry of the client (and their competition). Recommend “mini” projects to implement these interim updates for your client.
    3. Overhaul: During an overhaul phase it’s time for that full redesign. Maybe the client is implementing a new brand, and they need to update their website to reflect it. Maybe they need to implement a modern CMS. Overhaul projects take time and big budgets, and typically take place every five or more years. How you can help: Working with the client on a regular basis on maintenance and small repairs enables you to demonstrate your understanding of the client, their needs and their customers’ needs, proving that you are the right one to run the redesign project. Your knowledge of the industry, along with your experience with the website and the technology it lives on makes you the right choice. Recommend a full website review every four to five years to determine if a redesign is necessary, and to demonstrate how you are in the best position to complete the project successfully.

    Your digital MRO plan should prioritize and align work based on the evolution of the customer’s organization or business, as well as the feedback visitors are giving on the website. Incorporating customer feedback and analytics into your MRO plan provides the insight you need to streamline engagement and helps your customer validate the return on investment from their website. You can use surveys, A/B tests, session cams, heat maps, and web analytics reports to focus on the areas of the site that need updating and prioritize projects into each phase of the MRO plan.

    The benefits of an MRO program for web presence

    With a solid MRO plan you can help your client manage their website like they would their products and services: with regular, consistent updates. Creating a digital MRO plan enables you to show your client how they can get more consistent, predictable ROI from their website and other digital channels and streamline their budget.

    When pitching an MRO program to your client, focus on the following benefits:

    • Budget management: By following an MRO program, costs are spread over a longer period instead of a big outlay of time and money for a large project.
    • Improved customer experience: Implementing web analytics, listening posts, surveys, and feedback programs ensures the client is listening to its customers and delivering on customer needs consistently, improving website engagement.
    • Content is never out of date: Product-based content assets are updated in line with product/service improvements, ensuring the most current information is available on the website. You can also help your client plan additions to marketing content assets or add news in line with product updates.
    • Reduced costs and increased ROI: The website is a primary value driver for every business. It’s the best salesperson, the digital storefront, the manifestation of a brand, and a hub for customer services and support. Keeping the website working well will increase digital ROI and lower costs.

    Perhaps the biggest benefit of an MRO plan is more successful redesigns. With an MRO program in place, clients can take the guesswork out of large redesign projects. They will have the results of years of optimization to build upon, ensuring that when they do launch the big redesign they will have real data and experience to know what will work.

    Be an integral part of an MRO plan

    It’s one thing to recommend and sell a client on following an MRO plan, but it’s another to ensure that you and/or your team are an integral part of that plan. Here are some suggestions on how you can build your time and budget into an MRO plan.

    1. Recommend a dedicated cross-functional digital team with time and resources allocated for the website. The team should include capabilities such as a writer, designer, and web developer. Depending on your relationship with the client, one or two of those capabilities, such as content writing/analysis or design and development, should be provided by you or your team.
    2. Schedule monthly cross-functional meetings to brainstorm, research, and validate requirements and ideas for website updates and changes. You should have access to website analytics so you can stay informed about the performance of the website. Based on these meetings, help the client package changes into maintenance or interim updates.
    3. Suggest a process and budget to handle maintenance updates based on your experience with this client and similar clients.
    4. Provide a budget for regular website design and enhancement implementation by you or your team. The scope and regularity of these enhancements will vary based on the needs of the business or organization, but plan for no less than once per quarter. Build in enough time to monitor the client’s industry and competition, as well as review website analytics and content management trends.
    5. Recommend a process for completing a full website review driven by you. This takes the burden off the client to plan and coordinate the review and ensures you are part of the review and recommendations for a redesign.

    A proactive approach

    For many organizations, the easy route is revolution. It seems easier because it happens only once every few years. But this tactic takes more time and costs much more money up front.

    An MRO program ensures businesses are strategically managing their web presence and putting in place the ongoing resources to keep it up to date and relevant for their prospects and customers.

    One of those ongoing resources is you. Build your role into the MRO program, indicating where you can provide services that support different phases of the program. Being involved on a regular basis with maintenance and interim updates demonstrates your understanding of the client’s needs and ensures you will be the one they come to when the big redesign project happens (and it will happen).

    Whether you are a single freelancer, a two-person team, or part of a larger agency, the key to building long-term, revenue-generating relationships with clients is getting them to see the value of a proactive approach for website management. An MRO program can help you do that.

  • The Foundation of Technical Leadership 

    I’m a front-end architect, but I’m also known as a technical leader, subject matter expert, and a number of other things. I came into my current agency with five years of design and development management experience; yet when it came time to choose a path for my career with the company, I went the technical route.

    I have to confess I had no idea what a technical leader really does. I figured it out, eventually.

    Technical experts are not necessarily technical leaders. Both have outstanding technical skills; the difference is in how others relate to you. Are you a person that others want to follow? That’s the question that really matters. Here are some of the soft skills that set a technical leader apart from a technical expert.

    Help like it’s your job

    Your authority in a technical leadership position—or any leadership position—is going to arise from what you can do for (or to) other people. Healthy authority here stems from you being known as a tried-and-true problem-solver for everyone. The goal is for other people to seek you out, not for you to be chasing down people for code reviews. For this to happen, intelligence and skill are not enough—you need to make a point of being helpful.

    For the technical leader, if you’re too busy to help, you’re not doing your job—and I don’t just mean help someone when they come by and ask for help. You may have to set an expectation with your supervisor that helping others is a vital part of a technical leader’s job. But guess what? It might be billable time—check with your boss. Even if it’s not, try to estimate how much time it’s saving your coworkers. Numbers speak volumes.

    The true measure of how helpful you are is the technical know-how of the entire team. If you’re awesome but your team can’t produce excellent work, you’re not a technical leader—you’re a high-level developer. There is a difference. Every bit of code you write, every bit of documentation you put together should be suitable to use as training for others on your team. When making a decision about how to solve a problem or what technologies to use, think about what will help future developers.

    My job as front-end architect frequently involves not only writing clean code, but cleaning up others’ code to aid in reusability and comprehension by other developers. That large collection of functions might work better as an object, and it’ll probably be up to you to make that happen, whether through training or just doing it.

    Speaking of training, it needs to be a passion. Experience with and aptitude for training were probably the biggest factors in my landing the position of front-end architect. Public speaking is a must. Writing documentation will probably fall on you. Every technical problem that comes your way should be viewed as an opportunity to train the person who brought it to you.

    Helping others, whether they’re other developers, project managers, or clients, needs to become a passion for you if you’re an aspiring technical leader. This can take a lot of forms, but it should permeate into everything you do. That’s why this is rule number one.

    Don’t throw a mattress into a swimming pool

    An infamous prank can teach us something about being a technical leader. Mattresses are easy to get into swimming pools; but once they’re in there, they become almost impossible to get out. Really, I worked the math on this: a queen-sized mattress, once waterlogged, will weigh over 2000 pounds.

    A lot of things are easy to work into a codebase: frameworks, underlying code philosophies, even choices on what technology to use. But once a codebase is built on a foundation, it becomes nearly impossible to get that foundation out of there without rebuilding the entire codebase.

    Does that shiny new framework seem like a good idea? You’d better hope everyone on your team knows how to use it, and that it’s still around in six months. Don’t have time to go back and clean up that complex object you wrote to handle all the Ajax functionality? Don’t be surprised when people start writing unneeded workarounds because they don’t understand your code. Did you leave your code in a state that’s hard to read and modify? I want you to imagine a mattress being thrown into a swimming pool…

    Failure to heed this command frequently results in you being the only person who can work on a particular project. That is never a good situation to be in.

    Here is one of the big differences between a technical expert and a technical leader: a technical expert could easily overlook that consideration. A technical leader would take steps to ensure that it never happens.

    As a technical expert, you’re an A player, and that expertise is needed everywhere; and as a technical leader, it’s your job to make sure you can supply it, whether that means training other developers, writing and documenting code to get other developers up to speed, or intentionally choosing frameworks and methodologies your team is already familiar with.

    Jerry Weinberg, in The Psychology of Computer Programming, said, “If a programmer is indispensable, get rid of him as quickly as possible!” If you’re in a position where you’re indispensable to a long-term project, fixing that needs to be a top priority. You should never be tied down to one project, because your expertise is needed across the team.

    Before building a codebase on anything, ask yourself what happens when you’re no longer working on the project. If the answer is they have to hire someone smarter than you or the project falls apart, don’t include it in the project.

    And as a leader, you should be watching others to make sure they don’t make the same mistake. Remember, technology decisions usually fall on the technical leader, no matter who makes them.

    You’re not the only expert in the room

    “Because the new program is written for OS 8 and can function twice as fast. Is that enough of a reason, Nancy Drew?”

    That’s the opening line of Nick Burns, Your Company’s Computer Guy, from the Saturday Night Live sketch of the same name. He’s a technical expert who shows up, verbally abuses you, fixes your computer, and then insults you some more before shouting, “Uh, you’re welcome!” It’s one of those funny-because-it’s-true things.

    The stereotype of the tech expert who treats everyone else as inferiors is so prevalent that it’s worked its way into comedy skits, television shows, and watercooler conversations in businesses across the nation.

    I’ve dealt with the guy (or gal). We all have. You know the one: the person who won’t admit fault, who gets extremely defensive whenever others suggest their own ideas, who views his intellect as superior to everyone else’s and lets them know it.

    It takes a lot more courage and self-awareness to admit that I’ve been that guy on more than one occasion. As a smart guy, I’ve built my self-esteem on that intellect. So when my ideas are challenged, when my intellect is called into question, it feels like a direct assault on my self-esteem. And it’s even worse when it comes from someone less knowledgeable than me. How dare they question my knowledge! Don’t they know that I’m the technical expert?

    Instead of viewing teammates as people who know less than you, try to view them as people who know more than you in different areas. Treat others as experts in other fields that you can learn from. That project manager may not know much about your object-oriented approach to the solution, but she’s probably an expert in how the project is going and how the client is feeling about things.

    Once again, in The Psychology of Computer Programming, Weinberg said, “Treat people who know less than you with respect, deference, and patience.” Take it a step further. Don’t just treat them that way—think of them that way. You’d be amazed how much easier it is to work with equals rather than intellectually inferior minions—and a change in mindset might be all that’s required to make that difference.

    Intelligence requires clarity

    It can be tempting to protect our expertise by making things appear more complicated than they are. But in reality, it doesn’t take a lot of intelligence to make something more complicated than it needs to be. It does, however, take a great deal of intelligence to take something complicated and make it easy to understand.

    If other developers and non-technical people can’t understand your solution when you explain it in basic terms, you’ve got a problem. Please don’t hear that as “All good solutions should be simple,” because that’s not the case at all—but your explanations should be. Learn to think like a non-technical person so you can explain things in their terms. This will make you much more valuable as a technical leader.

    And don’t take for granted that you’ll be around to explain your solutions. Sometimes you’ll never meet the person implementing your solution, but that email you sent three weeks ago will still be there to do the explaining. Work on your writing skills. Pick up a copy of Steven Pinker’s The Sense of Style and read up on persuasive writing. Start a blog and write a few articles on your coding philosophies.

    The same principle extends to your code. If code is really hard to read, it’s usually not a sign that a really smart person wrote it; in fact, it usually means the opposite. Speaker and software engineer Martin Fowler once said, “Any fool can write code that a computer can understand. Good programmers write code that humans can understand.”

    Remember: clarity is key. The perception of your intelligence is going to define the reality of your work experience, whether you like it or not.

    You set the tone

    Imagine going to the doctor to explain some weird symptoms you’re having. You sit down on the examination bed, a bit nervous and a bit confused as to what’s actually going on. As you explain your condition, the doctor listens with widening eyes and shaking hands. And the more you explain, the worse it gets. This doctor is freaking out. When you finally finish, the doctor stammers, “I don’t know how to handle that!”

    How would you feel? What would you do? If it were me, I’d start saying goodbye to loved ones, because that’s a bad, bad sign. I’d be in a full-blown panic based on the doctor’s reaction.

    Now imagine a project manager comes to you and starts explaining the weird functionality needed for a particularly tricky project. As you listen, it becomes clear that this is completely new territory for you, as well as for the company. You’re not even sure if what they’re asking is possible.

    How do you respond? Are you going to be the crazy doctor above? If you are, I can assure you the project manager will be just as scared as you are, if not more so.

    I’m not saying you should lie and make something up, because that’s even worse. But learning to say “I don’t know” without a hint of panic in your voice is an art that will calm down project teams, clients, supervisors, and anyone else involved in a project. (Hint: it usually involves immediately following up with, “but I’ll check it out.”)

    As a technical leader, people will follow your emotional lead as well as your technical lead. They’ll look to you not only for the answers, but for the appropriate level of concern. If people leave meetings with you more worried than they were before, it’s probably time to take a look at how your reactions are influencing them.

    Real technical leadership

    Technical leadership is just as people-centric as other types of leadership, and knowing how your actions impact others can make all the difference in the world in moving from technical expert to technical leader. Remember: getting people to follow your lead can be even more important than knowing how to solve technical problems. Ignoring people can be career suicide for a technical leader—influencing them is where magic really happens.


  • This week's sponsor: Skillshare 

    SKILLSHARE. Explore thousands of online classes in design, business, and more! Get 3 months of unlimited access for $0.99.

  • The Future of the Web 

    Recently the web—via Twitter—erupted in short-form statements that soon made it clear that buttons had been pushed, sides taken, and feelings felt. How many feels? All the feels. Some rash words may have been said.

    But that’s Twitter for you.

    It began somewhat innocuously off-Twitter, with a very reasonable X-Men-themed post by Brian Kardell (one of the authors of the Extensible Web Manifesto). Brian suggests that the way forward is by opening up (via JavaScript) some low-level features that have traditionally been welded shut in the browser. This gives web developers and designers—authors, in the parlance of web standards—the ability to prototype future native browser features (for example, by creating custom elements).

    If you’ve been following all the talk about web components and the shadow DOM of late, this will sound familiar. The idea is to make standards-making a more rapid, iterative, bottom-up process; if authors have the tools to prototype their own solutions or features (poly- and prolly-fills), then the best of these solutions will ultimately rise to the top and make their way into the native browser environments.

    This sounds empowering, collaborative—very much in the spirit of the web.

    And, in fact, everything seemed well on the World Wide Web until this string of tweets by Alex Russell, and then this other string of tweets. At which point everyone on the web sort of went bananas.

    Doomsday scenarios were proclaimed; shadowy plots implied; curt, sweeping ideological statements made. In short, it was the kind of shit-show you might expect from a touchy, nuanced subject being introduced on Twitter.

    But why is it even touchy? Doesn’t it just sound kind of great?

    Oh wait JavaScript

    Whenever you talk about JavaScript as anything other than an optional interaction layer, folks seem to gather into two big groups.

    On the Extensible Web side, we can see the people who think JavaScript is the way forward for the web. And there’s some historical precedent for that. When Brendan Eich created JavaScript, he was aware that he was putting it all together in a hurry, and that he would get things wrong. He wanted JavaScript to be the escape hatch by which others could improve his work (and fix what he got wrong). Taken one step further, JavaScript gives us the ability to extend the web beyond where it currently is. And that, really, is what the Extensible Web Manifesto folks are looking to do.

    The web needs to compete with native apps, they assert. And until we get what we need natively in the browser, we can fake it with JavaScript. Much of this approach is encapsulated in the idea of progressive web apps (offline access, tab access, file system access, a spot on the home screen)—giving the web, as Alex Russell puts it, a fair fight.

    On the other side of things, in the progressive enhancement camp, we get folks who are worried these approaches will leave some users in the dust. This is epitomized by the “what about users with no JavaScript” argument. This polarizing question—though far from the entire issue—gets at the heart of the disagreement.

    For the Extensible Web folks, it feels like we’re holding the whole web back for a tiny minority of users. For the Progressive Enhancement folks, it’s akin to throwing out accessibility—cruelly denying access to a subset of (quite possibly disadvantaged) users.

    During all this hubbub, Jeremy Keith, one of the most prominent torchbearers for progressive enhancement, reminded us that nothing is absolute. He suggests that—as always—the answer is “it depends.” Now this should be pretty obvious to anyone who’s spent a few minutes in the real world doing just about anything. And yet, at the drop of a tweet, we all seem to forget it.

    So if we can all take a breath and rein in our feelings for a second, how might we better frame this whole concept of moving the web forward? Because from where I’m sitting, we’re all actually on the same side.

    History and repetition

    To better understand the bigger picture about the future of the web, it’s useful (as usual) to look back at its past. Since the very beginning of the web, there have been disagreements about how best to proceed. Marc Andreessen and Tim Berners-Lee famously disagreed about the IMG tag. Tim didn’t get his way, Marc implemented IMG in Mosaic as he saw fit, and we all know how things spun out from there. It wasn’t perfect, but a choice had to be made and it did the job. History suggests that IMG did its job fairly well.

    A pattern of hacking our way to the better solution becomes evident when you follow the trajectory of the web’s development.

    In the 1990s, webmasters and designers wanted layout like they were used to in print. They wanted columns, dammit. David Siegel formalized the whole tables-and-spacer-GIFs approach in his wildly popular book Creating Killer Web Sites. And thus, the web was flooded with both design innovation and loads of un-semantic markup. Which we now know is bad. But those were the tools that were available, and they allowed us to express our needs at the time. Life, as they say…finds a way.

    And when CSS layout came along, guess what it used as a model for the kinds of layout techniques we needed? That’s right: tables.

    While we’re at it, how about Flash? As with tables, I’m imagining resounding “boos” from the audience. “Boo, Flash!” But if Flash was so terrible, why did we end up with a web full of Flash sites? I’ll tell you why: video, audio, animation, and cross-browser consistency.

    In 1999? Damn straight I want a Flash site. Once authors got their hands on a tool that let them do all those incredible things, they brought the world of web design into a new era of innovation and experimentation.

    But again with the lack of semantics, linkability, and interoperability. And while we were at it, with the tossing out of an open, copyright-free platform. Whoops.

    It wasn’t long, though, before the native web had to sit up and take notice. Largely because of what authors expressed through Flash, we ended up with things like HTML5, Ajax, SVGs, and CSS3 animations. We knew the outcomes we wanted, and the web just needed to evolve to give us a better solution than Flash.

    In short: to get where we need to go, we have to do it wrong first.

    Making it up as we go along

    We authors express our needs with the tools available to help model what we really need at that moment. Best practices and healthy debate are a part of that. But please, don’t let the sort of emotions we attach to politics and religion stop you from moving forward, however messily. Talk about it? Yes. But at a certain point we all need to shut our traps and go build some stuff. Build it the way you think it should be built. And if it’s good—really good—everyone will see your point.

    If I said to you, “I want you to become a really great developer—but you’re not allowed to be a bad developer first,” you’d say I was crazy. So why would we say the same thing about building the web?

    We need to try building things. Probably, at first, bad things. But the lessons learned while building those “bad” projects point the way to the better version that comes next. Together we can shuffle toward a better way, taking steps forward, back, and sometimes sideways. But history tells us that we do get there.

    The web is a mess. It is, like its creators, imperfect. It’s the most human of mediums. And that messiness, that fluidly shifting imperfection, is why it’s survived this long. It makes it adaptable to our quickly-shifting times.

    As we try to extend the web, we may move backward at the same time. And that’s OK. That imperfect sort of progress is how the web ever got anywhere at all. And it’s how it will get where we’re headed next.

    Context is everything

    One thing that needs to be considered when we’re experimenting (and building things that will likely be kind of bad) is who the audience is for that thing. Will everyone be able to use it? Not if it’s, say, a tool confined to a corporate intranet. Do we then need to worry about sub-3G network users? No, probably not. What about if we’re building on the open web but we’re building a product that is expressly for transferring or manipulating HD video files? Do we need to worry about slow networks then? The file sizes inherent in the product pretty much exclude slow networks already, so maybe that condition can go out the window there, too.

    Context, as usual, is everything. There needs to be realistic assessment of the risk of exclusion against the potential gains of trying new technologies and approaches. We’re already doing this, anyway. Show me a perfectly progressively enhanced, perfectly accessible, perfectly performant project and I’ll show you a company that never ships. We do our best within the constraints we have. We weigh potential risks and benefits. And then we build stuff and assess how well it went; we learn and improve.

    When a new approach we’re trying might have aspects that are harmful to some users, it’s good to raise a red flag. So when we see issues with one another’s approaches, let’s talk about how we can fix those problems without throwing out the progress that’s been made. Let’s see how we can bring greater experiences to the web without leaving users in the dust.

    If we can continue to work together and consciously balance these dual impulses—pushing the boundaries of the web while keeping it open and accessible to everyone—we’ll know we’re on the right track, even if it’s sometimes a circuitous or befuddling one. Even if sometimes it’s kind of bad. Because that’s the only way I know to get to good.

  • Help One of Our Own: Carolyn Wood 

    One of the nicest people we’ve ever known and worked with is in a desperate fight to survive. Many of you remember her—she is a gifted, passionate, and tireless worker who has never sought the spotlight and has never asked anything for herself.

    Carolyn Wood spent three brilliant years at A List Apart, creating the position of acquisitions editor and bringing in articles that most of us in the web industry consider essential reading—not to mention more than 100 others that are equally vital to what we do today. Writers loved her. Since 1999, she has also worked on great web projects like DigitalWeb, The Manual, and Codex: The Journal of Typography.

    Think about it. What would the web look like if she hadn’t been a force behind articles like these:

    Three years ago, Carolyn was confined to a wheelchair. Then it got worse. From the YouCaring page:

    This April, after a week-long illness, she developed acute injuries to the tendons in her feet and the nerves in her right hand and arm. She couldn’t get out of her wheelchair, even to go to the bathroom. At the hospital, they discovered Carolyn had acute kidney failure. After a month in a hospital and a care facility she has bounced back from the kidney failure, but she cannot take painkillers to help her hands and feet.

    Carolyn cannot stand or walk or dress herself or take a shower. She is dependent on a lift, manned by two people, to transfer her. Without it she cannot leave her bed.

    She’s now warehoused in a home that does not provide therapy—and her insurance does not cover the cost. Her bills are skyrocketing. (She even pays $200 a month in rent for her bed!)

    Perhaps worst of all—yes, this gets worse—is that her husband has leukemia. He’s dealing with his own intense pain and fatigue and side effects from twice-monthly infusions. They are each other’s only support, and have been living apart since April. They have no income other than his disability, and are burning through their life savings.

    This is absolutely a crisis situation. We’re pulling the community together to help Carolyn—doing anything we possibly can. Her bills are truly staggering. She has no way to cover basic life expenses, much less raise the huge sums required to get the physical and occupational therapy she needs to be independent again.

    Please help by donating anything you can, and by sharing Carolyn’s support page with anyone in your network who is compassionate and will listen.


  • This week's sponsor: Bitbucket 

    BITBUCKET: Over 450,000 teams and 3 million developers love Bitbucket - it’s built for teams! Try it free.

  • Promoting a Design System Across Your Products 

    The scene: day one of a consulting gig with a new client to build a design and code library for a web app. As luck would have it, the client invited me to sit in on a summit of 25 design leaders from across their enterprise, planning across platforms and lines of business. The company had just exploded from 30 to over 100 designers. Hundreds more were coming. Divergent product design was everywhere. They dug in to align efforts.

    From a corner, I listened quietly. I was the new guy, minding my own business, comfortable with my well-defined task and soaking up strategy. Then, after lunch, the VP of Digital Design pulled me into an empty conference room.

    “Can you refresh me on your scope?” she asked. So I drew an account hub on the whiteboard.

    Diagram showing an account hub

    “See, the thing is…” she responded, standing up and taking my pen. “We’re redesigning our web marketing homepage now.” She added a circle. “We’re also reinventing online account setup.” Another circle, then arrows connecting the three areas. “We’ve just launched some iOS apps, and more—plus Android—are coming.” She added more circles, arrows, more circles.

    Diagram showing an interconnected enterprise ecosystem: marketing, account setup, account hub, plus iOS apps

    “I want it all cohesive. Everything.” She drew a circle around the entire ecosystem. “Our design system should cover all of this. You can do that, right?”

    A long pause, then a deep breath. Our design system—the parts focused on, the people involved, the products reached—had just grown way more complicated.

    Our industry is getting really good at surfacing reusable parts in a living style guide: visual language like color and typography, components like buttons and forms, sophisticated layouts, editorial voice and tone, and so on. We’ve also awoken to the challenges of balancing the centralized and federated influence of the people involved. But there’s a third consideration: identifying and prioritizing the market of products our enterprise creates that our system will reach.

    As a systems team, we need to ask: what products will use our system and how will we involve them?

    Produce a product inventory

    While some enterprises may have an authoritative and up-to-date master list of products, I’ve yet to work with one. There’s usually no more than a loose appreciation of a constantly evolving product portfolio.

    Start with a simple product list

    A simple list is easy enough. Any whiteboard or text file will do. Produce the list quickly by freelisting as many products as you can think of with teammates involved in starting the system. List actual products (“Investor Relations” and “Careers”), not types of products (such as “Corporate Subsites”).

    Some simple product lists:

    Large Corporate Web Site (5–15 products):
    • Homepage
    • Products
    • Support
    • About
    • Careers

    Small Product Company (10–25 products):
    • Web marketing site
    • Web support site
    • Web corporate site
    • Community site 1
    • Community site 2
    • Web app basic
    • Web app premium
    • Web app 3
    • Web app 4
    • Windows flagship client
    • Windows app 2

    Large Enterprise (20–100 products):
    • Web home
    • Web product pages
    • Web product search
    • Web checkout
    • Web support
    • Web rewards program
    • iOS apps (10+)
    • Android apps (10+)
    • Web account mgmt (5+)
    • Web apps (10+)
    Note that because every enterprise is unique, the longer the lists get, the more specific they become.

    For broader portfolios, gather more details

    If your portfolio is more extensive, you’ll need more deliberate planning and coordination of teams spanning an organization. This calls for a more structured, detailed inventory. It’s spreadsheet time, with products as rows and columns for the following:

    • Name, such as Gmail
    • Type / platform: web site, web app, iOS, Android, kiosk, etc.
    • Product owner, if that person even exists
    • Description (optional)
    • People (optional), like a product manager, lead designer or developer, or others involved in the product
    • Other metadata (optional): line of business, last redesigned, upcoming redesign, tech platform, etc.
    Screenshot showing a detailed product inventory
    A detailed product inventory.
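    If you want the inventory to be more than a static spreadsheet (for filtering, reporting, or feeding a team dashboard), the columns above translate naturally into a small record type. A rough sketch in TypeScript; the field names and sample rows are invented for illustration, not prescribed by any particular tool:

```typescript
// Hypothetical shape for one row of the product inventory.
// Fields mirror the columns listed above; all values are examples.
interface Product {
  name: string;                      // e.g. Gmail
  platform: "web site" | "web app" | "iOS" | "Android" | "kiosk";
  owner?: string;                    // product owner, if that person even exists
  description?: string;              // optional
  people?: string[];                 // PM, lead designer or developer, etc.
  metadata?: Record<string, string>; // line of business, tech platform, etc.
}

const inventory: Product[] = [
  { name: "Web checkout", platform: "web site", owner: "J. Doe" },
  { name: "Rewards app", platform: "iOS" },
];

// One payoff of a structured inventory: quick slices of the portfolio,
// such as every product with no named owner.
const unowned = inventory.filter((p) => !p.owner).map((p) => p.name);
```

    Even a sketch like this surfaces the gaps (missing owners, unknown platforms) that a whiteboard list tends to hide.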

    Creating such an inventory can feel draining for a designer. Some modern digital organizations struggle to fill out an inventory like this. I’m talking deer-in-headlights kind of struggling. Completely locked up. Can’t do it. But consider life without it: if you don’t know the possible players, you may set yourself up for failure, or at least a slower road to success. Therefore, take the time to understand the landscape, because the next step is choosing the right products to work with.

    Prioritize products into tiers

    A system effort is never equally influenced by every product it serves. Instead, the system must know which products matter—and which don’t—and then varyingly engage each in the effort. You can quickly gather input on product priorities from your systems team and/or leaders using techniques like cumulative voting.

    Your objective is to classify products into tiers, such as Flagship (the few, essential core products), Secondary (additional influential products), and The Rest, in order to orient strategy and clarify objectives.
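    The cumulative-voting tally behind that classification is simple enough to sketch. In this hypothetical example (product names, vote counts, and tier cutoffs are all invented, not part of the article’s method), each stakeholder spreads a fixed budget of votes, totals are summed, and thresholds map scores to tiers:

```typescript
// Each stakeholder distributes a fixed budget of votes (here, 10)
// across the products they care most about.
const ballots: Record<string, number>[] = [
  { "Web checkout": 5, "iOS app": 3, "Careers": 2 },
  { "Web checkout": 6, "iOS app": 4 },
  { "Careers": 1, "iOS app": 5, "Web checkout": 4 },
];

// Sum the votes per product.
const totals: Record<string, number> = {};
for (const ballot of ballots) {
  for (const [product, n] of Object.entries(ballot)) {
    totals[product] = (totals[product] ?? 0) + n;
  }
}

// Map scores to tiers; these cutoffs are arbitrary examples, not a rule.
const tier = (score: number): string =>
  score >= 12 ? "Flagship" : score >= 5 ? "Secondary" : "The Rest";
```

    The point isn’t the arithmetic; it’s that writing the tally down forces the team to agree on what “Flagship” means before anyone argues about a specific product.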

    1—Organize around flagships

    Flagship products are the limited number of core products that a system team deeply and regularly engages with. These products reflect a business’ core essence and values, and their adoption of a system signals the system’s legitimacy.

    Getting flagship products to participate is essential, but challenging. Each usually has a lot of individual power and operates autonomously. Getting flagships to share and realize a cohesive objective requires effort.

    Choose flagships that’ll commit to you, too

    When naming flagships, you must believe they’ll play nice and deliver using the system. Expect to work to align flagships: they can be established, complicated, and well aware of their flagship status. Nevertheless, if all flagships deliver using the system, the system is an unassailable standard. If any avoid or obstruct the system, the system lacks legitimacy.

    Takeaway: obtain firm commitments, such as “We will ship with the system by such and such a date” or “Our product MVP must use this design system.” A looser “Yes, we’ll probably adopt what we can” lacks specificity and fidelity.

    Latch onto a milestone, or make your own

    Flagship commitment can surface as a part of a massive redesign, corporate rebranding, or executive decree. Those are easy events to organize around. Without one, you’ll need to work harder bottom-up to align product managers individually.

    Takeaway: establish a reasonable adoption milestone you can broadcast, after which all flagships have shipped with the system.

    Choose wisely (between three and five)

    For a system to succeed, flagships must ship with it. So choose just enough. One flagship makes the system’s goals indistinguishable from its own self-interest. Two products don’t offer enough variety of voices and contexts to matter. Forming a foundation with six or more “equally influential voices” can become chaotic.

    Takeaway: three flagships is the magic minimum, offering sufficient range and incorporating an influential and sometimes decisive third perspective. Allowing for four or five flagships is feasible but will test a group’s ability to work together fluidly.

    A system for many must be designed by many

    Enterprises place top talent on flagship products. It would be naive to think that your best and brightest will absorb a system that they don’t influence or create themselves. It’s a team game, and getting all-stars working well together is part of your challenge.

    Takeaway: integrate flagship designers from the beginning, as you design the system, to inject the right blend of individual styles and shared beliefs.

    2—Blend in a secondary set

    More products—a secondary set—are also important to a system’s success. Such products may not be flagships because they are between major releases (making adoption difficult), not under active development, or even just slightly less valuable.

    Include secondary products in reference designs

    Early systems efforts can explore concept mockups—also known as reference designs—to assess a new visual language across many products. Reference designs reveal an emerging direction and serve as “before and after” roadshow material.

    Takeaway: include secondary products in early design concepts to acknowledge the value of those products, align the system with their needs, and invite their teams to adopt the system early.

    Welcome participation (but moderate contribution)

    Systems benefit from an inclusive environment, so bias behaviors toward welcoming input. Encourage divergent ideas, but know that it’s simply not practical to give everyone a voice in everything. Jon Wiley, an early core contributor to Google’s Material Design, shared some wisdom with me during a conversation: “The more a secondary product’s designer participated and injected value, the more latitude they got to interpret and extend the system for their context.”

    Takeaway: be open to—but carefully moderate—the involvement of designers on secondary products.

    3—Serve the rest at a greater distance

    The bigger the enterprise, the longer and more heterogeneous the long tail of other products that could ultimately adopt the system. A system’s success is all about how you define and message it. For example, adopting the core visual style might be expected, but perhaps rigorous navigational integration and ironclad component consistency aren’t goals.

    Documentation may be your primary—or only—channel to communicate how to use the system. Beyond that, your budding system team may not have the time for face-to-face meetings or lengthy discussions.

    Takeaway: early on, limit focus on and engagement with remaining products. As a system matures, gradually invest in lightweight support activities like getting-started sessions, audits, and triaging office-hour clinics.

    Adjust approach depending on context

    Every product portfolio is different, and thus so is every design system. Let’s consider the themes and dynamics from some archetypal contexts we face repeatedly in our work.

    Example 1: large corporate website, made of “properties”

    You know: the homepage-as-gateway-to-products hegemon (owned by Marketing) integrated with Training, Services, and About Us content (owned by less powerful fiefdoms) straddling a vast ocean of transactional features like Support/Account Management and Communities. All of these “properties” have drifted apart, and some trigger—the decision to go responsive, a rebranding, or an annoyed-enough-to-care executive—dictates that it’s “time to unify!”

    Diagram showing a typical web marketing sitemap overlaid with a product section team’s choices on spreading a system beyond its own section
    Typical web marketing sitemap, overlaid with a product section team’s choices on spreading a system beyond its own section.

    The get? Support

    System influence usually radiates from Marketing and Brand through to selling Products. But Support is where customers spend most of their time: billing, admin, downloading, troubleshooting. Support’s features are complicated, with intricate UI and longer release cycles across multiple platforms. It may be the most difficult section to integrate, but it’s essential.

    Takeaway: if your gets—in this case Home, Products, and Support—deliver, you win. Everyone else will either follow or look bad. That’s your flagship set.

    Minimize homepage distraction

    Achieving cohesive design is about suffusing an entire experience with it. Yet a homepage is often the part of a site that is most exposed to, and justifiably distinct from, otherwise reusable componentry. It has tons of cooks, unique and often complex parts, and changes frequently. Such qualities—indecisiveness, complexity, and instability—corrode systems efforts.

    Takeaway: don’t fall prey to the homepage distraction. Focus on stable fundamentals that you can confidently spread.

    Exploit navigational change to integrate a system hook

    As branding or navigation changes, so does a header. It appears everywhere, and changes to it can be propagated centrally. Get those properties—particularly those lacking full-time design support—to sync with a shared navigation service, and use that hook to open access to the greater goodies your system has to offer.

    Takeaway: exploit the connection! Adopters may not embrace all your parts, but since you are injecting your code into their environment, they could.

    Example 2: a modest product portfolio

    A smaller company’s strategic shifts can be chaotic, lending themselves to an unstable environment in which to apply a system. Nevertheless, a smaller community of designers—often a community of practice dispersed across a portfolio—can provide an opportunity to be more cohesive.

    Radiate influence from web apps

    Many small companies assemble portfolios of websites, web apps, and their iOS, Android, and Windows counterparts. Websites and native apps share little beyond visual style and editorial tone. However, web apps provide a pivot: they can share a far deeper overlap of components and tooling with websites, and their experiences often mirror what’s found on native apps.

    Takeaway: look for important products whose interests overlap many other products, and radiate influence from there.

    Diagram of product relationships within a portfolio, with web apps relating to both web sites and native apps.

    Demo value across the whole journey

    A small company’s flagship products should be the backbone of a customer’s journey, from reach and acquisition through service and loyalty. Design activities that express the system’s value from the broader user journey tend to reveal gaps, identify clunky handoffs, and trigger real discussions around cohesiveness.

    Takeaway: evoke system aspirations by creating before/after concepts and demoing cohesiveness across the journey, such as with a stitched prototype.

    A series of screenshots of the Marriott.com project showing how disparate design artifacts across products were stitched together into an interactive prototype
    For Marriott.com, disparate design artifacts across products (left) were stitched together into an interactive, interconnected prototype (right).

    Bridge collaboration beyond digital

    Because of their areas of focus, “non-digital” designers (working on products like trade-show booths, print, TV, and retail) tend to be less savvy than their digital counterparts when it comes to interaction. Nonetheless, you’ll share the essence of your visual language with them, such as making sure the system’s primary button doesn’t run afoul of the brand’s blue, and yet provides sufficient contrast for accessibility.

    Takeaway: encourage non-digital designers to do digital things. Be patient and inclusive, even if their concerns sometimes drift away from what you care about most.

    Example 3: a massive multiplatform enterprise

    For an enterprise as huge as Google, prioritizing apps was essential to Material Design’s success. The Verge’s “Redesigning Google: How Larry Page Engineered a Beautiful Revolution” suggests strong prioritization, with Search, Maps, Gmail, and later Android central to the emerging system. Not as much in the conversation, perhaps early on? Docs, Drive, Books, Finance. Definitely not SantaTracker.

    Broaden representation across platforms & businesses

    With coverage across a far broader swath of products, ensure flagship product selection spans a few platforms and lines of business. If you want it to apply everywhere, then the system—how it’s designed, developed, and maintained—will benefit from diverse influences.

    Takeaway: Strive for diverse system contribution and participation in a manner consistent with the products it serves.

    Mix doers & delegators

    Massive enterprise systems trigger influence from many visionaries. Yet you can’t rely on senior directors to produce meticulous, thoughtful concepts. Such leaders already direct and manage work across many products. Save them from themselves! Work with them to identify design talent with pockets of time. Even better, ask them to lend a doer they recommend for a month- or weeklong burst.

    Takeaway: defer to creative leaders on strategy, but redirect their instincts from doing everything to identifying and providing talent.

    Right the fundamentals before digging deep

    I confess that in the past, I’ve brought a too-lofty ambition to bear on quickly building huge libraries for organizations of many, many designers. Months later, I wondered why our team was still refining the “big three” (color, typography, and iconography) or the “big five” (the big three, plus buttons and forms). Um, what? Given the system’s broad reach, I had to adjust my expectations to be satisfied with what was still a very consequential shift toward cohesiveness.

    Takeaway: balance ambition for depth with spreading fundamentals wide across a large enterprise, so that everyone shares a core visual language.

    The long game

    Approach a design system as you would a marathon, not a sprint. You’re laying the groundwork for an extensive effort. By understanding your organization through its product portfolio, you’ll strengthen a cornerstone—the design system—that will help you achieve a stronger and more cohesive experience.

  • Making your JavaScript Pure 

    Once your website or application goes past a small number of lines, it will inevitably contain bugs of some sort. This isn’t specific to JavaScript but is shared by nearly all languages—it’s very tricky, if not impossible, to thoroughly rule out the chance of any bugs in your application. However, that doesn’t mean we can’t take precautions by coding in a way that lessens our vulnerability to bugs.

    Pure and impure functions

    A pure function is defined as one that doesn’t depend on or modify variables outside of its scope. That’s a bit of a mouthful, so let’s dive into some code for a more practical example.

    Take this function that calculates whether a user’s mouse is on the left-hand side of a page, and logs true if it is and false otherwise. In reality your function would probably be more complex and do more work, but this example does a great job of demonstrating the idea:

    function mouseOnLeftSide(mouseX) {
        return mouseX < window.innerWidth / 2;
    }

    document.onmousemove = function(e) {
        console.log(mouseOnLeftSide(e.pageX));
    };

    mouseOnLeftSide() takes an X coordinate and checks to see if it’s less than half the window width—which would place it on the left side. However, mouseOnLeftSide() is not a pure function. We know this because within the body of the function, it refers to a value that it wasn’t explicitly given:

    return mouseX < window.innerWidth / 2;

    The function is given mouseX, but not window.innerWidth. This means the function is reaching out to access data it wasn’t given, and hence it’s not pure.

    The problem with impure functions

    You might ask why this is an issue—this piece of code works just fine and does the job expected of it. Imagine that you get a bug report from a user: when the window is less than 500 pixels wide, the function behaves incorrectly. How do you test this? You’ve got two options:

    • You could manually test by loading up your browser and moving your mouse around until you’ve found the problem.
    • You could write some unit tests (Rebecca Murphey’s Writing Testable JavaScript is a great introduction) to not only track down the bug, but also ensure that it doesn’t happen again.

    Keen to have a test in place to avoid this bug recurring, we pick the second option and get writing. Now we face a new problem, though: how do we set up our test correctly? We know we need to set up our test with the window width set to less than 500 pixels, but how? The function relies on window.innerWidth, and making sure that’s at a particular value is going to be a pain.

    Benefits of pure functions

    Simpler testing

    With that issue of how to test in mind, imagine we’d instead written the code like so:

    function mouseOnLeftSide(mouseX, windowWidth) {
        return mouseX < windowWidth / 2;
    }

    document.onmousemove = function(e) {
        console.log(mouseOnLeftSide(e.pageX, window.innerWidth));
    };

    The key difference here is that mouseOnLeftSide() now takes two arguments: the mouse X position and the window width. This means that mouseOnLeftSide() is now a pure function; all the data it needs is explicitly given as inputs, and it never has to reach out to access any data.

    In terms of functionality, it’s identical to our previous example, but we’ve dramatically improved its maintainability and testability. Now we don’t have to hack around to fake window.innerWidth for any tests, but instead just call mouseOnLeftSide() with the exact arguments we need:

    mouseOnLeftSide(5, 499) // ensure it works with width < 500
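    For instance, a pair of unit checks for the sub-500-pixel bug report could be written with Node’s built-in assert module (the exact coordinates here are my own illustration):

    ```javascript
    var assert = require('assert');

    function mouseOnLeftSide(mouseX, windowWidth) {
        return mouseX < windowWidth / 2;
    }

    // 5px is well inside the left half of a 499px-wide window
    assert.strictEqual(mouseOnLeftSide(5, 499), true);
    // 250px is just past the midpoint (249.5px), so it's on the right half
    assert.strictEqual(mouseOnLeftSide(250, 499), false);
    ```

    No browser, no window resizing, no setup: the inputs fully determine the behavior under test.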


    Besides being easier to test, pure functions have other characteristics that make them worth using whenever possible. By their very nature, pure functions are self-documenting. If you know that a function doesn’t reach out of its scope to get data, you know the only data it can possibly touch is passed in as arguments. Consider the following function definition:

    function mouseOnLeftSide(mouseX, windowWidth)

    You know that this function deals with two pieces of data, and if the arguments are well named it should be clear what they are. We all have to deal with the pain of revisiting code that’s lain untouched for six months, and being able to regain familiarity with it quickly is a key skill.

    Avoiding globals in functions

    The problem of global variables is well documented in JavaScript—the language makes it trivial to store data globally where all functions can access it. This is a common source of bugs, too, because anything could have changed the value of a global variable, and hence the function could now behave differently.

    An additional property of pure functions is referential transparency. This is a rather complex term with a simple meaning: given the same inputs, the output is always the same. Going back to mouseOnLeftSide, let’s look at the first definition we had:

    function mouseOnLeftSide(mouseX) {
        return mouseX < window.innerWidth / 2;
    }

    This function is not referentially transparent. I could call it with the input 5 multiple times, resize the window between calls, and the result would be different every time. This is a slightly contrived example, but functions that return different values even when their inputs are the same are always harder to work with. Reasoning about them is harder because you can’t guarantee their behavior. For the same reason, testing is trickier, because you don’t have full control over the data the function needs.

    On the other hand, our improved mouseOnLeftSide function is referentially transparent because all its data comes from inputs and it never reaches outside itself:

    function mouseOnLeftSide(mouseX, windowWidth) {
        return mouseX < windowWidth / 2;
    }

    You get referential transparency for free when following the rule of declaring all your data as inputs, and by doing this you eliminate an entire class of bugs around side effects and functions acting unexpectedly. If you have full control over the data, you can hunt down and replicate bugs much more quickly and reliably without chancing the lottery of global variables that could interfere.

    Choosing which functions to make pure

    It’s impossible to have pure functions consistently—there will always be a time when you need to reach out and fetch data, the most common example of which is reaching into the DOM to grab a specific element to interact with. It’s a fact of JavaScript that you’ll have to do this, and you shouldn’t feel bad about reaching outside of your function. Instead, carefully consider if there is a way to structure your code so that impure functions can be isolated. Prevent them from having broad effects throughout your codebase, and try to use pure functions whenever appropriate.

    Let’s take a look at the code below, which grabs an element from the DOM and changes its background color to red:

    function changeElementToRed() {
        var foo = document.getElementById('foo');
        foo.style.backgroundColor = "red";
    }

    There are two problems with this piece of code, both solvable by transitioning to a pure function:

    1. This function is not reusable at all; it’s directly tied to a specific DOM element. If we wanted to reuse it to change a different element, we couldn’t.
    2. This function is hard to test because it’s not pure. To test it, we would have to create an element with a specific ID rather than any generic element.

    Given the two points above, I would rewrite this function to:

    function changeElementToRed(elem) {
        elem.style.backgroundColor = "red";
    }

    function changeFooToRed() {
        var foo = document.getElementById('foo');
        changeElementToRed(foo);
    }

    We’ve now changed changeElementToRed() to not be tied to a specific DOM element and to be more generic. At the same time, we’ve made it pure, bringing us all the benefits discussed previously.

    It’s important to note, though, that I’ve still got some impure code—changeFooToRed() is impure. You can never avoid this, but it’s about spotting opportunities where turning a function pure would increase its readability, reusability, and testability. By keeping the places where you’re impure to a minimum and creating as many pure, reusable functions as you can, you’ll save yourself a huge amount of pain in the future and write better code.
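    As a quick illustration of that testability (a sketch assuming the function only ever touches elem.style), a plain stub object can stand in for a DOM element:

    ```javascript
    function changeElementToRed(elem) {
        elem.style.backgroundColor = "red";
    }

    // no browser or DOM required: any object with a style property works
    var stub = { style: {} };
    changeElementToRed(stub);
    console.log(stub.style.backgroundColor); // "red"
    ```

    Because the function depends only on what it’s given, the test needs no element with a specific ID, and no DOM at all.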


    “Pure functions,” “side effects,” and “referential transparency” are terms usually associated with purely functional languages, but that doesn’t mean we can’t take the principles and apply them to our JavaScript, too. By being mindful of these principles and applying them wisely when your code could benefit from them you’ll gain more reliable, self-documenting codebases that are easier to work with and that break less often. I encourage you to keep this in mind next time you’re writing new code, or even revisiting some existing code. It will take some time to get used to these ideas, but soon you’ll find yourself applying them without even thinking about it. Your fellow developers and your future self will thank you.

  • This week's sponsor: FULLSTORY 

    FullStory is a pixel-perfect session playback tool that captures everything about your customer experience with one easy-to-install script.

  • Commit to Contribute 

    One morning I found a little time to work on nodemon and saw a new pull request that fixed a small bug. The only problem with the pull request was that it didn’t have tests and didn’t follow the contributing guidelines, which meant the automated deploy wouldn’t run.

    The contributor was obviously extremely new to Git and GitHub, and even this small change was well outside their comfort zone, so when I asked for the changes to adhere to the way the project works, it all kind of fell apart.

    How do I change this? How do I make it easier and more welcoming for outside developers to contribute? How do I make sure contributors don’t feel like they’re being asked to do more than necessary?

    This last point is important.

    The real cost of a one-line change

    Many times in my own code, I’ve made a single-line change that could be a matter of a few characters, and this alone fixes an issue. Except that’s never enough. (In fact, there’s usually a correlation between the maturity and/or age of the project and the amount of additional work to complete the change due to the growing complexity of systems over time.)

    A recent issue in my Snyk work was fixed with this single line change:

    lines of code

    In this particular example, I had solved the problem in my head very quickly and realized that this was the fix. Except that I had to then write the test to support the change, not only to prove that it works but to prevent regression in the future.

    My projects (and Snyk’s) all use semantic release to automate releases by commit message. In this particular case, I had to bump the dependencies in the Snyk command line and then commit that with the right message format to ensure a release would inherit the fix.

    All in all, the one-line fix turned into this: one line, one new test, tested across four versions of node, bump dependencies in a secondary project, ensure commit messages were right, and then wait for the secondary project’s tests to all pass before it was automatically published.

    Put simply: it’s never just a one-line fix.

    Helping those first pull requests

    Doing a pull request (PR) into another project can be pretty daunting. I’ve got a fair amount of experience and even I’ve started and aborted pull requests because I found the chain of events leading up to a complete PR too complex.

    So how can I change my projects and GitHub repositories to be more welcoming to new contributors and, most important, how can I make that first PR easy and safe?

    Issue and pull request templates

    GitHub recently announced support for issue and PR templates. These are a great start because now I can specifically ask for items to be checked off, or information to be filled out to help diagnose issues.

    Here’s what the PR template looks like for Snyk’s command line interface (CLI):

    - [ ] Ready for review
    - [ ] Follows CONTRIBUTING rules
    - [ ] Reviewed by @remy (Snyk internal team)
     #### What does this PR do?
     #### Where should the reviewer start?
     #### How should this be manually tested?
     #### Any background context you want to provide?
     #### What are the relevant tickets?
     #### Screenshots
     #### Additional questions

    This is partly based on QuickLeft’s PR template. These items are not hard prerequisites on the actual PR, but it does help in getting full information. I’m slowly adding these to all my repos.

    In addition, having a CONTRIBUTING.md file in the root of the repo (or in .github) means new issues and PRs include the notice in the header:

    GitHub contributing notice

    Automated checks

    For context: semantic release will read the commits in a push to master, and if there’s a feat: commit, it’ll do a minor version bump. If there’s a fix: it’ll do a patch version bump. If the text BREAKING CHANGE: appears in the body of a commit, it’ll do a major version bump.

    I’ve been using semantic release in all of my projects. As long as the commit message format is right, there’s no work involved in creating a release, and no work in deciding what the version is going to be.
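    The convention can be sketched as a toy function (this is my own illustration of the rules above, not semantic release’s actual implementation, which parses full conventional commits):

    ```javascript
    // toy model of the commit-message convention described above
    function releaseType(commitMessages) {
        var breaking = function (m) { return /BREAKING CHANGE:/.test(m); };
        var feat = function (m) { return /^feat:/.test(m); };
        var fix = function (m) { return /^fix:/.test(m); };
        if (commitMessages.some(breaking)) return 'major';
        if (commitMessages.some(feat)) return 'minor';
        if (commitMessages.some(fix)) return 'patch';
        return null; // nothing release-worthy: no version bump
    }

    console.log(releaseType(['fix: handle empty config'])); // "patch"
    console.log(releaseType(['feat: add --json flag']));    // "minor"
    ```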

    Something that none of my repos historically had was the ability to validate contributed commits for formatting. In reality, semantic release doesn’t mind if you don’t follow the commit format; they’re simply ignored and don’t drive releases (to npm).

    I’ve since come across ghooks, which runs commands on Git hooks; in particular, it can run validate-commit-msg on the commit-msg hook. The installation is relatively straightforward, and the feedback to the user is really good, because if the commit message needs tweaking to follow the commit format, I can include examples and links.
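    For reference, the wiring lives in package.json; a minimal sketch, assuming both ghooks and validate-commit-msg are installed as dev dependencies:

    ```json
    {
      "config": {
        "ghooks": {
          "commit-msg": "validate-commit-msg"
        }
      }
    }
    ```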

    Here’s what it looks like on the command line:

    Git commit validation

    ...and in the GitHub desktop app (for comparison):

    Git commit validation

    This is work that I can load on myself to make contributing easier, which in turn makes my job easier when it comes to managing and merging contributions into the project. In addition, for my projects, I’m also adding a pre-push hook that runs all the tests before the push to GitHub is allowed. That way if new code has broken the tests, the author is aware.

    To see the changes required to get the output above, see this commit in my current tinker project.

    There are two further areas worth investigating. The first is the commitizen project. The second, which I’d really like to see, is a GitHub bot that could automatically comment on pull requests to say whether the commits are okay (and if not, direct the contributor on how to fix that problem) and also to show how the PR would affect the release (i.e., whether it would trigger a release, either as a bug patch or a minor version change).

    Including example tests

    I think this might be the crux of the problem: the lack of example tests in any project. A test can be a minefield of challenges, such as these:

    • knowing the test framework
    • knowing the application code
    • knowing about testing methodology (unit tests, integration, something else)
    • replicating the test environment

    Another project of mine, inliner, has a disproportionately high rate of PRs that include tests. I put that down to the ease with which users can add tests.

    The contributing guide makes it clear that contributing doesn’t even require that you write test code. Authors just create a source HTML file and the expected output, and the test automatically includes the file and checks that the output is as expected.

    Adding specific examples of how to write tests will, I believe, lower the barrier of entry. I might link to some sort of sample test in the contributing doc, or create some kind of harness (like inliner does) to make it easy to add input and expected output.
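    A fixture-style harness of the kind described can be sketched in a few lines (the file-naming scheme and the transform parameter are my own illustration, not inliner’s actual code):

    ```javascript
    var fs = require('fs');
    var path = require('path');
    var assert = require('assert');

    // every foo.src.html fixture is run through the transform under test
    // and compared against the matching foo.expected.html file
    function runFixtures(dir, transform) {
        fs.readdirSync(dir)
            .filter(function (f) { return /\.src\.html$/.test(f); })
            .forEach(function (src) {
                var input = fs.readFileSync(path.join(dir, src), 'utf8');
                var want = fs.readFileSync(
                    path.join(dir, src.replace('.src.html', '.expected.html')),
                    'utf8'
                );
                assert.strictEqual(transform(input), want, src + ' failed');
            });
    }
    ```

    Contributors then add a test by dropping two files into the fixtures directory, without writing any test code at all.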

    Fixing common mistakes

    Something I’ve also come to accept is that developers don’t read contributing docs. It’s okay, we’re all busy, we don’t always have time to pore over documentation. Heck, contributing to open source isn’t easy.

    I’m going to start including a short document on how to fix common problems in pull requests. Often it’s amending a commit message or rebasing the commits. This is easy for me to document, and will allow me to point new users to a walkthrough of how to fix their commits.
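    Such a walkthrough might boil down to a couple of commands (shown here against a throwaway repo so they’re safe to try; the commit messages and branch name are placeholders):

    ```shell
    set -e
    # set up a disposable repo so these commands are safe to run anywhere
    repo=$(mktemp -d) && cd "$repo" && git init -q
    git config user.email "you@example.com" && git config user.name "You"
    git commit -q --allow-empty -m "wip"   # a commit with a bad message

    # fix 1: reword the most recent commit to follow the commit format
    git commit -q --amend -m "fix: resolve crash when config file is missing"
    git log -1 --pretty=%s                 # shows the corrected subject

    # fix 2, for older commits: run `git rebase -i HEAD~3` and change
    # "pick" to "reword" beside each commit that needs a new message.
    # because history changed, update the PR branch afterwards with:
    #   git push --force-with-lease origin <your-branch>
    ```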

    What’s next?

    In truth, most of these items are straightforward and not much work to implement. Sure, I wouldn’t drop everything I’m doing and add them to all my projects at once, but certainly I’d include them in each active project as I work on it.

    1. Add issue and pull request templates.
    2. Add ghooks and validate-commit-msg with standard language (most if not all of my projects are node-based).
    3. Either make adding a test super easy, or at least include sample tests (for unit testing and potentially for integration testing).
    4. Add a contributing document that includes notes about commit format, tests, and anything that can make the contributing process smoother.

    Finally, I (and we) always need to keep in mind that when someone has taken time out of their day to contribute code to our projects—whatever the state of the pull request—it’s a big deal.

    It takes commitment to contribute. Let’s show some love for that.

  • This week's sponsor: JIRA 

    Thanks to our sponsor JIRA. Try JIRA for free today.

  • Once Upon a Time 

    Once upon a time, I had a coworker named Bob who, when he needed help, would start the conversation in the middle and work to both ends. My phone would ring, and the first thing I heard was: “Hey, so, we need the spreadsheets on Tuesday so that Information Security can have them back to us in time for the estimates.”

    Spreadsheets? Estimates? Bob and I had never discussed either. As I had been “discouraged” from responding with “What the hell are you talking about now?” I spent the next 10 minutes of every Bob call trying to tease out the context of his proclamations.

    Clearly, Bob needed help—and not just with spreadsheets.

    Then there was Susan. When Susan wanted help, she gave me the entire life story of a project in the most polite, professional language possible. An email from Susan might go like this:

    Good morning,

    I’m working on the Super Bananas project, which we started three weeks ago and have been slowly working on since. We began with persona writing, then did some scenarios, and discussed a survey.

    [Insert two more paragraphs of the history of the project]

    I’m hoping—if you have the opportunity (due to your previous experience with [insert four of my last projects in chronological order])—you may be able to share a content-inventory template that would be appropriate for this project. If it isn’t too much trouble, when you get a chance, could you forward me the template at your earliest convenience?

    Thank you in advance for your cooperation,


    An email that said, “Hey do you have a content-inventory template I could use on the Super Bananas Project?” would have sufficed, but Susan wanted to be professional. She believed that if I had to ask a question, she had failed to communicate properly. And, of course, that failure would weigh heavy on all our heads.

    Bob and Susan were as opposite as the tortoise and the hare, but they shared a common problem. Neither could get over the river and through the woods effectively. Specifically, they were both lousy at establishing context and getting to the point.

    We all need the help of others to build effective tools and applications. Communication skills are so critical to that endeavor that we’ve seen article after article after article—not to mention books, training classes, and job postings—stressing the importance of communication skills. Without the ability to communicate, we can neither build things right, nor build the right things, for our clients and our users.

    Still, context-setting is a tricky skill to learn. Stray too far toward Bob, and no one knows what we’re talking about. Follow Susan’s example, and people get bored and wander off before we get to the point.

    Whether we’re asking a colleague for help or nudging an end user to take action, we want them to respond a certain way. And whether we’re writing a radio ad, publishing a blog post, writing an email, or calling a colleague, we have to set the proper level of context to get the result we want.

    The most effective technique I’ve found for beginners is a process I call “Once Upon a Time.”

    Fairy tales? Seriously?

    Fairy tales are one of our oldest forms of folklore, with evidence indicating that they may stretch back to the Roman Empire. The prelude “Once upon a time” dates to 1380, according to the Oxford English Dictionary. Wikipedia lists over 75 language variations of the stock story opener. It’s safe to say that the vast majority of us, regardless of language or culture, have heard our share of fairy tales, from the 1800s-era Brothers Grimm stories to the 1987 musical Into the Woods.

    We know how they go:

    Once upon a time, there was a [main character] living in [this situation] who [had this problem]. [Some person] knows of this need and sends the [main character] out to [complete these steps]. They [do things] but it’s really hard because [insert challenges]. They overcome [list of challenges], and everyone lives happily ever after.

    Fairy tales are effective oral storytelling techniques precisely because they follow a standard structure that always provides enough context to understand the story. Almost everything we do can be described with this structure.

    Once upon a time Anne lacked an ice cream sandwich. This forced her to get off the couch and go to the freezer, where food stayed amazingly cold. She was forced to put her hands in the icy freezer to dig the ice cream sandwich box out of the back. She overcame the cold and was rewarded with a tasty ice cream sandwich! And they all lived happily ever after.

    The structure of a fairy tale’s beginning has a lot of similarities to the journalistic Five Ws of basic information gathering: Who? What? When? Where? Why? How?

    In our communication construct, we are the main character whose situation and problem need to be succinctly described. We’ve been sent out to do a thing, we’ve hit a challenge, and now we need specific help to overcome the challenge.
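
The construct above maps cleanly onto a fill-in template. Here is a minimal sketch in Python; the field names (situation, trigger, problem, request) are my own labels for the article's structure, not anything official:

```python
# A sketch of the "Once upon a time" request structure as a fill-in template.
# Field names are illustrative assumptions, not part of the article.
TEMPLATE = (
    "Once upon a time, {situation}. "
    "{trigger}. "
    "This is a problem because {problem}. "
    "{request}?"
)

def build_request(situation, trigger, problem, request):
    """Assemble a context-setting request from its four parts."""
    message = TEMPLATE.format(
        situation=situation, trigger=trigger, problem=problem, request=request
    )
    # As the article notes, the only edit needed before sending
    # is removing the opener itself.
    message = message.replace("Once upon a time, ", "", 1)
    return message[0].upper() + message[1:]

print(build_request(
    "the Bananas team asked me to do the content strategy for their project",
    "Bob suggested I contact you",
    "we don't have a template for content inventories",
    "Do you have a template you can send us",
))
```

The point of the drill isn't the code, of course; it's that forcing every request through the same four slots makes missing context obvious.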

    How does this help me if I’m a Bob or a Susan?

    When Bob wanted to tell his story, he didn’t start with “Once upon a time…” He started halfway through the story. If Bob was Little Red Riding Hood, he would have started by saying, “We need scissors and some rocks.” (Side note: the general lack of knowledge about how surgery works in that particular tale gives me chills.)

    When Susan wanted to tell her story, she started before “Once upon a time…” If she was Little Red Riding Hood, she started by telling you how her parents met, how long they dated, and so on, before finally getting around to mentioning that she was trapped in a wolf’s stomach.

    When we tell our stories, we have to start at the beginning—not too early, not too late. If we’re Bob, that means making sure we’ve relayed the basic facts: who we are, what our goal is, possibly who sent us, and what our challenge is. If we’re Susan, we need to make sure we limit ourselves to the facts we actually need.

    This is where we take the fairy-tale format and put it into the first person. Susan might write:

    Once upon a time, the Bananas team asked me to do the content strategy for their project. We made good progress until we had this problem: we don’t have a template for content inventories. Bob suggested I contact you. Do you have a template you can send us?

    Bob might say:

    Once upon a time, you and I were working on the data mapping of the new Information Security application. Then Information Security asked us to send the mapping to them so they could validate it. This is a problem because we only have until Tuesday to give them the unfinished spreadsheets. Otherwise we’ll hit an even bigger problem: we won’t be able to estimate the project size on Friday without the spreadsheet. Can you help me get the spreadsheet to them on time?

    Notice the parallels between the fairy tales and these drafts: we know the main character, their situation, who sent them or triggered their move, and what they need to solve their problem. In Bob’s case, this is much more information than he usually provides. In Susan’s, it’s probably much less. In both cases, we’ve distilled the situation and the request down to the basics. In both cases, the only edit needed is to remove “Once upon a time…” from the first sentence, and it’s ready to go.

    But what about…?

    Both the Bobs and the Susans I’ve worked with have had questions about this technique, especially since in both cases they thought they were already doing a pretty good job of providing context.

    The original Susan had two big concerns that led her to giving out too much information. The first was that she’d sound unprofessional if she didn’t include every last detail and nuance of business etiquette. The second was that if her recipient had questions, they’d consider her amateurish for not providing every bit of information up front.

    Susans of the world, let me assure you: clear, concise communication is professional. That doesn’t mean dropping “please” and “thank you”; it means that “If it isn’t too much trouble, when you get a chance, could you please consider…” is probably overkill.

    Beyond that, no one can anticipate every question another person might have. Clear communication starts a dialogue by covering the basics and inviting questions. It also saves time; you only have to answer the questions your colleague or reader actually has. If you’re not sure whether to keep a piece of information in your story, take it out and see if the tale still makes sense.

    Bob was a tougher nut to crack, in part because he frequently didn’t realize he was starting in the middle. Bob was genuinely baffled that colleagues hadn’t read his mind to know what he was talking about. He thought he just needed the answer to one “quick” question. Once he was made aware that he was confusing—and sometimes annoying—coworkers, he could be brought back on track with gentle suggestions. “Okay Bob, let’s start over. Once upon a time you were…?”

    Begin at the beginning and stop at the end

    Using the age-old format of “Once upon a time…” gives us an incredibly sturdy framework to use for requesting action from people. We provide all of the context they need to understand our request, as well as a clear and concise description of that request.

    Clear, concise, contextual communication is professional, efficient, and much less frustrating to everyone involved, so it pays to build good habits, even if the basis of those habits seems a bit corny.

    Do you really need to start with “Once upon a time…” to tell a story or communicate a request? Well, it doesn’t hurt. The phrase is really a marker that you’re changing the way you think about your writing, for whom you’re writing it, and what you expect to gain. Soup doesn’t require stones, and business communication doesn’t require “Once upon a time…”

    But it does lead to more satisfying endings.

    And they all lived happily ever after.

  • This week's sponsor: FullStory 

    With our sponsor FullStory, you get a pixel-perfect session playback tool that helps answer any question about your customer’s online experience. One easy-to-install script captures everything you need.

  • The Rich (Typefaces) Get Richer 

    There are over 1,200 font families available on Typekit. Anyone with a Typekit plan can freely use any of those typefaces, and yet we see the same small selection used absolutely everywhere on the web. Ever wonder why?

    The same phenomenon happens with other font services like Google Fonts and MyFonts. Google Fonts offers 708 font families, but we can’t browse the web for 15 minutes without encountering Open Sans and Lato. MyFonts has over 20,000 families available as web fonts, yet designers consistently reach for only a narrow selection of those.

    On my side project Typewolf, I curate daily examples of nice type in the wild. Here are the ten most popular fonts from 2015:

    1. Futura
    2. Aperçu
    3. Proxima Nova
    4. Gotham
    5. Brown
    6. Avenir
    7. Caslon
    8. Brandon Grotesque
    9. GT Walsheim
    10. Circular

    And here are the ten most popular from 2014:

    1. Brandon Grotesque
    2. Futura
    3. Avenir
    4. Aperçu
    5. Proxima Nova
    6. Franklin Gothic
    7. GT Walsheim
    8. Gotham
    9. Circular
    10. Caslon

    Notice any similarities? Nine out of the ten fonts from 2014 made the top ten again in 2015. Admittedly, Typewolf is a curated showcase, so there is bound to be some bias in the site selection process. But with 365 sites featured in a year, I think Typewolf is a solid representation of what is popular in the design community.
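
The overlap between the two lists is easy to verify directly. A quick sketch using the rankings above:

```python
# Year-over-year overlap of Typewolf's top-ten lists (data from the article).
top_2015 = ["Futura", "Aperçu", "Proxima Nova", "Gotham", "Brown",
            "Avenir", "Caslon", "Brandon Grotesque", "GT Walsheim", "Circular"]
top_2014 = ["Brandon Grotesque", "Futura", "Avenir", "Aperçu", "Proxima Nova",
            "Franklin Gothic", "GT Walsheim", "Gotham", "Circular", "Caslon"]

returning = set(top_2014) & set(top_2015)
print(len(returning))             # nine of the ten fonts return
print(set(top_2014) - returning)  # the one 2014 font that dropped out
```

Only Franklin Gothic fell off the list, displaced by Brown.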

    Other lists of popular fonts show similar results. Or simply look around the web and take a peek at the CSS—Proxima Nova, Futura, and Brandon Grotesque dominate sites today. And these fonts aren’t just a little more popular than other fonts—they are orders of magnitude more popular.

    When it comes to typefaces, the rich get richer

    I don’t mean to imply that type designers are getting rich like Fortune 500 CEOs and flying around to type conferences in their private Learjets (although some type designers are certainly doing quite well). I’m just pointing out that a tiny percentage of fonts get the lion’s share of usage and that these “chosen few” continue to become even more popular.

    The rich get richer phenomenon (also known as the Matthew Effect) refers to something that grows in popularity due to a positive feedback loop. An app that reaches number one in the App Store will receive press because it is number one, which in turn will give it even more downloads and even more press. Popularity breeds popularity. For a cogent book that discusses this topic much more eloquently than I ever could, check out Nassim Nicholas Taleb’s The Black Swan.

    But back to typefaces.

    Designers tend to copy other designers. There’s nothing wrong with that—designers should certainly try to build upon the best practices of others. And they shouldn’t be culturally isolated and unaware of current trends. But designers also shouldn’t just mimic everything they see without putting thought into what they are doing. Unfortunately, I think this is what often happens with typeface selection.

    How does a typeface first become popular, anyway?

    I think it all begins with a forward-thinking designer who takes a chance on a new typeface. She uses it in a design that goes on to garner a lot of attention. Maybe it wins an award and is featured prominently in the design community. Another designer sees it and thinks, “Wow, I’ve never seen that typeface before—I should try using it for something.” From there it just cascades into more and more designers using this “new” typeface. But with each use, less and less thought goes into why they are choosing that particular typeface. In the end, it’s just copying.

    Or, a typeface initially becomes popular simply from being in the right place at the right time. When you hear stories about famous YouTubers, there is one thing almost all of them have in common: they got in early. Before the market is saturated, there’s a much greater chance of standing out; your popularity is much more likely to snowball. A few of the most popular typefaces on the web, such as Proxima Nova and Brandon Grotesque, tell a similar story.

    The typeface Gotham skyrocketed in popularity after its use in Obama’s 2008 presidential campaign. But although it gained enormous steam in the print world, it wasn’t available as a web font until 2013, when the company then known as Hoefler & Frere-Jones launched its subscription web font service. Proxima Nova, a typeface with a similar look, became available as a web font early, when Typekit launched in 2009. Proxima Nova is far from a Gotham knockoff—an early version, Proxima Sans, was developed before Gotham—but the two typefaces share a related, geometric aesthetic. Many corporate identities used Gotham, so when it came time to bring that identity to the web, Proxima Nova was the closest available option. This pushed Proxima Nova to the top of the bestseller charts, where it remains to this day.

    Brandon Grotesque probably gained traction for similar reasons. It has quite a bit in common with Neutraface, a typeface that is ubiquitous in the offline world—walk into any bookstore and you’ll see it everywhere. Brandon Grotesque was available early on as a web font with simple licensing, whereas Neutraface was not. If you wanted an art-deco-inspired geometric sans serif with a small x-height for your website, Brandon Grotesque was the obvious choice. It beat Neutraface to market on the web and is now one of the most sought-after web fonts.

    Once a typeface reaches a certain level of popularity, it seems likely that a psychological phenomenon known as the availability heuristic kicks in. According to the availability heuristic, people place much more importance on things that they are easily able to recall. So if a certain typeface immediately comes to mind, then people assume it must be the best option.

    For example, Proxima Nova is often thought of as incredibly readable for a sans serif due to its large x-height, low stroke contrast, open apertures, and large counters. And indeed, it works very well for setting body copy. However, there are many other sans serifs that fit that description—Avenir, FF Mark, Gibson, Texta, Averta, Museo Sans, Sofia, Lasiver, and Filson, to name a few. There’s nothing magical about Proxima Nova that makes it more readable than similar typefaces; it’s simply the first one that comes to mind for many designers, so they can’t help but assume it must be the best.

    On top of that, the mere-exposure effect suggests that people tend to prefer things simply because they are more familiar with them—the more someone encounters Proxima Nova, the more appealing they tend to find it.

    So if we are stuck in a positive feedback loop where popular fonts keep becoming even more popular, how do we break the cycle? There are a few things designers can do.

    Strive to make your brand identifiable by just your body text

    Even if it’s just something subtle, aim to make the type on your site unique in some way. If a reader can tell they are interacting with your brand solely by looking at the body of an article, then you are doing it right. This doesn’t mean that you should completely lose control and use type just for the sole purpose of standing out. Good type, some say, should be invisible. (Some say otherwise.) Show restraint and discernment. There are many small things you can do to make your type distinctive.

    Besides going with a lesser-used typeface for your body text, you can try combining two typefaces (or perhaps three, if you’re feeling frisky) in a unique way. Headlines, dates, bylines, intros, subheads, captions, pull quotes, and block quotes all offer ample opportunity for experimentation. Try using heavier and lighter weights, italics and all-caps. Using color is another option. A subtle background color or a contrasting subhead color can go a long way in making your type memorable.

    Don’t make your site look like a generic website template. Be a brand.

    Dig deeper on Typekit

    There are many other high-quality typefaces available on Typekit besides Proxima Nova and Brandon Grotesque. Spend some time browsing through their library and try experimenting with different options in your mockups. The free plan that comes with your Adobe Creative Cloud subscription gives you access to every single font in their library, so you have no excuse not to at least try to discover something that not everyone else is using.

    A good tip is to start with a designer or foundry you like and then explore other typefaces in their catalog. For example, if you’re a fan of the popular slab serif Adelle from TypeTogether, simply click the name of their foundry and you’ll discover gems like Maiola and Karmina Sans. Don’t be afraid to try something that you haven’t seen used before.

    Dig deeper on Google Fonts (but not too deep)

    As of this writing, there are 708 font families available for free on Google Fonts. There are a few dozen or so really great choices. And then there are many, many more not-so-great choices that lack italics and additional weights and that are plagued by poor kerning. So, while you should be wary of digging too deep on Google Fonts, there are definitely some less frequently used options, such as Alegreya and Fira Sans, that can hold their own against any commercial font.
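
That screening—does a family ship italics and enough weights?—can be automated against font metadata. A sketch using made-up sample records; the real Google Fonts API returns richer data than this:

```python
# Screen font families for italics and multiple weights before shortlisting.
# The sample metadata below is invented for illustration only.
families = [
    {"family": "Alegreya",  "variants": ["regular", "italic", "500", "500italic",
                                         "700", "700italic", "900", "900italic"]},
    {"family": "Fira Sans", "variants": ["300", "regular", "italic", "500", "700"]},
    {"family": "Lobster",   "variants": ["regular"]},
]

def looks_usable(font, min_weights=3):
    """A family passes if it has at least one italic and several weights."""
    has_italic = any("italic" in v for v in font["variants"])
    # Strip the "italic" suffix so "700italic" and "700" count as one weight.
    weights = {v.replace("italic", "") or "regular" for v in font["variants"]}
    return has_italic and len(weights) >= min_weights

print([f["family"] for f in families if looks_usable(f)])
```

A filter like this won't catch poor kerning, but it quickly removes the single-weight, no-italic families that fail in body copy.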

    I fully support the open-source nature of Google Fonts and think that making good type accessible to the world for free is a noble mission. As time goes by, though, the good fonts available on Google Fonts will simply become the next Times New Romans and Arials—fonts that have become so overused that they feel like mindless defaults. So if you rely on Google Fonts, there will always be a limit to how unique and distinctive your brand can be.

    Try another web font service such as Fonts.com, Cloud.typography or Webtype

    Typekit may have a great selection, but it certainly doesn’t have everything. The Fonts.com library dwarfs the Typekit library, with over 40,000 fonts available. Hoefler & Co.’s high-quality collection of typefaces is only available through their Cloud.typography service. And Webtype offers selections not available on other services.

    Self-host fonts from MyFonts, FontShop or Fontspring

    Don’t be afraid to self-host web fonts. Serving fonts from your own website really isn’t that difficult and it’s still possible to have a fast-loading website if you self-host. I self-host fonts on Typewolf and my Google PageSpeed Insights scores are 90/100 for mobile and 97/100 for desktop—not bad for an image-heavy site.

    MyFonts, FontShop, and Fontspring all offer self-hosting kits that are surprisingly easy to set up. Self-hosting also offers the added benefit of not having to rely on a third-party service that could potentially go down (and take your beautiful typography with it).

    Explore indie foundries

    Many small and/or independent foundries don’t make their fonts available through the major distributors, instead choosing to offer licensing directly through their own sites. In most cases, self-hosting is the only available option. But again, self-hosting isn’t difficult and most foundries will provide you with all the sample code you need to get up and running.

    Here are some great places to start, in no particular order:

    What about Massimo Vignelli?

    Before I wrap this up, I think it’s worth briefly discussing famed designer Massimo Vignelli’s infamous handful-of-basic-typefaces advice (PDF). John Boardley of I Love Typography has written an excellent critique of Vignelli’s dogma. The main points are that humans have a constant desire for improvement and refinement; we will always need new typefaces, not just so that brands can differentiate themselves from competitors, but to meet the ever-shifting demands of new technologies. And a limited variety of type would create a very bland world.

    No doubt there were those in the 16th century who shared Vignelli’s views. Every age is populated by those who think we’ve reached the apogee of progress… Vignelli’s beloved Helvetica … would never have existed but for our desire to do better, to progress, to create.
    John Boardley, “The Vignelli Twelve”

    Are web fonts the best choice for every website?

    Not necessarily. There are some instances where accessibility and site speed considerations may trump branding—in that case, it may be best just to go with system fonts. Georgia is still a pretty great typeface, and so are newer system UI fonts like San Francisco, Roboto/Noto, and Segoe.

    But if you’re working on a project where branding is important, don’t ignore the importance of type. We’re bombarded by more content now than at any other time in history; having a distinctive brand is more critical than ever.

    90 percent of design is typography. And the other 90 percent is whitespace.
    Jeffrey Zeldman, “The Year in Design”

    As designers, ask yourselves: “Is this truly the best typeface for my project? Or am I just using it to be safe, or out of laziness? Will it make my brand memorable, or will my site blend in with every other site out there?” The choice is yours. Dig deep, push your boundaries, and experiment. There are thousands of beautiful and functional typefaces out there—go use them!

  • Never Show A Design You Haven’t Tested On Users 

    It isn’t hard to find a UX designer to nag you about testing your designs with actual users. The problem is, we’re not very good at explaining why you should do user testing (or how to find the time). We say it like it’s some accepted, self-explanatory truth that deep down, any decent human knows is the right thing to do. Like “be a good person” or “be kind to animals.” Of course, if it was that self-evident, there would be a lot more user testing in this world.

    Let me be very specific about why user testing is essential. As long as you’re in the web business, your work will be exposed to users.

    If you’re already a user-testing advocate, that may seem obvious, but we often miss something that’s not as clear: how user testing impacts stakeholder communication and how we can ensure testing is built into projects, even when it seems impossible.

    The most devilish usability issues are those that haven’t even occurred to you as potential problems; you won’t find all the usability issues just by looking at your design. User testing is a way to be there when it happens, to make sure the stuff you created actually works as you intended, because best practices and common sense will get you only so far. You need to test if you want to innovate; otherwise, it’s difficult to know whether people will get it. Or want it. It’s how you find out whether you’ve created something truly intuitive.

    How testing up front saves the day

    Last fall, I was going to meet with one of our longtime clients, the charity and NGO Plan International Norway. We had an idea for a very different sign-up form than the one they were using. What they already had worked quite well, so any reasonable client would be a little skeptical. Why fix it if it isn’t broken, right? Preparing for the meeting, we realized our idea could be voted down before we had the chance to try it out.

    We decided to quickly put together a usability test before we showed the design.

    At the meeting, we began by presenting the results of the user test rather than the design itself.

    We discussed what worked well, and what needed further improvement. The conversation that followed was rational and constructive. Together, we and our partners at Plan discussed different ways of improving the first design, rather than nitpicking details that weren’t an issue in the test. It turned out to be one of the best client meetings I’ve ever had.

    Panels of photos depicting the transition from hand-drawn sketch to digital mockup

    We went from paper sketch to Illustrator sketch to InVision in a day in order to get ready for the test.

    User testing gives focus to stakeholder feedback

    Naturally, stakeholders in any project feel responsible for the end result and want to discuss suggestions, solutions, and any concerns about your design. By testing the design beforehand, you can focus on the real issues at hand.

    Don’t worry about walking into your client meeting with a few unsolved problems. You don’t need to have a solution for every user-identified issue. The goal is to show your design, make clear what you think needs fixing, and ideally, bring a new test of the improved design to the next meeting.

    By testing and explaining the problems you’ve found, stakeholders can be included in suggesting solutions, rather than hypothesizing about what might be problems. This also means that they can focus on what they know and are good at. How will this work with our CRM system? Will we be able to combine this approach with our annual campaign?

    Since last fall, I’ve been applying this dogma in all the work that I do: never show a design you haven’t tested. We’ve reversed the agenda to present results first, then a detailed walkthrough of the design. So far, our conversations about design and UX have become a lot more productive.

    Making room for user testing: sell it like you mean it

    Okay, so it’s a good idea to test. But what if the client won’t buy it or the project owner won’t give you the resources? User testing can be a hard sell—I know this from experience. Here are four ways to move past objections.

    Don’t make it optional

    It’s not unusual to look at the total sum in a proposal, and go, Uhm, this might be a little too much.  So what typically happens? Things that don’t seem essential get trimmed. That usability lab test becomes optional, and we convince ourselves that we’ll somehow persuade the client later that the usability test is actually important.

    But how do you convince them that something you made optional a couple of months ago is now really important? The client will likely feel that we’re trying to sell them something they don’t really need.

    Describe the objective, not the procedure

    A usability lab test with five people often produces valuable—but costly—insight. It also requires resources that don’t go into the test itself: e.g., recruiting and rewarding test subjects, rigging your lab and observation room, making sure the observers from the client are well taken care of (you can’t do that if you’re the one moderating the test), and so on.

    Today, rather than putting “usability lab test with five people” in the proposal, I’ll dedicate a few days to: “Quality assurance and testing: We’ll use the methods we deem most suitable at different stages of the process (e.g., usability lab test, guerilla testing, click tests, pluralistic walkthroughs, etc.) to make sure we get it right.”

    I have never had a client ask me to scale down the “get it right” part. And even if they do ask you to scale it down, you can still pull it off if you follow the next steps.

    Scale down documentation—not the testing

    If you think testing takes too much time, it might be because you spend too much time documenting the test. In a lab test, it’s a good idea to have 20 to 30 minutes between each test subject. This gives you time to summarize (and maybe even fix) the things you found in each test before you move on to the next subject. By the end of the day, you have a to-do list. No need to document it any more than that.

    List of update notifications in the Slack channel

    When user testing the Norwegian Labour Party’s new crowdsourcing site, we all contributed our observations straight into our shared Slack channel.

    I’ve also found InVision’s comment mode useful for documenting issues discovered in the tests. If we have an HTML and CSS prototype, screenshots of the relevant pages can be added to InVision, with comments placed on top of the specific issues. This also makes it easy for the client to contribute to the discussion.

    Screen capture of InVision mockup, with comments from team members attached to various parts of the design

    After the test is done, we’ve already fixed some of the problems. The rest ends up in InVision as a to-do on the relevant page. The prototype is actually in HTML, CSS, and JavaScript, but the visual aspect of InVision’s comment feature makes it much easier to avoid misunderstandings.

    Scale down the prototype—not the testing

    You don’t need a full-featured website or a polished prototype to begin testing.

    • If you’re testing text, you really just need text.
    • If you’re testing a form, you just need to prototype the form.
    • If you wonder if something looks clickable, a flat Photoshop sketch will do.
    • Even a paper sketch will work to see if you’re on the right track.

    And if you test at this early stage, you’ll waste much less time later on.

    Low-cost, low-effort techniques to get you started

    You can do this. Now, I’m going to show you some very specific ways you can test, and some examples from projects I’ve worked on.

    Pluralistic walkthrough

    • Time: 15 minutes and up
    • Costs: Free

    A pluralistic walkthrough is UX jargon for asking experts to go through the design and point out potential usability issues. But putting five experts in a room for an hour is expensive (and takes time to schedule). Fortunately, getting them in the same room isn’t always necessary.

    At the start of a project, I put sketches or screenshots into InVision and post it in our Slack channels and other internal social media. I then ask my colleagues to spend a couple of minutes critiquing it. As easy as that, you’ll be able to weed out (or create hypotheses about) the biggest issues in your design.

    Team member comments posted on InVision mockup

    Before the usability test, we asked colleagues to comment (using InVision) on what they thought would work or not.

    Hit the streets

    • Time: 1–3 hours
    • Costs: Snacks

    This is a technique that works well if there’s something specific you want to test. If you’re shy, take a deep breath and get over it. This is by far the most effective way of usability testing if you’re short on resources. In the Labour Party project, we were able to test with seven people and summarize our findings within two hours. Here’s how:

    1. Get a device that’s easy to bring along. In my experience, an iPad is most approachable.
    2. Bring candy and snacks. It works great to carry a basket of snacks, with the iPad resting on top.
    3. Go to a public place with lots of people, preferably a place where people might be waiting (e.g., a station of some sort).
    4. Approach people who look like they are bored and waiting; have your snacks (and iPad) in front of you, and say: “Excuse me, I’m from [company]. Could I borrow a couple of minutes from you? I promise it won’t take more than five minutes. And I have candy!” (This works in Norway, and I’m pretty sure food is a universal language.) If you’re working in teams of two, one of you should stay in the background during the approach.
    5. If you’re alone, take notes in between each test. If there are two of you, one person can focus on taking notes while the other is moderating, but it’s still a good idea to summarize between each test.

    Two people standing in a public transportation hub, holding a large basket and an iPad

    Morten and Ida are about to go to the Central Station in Oslo, Norway, to test the Norwegian Labour Party’s new site for crowdsourcing ideas. Don’t forget snacks!

    Online testing tools

    • Time: 30 minutes and up
    • Costs: Most tools have limited free versions. Optimal Workshop charges $149 for one survey and offers a yearly plan for $1,990.

    There isn’t any digital testing tool that can provide the kind of insight you get from meeting real users face-to-face. Nevertheless, digital tools are a great way of going deeper into specific themes to see if you can corroborate and triangulate the data from your usability test.

    There are many tools out there, but my two favorites are Treejack and Chalkmark from Optimal Workshop. With Treejack, it rarely takes more than an hour to figure out whether your menus and information architecture are completely off or not. With click tests like Chalkmark, you can quickly get a feel for whether people understand what’s clickable or not.

    Screencapture of Illustrator mockup

    A Chalkmark test of an early Illustrator mockup of Plan’s new home page. The survey asks: “Where would you click to send a letter to your sponsored child?” The heatmap shows where users clicked.
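
Under the hood, a click test like this reduces to counting clicks inside a target region. A minimal sketch, assuming normalized pixel coordinates and a rectangular target; neither assumption reflects Chalkmark's actual export format:

```python
def hit_rate(clicks, target):
    """Share of clicks landing inside a rectangular target region.

    clicks: list of (x, y) points; target: (left, top, right, bottom).
    This mimics the pass/fail summary of a first-click test; the data
    format here is an illustrative assumption.
    """
    left, top, right, bottom = target
    hits = sum(1 for x, y in clicks
               if left <= x <= right and top <= y <= bottom)
    return hits / len(clicks) if clicks else 0.0

# Hypothetical example: 3 of 4 participants found the "send a letter" link.
clicks = [(120, 80), (125, 82), (118, 79), (400, 300)]
print(hit_rate(clicks, (100, 60, 140, 100)))  # → 0.75
```

The heatmap is just this calculation made visual: dense clusters inside the target mean the design communicates, clusters elsewhere mean it doesn't.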

    Diagram combining pie charts and paths

    Nothing kills arguments over menus like this baby. With Treejack, you recreate the information architecture within the survey and give users a task to solve. Here we’ve asked: “You wonder how Plan spends its funds. Where would you search for that?” The results are presented as a tree of the paths the users took.

    Using existing audience for experiments

    • Time: 30 minutes and up
    • Costs: Free (e.g., using Hotjar and Google Analytics).

    One of the things we designed for Plan was longform article pages, binding together a compelling story of text, images, and video. It struck us that these wouldn’t really fit in a usability test. What would the task be? Read the article? And what were the relevant criteria? Time spent? How far he or she scrolled? But what if the person recruited to the test wasn’t interested in the subject? How would we know if it was the design or the story that was the problem, if the person didn’t act as we hoped?

    Since we had used actual content and photos (no lorem ipsum!), we figured that users wouldn’t notice the difference between a prototype and the actual website. What if we could somehow see whether people actually read the article when they stumbled upon it in its natural context?

    The solution was for Plan to share the link to the prototyped article as if it were a regular link to their website, not mentioning that it was a prototype.

    The prototype was set up with Hotjar and Google Analytics. In addition, we had the stats from Facebook Insights. This allowed us to see whether people clicked the link, how much time they spent on the page, how far they scrolled, what they clicked, and even what they did on Plan’s main site if they came from the prototyped article. From this we could surmise that there was no indication of visual barriers (e.g., a big photo making the user think the page was finished), and that the real challenge was actually getting people to click the link in the first place.

    Side-by-side images showing the design and the heatmap resulting from user testing

    On the left is the Facebook update from Plan. On the right is the heat map from Hotjar, showing how far people scrolled, with no clear drop-out point.

    Did you get it done? Was this useful?

    • Time: A few days or a week to set up, but basically no time spent after that
    • Costs: No cost if you build your own; Task Analytics from $950 a month

    Sometimes you need harder, bigger numbers to be convincing. This often leads people to A/B testing or Google Analytics, but unless what you’re looking for is increasing a very specific conversion, even these tools can come up short. Often you’d gain more insight looking for something of a middle ground between the pure quantitative data provided by tools like Google Analytics, and the qualitative data of usability tests.

    “Was it helpful?” modules are one of those middle-ground options I try to implement in almost all of my projects. Using tools like Google Tag Manager, you can even combine the data, letting you see the pages that have the most “yes” and “no” votes on different parts of your website (content governance dream come true, right?). But the qualitative feedback is also incredibly valuable for suggesting specific things your design is lacking.

    Feedback submission buttons

    “Was this article helpful?” or “Did you find what you were looking for?” are simple questions that can give valuable insight.
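
    At its simplest, such a module is just a small form. The sketch below assumes a hypothetical /feedback endpoint that records the vote; swap in whatever your backend provides:

    ```html
    <!-- "Was this helpful?" module; the action URL is a placeholder -->
    <form method="POST" action="/feedback">
    	<p>Was this article helpful?</p>
    	<button type="submit" name="helpful" value="yes">Yes</button>
    	<button type="submit" name="helpful" value="no">No</button>
    </form>
    ```

    The submitting button’s name/value pair travels with the form data, so the server can tally yes and no votes per page without any script on the client.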

    This technique falls short if your users weren’t able to find a relevant article. Those folks aren’t going to leave feedback—they’re going to leave. Google Analytics isn’t of much help there, either. That high bounce rate? In most cases you can only guess why. Did they come and go because they found their answer straight away, or because the page was a total miss? Did they spend a lot of time on the page because it was interesting, or because it was impossible to understand?

    My clever colleagues made a tool to answer those kinds of questions. When we do a redesign, we run a Task Analytics survey both before and after launch to figure out not only what the top tasks are, but whether or not people were able to complete their task.

    When the user arrives, they’re asked if they want to help out. Then they’re asked to do whatever they came for and let us know when they’re done. When they’re done, we ask a) “What task did you come to do?” and b) “Did you complete the task?”

    This gives us data that is actionable and easily understood by stakeholders. At our own website, the most common task people arrive for is to contact an employee, and we learned that one in five will fail. We can fix that. And afterward, we can measure whether or not our fix really worked.

    Desktop and mobile screenshots from Task Analytics dashboard

    Why do people come to Netlife Research’s website, and do they complete their task? Screenshot from Task Analytics dashboard.

    Set up a usability lab and have a weekly drop-in test day

    • Time: 6 hours per project tested + time spent observing the test
    • Costs: Rewarding subjects + the minimal costs of setting up a lab

    Setting up a usability lab is basically free in 2016:

    • A modern laptop has a microphone and camera built in. No need to buy that.
    • Want to test on mobile? Get a webcam and a flexible tripod, or just turn your laptop around.
    • Numerous screen-sharing and video conference tools like Skype, Google Hangouts, and GoToMeeting mean there’s no need for hefty audiovisual equipment or mirror windows.
    • Even eyetracking is becoming affordable.

    Other than that, you just need a room that’s big enough for you and a user. So even as a UX team of one, you can afford your own usability lab. Setting up a weekly drop-in test makes sense for bigger teams. If you’re at twenty people or more, I’d bet it would yield a positive return on investment.

    My ingenious colleague Are Halland is responsible for the test each week. He does the recruiting, the lab setup, and the moderating. Each test day consists of tests with four different people, and each person typically gets tasks from two to three different projects that Netlife is currently working on. (Read up on why it makes sense to test with so few people.)

    By testing two to three projects at a time and having the same person organize it, we can cut down on the time spent preparing and executing the test without cutting out the actual testing.

    As a consultant, all I have to do is to let Are know a few days in advance that I need to test something. Usually, I will send a link to the live stream of the test to clients to let them know we’re testing and that they’re welcome to pop in and take a look. A bonus is that clients find it surprisingly rewarding to see other clients’ tests and to get other clients’ views on their own design (we don’t put competitors in the same test).

    This has made it a lot easier to test work on short notice, and it has also reduced the time we have to spend on planning and executing tests.

    Two men sitting at a table and working on laptops, with a large screen in the background to display what they are collaborating on

    From a drop-in usability test with the Norwegian Labour Party. Eyetracking data on the screen, Morten (Labour Party) and Jørgen (front-end designer) taking notes (and instantly fixing stuff!) on the right.

    Testing is designing

    As I hope I’ve demonstrated, user testing doesn’t have to be expensive or time-consuming. So what stops us? Personally, I’ve met two big hurdles: building testing into projects to begin with and making a habit out of doing the work.

    The critical first step is to make sure that some sort of user testing is part of the approved project plan. A project manager will look at the proposal and make sure we tick that off the list. Eventually, maybe your clients will come asking for it: “But wasn’t there supposed to be some testing in this project?”

    Second, you don’t have to ask for anyone’s permission to test. User testing improves not only the quality of our work, but also the communication within teams and with stakeholders. If you’re tasked with designing something, even if you have just a few days to do it, treat testing as a part of that design task. I’ve suggested a couple of ways to do that, even with limited time and funds, and I hope you’ll share even more tips, tricks, and tools in the comments.

  • Meaningful CSS: Style Like You Mean It 

    These days, we have a world of meaningful markup at our fingertips. HTML5 introduced a lavish new set of semantically meaningful elements and attributes, ARIA defined an entire additional platform to describe a rich internet, and microformats stepped in to provide still more standardized, nuanced concepts. It’s a golden age for rich, meaningful markup.

    Yet our markup too often remains a tangle of divs, and our CSS is a morass of classes that bear little relationship to those divs. We nest div inside div inside div, and we give every div a stack of classes—but when we look in the CSS, our classes provide little insight into what we’re actually trying to define. Even when we do have semantic and meaningful markup, we end up redefining it with CSS classes that are inherently arbitrary. They have no intrinsic meaning.

    We were warned about these patterns years ago:

    In a site afflicted by classitis, every blessed tag breaks out in its own swollen, blotchy class. Classitis is the measles of markup, obscuring meaning as it adds needless weight to every page.
    Jeffrey Zeldman, Designing with Web Standards, 1st ed.

    Along the same lines, the W3C weighed in with:

    CSS gives so much power to the “class” attribute, that authors could conceivably design their own “document language” based on elements with almost no associated presentation (such as DIV and SPAN in HTML) and assigning style information through the “class” attribute… Authors should avoid this practice since the structural elements of a document language often have recognized and accepted meanings and author-defined classes may not. (emphasis mine)

    So why, exactly, does our CSS abuse classes so mercilessly, and why do we litter our markup with author-defined classes? Why can’t our CSS be as semantic and meaningful as our markup? Why can’t both be more semantic and meaningful, moving forward in tandem?

    Building better objects

    A long time ago, as we emerged from the early days of CSS and began building increasingly larger sites and systems, we struggled to develop some sound conventions to wrangle our ever-growing CSS files. Out of that mess came object-oriented CSS.

    Our systems for safely building complex, reusable components created a metastasizing classitis problem—to the point where our markup today is too often written in the service of our CSS, instead of the other way around. If we try to write semantic, accessible markup, we’re still forced to tack on author-defined meanings to satisfy our CSS. Both our markup and our CSS reflect a time when we could only define objects with what we had: divs and classes. When in doubt, add more of both. It was safer, especially for older browsers, so we oriented around the most generic objects we could find.

    Today, we can move beyond that. We can define better objects. We can create semantic, descriptive, and meaningful CSS that understands what it is describing and is as rich and accessible as the best modern markup. We can define the elephant instead of saying things like .pillar and .waterspout.

    Clearing a few things up

    But before we turn to defining better objects, let’s back up a bit and talk about what’s wrong with our objects today, with a little help from cartoonist Gary Larson.

    Larson once drew a Far Side cartoon in which a man carries around paint and marks everything he sees. “Door” drips across his front door, “Tree” marks his tree, and his cat is clearly labeled “Cat”. Satisfied, the man says, “That should clear a few things up.”

    We are all Larson’s label-happy man. We write <table class="table"> and <form class="form"> without a moment’s hesitation. Looking at GitHub, one can find plenty of examples of <main class="main">. But why? You can’t have more than one main element, so you already know how to reference it directly. The new elements in HTML5 are nearly a decade old now. We have no excuse for not using them well. We have no excuse for not expecting our fellow developers to know and understand them.

    Why reinvent the semantic meanings already defined in the spec in our own classes? Why duplicate them, or muddy them?

    An end-user may not notice or care if you stick a form class on your form element, but you should. You should care about bloating your markup and slowing down the user experience. You should care about readability. And if you’re getting paid to do this stuff, you should care about being the sort of professional who doesn’t write redundant slop. “Why should I care” was the death rattle of those advocating for table-based layouts, too.

    Start semantic

    The first step to semantic, meaningful CSS is to start with semantic, meaningful markup. Classes are arbitrary, but HTML is not. In HTML, every element has a very specific, agreed-upon meaning, and so do its attributes. Good markup is inherently expressive, descriptive, semantic, and meaningful.

    If and when the semantics of HTML5 fall short, we have ARIA, specifically designed to fill in the gaps. ARIA is too often dismissed as “just accessibility,” but really—true to its name—it’s about Accessible Rich Internet Applications. Which means it’s chock-full of expanded semantics.

    For example, if you want to define a top-of-page header, you could create your own .page-header class, which would carry no real meaning. You could use a header element, but since you can have more than one header element, that’s probably not going to work. But ARIA’s [role=banner] is already there in the spec, definitively saying, “This is a top-of-page header.”

    Once you have <header role="banner">, adding an extra class is simply redundant and messy. In our CSS, we know exactly what we’re talking about, with no possible ambiguity.
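
    Styled directly, that landmark needs no class at all. A minimal sketch (the declarations are illustrative placeholders, not recommendations):

    ```css
    /* Target the top-of-page header by its landmark role; no class required */
    [role=banner] {
    	display: flex;
    	align-items: center;
    	padding: 1em 2em;
    }
    ```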

    And it’s not just about those big top-level landmark elements, either. ARIA provides a way to semantically note small, atomic-level elements like alerts, too.

    A word of caution: don’t throw ARIA roles on elements that already have the same semantics. So for example, don’t write <button role="button">, because the semantics are already present in the element itself. Instead, use [role=button] on elements that should look and behave like buttons, and style accordingly:

    button,
    [role=button] {
    	/* shared styles for everything that is, or acts like, a button */
    }

    Anything marked as semantically matching a button will also get the same styles. By leveraging semantic markup, our CSS clearly incorporates elements based on their intended usage, not arbitrary groupings. By leveraging semantic markup, our components remain reusable. Good markup does not change from project to project.

    Okay, but why?


    • If you’re writing semantic, accessible markup already, then you dramatically reduce bloat and get cleaner, leaner, and more lightweight markup. It becomes easier for humans to read and will—in most cases—be faster to load and parse. You remove your author-defined detritus and leave the browser with known elements. Every element is there for a reason and provides meaning.
    • On the other hand, if you’re currently wrangling div-and-class soup, then you score a major improvement in accessibility, because you’re now leveraging roles and markup that help assistive technologies. In addition, you standardize markup patterns, making repeating them easier and more consistent.
    • You’re strongly encouraging a consistent visual language of reusable elements. A consistent visual language is key to a satisfactory user experience, and you’ll make your designers happy as you avoid uncanny-valley situations in which elements look mostly but not completely alike, or work slightly differently. Instead, if it looks like a duck and quacks like a duck, you’re ensuring it is, in fact, a duck, rather than a rabbit.duck.
    • There’s no context-switching between CSS and HTML, because each is clearly describing what it’s doing according to a standards-based language.
    • You’ll have more consistent markup patterns, because the right way is clear and simple, and the wrong way is harder.
    • You don’t have to think of names nearly as much. Let the specs be your guide.
    • It allows you to decouple from the CSS framework du jour.

    Here’s another, more interesting scenario. Typical form markup might look something like this (or worse):

    <form class="form" method="POST" action=".">
    	<div class="form-group">
    		<label for="id-name-field">What’s Your Name</label>
    		<input type="text" class="form-control text-input" name="name-field" id="id-name-field" />
    	</div>
    	<div class="form-group">
    		<input type="submit" class="btn btn-primary" value="Enter" />
    	</div>
    </form>
    And then in the CSS, you’d see styles attached to all those classes. So we have a stack of classes describing that this is a form and that it has a couple of inputs in it. Then we add two classes to say that the button that submits this form is a button, and represents the primary action one can take with this form.

    Common vs. optimal form markup

    • Instead of .form, use form. Most of your forms will—or at least should—follow consistent design patterns. Save additional identifiers for those that don’t. Have faith in your design patterns.
    • Instead of .form-group, use form > p or fieldset > p. The W3C recommends paragraph tags for wrapping form elements: a predictable, recommended pattern.
    • Instead of .form-control or .text-input, use [type=text]. You already know it’s a text input.
    • Instead of .btn and .btn-primary, use [type=submit]. Submitting the form is inherently the primary action.

    Some common vs. more optimal form markup patterns

    In light of all that, here’s the new, improved markup.

    <form method="POST" action=".">
    	<label for="id-name-field">What’s Your Name</label>
    	<input type="text" name="name-field" id="id-name-field" />
    	<button type="submit">Enter</button>
    </form>
    The functionality is exactly the same.
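
    The leaner markup invites equally lean CSS: every selector below targets a standardized element or attribute rather than an author-defined class. The specific declarations are placeholders:

    ```css
    /* Style the recommended form patterns directly; values are illustrative */
    form > p {
    	margin-bottom: 1em;
    }
    [type=text] {
    	padding: 0.5em;
    	border: 1px solid #ccc;
    }
    [type=submit] {
    	background: royalblue;
    	color: white;
    }
    ```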

    Or consider this CSS. You should be able to see exactly what it’s describing and exactly what it’s doing:

    [role=tab] {
    	display: inline-block;
    }
    [role=tab][aria-selected=true] {
    	background: tomato;
    }
    [role=tabpanel] {
    	display: none;
    }
    [role=tabpanel][aria-expanded=true] {
    	display: block;
    }

    Note that [aria-hidden] is more semantic than a utility .hide class, and could also be used here, but aria-expanded seems more appropriate. Neither necessarily needs to be tied to tabpanels, either.
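
    For instance, a single attribute selector can define the hidden state globally; a minimal sketch:

    ```css
    /* Anything marked hidden for assistive technology is hidden visually too */
    [aria-hidden=true] {
    	display: none;
    }
    ```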

    In some cases, you’ll find no element or attribute in the spec that suits your needs. This is the exact problem that microformats and microdata were designed to solve, so you can often press them into service. Again, you’re retaining a standardized, semantic markup and having your CSS reflect that.
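
    As one hypothetical example, schema.org microdata attributes such as itemprop and itemtype can be targeted with ordinary attribute selectors (the specific properties styled here are illustrative):

    ```css
    /* Style microdata straight off its standardized attributes */
    [itemprop=name] {
    	font-weight: bold;
    }
    [itemtype="http://schema.org/Person"] [itemprop=jobTitle] {
    	font-style: italic;
    }
    ```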

    At first glance, it might seem like this would fail in the exact scenario that CSS naming structures were built to suit best: large projects, large teams. This is not necessarily the case. CSS class-naming patterns place rigid demands on the markup that must be followed. In other words, the CSS dictates the final HTML. The significant difference is that with a meaningful CSS technique, the styles reflect the markup rather than the other way around. One is not inherently more or less scalable. Both come with expectations.

    One possible argument might be that ensuring all team members understand the correct markup patterns will be too hard. On the other hand, if there is any baseline level of knowledge we should expect of all web developers, surely that should be a solid working knowledge of HTML itself, not memorizing arcane class-naming rules. If nothing else, the patterns a team follows will be clear, established, well documented by the spec itself, and repeatable. Good markup and good CSS, reinforcing each other.

    To suggest we shouldn’t write good markup and good CSS because some team members can’t understand basic HTML structures and semantics is a cop-out. Our industry can—and should—expect better. Otherwise, we’d still be building sites in tables because CSS layout is supposedly hard for inexperienced developers to understand. It’s an embarrassing argument.

    Probably the hardest part of meaningful CSS is understanding when classes remain helpful and desirable. The goal is to use classes as they were intended to be used: as arbitrary groupings of elements. You’d want to create custom classes most often for a few cases:

    • When there are no existing elements, attributes, or standardized data structures you can use. In some cases, you might truly have an object that the HTML spec, ARIA, and microformats all never accounted for. It shouldn’t happen often, but it is possible. Just be sure you’re not sticking a horn on a horse when you’re defining .unicorn.
    • When you wish to arbitrarily group differing markup into one visual style. In this example, you want objects that are not the same to look like they are. In most cases, they should probably be the same, semantically, but you may have valid reasons for wanting to differentiate them.
    • When you’re building a utility mixin.

    Another concern might be building up giant stacks of selectors. In some cases, building a wrapper class might be helpful, but generally speaking, you shouldn’t have a big stack of selectors, because the elements themselves are semantically different and should not be sharing that many styles. The point of meaningful CSS is that you know from your CSS that button or [role=button] applies to all buttons, but [type=submit] is always the primary action item on the form.

    We have so many more powerful attributes at our disposal today that we shouldn’t need big stacks of selectors. To have them would indicate sloppy thinking about what things truly are and how they are intended to be used within the overall system.

    It’s time to up our CSS game. We can remain dogmatically attached to patterns developed in a time and place we have left behind, or we can move forward with CSS and markup that correspond to defined specs and standards. We can use real objects now, instead of creating abstract representations of them. The browser support is there. The standards and references are in place. We can start today. Only habit is stopping us.