EW Resource


There's a huge range of newsfeeds on the Net bringing up-to-date information and content on a wide range of subjects.

Here are just a few relating to web development.

A List Apart: The Full Feed
  • Lyza Danger Gardner on Building the Web Everywhere: The Implicit Contract 

    I work with lots of different teams and different developers. I usually know innately, as does the team around me, whether the teams we’re working with are good or not. We rarely disagree on the evaluation.

    But what does good mean?

    I find that the most valuable web developers interact with each other along a kind of implicit contract, the tenets of which are based upon web standards and proven ways of doing things that we’ve cobbled together collectively over the years. Most of the time, good isn’t generated by an individual in isolation—it’s the plurality of tandem efforts that hum along to a shared, web-driven rhythm.

    When things are ticking along smoothly among devs, I find we have a common underlying way of talking and thinking about the web. We fit together in human and technical ways, upholding a shared understanding about how best to make pieces of the web fit together.

    In contrast to the tired stereotype of genius coming in the form of a lone, intense hacker, much of the effective work done on the web is done within the bounds of a certain kind of communal conformance. In a good way.

    Working together

    A heap of obvious things goes into making an individual web developer seem good: An innate understanding of time and effort. An indestructible drive to self-educate. A lick-smart, logical mind quick to spot and take advantage of patterns. I think we look for these talents naturally.

    And yet when devs work together, those skills fade back just a bit. In a (grossly oversimplified) way, as part of a larger team each developer is a miniature black box. What comes fiercely front-and-center are the interfacing edges of the people and teams. The way they talk to each other and the timbre of what they build, what they disclose and what they don’t think they need to mention.

    When something unexpected pops up between healthy teams—which happens, because this is a complicated world—a communication like, “Hey, when I poke this service in this way, it throws a 500 at me” as often as not is enough information for the recipient to go off and fix it, because we have similar scars to reference and a shared vocabulary built on common ground.

    A common vernacular and communication style is an echo of a common thinking style. Underneath the chatter are cognitive technical models of the metaphors at hand, based on each team member’s perception of how the web fits together—REST, modular patterns, progressive enhancement, etc.—and how those components apply to the current project. Happy days when those internal archetypes align.

    When you run into misaligned teams, it’s obvious. There’s a herky-jerky grating to communication. Seemingly dashed-off emails that don’t quite dig into the problem at hand. Teams where you can tell each member’s mental context differs. Code that feels weird and wrong.

    A common ground engenders brilliant ideas

    Unless it is the actual goal of the project, I don’t care too much if you can come up with a Nobel-worthy new implementation of a basic CRUD feature. In most cases, I’ll happily accept something predictable and expected.

    This is not an argument for ignorance or apathy. Ideally, everyone should be pretty good at what they do—those individual technical skills do of course matter. I mean, part of the contract here does involve boots-on-ground time—to understand the lay of the land, to break HTTP into bits and pieces, leak some memory, screw up DNS a few times. We break and heal frequently as we gain deeper web mastery.

    But having a shared set of web conceptual building blocks—standards, patterns, conventions—upon which we can frame and build gives us the freedom to focus on where we really need to be creative: the particular task, product, or site at hand. Common components, shared notions.

    After all, the best chefs in the world don’t reinvent carrots. Instead, they identify what other remixed food components might plug into a carrot to make it divine.

    Likewise, good developers are mixing up agreed-upon technical ingredients into the soup of the day. And just as a talented cook knows how to explain to the waitstaff the nuances that thyme brings to the potato, good devs know how to talk to those around them, team members both in the kitchen and beyond, about why today’s menu includes OAuth or moment.js.

    It’s not just touchy-feely

    It used to be that I would think, “Hey, these people seem like they’re on the same wavelength as my team; that’s cool,” but now I realize it’s likely that what seems merely like good vibrations saves prodigious time and money on projects.

    In damaged teams, mental reference dissonance carries through to the outcome, manifesting itself in jarring technical mismatches, poorly-thought-through integration seams and, frankly, bugs. It’s as if people are thinking about the web in different internal language systems.

    So? Things take longer, often a lot longer. Teams become frustrated with each other. Meetings and discussions are drawn-out and less fruitful. The results suffer. Things break.

    It matters.

    I’m not suggesting we all link arms and plow out code from a single hive mind. In fact, I’d argue that the constraints imposed by a common perspective help to drive a certain kind of unique brilliance.

  • This week's sponsor: Harvest 

    Thanks to Harvest for sponsoring A List Apart this week. Check out Forecast, a whole new way to plan your team’s time.

  • Tweaking the Moral UI 

    A couple of years ago, I was asked to help put together a code of conduct for the IA Summit. I laughed.

    We need a code of conduct here? The IA Summit is the nicest, most community-friendly conference ever! Those problems happen at other conferences! And they want me to help? There are sailors jealous of my cussing vocabulary—surely I was not PC enough to be part of such an effort. But the chairs insisted. So, being a good user-centered designer, I started asking around about the idea of a code of conduct.

    I found out design conferences are not the safe meetings of minds I thought they were.

    One woman told me that she had been molested by another attendee at a favorite conference, and was too scared to report it. “No one will ever see me as anything but a victim,” she said. “I’ve worked too hard for that.”

    At another conference, a woman was woken up in the middle of the night by a speaker demanding that she come over. When she told the organizer in the morning, he said, “We were all pretty drunk last night. He’s a good guy. He just gets a bit feisty when he’s drinking.”

    Then there was my own little story. Years ago at the IA Summit, I went to talk to a speaker about something he’d said. I’m a tall, tough lady. But he managed to pin me against a balcony railing and try to kiss me. I started wondering, what if there had been a code of conduct then? What if I had had someone to talk to about it? What if I hadn’t said, “Oh, he’s just drunk”?

    Maybe I wouldn’t have spent the past seven years ducking him at every event I go to. And maybe he wouldn’t have spent those same years harassing other women—women who now were missing out on amazing learning and networking opportunities because they thought they’d be harassed.

    The idea of a code of conduct didn’t seem so silly anymore.

    A wicked problem

    Unfortunately, it still seems silly to others. Recently I was talking to another conference organizer about setting up codes of conduct, and he said, “That doesn’t happen at our conferences. People know me, and they know they can talk to me. A code of conduct will make people nervous that we have a problem. And we don’t.”

    I wonder how he knew that, since most victims don’t come forward. They don’t want to be seen as a “buzzkill,” or be told that what they wore or what they drank meant that they asked for it. This is not unusual; every day we see examples of women whose reputations are trashed for reporting rape and harassment. On Twitter, women who talk about sexism in games or even think a woman should go on a stamp are given death threats. Reporting carries consequences. Reporting is scary.

    In order to feel safe enough to come forward, attendees and speakers need to know that the conference organizers are paying attention. We need a guarantee that they’ll listen, be discreet, and do something about it.

    In her recent piece, “Why You Want a Code of Conduct & How We Made One,” Erin Kissane frames precisely why codes of conduct are absolutely necessary:

    To define a code of conduct is to formally state that your community—your event or organization or project—does not permit intimidation or harassment or any of the other terrible things that we can’t seem to prevent in the rest of the world. It’s to express and nurture healthy community norms. In a small, limited way, it’s to offer sanctuary to the vulnerable: to stake out a space you can touch, put it under your protection, and make it a welcoming home for all who act with respect.

    A code of conduct is a message—not a message that there is a problem, but a message that there is a solution. As much as a label on a button or a triangle with an exclamation point in it, a code of conduct tells you how a conference works.

    Tweaking the UI

    We are designers.

    That means we make choices about the interface that sits between the people and the thing they want. We mock interfaces that aren’t clear. We write books with titles like Don’t Make Me Think. Yet when we hold conferences, we seem to assume that everyone has the same idea of how they work.

    Why do we expect that people will “just know” how to use this complex build of architecture and wetware? There is a lecture; that means professional behavior! There is a bar; that means drinking and flirting! There is a reception; that means…alcohol plus speakers…network-flirting? A conference can be a complex space to understand because it mixes two things that usually have clear boundaries: social and work. If one person is working and another is looking to get social, conflict will happen.

    These fluid boundaries can be particularly hard on speakers. Attendees often approach speakers with questions inspired by their talk, which could start a conversation that leads to work…or a date. It’s hard to tell; cautious flirting and cautious networking often look the same. People can feel uncomfortable saying no to someone who might hire them—or keep them from being hired.

    Sometimes after giving a talk, I’ve mistaken admiration for flirtation, and the other way around. A wise speaker stays neutral, but it can be hard to be wise after a few glasses of wine. A code of conduct is useful because it spells out parameters for interaction. Some codes have even gone so far as to say if you are a speaker, you cannot engage in romantic activities like flirting. Clarity around what is expected of you leads to fewer accidental missteps.

    Set expectations

    A good code, like a good interface, sets clear expectations and has a swift feedback loop. It must:

    • Define clearly what is and isn’t acceptable behavior at your con. “Don’t be a dick” or “Be excellent to each other” is too open to interpretation. The RailsConf policy offers clear definitions: “Harassment includes, but is not limited to: offensive verbal comments related to gender, sexual orientation, disability, physical appearance, body size, race, or religion; sexual images in public spaces; deliberate intimidation; stalking; following; harassing photography or recording; sustained disruption of talks or other events; inappropriate physical contact; and any unwelcome sexual attention.”
    • Set expectations for what will happen if the code is violated, as the O’Reilly code of conduct does: “Conference participants violating this Code of Conduct may be expelled from the conference without a refund, and/or banned from future O’Reilly events, at the discretion of O’Reilly Media.”
    • Tell people how and to whom to report the incident. The Lean Startup Conference’s code includes: “Please contact a staff member, volunteer, or our executive producer [name], [email] or [phone number].” Providing a phone number is a massive signal that you are willing to listen.
    • Set expectations about how it will be handled. The World IA Day code is very clear:

      First we will listen.

      Then, we will help you to determine the options that we have based on the situation. We will also document the details to assure trends of behavior are uncovered across locations.

      Lastly, we will follow the situation to a resolution where you feel safe and you can remain anonymous if you wish to be.

    A code of conduct is a little like a FAQ or a TOS. It’s clunky, and I hope someone comes up with something better. But it’s instructions on what to expect and how to behave and, most importantly, what to do when something breaks. Because, as we keep seeing, something will eventually break. It’s better if it’s not people.

    A lot of conferences are adopting codes of conduct now. The Lean Startup Conference’s code, mentioned above, is heartfelt and crafted based on their values. The art and technology festival XOXO has an excellent one, based on the template from Geek Feminism. Yes, there’s a template. It’s not even hard to write one anymore. It doesn’t even take a long time.

    Meet (or exceed) expectations

    Any good experience designer knows that setting expectations is worthless if they aren’t immediately met. Beyond writing a code of conduct, conference organizers must also train their team to handle this emotionally charged situation, including making sure the person reporting feels safe. And there needs to be a clear, established process that enables you to act swiftly and decisively to remove violators.

    So how should a conference handle it when the code is violated? There are a couple of telling case studies online: one from Elise Matthesen at the feminist science fiction conference WisCon, and another from Kelly Kend at XOXO.

    In both cases, these women were immediately supported by the people they spoke with—a critical first step. In Kelly’s case, she brought her situation directly to the organizers, who listened to her and made it clear they weren’t going to blame her for the incident. Once the organizers had made her feel safe, they removed the harasser. It was improvised action, but effective.

    In Elise’s case, it’s clear that WisCon was well-prepared to handle the incident. The first part of the story is exemplary:

    • The conference staff member (called a “safety staffer”) asked if Elise wanted someone there while she reported.
    • The safety staffer asked if she wanted to report it formally, or just talk it through first.
    • The safety staffer asked if she wanted to use her name, or remain anonymous.
    • And the safety staffer and the conference organizers kept checking in with her to make sure she was doing okay.

    Unfortunately, WisCon fell down when it came to acting on the report. Eventually the harasser was banned, but only after a slow and onerous process. And the ban isn’t permanent, which has infuriated the community.

    It is hard work to get the poison out of the apple. Elise writes, “Serial harassers can get any number of little talking-to’s and still have a clear record,” which has been my experience as well. Since I started writing about conference harassment, a number of women have spoken to me about incidents at various design conferences. Two names keep coming up as the abusers, yet they continue to get invitations to speak. Until more people step forward to share their stories, this won’t change. And people cannot step forward until they are sure they won’t be victimized by the reporting process.

    If you are a conference organizer, it is your job to make sure your attendees know you will listen to them, take them seriously, and act decisively to keep them safe.

    If you are an attendee who sees harassment, stand up for those who may be afraid to step forward, and act as a witness to bad behavior.

    And if you are harassed, please consider coming forward. But I can’t blame you if you choose not to. Keep yourself safe first.

    A promise

    John Scalzi, author of several bestselling sci-fi novels, made a pledge to his community that he would neither speak at nor attend any conference without an enforced code of conduct.

    I will make the same pledge now. I will honor any commitments I’ve made previously; all new ones are subject to the pledge.

    I will neither speak at nor attend conferences that do not have and enforce a code of conduct. This may prove hard, as many conferences I’d love to speak at do not have a code yet. But change takes sacrifice. Integrity takes sacrifice.

    If you believe, as I do, that it is critical to make a safe place where everyone can learn and grow and network, then leave a comment with just one word: “cosigned.”

  • Conference Proposals that Don’t Suck 

    When it comes to turning your big idea into a proposal that you want to submit to a conference, there are no real rules or patterns to follow beyond “just do your best” and perhaps “keep it to 500 words,” which makes the whole process pretty daunting.

    I’ve worked with a number of people submitting proposals to events over the past few years. I’ve been racking my brain trying to identify a strong pattern that helps people pull together proposals that provide what conference chairs and program planners are looking for, while at the same time making the process a bit more clear to people who really want to find their way to the stage.

    I’ve found that it’s best to treat the proposal-writing process as just that—a process, not just something you do in a half-hour during a slow afternoon. One of the worst things you can do is to write your proposal in the submission form. I’ve done it. You probably know someone else who has done it. Most of our proposals probably sucked because of it. Hemingway advises us that “the first draft of anything is shit,” and this is as true for conference proposals as it is for just about anything.

    When you write a proposal in the submission form, you don’t give yourself the time that a proposal needs to mature. I’ve found six solid steps that can help you turn that idea into a lucid and concise conference proposal that paints a clear picture of what your presentation will be about.

    As I walk through these steps, I’m going to share my most recently created conference proposal. I’ve recently submitted this to some conferences, and I don’t yet know if it will be accepted or if I’ll have any opportunities to give this presentation—but following these steps made writing the proposal itself much easier.

    Let’s get to it.

    Step 1: Write down the general, high-level ideas that you want to talk about

    This is a very informal step, and it should be written just for you. I use this step to take any notes I’ve stored away on post-its or in Evernote, and turn them into something resembling a couple of paragraphs. It’s an exercise in getting everything out of your head.

    You don’t need to worry about being factually accurate in what you’re writing. This is the opportunity to go with what you know or remember, or assume you know or remember, and get it all into some other medium. You can fix anything that is inaccurate later; no one is going to read this but you.

    For example, I’m writing a proposal for a presentation about creating “skunk works” projects (essentially, where small teams work on secret/necessary endeavors) to get things done when you’re busily leading a team and don’t really have time to get all the things accomplished that should be in place.

    Here’s what I started with:

    Something About Skunk Works Projects

    The overall premise is that teams are really busy and if you’ve recently grown one (in-house or otherwise), you know that all the bodies go to the work, and little goes to the stuff that helps make a team purr along nice and smoothly, such as efficient on-boarding processes, sharing of thinking, processes, definitions, etc. Skunk Works projects can help you continue to increase the value to your team (and others) and also provide the team with an outlet for growth.

    Is there a formula? There sure is, and I can trace a lot back to Boeing, and other places like Atari & Chuck E. Cheese, and my own current “stuff.” It dovetails nicely into the guerrilla stuff that I’ve done in the past, and the leadership I’ve been doing recently.

    That’s the idea—how to get stuff done for your team when you’ve got so much stuff to do that you don’t have time.

    This is an extremely rough draft, and should be for your eyes only—despite the fact that I’m sharing mine with you here, in its poorly written and somewhat inaccurate state.

    At this point, you’ve earned a break. You’ll want to be fresh for the next step, where we start to build a supporting backbone for your free-flowing words.

    Step 2: Break your content into topic points

    Review what you’ve written and begin to break that content into topics. I create bullet points for “Pain,” “Solution,” and two rounds of “Support.” I also add a bullet point I call “Personable,” so that I have a place to add how the idea is relatable to my own experience (though this sometimes ends up being covered by one of the Support points).

    This isn’t final content; go ahead and lift sentences from your previous paragraphs if you feel like they’re relevant. Grammar still takes a backseat here, but do make sure that you’re addressing the topic point with some clarity. Also, spend a little time doing some fact-checking; tighten your points up a bit with real and concrete information.

    As I was working through this step, I did a little more homework. I cracked open a few books and hunted down articles in order to refresh myself and feel like I was on more solid ground as I pulled the points together.


    Pain

    When you think about your presentation’s topic, what is the common point of pain that you believe you share with other people? What prompted you to feel that this is a strong idea for a presentation and worthy of sharing? Pain is something we all share, and when you can help someone else feel like their pain might be alleviated, they start to nod their heads, mentally say “yes!” to themselves, and begin to relate to you and your message.

    Pain point: Work has to get done; organizational good “stuff” often comes last, which means it never gets done because the bills have to get paid and people get booked on project work first.


    Solution

    After you’ve identified that common point of pain, what’s the general, high-level solution? If you are the person who found the solution, you should say so; if not, you should identify who did, and explain what you learned from it. Give enough information to assure people there is a solution. Don’t get hung up on feeling like you’ll give away the ending; people will show up to your presentation to hear more about the journey you’ve taken from that common point of pain, not just to hear you recite the solution itself.

    Solution: Don’t worry, others have used skunk works to have some great successes. Companies such as Google, Microsoft, Ford, and Atari have done amazing work with skunk works. So have I, and I’ll show you how I’ve done it so you can put it into practice for yourself based upon my loose framework.

    Supporting points

    Once you’ve worked through the pain and solution, it’s time to provide a little more information to the reviewers and readers of your proposal. Merely telling people that there is pain and a solution is great to lead with; however, it’s not enough. You’ll still need to convince people that this idea applies to a broad range of other contexts, and that this is a presentation that they need to see so that they can benefit from your wisdom. What are a couple of key points that you can use to support the validity of your proposal and the claims that you may have made?

    Support 1: Origin in the 40s with Lockheed. They used it to create a jet fighter to help fend off the German jet fighter threat in 1943. Kelly Johnson and his team designed and built the XP-80 in only 143 days, seven fewer than required.

    Support 2: Kelly had 14 Rules & Practices for skunk works projects—we don’t need them all; however, we can learn a lot from them.

    Something personal and/or humorous (optional)

    If you’re able to pull something personal into your proposal, you can help reviewers and audience members further relate to you and what you’ve been through. It can shift a proposal from appearing to be “book report-ish” to one that speaks from your experience and perspective. I like to leave this as optional content because you may already be adding something similar in the Pain, Solution, or Supporting points sections.

    It’s important not to overlook the value—and the risk—of humor. Humor is tough to do in a conference proposal. You may have a line that you find hilarious; however, great comedy relies heavily on nuances of delivery that are difficult to transmit in a written proposal (and sometimes even harder for the readers to pick up on). Take caution, and when in doubt, skip anything that could be misperceived when creating your proposal.

    Personal: I’ve pulled together skunk works teams and busted out some skunk works projects myself!

    Humor: The results smell pretty damn good. (Wah wah wah.)

    Together, these provide the foundation for the next step, which is where we start to get more serious.

    Step 3: Turn your topics into a draft proposal

    This is where we take the organization and grouping of your thoughts and turn them into a few short paragraphs. It’s time to turn on the spell checker and call the grammar police; this is a serious activity and the midway point to having a proposal that’s ready for submission.

    You’ll be writing the best, most coherent sentences that you know how to craft based on your topic points. You should use your topic points as the outline for your proposal, hitting the ideas in the same order. As a refresher, here are my topic points, in the order they were created.

    Pain: Work has to get done; organizational good “stuff” often comes last, which means it never gets done because the bills have to get paid and people get booked on project work first.

    Solution: Don’t worry, others have used skunk works to have some great successes. Companies such as Google, Microsoft, Ford, and Atari have done amazing work with skunk works. So have I, and I’ll show you how I’ve done it so you can put it into practice for yourself based upon my loose framework.

    Support 1: Origin in the 40s with Lockheed. They used it to create a jet fighter to help fend off the German jet fighter threat in 1943. Kelly Johnson and his team designed and built the XP-80 in only 143 days, seven fewer than required.

    Support 2: Kelly had 14 Rules & Practices for skunk works projects—we don’t need them all; however, we can learn a lot from them.

    Personal: I’ve pulled together Skunk Works teams and busted out some Skunk Works projects myself!

    Humor: The results smell pretty damn good. (Wah wah wah.)

    Once you’ve reviewed your topic points, put your writing skills to work. I did more gut-checking and fact-checking to make sure I wasn’t completely full of crap and to generally tighten up my thinking.

    The Science of Skunk Works — Making Sure the Cobbler’s Kids Get Shoes

    We’ve all worked at places where there’s never enough time to make sure that things are operationally done the “right way”—bills need to get paid, client or product work needs to get done and takes priority, and hey, everyone deserves to have a little bit of a life, right? There is a bit of a light at the end of this tunnel! Several companies, including Atari, Chuck E. Cheese, Ford, Microsoft, and Google, have pulled off some pretty great things by taking advantage of skunk works teams and projects. I’ve been fortunate enough to see a little bit of success with those teams and projects, as well, and will share how you can apply them to your own practice.

    Way back in the 1940s, Kelly Johnson and his team of mighty skunks used their skunk works process to design—and build—the XP-80 prototype jet fighter to compete with the Germans. In 143 days—seven days fewer than was needed. Kelly created 14 Rules & Practices for skunk works projects in order to help articulate the most effective way for his team to be successful in the projects that they worked on. We can learn from Kelly’s rules, adapt them to our current times and perhaps more digitally dependent needs, and find some ways to put some shoes on the cobbler’s kids. And the results might just smell pretty good, if you’re patient enough.

    Notice that I didn’t just take the topic points and copy and paste them into paragraphs. Instead, I put on my editing hat and tried to establish the flow of what I was writing, keeping the paragraphs limited to 2–3 sentences for the sake of concision.

    Step 4: Phone a friend

    You know that friend you can always count on to tell you when you’ve got a booger on your nose or spinach in your teeth, or who will tell you when you were just a completely out-of-line jerk and you need to get your head on straight?

    That’s the friend you want to send your proposal to. If you’re fortunate enough to have more than one of these friends, send it to all of them. Explain to them—clearly—what they’re about to read and what the purpose is. Give them enough background so that they can provide you with actionable feedback. Tell them about the conference, the expected audience, your topic, why you’ll be good presenting on this topic, and what your proposal is about. Finally, give them a deadline of a day or two so they can review it with the focus that it deserves.

    I sent my proposal off to my friend Gabby Hon, because she’s that friend who will tell me all those things I listed above and because she’s a words-and-grammar nerd who kicks my work as hard as it needs.

    She sent me feedback, and, for once, my confidence was a bit higher than it should have been. I really like my topic and really felt strongly that I’d pulled together a solid proposal. Gabby’s feedback was essentially:

    • You’re using “a bit” and “a little bit” too much. I’ve counted 3 so far within a paragraph
    • Okay, so, there’s too much “this is what skunk works is”—which I can find on Wikipedia—and not enough “why this matters to design/tech/UX”
    • You say you can adapt the rules, but can you give a little hint?
    • I mean obviously it was all about design and working around restrictions and limitations—thus skunk works
    • If design is best when faced with limitations, then skunk works programs are our best historical example of how to do great work under something something

    Not only did Gabby provide some great things for me to think about and improve on, she was also gracious enough to let me know that I didn’t entirely stink up the page when I’d written my proposal:

    • It’s very good
    • Just the second paragraph needs some polishing

    Step 5: Revise your proposal

    Once you’ve had time to process the feedback, sit back down with your proposal and make adjustments. Don’t be shy about killing your darlings; the feedback you’ve received is meant to help you focus on the important parts and make them better. If something doesn’t fit, move it to a parking lot or remove it entirely.

    Here is my final revision that I’ll be submitting to conferences:

    DesignOps Skunk Works: Shoes for the Cobbler’s Children

    We’ve all worked at places where there’s never enough time to make sure that things are operationally done the “right way”—bills need to get paid, client or product/project work needs to get done and takes priority, and hey, everyone deserves to have a life, too. There is light at the end of this tunnel! Several companies, including Atari, Ford, Microsoft, and Google, have pulled off some great things by taking advantage of skunk works teams and projects. I’ve been fortunate enough to see some successes with those teams and projects, as well, and will share them so you can see how to apply the approach(es) to your own practice.

    Way back in the 1940s, Kelly Johnson and his team of mighty skunks used their skunk works process to design—and build—a prototype jet fighter in 143 days. Kelly established 14 Rules & Practices for skunk works projects in order to help articulate the most effective way for his team to be successful in the projects that they worked on. Not only can we learn from Kelly’s rules and adapt them to our current methods of working, we can also create our own skunk works teams and projects to ensure that the cobbler’s kids—the operational areas of our design practices—get some shoes put on their feet. And the results might just smell pretty good, if you’re patient enough.

    There’s a bit of a method to my madness, believe it or not. Here’s a micro-version of the change log of my proposal:

    • I made a key change in the title; I’m pretty uncomfortable with using the word “science” (originally “The Science of Skunk Works”). I’m pretty sure “science” is making a promise that I’m not certain I can keep in the presentation, and I’d prefer not to be called to the mat for that.
    • I tested my title with a few friends and this title fared the best. I was leaning toward “Shoes for the Cobbler’s Kids” personally, and the feedback encouraged me to not be so precious.
    • I also tightened up the copy based on Gabby’s feedback, placing extra focus on the second paragraph.

    Step 6: Submit the proposal to a conference

    You likely had a conference in mind when you started pulling together your proposal. Each year, I start contemplating my primary presentation for the next year as soon as I can. Generally, from around March through May, I start thinking seriously about what I’ve learned and what is worth sharing with others, and then I start collecting information—notes, articles, books, and so on—to support my thinking as best I can.

    Once I’ve gone through this process, I know that I’m ready with a pretty solid proposal. I copy and paste the final, vetted version into the form and hit submit, confident that I’m not just winging it.

    And sure enough, that’s when I find that last typo.

  • This week's sponsor: MyFonts 

    Thanks to MyFonts for sponsoring A List Apart this week! Take a look at their list of the 50 most popular fonts on the web right now.

  • Rachel Andrew on the Business of Web Dev: The Ways We’ve Changed—and Stayed the Same 

    In 2005, my husband and business partner Drew McLellan had an idea for a website. He emailed friends and colleagues, we filled in the gaps, and 24 ways was launched: 24 articles in the run-up to Christmas, advent-calendar style. As I write this article, we are on day six of season 10 of that project. By 24 December, there will be 240 articles by 140 authors—many of them well-known names in web design and development. As a fun holiday season retrospective, I thought I would take a look at what 10 seasons of 24 ways can tell us about how our industry has changed—and what hasn’t changed.

    Hacking our way to CSS complexity

    The first season of 24 ways, prior to Christmas 2005, brought us techniques such as using JavaScript to stripe table rows, image techniques for rounded corners, and an article on Avoiding CSS Hacks for IE due to the imminent arrival of Internet Explorer 7. In that first season, we were still very much working around the limitations of browsers that didn’t have full support for CSS2.1.

    By 2006, Andy Budd was teasing us with the rounded corner possibilities brought to us in CSS3 and in 2007, Drew McLellan helped us to get transparent PNG images to work in Internet Explorer 6. The article titles from those early years show how much of our time as designers and developers was spent dealing with browser bugs and lack of CSS to deal with the visual designs we wanted to create. The things we wanted to do were relatively simple—we wanted rounded corners, nice quote marks, and transparency. The hoops we had to jump through were plentiful.

    The introduction to the 2013 archive of 24 ways notes that 2013 was the year that the Web Standards Project “buzzed its last.” By 2013, browsers had converged on web standards. They were doing standard things in standard ways. We were even seeing innovation by browser vendors via the established standards process. My article for the 2013 season described the new CSS Grid Layout specification, initially developed by Microsoft.

    Since 2005, the CSS that we can consider usable in production has grown. We have far more CSS available to us through browser support for the new modules that make up CSS3. The things that CSS can do are also far more complex, expressive, and far reaching. We’ve moved on from spending our time trying to come up with tricks to achieve visual effects, and are spending a lot of time working out what to do with all of this CSS. How do we manage websites and web applications that are becoming more like complex pieces of software than the simple styled HTML documents of days gone by? Topics in recent years include new approaches to using CSS selectors, front-end style guides, Git, and Grunt. The web has changed, and the ways in which we spend our time have changed too.

    We all got mobile

    In the 2006 edition of 24 ways, Cameron Moll explained that,

    The mobile web is rapidly becoming an XHTML environment, and thus you and I can apply our existing “desktop web” skills to understand how to develop content for it. With WML on the decline, the learning curve is much smaller today than it was several years ago. I’m generalizing things gratuitously, but the point remains: Get off yo’ lazy butt and begin to take mobile seriously.

    The Mobile Web Simplified

    The iPhone wasn’t launched until 2007, a move by Apple that forced us all to get off our lazy butts and think about mobile! In December 2007, Brian Fling explained the state of the mobile landscape half a year after the launch of the iPhone. It wasn’t until responsive design was brought to life by Ethan Marcotte on A List Apart in May 2010, however, that articles about mobile really became numerous on 24 ways. The 2011 season had four articles with the words “responsive design” in the title!

    By 2012, we were thinking through the implications of designing for mobile, and for mobile data. Paul Lloyd took a look back at the two approaches for responsive images discussed in the 2011 season and the emerging proposals for picture and srcset in Responsive Images: What We Thought We Needed. Tim Kadlec reminded us that,

    … there’s one part of the web’s inherent flexibility that seems to be increasingly overlooked: the ability for the web to be interacted with on any number of networks, with a gradient of bandwidth constraints and latency costs, on devices with varying degrees of hardware power.

    Responsive Responsive Design

    As we rushed to implement responsive sites and take advantage of new platforms, we had to take care that we weren’t excluding people by way of bandwidth limitations. Whether it is IE6 or mobile data, some things never change. We get excited about new technologies, then come back to earth with a bump as the reality of using them without excluding a chunk of our audience kicks in!

    The work of change

    In these 10 seasons, we can see how much the web has changed and we have changed, too. Every year, 24 ways picks up a new audience and new authors, many of whom would have still been in school in 2005.

    Always, 24 ways has tried to highlight the new, the experimental, and the technically interesting. However, it has also addressed more challenging aspects. Whether an old hand or a newcomer to the industry, we can all feel overwhelmed at times, as if we are constantly running to keep up with the latest new thing. In 2013, Christopher Murphy wrote Managing a Mind, a piece that starkly illustrated the challenges that constantly keeping up can bring. This year, we are given a reminder that we need to take care of our bodies while performing the repetitive tasks that design and programming require.

    The business of web development

    Often, 24 ways has featured articles that deal with the business of being a web designer, developer, or agency owner. In 2007, Paul Boag gave us 10 tips for getting designs signed off by the client. As the recession hit in 2008, Jeffrey Zeldman wrote up his Recession Tips for Web Designers. We’ve seen articles on subjects ranging from side projects to contracts and everything in-between.

    The business archive contains some of the most evergreen content on the site, demonstrating that good business knowledge can serve you well throughout a career.

    The industry that shares

    Another thing that hasn’t changed over these 10 seasons is the enthusiasm of each 24 ways contributor for their subject, and their generosity in writing and sharing their thoughts. This reflects our industry, an industry where people share their thoughts, research, and hard-earned experience for the benefit of their peers.

    On that note, I’ll close my final A List Apart column of 2014. Best wishes to all of you who are celebrating at this time of year. I look forward to sharing my thoughts on the business side of our industry throughout 2015.

  • Learning to be Accessible 

    I’m trying to learn more about accessibility these days. Thinking about it more, reading about it some, and generally being aware, as I write code, of what I should and shouldn’t do in that arena.

    I am grateful for the folks in our community who work tirelessly to make sure that I can easily find information about it online. The A11Y Project is constantly getting updates from the community to help me understand what the best practices are. I just read Heydon Pickering’s book, Apps For All: Coding Accessible Web Applications—a short but really good book that reminds me how I should be writing HTML.

    I’m also speaking up more in meetings and on my project teams. When I see something that just doesn’t quite jibe with what I’ve been learning about accessibility, I bring it up. I also ask trusted friends about it, to make sure I’m not off base and that it actually is a best practice—because it matters. Sometimes the little things, such as removing an outline on focus or using empty links instead of buttons, are the things that add up to a bad experience for people using the site with a keyboard or screenreader.

    Unfortunately, sometimes I get pushback—especially when working on a minimum viable product or a quick project. There is always the answer of, “we’ll fix it later.” But will you? I’ve been working on applications and projects long enough to see that going back to refactor can be even tougher to make time for.

    When I get that pushback, I remind people that it’s hard to make time for code refactors, and that taking a little time to do things right the first time around saves time in the long run. If that doesn’t work, I share the consequences some companies have faced when they didn’t take accessibility seriously. I don’t always win the battles, but reminding colleagues that a wide range of people will be using the site—some of whom may not use it exactly as we do—is worth it. Hopefully, next time they’ll be more willing to take the necessary time up front.

    I know this has been said before, but as I’ve started reading more and more on accessibility and trying to learn more about it, I’ve found it to be rewarding. Working on a project where all users can access and use it well, that’s satisfying. One of my most satisfying moments professionally was hearing from a blind user who was thankful they were able to use our app. And if you don’t think about accessibility, well, that can just lead to a world of hurt.


  • Antoine Lefeuvre on The Web, Worldwide: Stars and Stripes and ISO Codes 

    This is the real story of a promising French start-up expanding into the U.S. market. The founders now have West Coast offices and the app has been fully translated into English. One minor detail though: to switch to the website’s English version, American customers have to click on… the Union Jack.

    A red flag about flags

    Don’t laugh—this story is no exception. Bla Bla Car, one of Europe’s hottest tech companies, is a truly international operation with a presence in 13 countries. Strangely enough, only 11 flags are listed in Bla Bla Car’s version selector. Who’s missing? Belgium and Luxembourg, whose only “fault” is to be multilingual countries. Requiring Dutch-speaking Belgian users to click on the Netherlands flag, or American users on the British flag, is a cultural faux-pas. It can even raise political hackles when you have, say, a Ukrainian user clicking on the Russian flag.

    Bla Bla Car language selector list with flag icons
    Bla Bla Car language selector.

    Version links are a key element of an international website’s navigation. But many web designers still confuse flags and languages. “What is the flag of English?” is a surreal yet often-heard question in web agencies throughout Europe (and arguably all around the world). The obvious answer is that languages have no flags. But does this mean flags are not to be used on websites?

    Do you speak es-MX?

    Let’s be honest, flags are also popular with designers because they are small, colorful, handy 16px-wide icons you can stick in the top-right corner. Sometimes you really have to deal with limited space. We had this problem a few months ago at my start-up, Clubble, when setting up our pre-launch website. We’re happy to write “English,” “Français,” and “Español” in big letters in the desktop version, but what about the mobile version?

    We need the language selector to be immediately visible and don’t want to hide any language in a drop-down list. The solution: ISO codes, “a useful international, and formal, shorthand for indicating languages.” English is “en,” French is “fr,” and Spanish is “es.” ISO codes include language variants such as American English: en-US, Brazilian Portuguese: pt-BR, or Mexican Spanish: es-MX. (Note that the first part is in lowercase, leading some purists to argue that language codes should always be in lowercase as on the European Union official portal.)

    Clubble mailing groups mobile home page
    Language ISO codes on Clubble’s mobile website.
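
    ISO-style language tags are easy to take apart programmatically, too. Here’s a minimal sketch (my illustration, not from the article) using the standard Intl.Locale API available in modern browsers and Node.js; the function name is my own:

    ```javascript
    // Split a BCP 47 / ISO language tag like "pt-BR" into its parts,
    // e.g. for rendering compact language-selector labels.
    function parseLanguageTag(tag) {
      const locale = new Intl.Locale(tag);
      return {
        language: locale.language, // lowercase ISO 639 code, e.g. "pt"
        region: locale.region,     // uppercase ISO 3166 code, e.g. "BR"
      };
    }

    console.log(parseLanguageTag('pt-BR')); // { language: 'pt', region: 'BR' }
    console.log(parseLanguageTag('es-MX').language); // 'es'
    ```

    Note that the API also normalizes casing for you, so “PT-br” and “pt-BR” come out the same—handy when tags arrive from user input or HTTP headers.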

    Avoid the temptation to create your own abbreviations just because ISO codes don’t fit your design. Visitors to Switzerland’s official portal might be surprised to find an English-only website. Well, the French, German, and Italian versions exist. They’re just hiding behind the tiny F, D and I links!

    Government of Switzerland home page, English version
    Hidden language selector at gov.ch.

    One powerful language selection feature is automatic detection of a user’s language and/or country based on the browser’s language settings and IP address. However, it isn’t a substitute for a well-designed, easily accessible language selector, as it cannot always detect who the user is and which content they want. On my last trip to Spain, nike.com stubbornly refused to let me access the French website no matter what I tried—going to nike.fr, or even choosing France in the country selector.
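
    As a rough illustration of how that browser-based detection works (my sketch, not from the article), a server can rank the tags in the Accept-Language header by their q-values and pick the best supported match—while still showing a selector so the user can override the guess:

    ```javascript
    // Pick a default language from an Accept-Language header string.
    // This is only a hint: the visible language selector must still
    // let the user choose for themselves.
    function detectLanguage(acceptLanguage, supported, fallback) {
      const ranked = acceptLanguage
        .split(',')
        .map((part) => {
          const [tag, q] = part.trim().split(';q=');
          return { tag: tag.toLowerCase(), q: q ? parseFloat(q) : 1 };
        })
        .sort((a, b) => b.q - a.q); // highest preference first

      for (const { tag } of ranked) {
        // Exact match first ("pt-br"), then primary-language match ("pt").
        const exact = supported.find((s) => s.toLowerCase() === tag);
        if (exact) return exact;
        const primary = supported.find(
          (s) => s.toLowerCase().split('-')[0] === tag.split('-')[0]
        );
        if (primary) return primary;
      }
      return fallback;
    }

    console.log(
      detectLanguage('fr-FR,fr;q=0.9,en;q=0.8', ['en', 'fr', 'es'], 'en')
    ); // 'fr'
    ```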

    A country is more than its language

    A German version is not the same as a version for Germany. A user in Austria or Switzerland would be reading local restaurant reviews in German; we can’t assume they are interested in restaurants in Germany. When you localize an app or website, you go beyond the translation to adapt your product to a specific country. Think in terms of places under the same laws and customs rather than populations that share a way of speaking. It’s the same principle you apply to domain names: .de is for the country Germany, not the language German. In this case, using a country’s flag to tell users you offer a version tailored to their culture is a great idea.

    Mercado Libre country selector with flag icons
    Mercado Libre’s country selector.

    E-commerce websites tend to be less confused about flags and languages, as most are localized, not just translated. Selling in a foreign market implies you comply with local laws, take payments in the local currency, understand your clients’ culture and sometimes ship goods out of country. Chances are your website won’t have one but two selectors: language and country. Or even three!

    Site with multiple selectors
    Keen Footwear and Skyscanner’s language and country (and currency) selectors.

    The Middle Language

    When labeling a link with the name of a language, localize the name too! That is, write “Tiếng Việt,” “Русский,” and “עברית” rather than “Vietnamese,” “Russian,” and “Hebrew.” Facebook’s and Wikipedia’s language selectors are two impressive examples.

    Facebook and Wikipedia language selectors
    The Facebook and Wikipedia language selectors.
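
    If you don’t want to hand-maintain a list of endonyms, the standard Intl.DisplayNames API can generate them from CLDR data. A hedged sketch (my illustration, not from the article; the helper name is my own):

    ```javascript
    // Return a language's name written in that language itself
    // (its endonym), for labeling language-selector links.
    function endonym(tag) {
      const names = new Intl.DisplayNames([tag], { type: 'language' });
      return names.of(tag);
    }

    for (const tag of ['vi', 'ru', 'he', 'fr']) {
      console.log(tag, '→', endonym(tag));
    }
    // e.g. "fr → français"
    ```

    One caveat: CLDR’s casing and phrasing don’t always match what native speakers expect on a button, so it’s worth having the generated labels reviewed by speakers of each language.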

    Culture is often a sensitive issue. How to localize the name “Spanish,” for instance? The answer seems easy—“español,” of course. But this word happens to have connotations. In the national context of Spain, where strong regional identities and languages such as Catalan or Basque make headlines, the term “castellano” (from the historical region of Castile) is often used, as it puts all the Peninsula languages on the same level. During a six-month-long trip across South America, I also heard “castellano” a lot, probably used by those who think “español” has a colonial feeling. Many websites and applications choose to offer two Spanish versions: one for Spain and one for Latin America that is commonly named “español latinoamericano.”

    Government of Spain regional language selector
    La Moncloa (headquarters of the Spanish government) language selector.

    Naming the Chinese language isn’t any easier, I realized when I naively asked journalist and orientalist Silvia Romanelli how to say “Chinese” in Chinese. Chinese is what ISO calls a macrolanguage, i.e. a family of dozens of languages and dialects. What’s more, since Chinese languages use ideograms, there’s a big difference between written language (文 wen) and oral language (语 yu). So unless you’re designing a voice-based app, label your link 中文, zhongwen—literally, the Middle Language (China is 中国, zhongguo—the Middle Kingdom).

    Wait, that’s not all! In the People’s Republic under Mao Zedong, Chinese characters were simplified, so that today two written forms exist: 简体中文 (simplified) and 繁体中文 (traditional). Again, this can be a political issue, as the traditional form is mostly found in Taiwan. Silvia Romanelli therefore recommends following a country-agnostic approach like Global Voices and simply stating “Simplified” or “Traditional”, not “Chinese for China” or “Chinese for Taiwan” as does Facebook.

    Language selector with two versions of Chinese named simply “Simplified” and “Traditional”
    Global Voices language selector.

    Venture forth

    You had a nice app in English with no character-encoding bugs, no awfully long words that don’t wrap nicely, and no emotional or political issues about flags and languages. Now you’re leaving the comfort of designing at home to venture into foreign lands of unknown cultures and charsets. But what can be seen as a threat is actually an opportunity. Because it is tedious and complicated, internationalization is often overlooked. If you can provide a well-localized user experience, that’s still a nice surprise today for most non-English-speaking users. Isn’t this what we’re all looking for: a great way to stand out?

  • The Only Constant is Change: A Q&A with Ethan Marcotte 

    It’s here: a new edition of Responsive Web Design is now available from A Book Apart! Our editor-in-chief, Sara Wachter-Boettcher, sat down with Ethan Marcotte—who first introduced the world to RWD right here in A List Apart back in 2010—to talk about what’s new in the second edition, what he’s been working on lately, and where our industry is going next.

    The first edition of Responsive Web Design came out in the summer of 2011. What projects have you been working on for the past three years?

    I’ve been fortunate to have worked on some really great stuff. I’ve worked on client projects for publishers—like The Boston Globe and People Magazine—as well as for some ecommerce and financial companies. I cofounded Editorially, a responsive web application for collaborative writing. (And a product I dearly miss using.) More recently, I’ve been doing some in-house consulting to help companies planning to go responsive, including the responsive design workshops I’ve been doing with my friend and colleague Karen McGrane.

    Also, Karen and I have a podcast! (Which is an entirely new thing for me to say!) New as the experience might be, it’s been ridiculously fun: we’re interviewing the people who oversee large responsive redesigns at large organizations, and I’ve learned quite a bit.

    So I’d say the years since the first edition have been a blur. But it’s been a happy, wonderful blur, and I’ve been learning so much.

    Those are some pretty big projects. What have you learned by applying responsive principles to major media sites?

    A couple things, I guess.

    First, the importance of flat, non-interactive comps has been lessening—at least in my practice. They’re still incredibly valuable, mind—nothing’s better than Photoshop or Illustrator for talking about layout and aesthetics—but prototypes, even rough ones, are much more important to early discussions around content, design, and functionality. So yeah, I’m with Dan Mall: we need to decide in the browser as soon as we can.

    Related to that: since working on The Boston Globe back in 2011, I try to incorporate devices as early as possible in design reviews. Does a great job reinforcing that there’s no canonical, “true” version of the design. Getting a prototype in someone’s hands is incredibly effective—it’s worth dozens of mockups.

    All right, let’s talk about the book. What changes will readers see in the second edition?

    The second edition’s changed quite a bit from the first, but the table of contents hasn’t: as in the first edition, the chapters revolve around the three “ingredients” of a responsive design—fluid grids, flexible images, and media queries—and how they work in concert to produce a responsive design.

    But if you look past the chapter headings, you’ll see a slew of changes. As ALA’s readers probably know, tons of people have written about how to work responsively—whenever possible, tips and resources have been pulled in. (I mean, heck: we now have a responsive images specification, which gets a brief but important mention.) On top of all of that, errors were corrected; broken links fixed; figures updated; questions I’ve received from readers over the years have, whenever possible, been incorporated. I can’t tell you how good it feels to have those edits in—it feels like it’s the book it should’ve been.

    But even more than that, it was incredibly exciting to revisit the sheer volume of responsive sites that’ve been launched since I first wrote the article. Pulling in screenshots of so many beautiful responsive sites was, well, a real joy.

    And finally, I’d be remiss if I didn’t mention that Anna Debenham was the technical editor. Anna is a talented writer, speaker, and front-end developer; she’s also the co-founder of Styleguides.io, and responsible for invaluable research into the various web browsers on handheld game consoles. I don’t know how she found the time to review my second edition, but I’m impossibly grateful she did: the book is better for her criticisms, her insightful questions, and her great suggestions.

    You mentioned your podcast with Karen earlier. I’m personally a huge fan. It’s fascinating to hear how all kinds of different organizations, like Harvard, Fidelity, and Marriott, have gone responsive. What have you learned from having diverse teams tell you about their projects?

    I think part of responsive design’s appeal is we realized our old ways of working weren’t, well, working. Siloing our designs into device-specific experiences might work for some projects, but that “mobile site vs. desktop site” approach isn’t sustainable. So as we began designing for more screens, more device classes, and more things than ever before, the device-agnostic flexibility at the heart of responsive design—or, heck, at the heart of the web—is appealing to many.

    But as teams and companies design responsively, they often find their challenges go beyond the code—advertising or content workflows need to be optimized for multi-device work, both of which are infinitely more challenging than flexible layouts and squishy images.

    Frequently, one of the biggest challenges is the relationship between design and development: in many organizations and project teams, they’re discrete groups that only overlap at certain points in a project. That old idea of “handoff” between design and technology is where problems most commonly pop up.

    In other words, I think we’re at a point where treating “design” and “development” as discrete teams is a liability. The BBC wrote about this problem beautifully: when we’re designing for a web that’s not just flexible, but volatile—“in a constant state of flux,” even—we need to iterate more quickly, and collaborate more closely. And a closer relationship between design and development is a large part of that.

    What do you think is the biggest misperception about RWD?

    If you’ve read anything about responsive design, you’ve probably come across it: this suggestion that responsive design is somehow incompatible with performance. In other words, if you care about building a site that loads quickly for your users—and you do, right?—then you should steer clear of responsive design.

    So, what’s the reality, then?

    The idea that responsive design can’t be fast is, bluntly, false. As everyone from Filament Group to The Guardian to the British Government has shown us, you can have responsive designs that are as fast as they are flexible. It just takes careful planning, as well as an acknowledgement that performance isn’t just a technical issue—it’s everyone’s problem. There’s even data to suggest that responsive sites are faster than mobile-specific “m-dot” sites. But even so, the suggestion still floats around.

    That said, I confess I’m not too worried. Because when it comes to the whole “responsive design is bad for performance” myth, I’m with Tim Kadlec: anything that gets people discussing performance, even a misconception, is great. And on most of my projects, the result of that conversation is usually a site that’s both lightweight and responsive.

    (Thankfully, Scott Jehl’s new book, Responsible Responsive Design, dives into these questions with gusto.)

    It’s awesome to see people making such great strides on performance. What other challenges do you see RWD needing to overcome in the next year or two?

    It’s a bit difficult to focus on one in particular: process is a big concern, as I mentioned above; there are lots of discussions around the best way to do multi-device QA/testing; and I get lots of questions about how to tackle more challenging design patterns.

    More broadly, I often say the most common words you hear in a responsive redesign—“mobile,” “tablet,” and “desktop”—are also the most problematic. Quick example: “mobile” is frequently used as a proxy for “small touchscreen, limited bandwidth.” But what if the “mobile” user’s connected to wifi? Or the “desktop” user’s tethered to a spotty 3G connection? Shorthand terms can be helpful, it’s true, but it’s often more productive to discuss specific challenges—screen size, CPU/GPU quality, input mode, network quality, and so on—and design for each independently of specific device classes.

    I mention this because, now more than when I wrote the book, responsive design isn’t about designing for “mobile.” It’s about designing for the web, a medium that’s both flexible and device-agnostic by default. And while we’re looking ahead with excitement (and maybe some trepidation) to the next big thing, I think it’s worth remembering that thinking device-agnostically can be a real, real strength.

    It sounds like we’ll be busy figuring this stuff out for a while. What would you recommend to a reader who’s just getting started—besides cough buying your book, of course? How can they keep from losing their shit at all the new stuff to learn?

    First of all: if someone figures out how to not freak out at how quickly things change? Please do email me. I’d love to know your secret. (Please.)

    When the browsers are especially bad, when the layout doesn’t seem to be gelling, I reread John Allsopp’s “A Dao Of Web Design.” Really. Honestly, the idea that we can’t control the display of our work is actually pretty freeing. We can guide it, shape it, but we can’t know if the user’s network connection is reliable, or if their browser runs JavaScript, or whether our layout will be shown on a screen that is large or small (or very, very small).

    The only constant we have on the web is the rate of change. And progressive enhancement is the best way for us to manage that. That’s why I always turn back to “A Dao Of Web Design.” Not just because it was a huge influence on me, and a direct influence on responsive web design: but because now, more than ever, we have to accept “the ebb and flow of things” on the web.

    Let’s get started.

    Pick up your copy of the second edition of Responsive Web Design from A Book Apart.

  • Planning for Performance 
    I want you to ask yourself when you make things, when you prototype interactions, am I thinking about my own clock, or the user’s?
    Paul Ford, “10 Timeframes”

    We’re not doing a good job

    Page-load times in the ten-second range are still common on modern mobile networks, and that’s a fraction of how long it takes in countries with older, more limited networks. Why so slow? It’s mostly our fault: our sites are too heavy, and they’re often assembled and delivered in ways that don’t take advantage of how browsers work. According to HTTP Archive, the average website weighs 1.7 megabytes. (It’s probably heftier now, so you may want to look it up.) To make matters worse, most of the sites surveyed on HTTP Archive aren’t even responsive, but focus on one specific use case: the classic desktop computer with a large screen.

    That’s awful news for responsive (and, ahem, responsible) designers who aim to support many types of devices with a single codebase, rather than focusing on one type. Truth be told, much of the flak responsive design has taken relates to the ballooning file sizes of responsive sites in the wild, like Oakley’s admittedly gorgeous Airbrake MX site, which originally launched with a whopping 80-megabyte file size (though it was later heavily optimized to be much more responsible), or the media-rich Disney homepage, which serves a 5-megabyte responsive site to any device.

    Why are some responsive sites so big? Attempting to support every browser and device with a single codebase certainly can have an additive effect on file size—if we don’t take measures to prevent it. Responsive design’s very nature involves delivering code that’s ready to respond to conditions that may or may not occur, and delivering code only when and where it’s needed poses some tricky obstacles given our current tool set.

    Fear not!

    Responsible responsive designs are achievable even for the most complex and content-heavy sites, but they don’t happen on their own. Delivering fast responsive sites requires a deliberate focus on our delivery systems, because how we serve and apply our assets has an enormous impact on perceived and actual page-loading performance. In fact, how we deliver code matters more than how much our code weighs.

    Delivering responsibly is hard, so this chapter will take a deep, practical dive into optimizing responsive assets for eventual delivery over the network. First, though, we’ll tour the anatomy of the loading and enhancement process to see how client-side code is requested, loaded, and rendered, and where performance and usability bottlenecks tend to happen.

    Ready? Let’s take a quick look at the page-loading process.

    A walk down the critical path

    Understanding how browsers request and load page assets goes a long way in helping us to make responsible decisions about how we deliver code and speed up load times for our users. If you were to record the events that take place from the moment a page is requested to the moment that page is usable, you would have what’s known in the web performance community as the critical path. It’s our job as web developers to shorten that path as much as we can.
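    Browsers can report these critical-path milestones themselves. As a rough sketch (assuming a browser that supports the Navigation Timing API’s performance.timing object; the sample timestamps below are made up), we can compute how long each stage of the path took:

```javascript
// A sketch of critical-path measurement. The function takes a plain object
// shaped like the Navigation Timing API's performance.timing and returns
// the duration of each stage in milliseconds.
function criticalPathStages( t ){
    return {
        dns: t.domainLookupEnd - t.domainLookupStart,
        connect: t.connectEnd - t.connectStart,
        response: t.responseEnd - t.requestStart,
        domReady: t.domContentLoadedEventStart - t.responseEnd,
        total: t.loadEventStart - t.navigationStart
    };
}

// In a browser you would pass performance.timing; here we use sample numbers.
var stages = criticalPathStages({
    navigationStart: 0,
    domainLookupStart: 10,
    domainLookupEnd: 60,
    connectStart: 60,
    connectEnd: 160,
    requestStart: 160,
    responseEnd: 420,
    domContentLoadedEventStart: 900,
    loadEventStart: 1400
});
```

    Shortening the critical path means shrinking numbers like these, especially the stages that block rendering.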

    A simplified anatomy of a request

    To kick off our tour de HTTP, let’s start with the foundation of everything that happens on the web: the exchange of data between a browser and a web server. Between the time our user hits go and the moment their site begins to load, an initial request pings back and forth from their browser to a local Domain Name Service, or DNS (which translates the URL into an IP address used to find the host), and on to the host server (fig 3.1).

    Diagram showing how data moves between browsers and servers.
    Fig 3.1: The foundation of a web connection.

    That’s the basic rundown for devices accessing the web over Wi-Fi (or an old-fashioned Ethernet cable). A device connected to a mobile network takes an extra step: the browser first sends the request to a local cell tower, which forwards the request to the DNS to start the browser-server loop. Even on a popular connection speed like 3G, that radio connection takes ages in computer terms. As a result, establishing a mobile connection to a remote server can lag behind Wi-Fi by two whole seconds or more (fig 3.2).

    Diagram showing how data moves on a mobile network.
    Fig 3.2: Mobile? First to the cell tower! Which takes two seconds on average over 3G.

    Two seconds may not seem like a long time, but consider that users can spot—and are bothered by—performance delays as short as 300 milliseconds. That crucial two-second delay means the mobile web is inherently slower than its Wi-Fi counterpart.

    Thankfully, modern LTE and 4G connections alleviate this pain dramatically, and they’re slowly growing in popularity throughout the world. We can’t rely on a connection to be fast, though, so it’s best to assume it won’t be. In either case, once a connection to the server is established, the requests for files can flow without tower connection delays.

    Requests, requests, requests!

    Say our browser requests an HTML file. As the browser receives chunks of that HTML file’s text from the server, it parses them procedurally, looking for references to external assets that must also be requested, and converts the HTML into a tree structure of HTML elements known as a Document Object Model, or DOM. Once that DOM structure is built, JavaScript methods can traverse and manipulate the elements in the document programmatically and CSS can visually style the elements however we like.

    The complexities of HTML parsing (and its variations across browsers) could fill a book. Lest it be ours, I will be brief: the important thing is getting a grasp on the fundamental order of operations when a browser parses and renders HTML.

    • CSS, for example, works best when all styles relevant to the initial page layout are loaded and parsed before an HTML document is rendered visually on a screen.
    • In contrast, JavaScript behavior can often be applied to page elements after they’re loaded and rendered.

    But both JavaScript and CSS present bumps on the critical path, blocking our page from showing while they load and execute. Let’s dig into this order of operations a bit.

    Rendering and blocking

    The quickest-to-load HTML document is one without extra external files, but it’s also not one you’ll commonly find. A typical HTML document references a slew of outside assets like CSS, JavaScript, fonts, and images.

    You can often spot CSS and JavaScript in the HTML document’s head as link and script elements, respectively. By default, browsers wait to render a page’s content until these assets finish loading and parsing, a behavior known as blocking (fig 3.3). By contrast, images are a non-blocking asset, as the browser won’t wait for an image to load before rendering a page.

    Diagram showing CSS and JavaScript blocking.
    Fig 3.3: Blocking CSS and JavaScript requests during page load.

    Despite its name, blocking rendering for CSS does help the user interface load consistently. If you load a page before its CSS is available, you’ll see an unstyled default page; when the CSS finishes loading and the browser applies it, the page content will reflow into the newly styled layout. This two-step process is called a flash of unstyled content, or FOUC, and it can be extremely jarring to users. So blocking page rendering until the CSS is ready is certainly desirable as long as the CSS loads in a short period of time—which isn’t always an easy goal to meet.

    Blocking for JavaScript, in contrast, almost always undermines the user experience; it persists largely because of a lingering JavaScript method called document.write, which injects HTML directly into the page at whatever location the browser happens to be parsing. It’s usually considered bad practice to use document.write now that better, more decoupled methods are available in JS, but document.write is still in use, particularly by scripts that embed advertisements. The biggest problem with document.write is that if it runs after a page finishes loading, it overwrites the entire document with the content it outputs. More like document.wrong, am I right? (I’m so sorry.) Unfortunately, a browser has no way of knowing whether a script it’s requesting contains a call to document.write, so the browser tends to play it safe and assume that it does. While blocking prevents a potential screen wipe, it also forces users to wait for scripts before they can access the page, even if those scripts wouldn’t have caused problems. Avoiding document.write altogether is one important step we can take to address this issue in JavaScript.
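    For illustration, here’s a minimal sketch of the decoupled style that replaces document.write: create a script element with DOM methods and append it, so nothing can be overwritten no matter when the code runs. The doc parameter stands in for the browser’s document object, and the injectScript name is just for this example:

```javascript
// A hypothetical helper that injects a script without document.write.
// Passing the document in as a parameter keeps the sketch testable
// outside a browser.
function injectScript( doc, src ){
    var script = doc.createElement( "script" );
    script.src = src;
    script.async = true; // hint that the script need not block rendering
    doc.head.appendChild( script );
    return script;
}
```

    In a browser, you’d call injectScript( document, 'ads.js' ), and the advertisement script would load without any risk of wiping the page.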

    In the next chapter, we’ll cover ways to load scripts that avoid this default blocking behavior and improve perceived performance as a result.

  • Blue Beanie Day 14: Toque ’em if You’ve Got ’em 

    On Sunday, November 30, web designers and developers across the globe will celebrate Blue Beanie Day 2014, wearing a blue beanie to show their support for web standards. Join in!

    “What’s Blue Beanie Day,” you may ask? Well, it’s possible you’ve seen it in years past: a host of avatars on Twitter and Facebook, with selfies galore, each sporting a little blue toque. Here’s the thing: each is a tribute to the hat that launched a thousand sites: the blue beanie worn by A List Apart’s own Jeffrey Zeldman in that infamous selfie, and that eventually emblazoned the cover of Zeldman’s Designing With Web Standards.

    But this isn’t a plug for a book, or for the man wearing the rather fetching hat: rather, sporting a blue chapeau is a reminder that web standards—standards like semantic markup, neatly separated styles, and DOM scripting—are responsible for much of the work we do today. In the pre-WaSP, pre-DWWS world, we were forced to build to the idiosyncrasies of each broken desktop browser—could you imagine anything like responsive web design without web standards? It’s true: we face a lot of challenges as the web moves beyond the desktop. But as wild and woolly as this multi-device version of the web is, it’d be significantly more challenging without the solid web standards support we enjoy today.

    So if web standards have made your life a little easier—and I know I couldn’t do my job without ’em—then upload a shot of yourself wearing a blue beanie, hat, or cap to any of these fine social media locations:

    And there’s no need to wait until November 30: if you’ve got a beanie-enabled shot of yourself, then post away!

  • Driving Phantom from Grunt 

    While building websites at Filament Group, there are a couple tools that consistently find their way into our workflow:

    • GruntJS is a JavaScript Task Runner. It runs on NodeJS and allows the user to easily concatenate and minify files, run unit tests, and perform many other tasks, from linting to minification of images.
    • PhantomJS is a headless (WebKit-based) web browser. A headless web browser renders a page without a visible window. Using this functionality, we can write code that would normally run in a browser, but see its results on the command line. This allows us to run scripts and even render snapshots of pages without having to open a browser and do it manually.

    Together, these tools allow us to get consistent feedback for our code, by further automating checks that would normally require opening a browser.

    For this example, we’re going to build a Grunt task that takes a screenshot of the pages we’re building (similar to Wraith, but far less advanced). There are multiple parts to making this work, so let’s break it down. First, we’ll write a PhantomJS script that renders a given page. Second, we’ll write a NodeJS function that calls this script. Finally, we’ll make a GruntJS task that calls that Node function. Fun!

    To get started, we need to make sure that PhantomJS is installed. Since we’re using Phantom from the context of a NodeJS application, a very easy way to install it is by using the NPM PhantomJS installer package. Installing Phantom in this manner allows us to make sure we have easy access to the path for the Phantom command while simultaneously having a local, project-specific version of it installed.

    To install locally: npm install phantomjs.

    Now, we need to write a script to give to PhantomJS that will render a given page. This script will take two arguments. The first is the URL of the page that needs to be opened. The second is the file name for the output. PhantomJS will open the page, and when the page has opened successfully, it will render the page as a PNG and then exit.

    var page = require( "webpage" ).create();
    var site = phantom.args[0],
        output = phantom.args[1];

    page.open( site, function( status ){
        if( status !== "success" ){
            phantom.exit( 1 );
        }
        page.render( output + ".png" );
        phantom.exit( 0 );
    });

    Let’s create a lib directory and save this file in it. We’ll call it screenshotter.js. We can test this quickly by running this command from our command line (in the same directory we installed phantom): ./node_modules/.bin/phantomjs lib/screenshotter.js https://www.google.com google. This should create a file in the same directory named google.png.

    Now that we have a PhantomJS script, let’s work on making this run from Node. PhantomJS is a completely different runtime than Node, so we need a way to communicate. Luckily, Node gives us an excellent library named child_process and in particular, a method from that library called execFile.

    If we look at the documentation for the execFile method, we can see that it takes up to four arguments. One is mandatory, the other three are optional. The first argument is the file or, in our case, the path to PhantomJS. For the other arguments, we’ll need to pass PhantomJS args (the URL and output from above), and we’ll also want to include our callback function—so we can make sure we grab any output or errors from running Phantom.

    var path = require( "path" );
    var execFile = require( "child_process" ).execFile;
    var phantomPath = require( "phantomjs" ).path;
    var phantomscript = path.resolve( path.join( __dirname, "screenshotter.js" ) );

    exports.takeShot = function( url, output, cb ){
        execFile( phantomPath, [
            phantomscript,
            url,
            output
        ],
        function( err, stdout, stderr ){
            if( err ){
                throw err;
            }
            if( stderr ){
                console.error( stderr );
            }
            if( stdout ){
                console.log( stdout );
            }
            if( cb ){
                cb();
            }
        });
    };

    Our example code above is written as a NodeJS module. It has a function that takes three parameters: the same URL and output parameters used in the PhantomJS script above, plus a callback function to run when the task has completed. It then calls execFile with three arguments. The first is the path to PhantomJS. The second is an array containing our passed-in parameters. The third is our callback function, which is called with three arguments: err, stdout, and stderr. err is the error thrown by Phantom if something bad happens within that script; stderr and stdout are the standard error and standard output streams. This lets us call our script as though it were a regular NodeJS function, which makes it perfect for a Grunt task. Let’s save it in lib/shot-wrapper.js.

    Now, for the Grunt task:

    var screenshot = require( "../lib/shot-wrapper" );

    grunt.registerMultiTask( 'screenshots', 'Use Grunt and PhantomJS to generate Screenshots of pages', function(){
        var done = this.async();
        // Merge task-specific and/or target-specific options with these defaults.
        var options = this.options({
            url: '',
            output: ''
        });

        screenshot.takeShot( options.url, options.output, function(){
            done();
        });
    });

    Let’s take a look at this piece by piece. First, we require the shot-wrapper library we built above. Then we create the screenshots task using grunt.registerMultiTask. Since the takeShot method is asynchronous, we create a done callback that lets Grunt know when the task is complete. The options object sets defaults for url and output in case they aren’t passed in (here, empty strings, which won’t work). Finally, we pass the options and the done callback into the takeShot method. Now, when somebody runs this Grunt task, your code will run.

    Let’s give it a try. Here’s an excerpt from my Gruntfile:

    screenshots: {
      default_options: {
        options: {
          url: 'http://www.alistapart.com/',
          output: 'ala'
        }
      }
    }

    An animated GIF of the screenshots task running.

    The task has run, so we’ll open the file produced:

    open ala.png

    And voilà: as you can see from this rather large image, we have a full-page screenshot of A List Apart’s homepage. (Note: you may notice that the web fonts are missing in the rendered image. That’s currently a known issue with PhantomJS.)

    Just imagine what you can do with your newfound power. Phantom and Grunt give you ample freedom to explore all sorts of new ways to enhance your development workflow. Go forth and explore!

    For more in-depth code and to see the way this works when building a project, check out the repository.

  • Matt Griffin on How We Work: Pricing the Web 

    I probably don’t have to tell you that pricing is slippery business. It requires a lot of perspective, experience, and luck (read: trial and error). There are a number of ways we can correlate monetary value to what we do, and each has its pros and cons.

    It may seem at first glance that pricing models begin and end in the proposal phase of a project. That pricing is simply a business negotiation. But whether we’re talking about design, development, or business methodologies, our processes affect our motivations, and influence outcomes—often throughout the entire project. We’ll be examining both client and agency motivations in our comparisons of pricing models, so you can judge whether those motivations will help you make better work with your clients.

    All of these pricing systems operate with the same set of variables: price, time, and scope. In some systems, such as hourly pricing, the variables are directly dependent on each other (e.g., if I work an hour, I get paid my hourly rate and deliver an hour’s worth of work). In others, like fixed price and value pricing, the relationships can be nonlinear (e.g., I am paid a sum of money to achieve some set of results, regardless of how much time I spend doing it).

    These dependencies tend to define each system’s inherent risk and potential for profit. And all the differences can get pretty bewildering. One person’s experience is hardly enough to understand them all well, so I’ve enlisted some friends from web agencies of various sizes to chime in about how they make things work.

    As with most things in life, there’s no perfect solution. But if you want to get paid, you have to do something! Enough gum-flapping, let’s take a look at some of the different ways that people are pricing web projects.

    Fixed price

    With fixed-price projects, you and the client agree up front on a cost for the entirety of the project. Many folks arrive at this number by estimating how many hours they think it would take them to do the project, and multiplying that by an hourly rate. That cost will be what the client pays, regardless of actual hours spent.
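    To make that arithmetic concrete, here’s a quick sketch with made-up numbers (the hours and rate are purely illustrative):

```javascript
// Hypothetical fixed-price estimate: estimated hours times an hourly rate.
var estimatedHours = 120;
var hourlyRate = 150;
var fixedPrice = estimatedHours * hourlyRate; // the client pays 18000 regardless

// If the team finishes in fewer hours, the effective rate rises...
var effectiveRate = fixedPrice / 100; // 180 per hour at 100 actual hours
// ...and if the work drags on, the effective rate falls instead.
```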

    Client motivation

    When the price of a project is fixed, the variable tends to become scope of work. This encourages clients to push for the maximum deliverables they can get for that cost. This can be addressed to a degree by agreeing on a time limit for the project, which keeps requests and scope changes from occurring in perpetuity.

    Agency motivation

    On the agency side, your motivation is to be as efficient as possible to maximize the results while reducing time spent. Less time + more money = greater profit.

    Pros

    Because you know exactly how much money is coming in, revenue is fairly predictable. And since revenue isn’t tied to the time you spend, profit is potentially greater than with a time-based model—especially when the cost is high and the timeline is short.

    Cons

    The same factors that provide the possibility of greater profit create the potential for greater loss. Defining exactly what a client will receive for their money becomes a high priority—and defining things well can be harder than it sounds.

    Eileen Webb, Director of Strategy and Livestock at webmeadow, provides some insight into how she defines scope with her clients:

    I like to define the project boundaries clearly by having a “What’s Not Included” section. This may be a listing of services you don’t offer, like SEO or hosting. It’s also a place to list features that you and the client discussed but decided against for this budget or phase. Defining what falls outside the scope is a good way to help everyone agree on what falls in it.

    Now, getting to this definition in the first place is—I probably don’t need to tell you—hard work. And hard work is something you should get paid for. Starting out with an initial discovery engagement is something nearly any project can benefit from, but for fixed-price projects it can be invaluable.

    Resourcing for a fixed-price project can also be hard to estimate, since scope is not necessarily easy to equate to effort and person-hours needed.

    But the primary difficulty with fixed price may be the innate conflict between a client’s motivation to ask for more, and an agency’s motivation to provide less. For a fixed-price project to be successful, this must be addressed clearly from the beginning. Remember that scope discussions are just that: discussions. More isn’t always better, and it’s our job to help keep everyone on the project focused on successful outcomes, not just greater quantities of deliverables.

    Hourly

    At its core, hourly pricing is pretty simple: you work an hour, you get paid for an hour. Hourly, like all time-based pricing, suggests that what’s being paid for is less a product than a service. You’re being paid for your time and expertise, rather than for a particular deliverable. Rob Harr, Technical Director at Sparkbox, explains how hourly projects tend to work for them:

    Since everything we do is hourly, the end of the job is when the client says we are done. This sometimes happens when there is still approved budget left, and other times when the budget is completely gone. Often times our clients come back for additional SOW’s to continue the work on the original project.

    Client motivation

    With hourly, clients are encouraged only to ask for work when that work appears to be worth the hourly cost. Since there’s no package deal, for each feature request or task they can ask themselves, “Is this worth spending my money on, or would I rather save it for something else?”

    Project delays are not a financial concern for the client, as no money is spent during this time.

    Agency motivation

    The more an agency works, the more they get paid. In its purest form, this leads to the agency simply wanting to work as much as possible. This can be limited by a few factors, including a budget cap, or not-to-exceed, on the project.

    Project delays are a major concern for the agency, as they’ll lose revenue during these periods.

    Pros

    Every hour a team member spends is paid for, so the risk of this model is very low. If a company is struggling with profitability, I’ve personally found that this is a great way to get back on track.

    Cons

    Unlike fixed-price models, you can only earn as much as you can work. This means that profit maxes out fairly quickly, and can only be increased by raising your hourly rate (which can only go as high as the market will bear) or expanding the team size.

    Because the agency is only paid when they work, this also means a big imbalance in how project delays affect both sides. Thus clients that aren’t in a big hurry to complete work—or have inefficient decision-making structures—may not worry about long delays that leave the agency financially vulnerable. This can be addressed somewhat by having conditions about what happens during delays (the client pays some sort of fee, or the project becomes disproportionately delayed so the agency can take on new work to fill the gap in their schedule). Even with these measures, however, delays will cause some kind of financial loss to the agency.

    Weekly or monthly

    Though similar to hourly in many ways, charging by weekly or monthly blocks has some distinct differences. With these models, the cost assumes that people work a certain number of hours per week or month, and the client is billed for that equivalent number of hours, regardless of whether the actual hours spent were more or less than assumed.

    Trent Walton, founder of Paravel, explains why they like this approach:

    Most of our clients operate in two-week or month-long sprints. For many projects, we’ll quote chunks of weeks or months to match. This alignment seems to make sense for both parties, and makes estimating scope and cost much easier.

    Client motivation

    Clients tend to want the agency to work as much as possible during the time period to get the maximum amount of work or value. This can be curbed by having a maximum number of hours per week that will be spent, or understanding limitations like no nights or weekends. Related to this, it’s in the client’s best interest to not let project work succumb to delays.

    Agency motivation

    On the agency side, we’re encouraged to be as efficient as possible to maximize results each week, while spending fewer hours accomplishing those tasks. As long as the results are comparable to what’s expected, this motivation tends not to result in conflict.

    At Bearded we’ve found that with weekly projects we spend, on average, the number of hours we bill for. Some weeks a little more, some a little less. But it seems to all come out in the wash.

    Pros

    Knowing that a time period is booked and paid for makes resourcing simple, and keeps the financial risk very low.

    Because the agency is paid the same amount every week or month, clients will tend to do whatever’s necessary to avoid any delays that are in their control. This completely removes the risk of the agency losing money when projects are held up, but also requires the agency to use a process that discourages delays. For instance, at Bearded, we’ve moved to a process that uses smaller, more frequent deliverables, so we can continue working while awaiting client feedback.

    Cons

    As with hourly, the agency’s profit is capped at the weekly or monthly rate they charge. To make more revenue, they’ll need to charge more for the same amount of work, or hire more people.

    Value pricing

    Value pricing is a method wherein the cost of the project is derived from the client’s perception of the value of the work. That cost may be a fixed price, or it may be a price that factors in payment based on the effect the work has (something closer to a royalty system).

    Dan Mall, founder of SuperFriendly, explains his take on value pricing using a fixed cost:

    I use a combination of value pricing with a little of cost-plus. I try my best to search for and talk about value before we get to dollar amounts. When my customers are able to make a fully informed price/value assessment, the need to justify prices has already been done, so I rarely have to defend my prices.

    Dan’s approach suggests that if a company stands to gain, say, millions of dollars from the work you do, then it doesn’t make sense for you to merely charge a few thousand. The value of your work to the company needs to be factored in, resulting in a proportionally larger fixed cost.

    Other takes on value pricing tie the cost of the project directly to the results of the work. This can be assessed using whatever metrics you agree on, such as changes in revenue, site traffic, or user acquisitions. This sort of value pricing lends itself to being used as an add-on to other systems; it could augment an hourly agreement just as easily as a fixed price one.

    It’s worth noting that none of the folks I talked to for this article have done this in practice, but the general approach is outlined in Jason Blumer’s article Pricing Strategy for Creatives.

    Client motivation

    This depends primarily on the other system that you’re using in conjunction with value pricing. However, if a client recognizes the tangible gain they expect from the outset, this will tend to focus their attention on how the work will influence those outcomes.

    Agency motivation

    When payment is tied to metrics, the agency will focus on work that they believe will positively affect those metrics. As with client motivations, the agency’s other motivations tend to follow from whichever system value pricing is layered on (fixed, hourly, weekly, or monthly).

    Pros

    Because of the nonlinear relationship between labor and revenue, this approach has the highest potential for profit. And as long as the base pricing is reasonable, it can also have very low financial risk.

    Cons

    Since value pricing is potentially connected to things outside your control, it’s also potentially complicated and unpredictable. If revenue is based on future performance metrics, then accurately determining what you’re owed requires knowledge of those metrics, and likely a little legwork on your part. There’s also a certain amount of risk in deferring that payment to a future date, and in the possibility that it never materializes at all. As long as the base pricing you use is enough to sustain the business on its own, that risk seems less worrisome.

    With value pricing, there’s also the need to assess the value of the work before agreeing on a price. Which is why—as with fixed-price projects—value-pricing projects often work well as a followup to an initial discovery engagement.

    Patty Toland and Todd Parker, partners and co-founders of Filament Group, explain their approach to an initial engagement:

    Most of the projects we engage in with clients involve fairly large-scale system design, much of which will be defined in detail over months. We provide high-level overall estimates of effort, time and cost based on our prior project work so they can get a sense of the overall potential commitment they’re looking at.

    If those estimates work with their goals, schedule and budget, we then agree to an initial engagement to set a direction, establish our working relationship, and create some tangible deliverables.

    With that initial engagement, we estimate the total amount of time in person-days we plan to spend to get to that (final) deliverable, and calculate the cost based on a standard hourly rate.

    It depends

    So what’s the best approach for you? Blimey, it depends! I’ve talked with many very smart, successful people who use very different takes on various approaches. Each approach has its benefits and its traps to watch for, and each seems to work better or worse for people depending on their personalities, predilections, and other working processes.

    Ultimately it’s up to you. Your hunches, experience, and probably a little experimentation will help you decide which method makes the most sense for you, your team, and your clients. But don’t be surprised if once you find a good system, you end up changing it down the road. As a business grows and evolves, the systems that work for it can, too.

    Now that we’ve talked about pricing methods, we’re ready to move on to something everyone’s really bad at: estimating! Stay tuned for that in part three of this series.

  • Destroying Your Enemies Through the Magic of Design 

    A note from the editors: We’re pleased to share an excerpt from Jenny Lam and Hillel Cooperman’s new book Making Things Special: Tech Design Leadership from the Trenches, available now. A List Apart readers can also enter to win a copy of the book.

    Hierarchical organizations large and small are rife with politics. In fact, the smaller the stakes, the more vicious they can be. Political organizations are ones where what things look like is just as important as, or more important than, what you actually do. Dealing with perceptions as well as ego and insecurity is part of dealing with human beings. This is who we are. And as soon as we create situations where there are winners and losers, we create politics. And fighting. In some organizations, regardless of how brilliant your design may be, the politics will kill your plans before they have a chance to really blossom. And that’s a shame.

    The single most important thing you can understand about navigating the gauntlet of organizational politics is the relative risks of saying no versus yes. Your job, your dream, your passion is to say “yes.” Yes to your product vision. Yes to your design. Yes to delighting customers. But the road is littered with opponents. These are people who will raise concerns about your proposals, reasonable-sounding concerns. Concerns that may or may not be genuine. Maybe they’re good thoughts to consider that have been offered in good faith, and maybe they’re just obstacles designed to trip you up and damage you as a competitor in the organization. If you suspect an opponent’s motivations are personal, you’ll never prove it. That only happens in the movies. In effect, their motivations are irrelevant. Genuine or jerky, your only remaining option is to deal with their issues at face value.

    But how?

    Before we answer, let’s pause for an anecdote.

    Years ago we worked on one of two teams in the same company that worked on competing projects. This happens often. The company’s leadership hopes competition fosters innovation, and people bringing forth their best ideas. The other team was huge and had been working on their project for years. There were smart and talented people on that team doing good work. They even had good design talent, but the team wasn’t design driven. They were technology driven. This is not to say that they didn’t think about customers. They did. It’s just that the high order bit was their technology choice, and then they did their best to design around those choices.

    Our team was small. We had decent ideas and were design led. Our team fashioned a high-fidelity prototype that illustrated our ideas. It was on rails, a glorified slide show. And it was gorgeous. The other team had code. We had beautiful images that moved.

    As things came to a head politically, we finally revealed our design to the other team. After the presentation, they looked like they’d been punched in the stomach. Even though they had code, we just had a better story. We had something inspiring. Their stuff was flat. And boring. Literally and metaphorically. And even though they were creative and smart, the genetics of their team had led them down an uninspiring path. They knew it. And so did the execs who saw both teams’ work.

    Within a week those execs tried to merge our teams. And when it was clear that we were culturally incompatible, their project was killed. Was our design work solely responsible for the end of their project? No. Was it one of the things that sent them over the edge? Without a doubt.

    Now let’s return to our discussion of how you can deal with the people who oppose your plans in your organization. Your first choice is to use the logic of your arguments, your personal charm, and maybe a little horse trading to get those folks on board. And in many cases that works. It’s always your best option. We’re big fans of working together harmoniously. But the larger the organization (and it doesn’t have to be all that large) the higher the odds that there will be some people where reasoned discussion and collaboration doesn’t work. Ever.

    Remember, the political economics of saying “no” in large organizations are so much better than saying “yes.” Saying “no” costs essentially nothing. You don’t need to prove anything. You’ll almost never be proven wrong for saying no. And the optics are great too. The person saying “yes” looks overly enthusiastic, while the person saying “no” in reasonable tones sounds like the grownup. The naysayer just has to raise reasonable doubt to save the company from wasting time and money on some “foolish and poorly thought out initiative.” However, saying “yes” is costly. You’re putting yourself out on a limb. You’re being specific. You’re opening yourself up to attack. You’re trying to do something.

    As a user experience design leader you have a secret advantage. It’s the thing that often overcomes every opponent, every craven naysayer. It’s the High Fidelity Visualization.

    What is the High Fidelity Visualization? It could be anything from a series of beautiful UI mockups, to a user experience prototype on rails, to a freeform prototype that the audience can try themselves, to a beautifully produced video showing customers using the prototype.

    There will always be “no” people. But “no” people rarely have counterproposals. And when they do, they’re usually vague or a set of yawn-inducing PowerPoint bullets. By definition, they don’t want to be out on a limb or they’d be subject to attack. So they keep things light on details. But when you show up with a High Fidelity Visualization, if you’ve done your job, and told a great story, everyone else in the room will fall in love with your plan. Decision makers will get excited. They’ll start defending your ideas against the naysayers. Emotion motivates them to become advocates for your plan, your story. And this is a good thing.

    But take note, we liken these visualizations to nuclear weapons. They’re incredibly powerful tools and can cause collateral damage. You’ve got to get the dosage just right. Sometimes you can do such a good job getting your company’s leadership on board with your ideas that now they bother you every week to find out why the product isn’t done yet. After all, that prototype looked essentially ready to ship, and you didn’t spend a lot of time in your pitch meeting talking about the smoke and mirrors you used to put it together.

    The point is this: with a beautifully executed High Fidelity Visualization that sets the right tone, you can neutralize the people in your organization who love to say “no.” This is your secret advantage as someone with vision, an ability to visualize your plan and bring it to life in people’s imagination, and the leadership skills to deliver on that vision. Tell the right story with your execution here and anyone who’s getting in your way will fall by the wayside.

    And for those of you who feel this is militaristic in tone, you’re right. Hierarchical organizations with more than ten people on the team invariably have a representative population of personality types — including people who will get in your way. If you really want to make something special and deliver it to customers, then you need to get the doubters on board or run them over. Partnering with the doubters is always preferable as long as it’s not at the expense of your ideas. But unfortunately, it’s not always possible. It’s not personal. It’s not about being a jerk. It’s not about beating your chest. It’s about making something great. And if you’re in an organization where people with limited vision and possibly political aims are forever stopping you from delivering something wonderful, you need to arm yourself and fight. Spending your time arguing endlessly with people so you can deliver a watered-down version of the great thing that resides in your head is a waste of your time.

    How do you know which feedback is killing your vision and which is making it better? Listen to everyone, open your mind, but trust your instincts. If you stick to your guns and fail, at least you’ll learn something. If you turn your ideas into some sort of compromise mishmash and you fail, you’ll never know exactly what caused the failure and you truly will have wasted your time.

    Good luck soldier.

  • UX for the Enterprise 

    Imagine this scenario. You’re hired to design a product that has a guaranteed audience of 50,000 users, right out of the gate. Your clients have a dedicated support staff with a completely predictable technology stack. Best of all, your work directly improves the quality of your users’ lives.

    That’s enterprise UX.

    Yes, those 50,000 people use your software because they don’t have a choice. And sure, that completely predictable technology stack is ten years out-of-date. But, despite its quirks, doing UX work for enterprise clients is an opportunity to spread good design to the industries that need it most.

    Enterprise UX is a catch-all term for work done for internal tools—software that’s used by employees, not consumers. Examples include:

    • HR portals
    • Inventory tracking apps
    • Content management systems
    • Intranet sites
    • Proprietary enterprise software

    Since switching from working with smaller clients to tackling the problems of the Fortune 500, I’ve fielded a lot of questions from designers mystified by my decision. Why choose to specialize in enterprise design when you could do more interesting work in leaner, more agile, t-shirt-friendly companies? Isn’t big business antithetical to design culture?

    The answer is: yes, often. Working with enterprise clients can be an exercise in frustration, filled with endless meetings and labyrinthine bureaucracy. It can also be immensely rewarding, with unique challenges and creatively satisfying work. As designers, we live to solve problems, and few problems are larger than those lurking in the inner depths of a global organization. After all, Fortune 500s tend to have a “just get it done” attitude toward internal tools, resulting in user experiences that aren’t well designed or tested. By giving those tools the same attention to experience that you give consumer-facing products, you can improve the lives of your users and support the organization’s values and brand.

    Why bother with enterprise work?

    Enterprise UX is often about solving ancillary problems by creating tools that facilitate an organization’s primary goals. These problems are rarely as compelling or visible as the goals they support, but they’re just as necessary to solve. A company might build the best-designed cars in the world, but it won’t matter if its quality-assurance process is hobbled by unusable software. Good design enables enterprises to do the work they were founded to do.

    Enterprise employees are also consumers, and they’ve come to expect consumer-level design in all the tools they use. Why shouldn’t a company’s inventory software or HR portal be as polished as Evernote, Pinterest, or Instagram? When a consumer app is poorly designed, the user can delete it. When an enterprise app is poorly designed, its users are stuck with it.

    The stakes can be enormously high. The sheer scale of enterprise clients magnifies the effects of good and bad design alike. Small inefficiencies in large organizations result in extra costs that are passed on to the end user in time spent, money lost, and frustration increased. Likewise, when an enterprise prioritizes user experience for its internal tools, it becomes a more effective organization; a recently released business index shows that design-driven companies outperformed the S&P average by 228% over the last ten years.

    A perfect example of the business value of enterprise UX is found in the article, “Calculating ROI on UX & Usability Projects”:

    …if you optimize the UX on a series of screens so that what was once a 5 minute task is now a 2.5 minute task, then you’ve increased a person’s productivity by 100%. That’s huge. HUGE. If the company has 100 phone agents who have an average salary of $40,000 + benefits (~$8,000) (+ an unknown amount for overhead), you could either release or retask those agents on other activities with a savings of $2,400,000/year. (half of 100 agents x $48,000).

    It’s simplified, but the point is dead-on. For a company with 100 phone agents, a single optimization could mean millions of dollars in savings. Imagine the impact on a company with thousands of employees, or tens of thousands.
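    The quoted arithmetic can be sketched as a tiny back-of-the-envelope calculation. This is only an illustration of the article’s example numbers (100 agents, a $40,000 salary plus roughly $8,000 in benefits, a 5-minute task cut to 2.5 minutes); the function name and structure are hypothetical, not part of any cited methodology.

    ```python
    def annual_savings(agents, fully_loaded_salary, old_task_min, new_task_min):
        """Estimate yearly savings when a routine task is made faster.

        Halving a task's duration frees up the equivalent of half the
        agents' time, so savings = (fraction of time saved) * agents * salary.
        """
        time_saved_fraction = (old_task_min - new_task_min) / old_task_min
        return time_saved_fraction * agents * fully_loaded_salary

    # The article's example figures:
    savings = annual_savings(agents=100, fully_loaded_salary=48_000,
                             old_task_min=5.0, new_task_min=2.5)
    print(f"${savings:,.0f}/year")  # → $2,400,000/year
    ```

    The same formula makes the scaling argument concrete: at 10,000 agents, the identical 50% speedup would be worth two orders of magnitude more.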

    We have an opportunity to set the tone in some of the largest industries on the planet. Many big organizations have been defined by engineering and business thinking, with any design being either incidental or unintentional. Now, as those companies wake up to the value of solid design, they have to contend with the years of cruft that have obscured their tools and processes. Design is essential to shedding the excess and building better, leaner, and more human organizations.

    Working on enterprise projects

    There’s no such thing as an average enterprise UX project. The variety of projects within even a single company can be dizzying. I’ve worked on sites with a million visitors in the first week, and apps that fewer than 12 people use in a year.

    Projects that would be iterative in the consumer space may be a one-off in the enterprise space, so it’s crucial to get things right the first time around. Further, due to cost, culture, and the immense hassle of rolling out updates to tens of thousands of employees, enterprise clients are often bogged down with wildly out-of-date solutions. We’ve heard of huge companies begging Microsoft to extend the lifespan of Windows XP; that’s the rule, not the exception.

    Designing internal tools for a Fortune 500 company requires adaptation, but it isn’t a seismic shift from the world of consumer-facing design. Though a set of universal rules governing enterprise UX might not exist, there are a few principles I wish I’d known when transitioning from working with smaller clients.

    Design for the end user, not the client

    As with many design jobs, the end users of your software probably aren’t the same people who commissioned it.

    In large organizations, the divide between the user and the client can be vast. The director of operations might commission an inventory app for warehouse personnel, or someone from IT might commission a reporting tool for the sales team. In an enterprise-scale bureaucracy, the clients in charge of UX projects are often in higher-level management roles. And while they typically have an invaluable grasp of the big picture, they may not completely realize the everyday needs of the people who will use the software.

    Conduct your stakeholder interviews to understand and agree on your client’s business goals, but don’t forget to gather user and empirical data too. Fortunately, that type of research is easier to do in an enterprise setting than in the consumer space. Corporations like to quantify things, so data on productivity and software use may already exist. And, unlike consumers who need an incentive to fill out a survey or participate in a usability study, enterprise users have an inherent investment in the end product—setting aside some time to answer your questions is part of their job.

    A successful enterprise UX project considers the users’ needs, the clients’ goals, and the organization’s priorities. The best user experience sits at the intersection of these concerns.

    Be an educator and advocate, but above all, be flexible

    Being a designer is as much a consultative role as a practical one; to justify our design decisions, we need to explain to clients our guiding principles and teach them the basics of good user experience. Otherwise, we’re nothing more than pixel-pushers.

    Most enterprise clients have their own procurement procedures and project management techniques that don’t jibe with a healthy UX workflow. Designers often find themselves needing to shoehorn their process into an existing structure, an exercise that can be frustrating if not approached properly.

    I was recently involved in redesigning a section of a large corporation’s website. My team was responsible for handling the visual design—the content was set, and a development partner had already been hired.

    Ordinarily, we prefer to have plenty of overlap between the design and development phases, to ensure that the live site matches the intentions of the design. However, the tight deadline and the client’s existing workflow made this impossible. Instead, we handed off the final mock-ups to the developers and hoped that everything was implemented without a hitch.

    We didn’t see the site again until a week before launch. Predictably, the soon-to-be-live site had numerous inconsistencies. Issues that would have been obvious with a glance from a designer—incorrect fonts, uneven margins, wrong colors—were left until the last minute to fix. The process provided ample room for the developers to do quality control (remember that ancient tech stack?), but not the designers.

    We wrote a list of crucial changes, ordered by priority, to bring the site in line with our design and the client’s goals. Many items were fixed before launch, and the client fast-tracked a second iteration to fix the rest. But none of those design issues would have launched in the first place had we insisted on more interaction between the designers and developers. Some good did come out of this challenge: we recommended the client reevaluate their design/development workflow requirements, explaining why the two processes needed to overlap. We also examined our own workflow to figure out how to make it more accommodating to the peculiarities of enterprise work—adding a postmortem phase, for instance, enables us to give feedback to a third-party developer while maintaining a tight timeline. If we were asking our clients to be flexible, we needed to be flexible too. Sure enough, the client offered us a greater opportunity to set the terms of the process on the next project.

    Needing to adapt to a new set of restrictions is an opportunity, not a hindrance. One of the most valuable things a designer can offer a large organization is insight into the design process and its importance. Design education and advocacy can extend beyond a single project, giving the client an understanding of how to better accommodate design thinking within the organization.

    Learn the culture, speak the language

    Designing internal tools for an organization requires an understanding of that organization’s culture, from the basic mindset to the quirks that make it unique.

    Corporate clients are often forced into short-term thinking, which can make it difficult to push longer-term design goals. When dealing with enterprise clients, remember their priorities: meeting a quota by the end of the quarter, exhausting a budget so they can secure the same amount next year, or improving a metric to keep the boss happy. Corporate clients are less concerned with design trends or UX best practices—they just want something that works for them. It’s best to frame design decisions around the client’s goals to sell them on your thinking.

    Of course, that’s easier said than done. It isn’t always obvious what the client cares about. Plenty of organizations pay lip service to values that haven’t really permeated the culture, making it hard to know what to aim for in the design process. It’s amazing how many enterprises describe themselves as “design-focused” or “innovation-driven” without anyone below the C-suite knowing what those terms mean.

    So how do we figure out what an enterprise client is really about?

    It takes some time, but one of the best ways is to pay attention to the language your clients use. Different organizations have different vocabularies, which reveal the way they think. You’ll likely encounter jargon, but your job is to listen—and help your clients translate that language into actionable goals. Do employees talk about “circling back” or “talking about this offline”? Structured communication may be important to that company. How about “value-add” or “low-hanging fruit”? Quick wins and return-on-investment are probably cornerstones of that organization’s culture.

    No client wants to learn design lingo just to be able to communicate with you, and corporate clients in particular are busy with a million other things. Learn their language so they don’t have to learn yours.

    Go ahead

    We designers live to solve problems, and enterprise organizations provide fertile ground. They present a different set of constraints than startups and smaller clients, and while some designers balk at the idea of their work being constricted by a bureaucracy, others remember that the best design flourishes within well-defined boundaries.

    Working on enterprise projects is something every UX designer should try. Who knows? You may just like it enough to stay.

  • Cultivating the Next Generation of Web Professionals 

    I’ve spent most of my career at institutions of higher education, and during that time, I have had the good fortune to work with several incredible students. Former interns are now LinkedIn connections working for television shows, book publishers, major websites, ad agencies, and PR firms, and the list of job titles and employers makes me proud. Along the way, I tried to give them interesting projects (when available), enthusiastic references (when merited), and helpful career advice (when requested).

    And despite their success, I feel like I fell short. I could have offered more to them.

    Mentoring opportunities, after all, aren’t limited to internships and official programs. There is a lot that we as individuals can do to serve as role models, ambassadors, and teachers to the web professionals of tomorrow.

    Skillsets will evolve and technologies will come and go, but we can create the digital experiences of the future today through the values and attitudes we instill in the next generation of web workers.

    Finding new layers of learning

    The web has matured significantly since it hijacked my career path back in college, and so have our understanding of and attitudes toward it. “Doing it right” calls for strategic skills like testing, measurement, and planning; interpersonal skills like negotiation, leadership, and collaboration; and technical skills in writing, coding, or design.

    But has the education of the next generation of web professionals matured accordingly? How much are they learning bricklaying versus architecture? This isn’t meant to be a condemnation of curriculums at colleges and universities, where we are beginning to see more courses, certificates, and even degree programs that reflect this approach. This is more an acknowledgement of the new nature of education nowadays—experiential, fluid, occasionally roundabout, and highly networked.

    We often talk about how the success of our work is determined by the strength of our relationships and our ability to work with people. This is what Jonathan Kahn has been talking about and working on with the Dare Conference. We need to extend that way of thinking to the relationships we build with each other, and, in particular, with the future professionals who will one day take our place at the client’s table.

    I’ve always cherished the thoughtfulness that our industry regularly displays, and how, despite serious concerns about sexism, diversity, and harassment, there is an overriding sense of justice and support. Within our profession, we have built a special community. Since our future colleagues are among these students, let’s welcome them into it.

    Bring your experience back to the classroom

    Future web professionals require connections to peers and leaders in the field and to enhanced learning experiences. We can build those connections by meeting students where they are: in their classrooms. What undergraduate or graduate programs offered in your area align with your skillset? Reach out to the relevant faculty—in journalism, public relations, computer science, human-computer interaction, graphic design, technical writing, and other departments—to see if they are looking for guest speakers.

    Brand and content strategist Margot Bloomstein has spoken to undergraduate classes about a half-dozen times, and invited top names in the field to speak to her own content strategy class at Columbia University. My Meet Content partner-in-crime, Rick Allen, teaches a course at Emerson College in Boston on electronic publishing, and he’s been kind enough to invite me to speak to his graduate students twice (and sometimes I think the experience is more rewarding for me than for them!).

    You can also reach out directly to college career centers. Amanda Costello, a content strategist at the University of Minnesota, has had success with this approach, working with them to organize and promote events where she can talk about her work with students who may have an interest in a web profession.

    If you catch the bug after a guest-lecturing stint, reach out to those programs or your local community college to see if they are looking for new adjunct faculty, and teach your own course. It’s a huge time commitment, to be sure, but teaching is a great way to approach your work with fresh eyes and maybe realize a thing or two you didn’t know before—while sharing your knowledge with an eager audience.

    Expand learning opportunities off-campus

    Invite students to the next local industry event. Hackathon? Content strategy meetup? UX book club? It’s all good. Work with professors teaching relevant courses to see if their students can get extra credit for attending, or maybe host a “student night” of lightning talks where they can talk about their research or perspectives on the field so far. Similar to Costello’s approach, send information about your professional networking event directly to career centers so they can promote it to students who may be interested.

    We can also make powerful connections outside the construct of a university setting. Karen McGrane recently wrote about how she pays forward the 30 minutes an academic whose name she can’t even recall gave her that helped steer her toward a graduate program and, eventually, a career.

    With that post echoing in my mind, I recently agreed to meet with a young woman who reached out to me via Twitter. She was intrigued by my job title, curious about how I got to where I am, and wondering what her next steps might be. We closed our 30-minute conversation over coffee on an Au Bon Pain patio with me promising to connect her to a former intern of mine, whom I had counseled as she struggled to find her place postgraduation and watched as she emerged confident with a rewarding job in her chosen field.

    I don’t know if that connection will help her, or if anything I said in those 30 minutes made sense, but if nothing else, I know that I helped reassure her that there are people in the industry who are willing to meet a complete stranger for 30 minutes on a Tuesday and talk shop. For someone just starting out professionally and looking to find her place, that’s significant.

    Make conferences more accessible for young attendees

    We are lucky to work in an industry with several opportunities for professional development, events where we can gather in person and learn from each other. We need to work harder to bring college students into this fold. There are two main obstacles: awareness and budget.

    Building the pathways to make students aware of conference opportunities (both for presenting and attending) is doable over time, but a tougher problem to solve is budget. The average college junior does not have the resources to pay a conference fee, let alone airfare and hotel. Within a university, a student may receive funding from the provost’s office or a dean’s special-projects fund to attend an academic conference. But what about professional events?

    As sponsorship dollars fly fast and furious around various events, let’s consider the possibility of offering scholarships to select student attendees or a discounted student rate, as some conferences (like An Event Apart) already do. In this vein, for two years running, Facebook has sponsored content strategy fellowships that fund three students’ attendance at the annual Confab Central content strategy conference, in addition to extending an opportunity to apply for a content strategy internship with the social-networking giant. Some conferences, like UXPA, organize a student volunteer program that helps staff the conference while providing a free conference experience (complete with networking opportunities) in return. If sponsorships and scholarships aren’t possible, conferences should work with colleges to allow attendance at events relevant to a student’s major to count as course credit.

    But what about taking that a step further and creating a professional development experience just for students? Two such initiatives are currently underway. One is the Center Centre at the Unicorn Institute, a user experience design-focused education project spearheaded by Jared Spool and Leslie Jensen-Inman. Also, the HighEdWeb conference has introduced the CrowdSource Summit, a sub-conference geared toward college students with the stated goal of providing them with a multidisciplinary, human perspective on web professions. (Full disclosure: I spoke at CrowdSource Summit in October.)

    While attendance is great, presenting is even better. In helping to organize Confab Higher Ed for the past two years, I am particularly proud of the fact that we have included sessions that feature not only student-generated communications efforts, but also teams with student copresenters. And offering those opportunities can yield results. Recently, I learned that one of last year’s student speakers, RIT’s Erin Supinka, landed a job as a social media manager at Dartmouth in part thanks to the recommendation of Dartmouth content strategist Sarah Maxell Crosby, who attended Supinka’s session. I hope to see this trend continue, and be echoed at other conferences.

    This increased access for students would have to go hand in hand with making conferences’ social activities less focused around drinking and creating more all-ages social events. A List Apart technical editor and front-end developer Anna Debenham recently wrote about this on the ALA blog, observing that all of the efforts I’ve outlined here would be for naught if we don’t address the social component as well. “The more young people we encourage to join the fold, the more we are excluding from these events,” she observed. (Debenham has even crafted some handy guidelines for event organizers.)

    Expand your interns’ horizons

    If you have interns at your company, don’t limit their involvement to tasks like research, answering emails, or bug fixes. Get them invested in the culture of your organization by discussing clients, projects, process, deliverables, and industry trends and challenges with them. Let them sit in on a kickoff meeting, client pitch, or deliverables presentation, and encourage them to share their ideas in whatever way is most appropriate. Earlier this year, Jim Ross wrote for UX Matters not only about how interns can get the most out of their opportunity, but also how companies can give interns enlightening, productive work experiences. The Boston-based web design firm Upstatement offers a development apprenticeship that seems like it would be one such position.

    The other day, I sat in on a weekly status call with a client. Our project manager called in our co-op student who had worked on a landing page design and asked her to explain to the client the different options and the reasoning behind them. The PM could have easily summarized the work, but instead she asked our co-op to represent her own work—which, I might add, the client liked.

    In addition, have interns talk to people in roles that are distinct from their skillset or comfort zone—programmers, project managers, IAs, UX specialists, content strategists, designers, you name it. These mini job-shadowing opportunities will help establish a well-rounded approach to web work.

    A couple of years ago, I wrote for Meet Content about how student workers can help support content strategy work (both from the staff perspective as the one managing the students and assigning them work, and from the student perspective of someone thinking about a career path and looking for paid work that will help advance them along). Last year, a design intern at Fastspot wrote glowingly on the company’s blog about how deeply involved she became in the agency’s process while working there. It’s within these purposeful, immersive early work experiences that students discover their true callings as professionals—as well as the things they don’t like, which is important too.

    Building the future

    In a 2010 HighEdWeb presentation, Dylan Wilbanks (then at the University of Washington) exhorted the audience not to let the politics of higher ed beat them down and make them bitter. “Love the web, love higher ed, love people,” he implored us.

    In thinking about the importance of mentorship, I am drawn back to Wilbanks’s words. We may love the work that we do, yes, but we also love our field and the people within it. This love is why I care not only about what the web will look like in five, 10, or 20 years, but also about our profession and our community.

    In an August post on The Pastry Box Project, Brad Frost reminded us of the importance of remaining self-aware as professionals, always asking ourselves why we do what we do and not just getting dragged along by the act of doing. “Understanding why we enjoy doing what we do better prepares us for whatever the future has in store,” he wrote. In short, we need to actively give a damn.

    The more we openly communicate about what drives us, the better off we, our colleagues, and our future colleagues will be. We use forums like this to debate and evolve our understanding both of the web and of ourselves as professionals, to everyone’s benefit.

    By the same token, because we care so damned much, we should be similarly engaged with the next wave of web professionals. We should work to cultivate their senses of passion and exploration, and their appreciation of a well-rounded approach to web work, so they can take the web places we’ve never dreamed it could go. Now that’s being future friendly.

  • Knowledge vs. Intelligence 

    About a week ago, I was running into major issues during development of one of my side projects. After a few nights working to resolve whatever was breaking, I was getting frustrated with my lack of progress.

    The next night, I was video chatting with Olivier Lacan, and we started discussing the problem. Since he’s a good friend, he suggested sharing my screen and helping me work through it. I was working in Laravel, the new era PHP framework, which Olivier has never worked with (nor does he work with PHP). But he’s intelligent and a great developer, so I quickly took him up on his offer.

    We pored through the codebase together—I walked him through the application and the framework, and he asked probing questions about what was happening internally. Since Olivier isn’t deeply familiar with Laravel, he asked different questions than I would have, and those questions led us to interesting parts of the framework that I wouldn’t have gotten to alone. After about an hour of debugging, we identified the root issue and fixed it.

    I’ve talked about “switch programming” before—trading computers with someone and working through each other’s issues separately—but this is something different. It’s more akin to traditional “rubber ducking,” except with a trusted, intelligent friend.

    The difference between knowledge and intelligence is key here. Knowledge is the collection of skills and information a person has acquired through experience. Intelligence is the ability to apply knowledge. Just because someone lacks knowledge of a particular subject doesn’t mean they can’t apply their intelligence to help solve problems.

    Knowledge is wonderful, but it fades as techniques and technologies come and go. Intelligence sustains. Its borders extend beyond any technique or technology, and that makes all the difference.

  • Rachel Andrew on the Business of Web Dev: Managing Feature Requests 

    I started my business as a web development consultancy, building sites for clients. As we have moved to become a product company since launching Perch, we’ve had to learn many things. Not least of those has been how to manage feature requests when the “owners” of what you are building number in the thousands rather than a single client.

    When you are building a site for a client, and they ask for a feature that will be complicated or time-consuming to build, or make the UI harder to use, you can push back on it. If they then insist on the addition and are happy to pay for it, you build it. After all, it’s their project. If it adds extra complexity, they are the ones who have to live with it.

    With a product used out of the box by thousands of customers, adding every suggestion is impossible. What seems intuitive to one user baffles another. Adding too many tiny features aimed at meeting very exact requirements soon leads to a complex and confusing UI and adds to the time it takes to get up to speed on the product. This is especially important with my own product, as one of our core values is simplicity. In this column I outline some of the key things we have learned while adding features to Perch over the last five and a half years.

    What problem are you trying to solve?

    People will tend to make very precise feature requests. What they are doing is offering a solution, rather than explaining a problem. Most developers will be familiar with being asked if they can “just add a button here” with no thought to the underlying requirements of that option. When you have a product, you can expect many such requests every day.

    Customers aren’t spending their spare time dreaming up ideas for your product. They ask for a feature because their project has a requirement, and so will propose a solution based on their use case at that time. To get past the specific, you need to get to the problem that the user is having. What are they trying to achieve by way of the solution they have proposed? By asking those questions you can find out the real need, and sometimes you can help them solve it right away, without having to add an extra feature.

    Moving from the specific to the general

    Once you have a problem outlined, and you have discovered the use case of something that is not possible or is only partly possible in your product, what should you do? It’s tempting to jump in and start coding, especially in the early days of a product. You start to worry: a customer has identified somewhere your product is lacking—what if they go away? At this point you need to put that anxiety to one side, and rather than react by immediately starting to code the new feature, decide how any addition fits into the goals for the product and the needs of the majority of customers.

    It is likely that if you have managed to define a more general use case, other people will have similar requirements. If you don’t know what those are yet, then add the feature to a list for consideration. At Perch many feature requests sit in our backlog as we collect more requests for similar features. We can then try and develop a feature that will solve the more general problem. It might be very different to the specific solutions suggested by those customers, but it solves problems they have all experienced.

    What will make the most difference to the most people?

    If you have a popular product, it is easy to feel overwhelmed by feature requests. What do you do when you have a large number of valid requests that you agree would be great additions? It can feel as if whatever you do you will let someone down.

    Sometimes feature requests have a natural order of dependencies—you need to add one feature to enable something else. However, quite often you can find yourself with a backlog of equally interesting, sought-after features. Which to develop first? I tend to make that call based on which of these features would help out the most customers. This approach also gives you a good response to the vocal proponent of a feature that is of use only to a few customers. You can explain that you are ordering requests based on the number of people who need the feature.

    Build for your “ideal”—not your noisiest—customers

    In particular, I want to build features useful to those customers who fit our “ideal customer” profile. Perch has always been aimed at the professional designer and developer market. It assumes, for example, that the person building the site knows how to write HTML. We have a significant number of people, however, who dearly wish to use Perch, but who are tied to a WYSIWYG website building tool and believe Perch should support that. They can be very vocal about their disappointment that we will not build tools into Perch for “non-coders,” implying that we are wrong in turning away all of this business.

    Supporting these customers through the software would make Perch a very different tool, one that would be far less appealing to the front-end developer and web designer audience we serve. When considering feature requests, we always come back to that audience. Which of these features would make the most difference to the most people in that group?

    Only 25 percent of people with a Perch license ever raise a support ticket or post to the forum. Only 10 percent do so more than once. The majority of our customers are happily using the product and buying licenses for new sites without ever speaking to us. Be careful to seek out the opinions of the happy majority—don’t move your product away from something they love and find useful due to a few noisy people.

    Be willing to say no

    While every feature should be considered, logged, and grouped with other similar requirements, it is important to remember that sometimes you do need to say no. The product needs to be owned by someone, a person or a team with the ability to decide that a feature shouldn’t be added.

    Keep in mind the core problems your product sets out to solve and the profile of your target customers when making decisions about features. By doing so, you create a filter for new ideas, and also a way of explaining your decisions to customers who may feel disappointed in your choice.

    Realize you are not your customer

    Like so many other products that have been launched by consultancies, we built Perch to scratch our own itch. Version 1 was very much the product we needed: a product for people who cared about web standards and structured content. We then had to be willing to learn from feedback. We had to accept that some of the things we thought we should decline were real requirements for the people we felt were an ideal fit for the product.

    I believe software should be opinionated. We continue to promote best practices and modern web standards through the implementation of our product. We do this even when those values aren’t seen as important by many of our customers, as they really are the heart of the product. By keeping those core values in mind, digging down to the problems rather than accepting the first solution, and listening to our key customers, we continue to move forward while maintaining the simplicity we aimed for at the start.

  • That Pixel Design is so Hot Right Now 

    There’s a certain comfort, and often inherent cool, in things categorized as “retro.” A ’69 Ford Mustang. A greaser pompadour. Betty White. Pixel design.

    It’s no secret that pixel art is experiencing a resurgence in the digital form by way of video games like Mojang’s Minecraft and Superbrothers’ Sword & Sworcery. The latter game afforded me some inspiration; from what I was seeing online, the style of pixel art it brought to the masses—and popularized in the tangible realm by artists like Michael Myers from drawsgood—increasingly defined the style, and constraints, pixel art was created under. “Ever wonder what the Star Wars characters would look like pixelized?” Elongated single-pixel limbs, very stylized body shape. “[x designer] shows us what The Avengers would look like in pixel form!” Elongated single-pixel limbs, very stylized body shape.

    This got me to thinking of the varied styles and creations of pixel desktop icons from the mid ’90s. During that timeframe, customizing your Mac’s interface was easy and open, and doing so spoke as much to personalization as it did to thumbing your nose at the beige PC towers that defined the norm. While control panels such as Kaleidoscope quickly skinned your entire UI with a single click, some of us dove into desktop iconography, crafting pixel-based mini-mosaics under ridiculous constraints: 256 colors on a 32x32 grid, made in the resource editing app ResEdit.

    The lost medium of pixel art

    If you’re under 30, you’re likely unaware of, or have very finite exposure to, the origins of pixel-based desktop icon design.

    ResEdit itself was rudimentary, yet incredibly robust. Last officially released by Apple in 1994, it was primarily a tool for developers to create and edit resources in the resource fork architecture Macs used to rely on. One such resource, “ICON,” was our focus. Pixel by pixel, we employed common practices as the means to incredibly disparate stylistic ends:

    • Manually dithering via finite tonal variations to simulate depth
    • Stacking pixel units of a shape’s outline conservatively, to avoid “jaggies” and smooth edges
    • Faux anti-aliasing of harsh edges via the six available non-alpha-transparent grey swatches

    From system-level icons to household objects to movie characters to original creations, a varied community of creators crafted downloadable icons that graced the desktops of millions of users the world over.

    With time, nostalgia and grey hairs increase in tandem. To the former, I had a thought: get the band back together.

    Eight icon examples from the 1990s
    1990s desktop icons at scale, and zoomed 400 percent. From left to right, by: Mathew Halpern, Justin Dauer, How Bowers, Brian Brasher, Gedeon Maheux, Ian Harrington, Søren Karstensen, Ilona Melis.

    In a world where icons were king…

    And lo, this is how The Dead Pixel Society came into being: a global collection of ’90s-era icon designers, reunited. We agreed on the general idea for what we wanted to accomplish pretty quickly: to create under the same archaic constraints. But what was our theme? A specific movie or TV show? Too limiting for all participants’ tastes. Finally, considering our “retro medium,” we arrived at an ultimate thought: what if we had still been designing icons all these years, and our tools had never evolved? What would those icons look like? This evolved into a mission statement:

    We as The Dead Pixel Society honor the humble pixel with icon creations we would have done had we continued designing these past 18 years, under the exact same archaic constraints: 256 colors, pixel-by-pixel, on a 32x32 canvas.

    Today, the first Dead Pixel Society collaborative gallery is complete and live, netting out at 72,704 pixels over 71 icons from 12 icon artists in three countries, created over the span of 90 days.

    Eight icon examples from 2014
    2014 Dead Pixel Society icons at scale, and zoomed 400 percent. From left to right, by: Mathew Halpern, Justin Dauer, How Bowers, Brian Brasher, Gedeon Maheux, Ian Harrington, Søren Karstensen, Ilona Melis.

    These icons aren’t just imitations—they’re a designer’s interpretation of the subject matter. Anyone can ⌘-C / ⌘-V a JPG into Photoshop, switch the color mode to Indexed, and call it a day. Free-handing original subject matter—or externally referencing source material—evolves the process from replication into illustration. Take Mathew Halpern’s “Grumpy Cat” icon, for example. His mastery of the medium is best appreciated via his speed art video, which gives a sense of how insanely iterative pixel-icon design is.

    Sharpen your pencil tool and join us

    The first Dead Pixel Society project was a litmus test: “Is this possible?” Can this finite group of icon designers come together, create new icons under near 20-year-old constraints, hit a deadline, and do good work—all in their spare time? It could have very much failed.

    Instead, they focused, shook off the rust, and—via Photoshop’s “Mac OS” color palette—once again crafted pure pixel perfection. Within a couple hours of launch, Apple icon designer Louie Mantia created a pixel version of his Garage Band icon. Tweets came in from around the world with dedications, creations, and the echoing of a desire to submit to the gallery.

    Which brings me to the next Dead Pixel Society collaboration (DPS2: Electric Pixealoo): “Versus.” That’s where you come in. For this theme, we’re opening the doors to new designers, and exploring multiple versions of a single icon. For example, how would my take on a Steve Jobs icon look versus Gedeon Maheux’s? (Spoiler: mine would be shittier).

    It takes some time getting adjusted to what you’re quite literally boxed into, so now’s the time to start acclimating to pixel claustrophobia. If you’re a seasoned pixel artist, the finite color palette and size constraints could prove an interesting challenge. Perhaps you’re a vector-focused icon designer; switching from bezels and paths to manual dithering and faux anti-aliasing may tickle your fancy.

    For more background to it all (spoken with silky-smooth voices, no less), direct your ears to The Big Web Show No. 121. While the contextual derivation of this iteration of pixel design is Grunge-era Mac desktop icons, the draw is the limitations—and blowing them out of the water. It’s about collaboration, and being humbled by the abilities of your peers.

    Creative outlets and passion projects like The Dead Pixel Society help us avoid burnout. They energize and refuel us. As we formalize round two, we would love to have you be part of it.

  • Overwhelmed by Code 

    I was recently chatting with a friend and he was talking about all the things he wanted to learn. I was exhausted just hearing the list and realized that I am either getting old or I am getting tired; I’m not sure which.

    There is a constant pressure to learn new things and keep up with all the latest ideas: new frameworks, new platforms, new ideas about how to write code—they just keep coming out. In addition, the ebb and flow of what is desired from a front-end developer keeps changing. It used to be that knowing CSS and HTML was enough; then jQuery came along, then responsive techniques, then Node.js, and then Angular, Ember, etc., etc., etc. That list, right there, it tires me out.

    So lately I’ve had to do some evaluating. What do I want to focus on? What do I love about the web? What do I actually want to learn, versus what I think I should learn? And to be honest, what I really like about the web isn’t always the sexy new hotness—it’s the bread and butter that makes sites easier for everyone to access and use. I love responsive design, I care about accessibility, and lately I’ve gotten really interested in performance as it pertains to CSS styles and load times.

    There is a lot of pressure out there: to learn new things, to spend all your time coding, to be the super developer. I now believe that to be impossible and unhealthy. It means you aren’t living a balanced life and it also means that you’re living under constant stress and pressure.

    So I’ve started devoting the time I have for learning new things to learning the things that I like, that matter to me, and hopefully that will show in my work and in my writing. It may not be sexy and it may not be the hottest thing on the web right now, but it’s still relevant and important to making a great site or application. So instead of feeling overwhelmed by code, maybe take a step back, evaluate what you actually enjoy learning, and focus on that.

  • Why Responsive Images Matter 

    For the first few years of my career I’d joke that I “type for a living.” That was selling myself short, though, I know—making websites is a complicated gig. It’s more accurate, I think, to say that I’m wrong for a living. I’ve been wrong about damn near everything about this job so far. I’m probably—no, definitely—wrong about plenty of things, as we speak.

    I’m spectacular at job interviews, before anyone asks.

    I should be more specific here: I’ve spent a good part of my career being wrong about matters of browsing context, but I don’t think I’m the only one. It hasn’t been all that long since the days of fixed-width websites, and the era of “looks best in Internet Explorer.” Back then, I was up to my neck in each of those industry-wide wrongnesses, sure that we were doing the right thing promoting the better browser and keeping apace with all the hottest new CRT monitor sizes. It sure felt like I was right, at those times. It seemed like there were plenty of reasons to think so.

    I’ve been wrong about context more recently than either of those—before web standards changed the way we think about browser support, and before responsive web design changed the way we think about our layouts. For a time—and with just as little realization of how wrong I was—I’d chosen a single familiar, comfortable context for the sites I’d build. I was building websites for my context: the browsing conditions that I was used to. I was doing my work on a fast computer and a high-speed internet connection—that’s what the web was, to me. I had the privilege of assuming high bandwidth and stable networks—I could assume that sending out a request would always result in something being sent back. But we can’t make assumptions about bandwidth that way, any more than we can make development decisions based on a cursory look around our offices, saying, “Everyone here has a pretty big display,” or, “Everyone here is using Firefox.”

    I’m not the only one who made this mistake, either: not long ago, a full 72 percent of responsive websites were sending the same amount of data to mobile and desktop users, while only about 6 percent of responsive sites were taking significant steps to tailor assets to mobile devices. Unfortunately, that last statistic doesn’t really track with reality: 71 percent of mobile users expect websites to load almost as fast as, or faster than, they do everywhere else.

    The people building the web have it the easiest on the web, and perhaps as a result: the average web page is now more than 1.8 megabytes, with images alone accounting for a full megabyte of that.

    That’s more than a case of us creating a minor inconvenience for ourselves. Building enormous websites means shifting the burden of our mistakes onto every single user that visits our site. It’s us saying that we’re willing to build something that isn’t for some users, because that’s most comfortable for us—no different from “best viewed in Internet Explorer 5.5” or “best viewed at 800x600,” but much more costly.

    That’s not what I want the web to be. That’s not what I want this job to be. The meaning I take from this gig doesn’t come from getting a div to show up in the right place—it comes from knowing that working just a little harder can mean that entire populations just setting foot on the web for the first time will be able to tap into the collected knowledge of the whole of mankind. That’s the philosophy that started the crusade for “responsive images.” Building massive, resource-heavy sites means excluding millions of users worldwide that have only ever known the web by way of feature phones or slightly better. These are users paying for every kilobyte they consume, already keeping tabs on which sites they need to avoid day-to-day because of the cost of visiting them, and not some nebulous hand-wavy “bandwidth cost” either—actual economic cost.

    If every single one of you were convinced that this is a big deal, it still wouldn’t be enough—there are too few of us, and the steps to solve these problems in our daily work aren’t as clear-cut as they need to be. This is something I’ve wanted solved at the browser level for a long time now. I want a feature we could make a part of our everyday workflow—something we all just do as a matter of course, baked right into HTML5.

    That solution is here, thanks to the efforts of developers like Eric Portis. In our latest issue, Eric’s “Responsive Images in Practice” forgoes the rocky history and web standards minutia involved in the search for a native “responsive images” solution and cuts right to what matters most: putting those solutions to use so we can build a better web for our users. Those users will never see any difference; they won’t care what combination of responsive image techniques we used or which use cases we needed to address. They’ll see images, same as they would before. What those users will notice is that the web feels faster.

    Responsive web design is still pretty new, in the grand scheme of things. We’re all still getting the hang of it, myself included. There are plenty more things for us to be wrong about, I’m sure, but I’m excited to find them with you all. Because every time we discover we’ve been wrong about some matter of context on the web, we find a way to fix it together. And the web gets a little stronger, a little faster, and a little more inclusive as a result.

  • The $PATH to Enlightenment 

    Open source software always involves a bit of tedious setup. While it may seem like it distracts from the end goal (solving problems using the tools), the setup process is often an opportunity to get more comfortable with one of the main tools of our trade: the command line.

    The command line is inherently spooky to many people—it’s the arcane technology wielded by “hackers” and “computer wizards” in popular culture. In reality, though, it isn’t that cool. It’s a set of ridiculously simple tools created by Bell Labs employees to accomplish mostly simple tasks in the 1970s. It’s about as “space-age” as your microwave oven.

    It’s also extremely useful—like going from building a house by hand to using power tools. And through a few concepts and metaphors, we can shine a light on the darkest corners of this command line.

    One of the most important of these concepts is the Path.

    Several front-end frameworks, CSS preprocessors, JavaScript libraries, and other web development tools rely on either Ruby or Node.js being installed on your machine. Bower is one such tool. Invariably, these tools will lead you to interact with the Path. That’s because the Path will need to be aware of all the tools you install for your development environment in order for your command line to function properly.

    Understanding how the Path works may feel like a step backward, but the more often you use command-line tools, the greater the chances the Path will cause you problems. Before you lose hours of your day—or start throwing heavy things at your screen—let’s walk through the basics of using the Path.

    A humble little variable

    $PATH, as denoted by the dollar-sign prefix and the shouty uppercase, is a Unix environment variable. What is stored inside this variable is a colon-delimited list of directory paths. Something like:

    /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin
    If you’re a variable-naming aficionado, you might wonder why it’s not named $PATHS, since it contains multiple paths. If I had to guess, the singular name probably refers to “the load path composed of multiple individual paths.” Let’s go with that.

    Now, if you’re curious which other kinds of environment variables exist on your system, you can type in the env command in your own command line prompt. Hit Enter and you will see a list of all the environment variables that currently exist.
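    The full env dump can be long. If you only care about the path-related entries, a quick filter using nothing more exotic than grep narrows it down:

```shell
# Show only the environment variables whose name or value mentions PATH
env | grep PATH
```

    On most systems this surfaces $PATH itself, plus relatives like $MANPATH if they happen to be set.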

    Since $PATH is a variable, it can be modified as you wish, on the fly. For instance, you could run this in your shell:

    $ export PATH=banana

    What does this do? Well, try to run the export command above in a new window inside your terminal or in whichever shell app you use, such as Terminal on OS X.

    Next, type any basic Unix command like ls (list directory contents). You’ll now see -bash: ls: command not found when ls used to work like a charm.

    This sneaky sabotage is useful because now we know that without the content inside our $PATH, shit just goes…bananas.

    But why? Because as many load paths do (including in programming languages and web frameworks like Rails), this Path determines what can be executed in your shell. If your shell can’t find anything to match the name you typed, it can’t run it.
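    What the shell does with $PATH can be sketched in a few lines. This is a simplification (real shells also check aliases, builtins, and a cached lookup table first), and the lookup function name here is my own invention, not a real command:

```shell
# A rough sketch of how a shell resolves a command name against $PATH.
lookup() (
  IFS=':'                        # split $PATH on colons
  for dir in $PATH; do
    if [ -x "$dir/$1" ]; then    # the first executable match wins
      printf '%s\n' "$dir/$1"
      exit 0
    fi
  done
  echo "-bash: $1: command not found" >&2
  exit 127                       # the shell's own "not found" status
)

lookup ls    # prints the first directory in $PATH containing an executable "ls"
```

    Run it with a name that isn’t on your Path (lookup banana, say) and you get the same “command not found” complaint your shell gives you.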

    Oh, by the way, just quit and restart your shell application in order to restore all your commands. This was a temporary sabotage. Just be careful to never save this inside your ~/.bash_profile. That would be really bad.

    A tale of so many binaries

    In Unix, some executable programs are called binaries. That’s honestly a pretty poor name since it focuses on their format instead of their function. When you write a Unix program to accomplish a task, you sometimes need to compile its source code before it can be executed. This compiling process creates the binary. Instead of using plain text (like source code), these files use some binary format to make instructions easier for a computer to process.

    Unix comes with multiple directories in which to store binaries. You can see which directories are searched by default, and in what order, by looking at the /etc/paths file.

    # the cat command can print the content of a file
    $ cat /etc/paths 

    The file contains one directory per line. The paths are listed in a meaningful order. When a binary is found in one path, it is loaded. If a binary with the same name is found in another path, it is ignored. Therefore, paths listed earlier take precedence over paths listed later.
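    That precedence is easy to verify with a throwaway experiment. Here is a sketch using two temporary directories and a dummy hello script (both names are mine, nothing on your system is touched):

```shell
# Two directories, each holding an executable named "hello"
demo=$(mktemp -d)
mkdir -p "$demo/first" "$demo/second"
printf '#!/bin/sh\necho first\n'  > "$demo/first/hello"
printf '#!/bin/sh\necho second\n' > "$demo/second/hello"
chmod +x "$demo/first/hello" "$demo/second/hello"

# Prepend both; the earlier entry shadows the later one
PATH="$demo/first:$demo/second:$PATH"
hello    # runs the copy in "first"; the one in "second" is never consulted
```

    Swap the order of the two directories in the PATH assignment and the other copy wins, which is exactly the behavior /etc/paths relies on.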

    This is why it’s common to have problems when trying to install a binary for something that already exists on your system. In the case of OS X, if you try to install a different version of Git than the one that comes with the system, you’ll run into such an issue. That’s a bummer because Git 2.0 is really nice.

    If I cd (change directory) into /usr/bin—a common directory to store binaries—and run ls, I receive more than 1,000 results. That’s not really helpful. That said, if I use grep with ls | grep git instead, I can filter only the results of the ls command that contain git.

    $ ls | grep git 

    Sure enough, there was a binary for Git inside of /usr/bin. A clean OS X installation should return /usr/bin/git when you run which git. Here’s what I see instead:

    $ which git
    /usr/local/bin/git

    Why is mine different, then? We can have an even better idea of what’s going on by using the -a option when using which:

    $ which -a git
    /usr/local/bin/git
    /usr/bin/git

    This tells me that there are two versions of Git installed on my system. Only the first one is used when I execute git commands on my command line.

    Changing paths

    Using a package manager for OS X called Homebrew, I installed my own version of Git because I like to have control over the tools I use every day and update them when I feel like it. I could update the system-installed Git from OS X, but I have no idea what other binaries or apps depend on it.

    We saw that binary files are looked up depending on the order stored in a file called /etc/paths, so why not change that order?

    Inside of the /etc/paths file, I can see that the /usr/local/bin folder in which my Homebrew-installed version of Git is located comes last. This means the git binary inside /usr/bin will take precedence over it, and my fancy new version of Git will be ignored. That’s no good.

    Now, you could try to modify the order in /etc/paths so that it suits your needs by putting /usr/local/bin at the very top. The Homebrew-installed version of Git would then load first. But despite how many times you see this advice repeated in Stack Overflow discussions, don’t do it. Ever. Configurations stored in /etc/ affect the entire system. They’re not there to be changed by individual users (yes, even if you’re the only one using your machine), and you could very well cause some unforeseen issues by tinkering in there. For instance, some utility used by OS X could be relying on the original order of /etc/paths.

    Instead, you should modify the $PATH in your environment, using your .bash_profile—the one stored in /Users/yourusername/.bash_profile.

    All you need to do to ensure /usr/local/bin is looked into first is to include the following in your .bash_profile:

    # inside /Users/olivierlacan/.bash_profile
    export PATH=/usr/local/bin:$PATH

    This exports a new $PATH environment variable by taking the existing one and simply prepending the /usr/local/bin path on the left of all other paths. After you save your ~/.bash_profile and restart your shell, this is what you should see when you call echo on the $PATH:

    $ echo $PATH
    /usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin

    As you can see, /usr/local/bin is mentioned twice in the $PATH, and that’s fine. Since it’s mentioned first, the binaries it contains are found on the first pass, and the duplicate entry at the end is simply never reached for those names. I honestly wish there were a safe and simple way to change the order of paths, but most solutions I’m aware of are a bit too complex. You could always override the default $PATH altogether, but that assumes you know exactly what you’re doing and which paths to include.
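    The prepend-and-shadow behavior is easy to verify in a throwaway shell. A minimal sketch (the paths here are illustrative, not your actual configuration):

    ```shell
    # Simulate the system default order, then prepend as .bash_profile does.
    PATH="/usr/bin:/bin:/usr/local/bin"   # illustrative default order
    PATH="/usr/local/bin:$PATH"           # the export from .bash_profile
    echo "$PATH"
    # /usr/local/bin:/usr/bin:/bin:/usr/local/bin
    ```

    The duplicate at the end is harmless: lookup stops at the first match.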

    A fork in the path

    Now that you’ve changed the $PATH to your liking, you can check that the proper binary is being called when you use the git command:

    $ which git
    /usr/local/bin/git
    $ git --version
    git version 2.0.0
    $ /usr/bin/git --version
    git version (Apple Git-48)

    There you go. Git 2.0.0 (the Homebrew-installed version) is now the one answering git commands, and the Apple-installed version recedes in the background. If you’d rather not use git 2.0.0, you can simply uninstall it and the default version will take over seamlessly.

    Protect your path

    A host of utilities for developers (and designers) will automatically inject code into your .bash_profile upon installation. Often they don’t even mention it to you, so if you find odd paths listed in your profile, that may explain why loading a new session (which happens when you open a new shell window or tab) takes more time than it should: a bloated $PATH might take a while to load.

    Here’s my path today:


    It’s a little hard to read, so I tend to break it into lines. You can do this easily with the tr command (translate characters):

    $ echo $PATH | tr ':' '\n'

    There’s a lot of stuff going on here, but it’s much easier to understand with some verticality. Try it out, and if you don’t know why one of those lines is in your $PATH, make it your goal to figure it out. You might just learn something useful.
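    For instance, with a hypothetical three-entry path standing in for your real one:

    ```shell
    SAMPLE_PATH="/usr/local/bin:/usr/bin:/bin"   # stand-in for your real $PATH
    echo "$SAMPLE_PATH" | tr ':' '\n'
    # /usr/local/bin
    # /usr/bin
    # /bin
    ```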

    Being more aware of your $PATH and how it functions may not seem like the most tangible piece of knowledge. Yet, as a web craftsperson you’ll likely have to interact with command line tools while you work—and someday, something may go wrong with one of these tools. Now that you know what your Path is, what it looks like when it’s clean, how to modify it properly, and how to check that it’s aware of your tools, there’s a good chance you’ll spend minutes instead of hours to get back on your own path: the one where you build things for people.

  • Responsive Images in Practice 
    The devil has put a penalty on all things we enjoy in life.
    Albert Einstein

    Sixty-two percent of the weight of the web is images, and we’re serving more image bytes every day. That would be peachy if all of those bytes were being put to good use. But on small or low-resolution screens, most of that data is waste.

    Why? Even though the web was designed to be accessed by everyone, via anything, it was only recently that the device landscape diversified enough to force an industry-wide movement toward responsive design. When we design responsively our content elegantly and efficiently flows into any device. All of our content, that is, except for bitmaps. Bitmap images are resolution-fixed. And their vessel—the venerable img with its sadly single src—affords no adaptation.

    Faced with a Sophie’s choice—whether to make their pages fuzzy for some or slow for all—most designers choose the latter, sending images meant to fill the largest, highest-resolution screens to everybody. Thus, waste.

    But! After three years of debate, a few new pieces of markup have emerged to solve the responsive images problem:

    • srcset
    • sizes
    • picture
    • and our old friend source (borrowed from audio and video)

    These new elements and attributes allow us to mark up multiple, alternate sources, and serve each client the source that suits it best. They’ve made their way into the official specs and their first full implementation—in Chrome 38—shipped in September. With elegant fallbacks and a polyfill to bridge the gap, we can and should implement responsive images now. So, let’s!

    Let’s take an existing web page and make its images responsive. We’ll do so in three passes, applying each piece of the new markup in turn:

    1. We’ll ensure that our images scale efficiently with srcset and sizes.
    2. We’ll art direct our images with picture and source media.
    3. We’ll supply an alternate image format using picture and source type.

    In the process we’ll see firsthand the dramatic performance gains that the new features enable.

    The status quo

    I guess I don’t so much mind being old, as I mind being fat and old.
    Benjamin Franklin (or was it Peter Gabriel?)

    We take as our subject a little web page about crazy quilts. It’s a simple, responsive page. There isn’t much to get in the way of its primary content: giant images (of quilts!). We want to show both the overall design of each quilt and as much intricate detail as possible. So, for each, we present two images:

    1. the whole quilt, fit to the paragraph width
    2. a detail that fills 100 percent of the viewport width

    How would we size and mark up our images without the new markup?

    First up: the whole quilts. To ensure that they’ll always look sharp, we need to figure out their largest-possible layout size. Here’s the relevant CSS:

    * {
    	box-sizing: border-box;
    }
    body {
    	font-size: 1.25em;
    }
    figure {
    	padding: 0 1em;
    	max-width: 33em;
    }
    img { 
    	display: block;
    	width: 100%;
    }

    We can calculate the img’s largest-possible display width by taking the figure’s max-width, subtracting its padding, and converting ems to pixels:

      100% <img> width
    x ( 33em <figure> max-width
       - 2em <figure> padding )
    x 1.25em <body> font-size
    x 16px default font-size
    = 620px

    Or, we can cheat by making the window really big and peeking at the dev tools:

    (I prefer the second method.)

    Either way we arrive at a maximum, full-quilt img display width of 620px. We’ll render our source images at twice that to accommodate 2x screens: 1,240 pixels wide.
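    That arithmetic is easy to double-check on the command line (the CSS values are the ones shown above):

    ```shell
    # (figure max-width - padding) ems x body font-size x browser default px
    awk 'BEGIN { w = (33 - 2) * 1.25 * 16; print w "px at 1x, " w * 2 "px at 2x" }'
    # 620px at 1x, 1240px at 2x
    ```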

    But what to do about our detail images? They expand to fill the whole viewport, whose size has no fixed upper limit. So let’s pick something big-ish with a standard-y feel to it and render them at oh, say, up to 1,920 pixels wide.

    When we render our images at those sizes our status-quo page weighs in at a hefty 3.5MB. All but 5.7kB of that is images. We can intuit that many of those image bytes constitute invisible overhead when delivered to small, low-resolution screens—but how many? Let’s get to work.

    First pass: scaling with srcset and sizes

    Tetherball with a tennis ball for his shoelaces

    Naturally adapt to have more than two faces

    Kool AD, Dum Diary

    The first problem we’ll tackle: getting our images to scale efficiently across varying viewport widths and screen resolutions. We’ll offer up multiple resolutions of our image, so that we can selectively send giant sources to giant and/or high-resolution screens and smaller versions to everyone else. How? With srcset.

    Here’s one of our full-viewport-width detail images:

    <img src="quilt_2-detail.jpg"
    	alt="Detail of the above quilt, highlighting the embroidery and exotic stitchwork." />

    quilt_2-detail.jpg measures 1,920 pixels wide. Let’s render two smaller versions to go along with it and mark them up like so:

    <img src="quilt_2/detail/medium.jpg"
    	srcset="quilt_2/detail/large.jpg  1920w, 
    	        quilt_2/detail/medium.jpg  960w,
    	        quilt_2/detail/small.jpg   480w"
    	alt="Detail of the above quilt, highlighting the embroidery and exotic stitchwork.">

    The first thing to note about this img is that it still has a src, which will load in browsers that don’t support the new syntax.

    For more capable clients, we’ve added something new: a srcset attribute, which contains a comma-separated list of resource URLs. After each URL we include a “width descriptor,” which specifies each image’s pixel width. Is your image 1024 x 768? Stick a 1024w after its URL in srcset. srcset-aware browsers use these pixel widths and everything else that they know about the current browsing environment to pick a source to load out of the set.

    How do they choose? Here’s my favorite thing about srcset: we don’t know! We can’t know. The picking logic has been left intentionally unspecified.

    The first proposed solutions to the responsive image problem attempted to give authors more control. We would be in charge, constructing exhaustive sets of media queries—contingency plans listing every combination of screen size and resolution, with a source custom-tailored for each.

    srcset saves us from ourselves. Fine-grained control is still available when we need it (more on that later), but most of the time we’re better off handing over the keys and letting the browser decide. Browsers have a wealth of knowledge about a person’s screen, viewport, connection, and preferences. By ceding control—by describing our images rather than prescribing specific sources for myriad destinations—we allow the browser to bring that knowledge to bear. We get better (future-friendly!) functionality from far less code.

    There is, however, a catch: picking a sensible source requires knowing the image’s layout size. But we can’t ask browsers to delay choosing until the page’s HTML, CSS, and JavaScript have all been loaded and parsed. So we need to give browsers an estimate of the image’s display width using another new attribute: sizes.

    How have I managed to hide this inconvenient truth from you until now? The detail images on our example page are a special case. They occupy the full width of the viewport—100vw—which just so happens to be the default sizes value. Our full-quilt images, however, are fit to the paragraph width and often occupy significantly less real estate. It behooves us to tell the browser exactly how wide they’ll be with sizes.

    sizes takes CSS lengths. So:

    sizes="100px"
    ...says to the browser: this image will display at a fixed width of 100px. Easy!

    Our example is more complex. While the quilt imgs are styled with a simple width: 100% rule, the figures that house them have a max-width of 33em.

    Luckily, sizes lets us do two things:

    1. It lets us supply multiple lengths in a comma-separated list.
    2. It lets us attach media conditions to lengths.

    Like this:

    sizes="(min-width: 33em) 33em, 100vw"

    That says: is the viewport wider than 33em? This image will be 33em wide. Otherwise, it’ll be 100vw.

    That’s close to what we need, but won’t quite cut it. The relative nature of ems makes our example tricky. Our page’s body has a font-size of 1.25em, so “1em” in the context of our figure’s CSS will be 1.25 x the browser’s default font size. But within media conditions (and therefore, within sizes), an em is always equal to the default font size. Some multiplication by 1.25 is in order: 1.25 x 33 = 41.25.

    sizes="(min-width: 41.25em) 41.25em,
           100vw"

    That captures the width of our quilts pretty well, and frankly, it’s probably good enough. It’s 100 percent acceptable for sizes to provide the browser with a rough estimate of the img’s layout width; often, trading a tiny amount of precision for big gains in readability and maintainability is the right choice. That said, let’s go ahead and make our example exact by factoring in the em of padding on either side of the figure: 2 sides x 1.25 media-condition-ems each = 2.5ems of padding to account for.

    <img src="quilt_3/medium.jpg"
    	srcset="quilt_3/large.jpg  1240w, 
    	        quilt_3/medium.jpg  620w,
    	        quilt_3/small.jpg   310w"
    	sizes="(min-width: 41.25em) 38.75em,
    	       calc(100vw - 2.5em)"
    	alt="A crazy quilt whose irregular fabric scraps are fit into a lattice of diamonds." />

    Let’s review what we’ve done here. We’ve supplied the browser with large, medium, and small versions of our image using srcset and given their pixel widths using w descriptors. And we’ve told the browser how much layout real estate the images will occupy via sizes.

    If this were a simple example, we could have given the browser a single CSS length like sizes="100px" or sizes="50vw". But we haven’t been so lucky. We had to give the browser two CSS lengths and state that the first length only applies when a certain media condition is true.

    Thankfully, all of that work wasn’t in vain. Using srcset and sizes, we’ve given the browser everything that it needs to pick a source. Once it knows the pixel widths of the sources and the layout width of the img, the browser calculates the ratio of source-to-layout width for each source. So, say sizes returns 620px. A 620w source would have 1x the img’s px. A 1240w source would have 2x. 310w? 0.5x. The browser figures out those ratios and then picks whichever source it pleases.
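    The density arithmetic can be sketched like this (this is only the ratio calculation; the actual source-picking logic is, as noted, intentionally unspecified):

    ```shell
    # width descriptor / layout width = density ratio for each candidate source
    awk 'BEGIN {
      split("1240 620 310", d, " ")
      layout = 620                  # assumed layout width resolved from sizes
      for (i = 1; i <= 3; i++) printf "%dw -> %.1fx\n", d[i], d[i] / layout
    }'
    # 1240w -> 2.0x
    # 620w -> 1.0x
    # 310w -> 0.5x
    ```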

    It’s worth noting that the spec allows you to supply ratios directly and that sources without a descriptor get assigned a default ratio of 1x, allowing you to write markup like this:

    <img src="standard.jpg" srcset="retina.jpg 2x, super-retina.jpg 3x" />

    That’s a nice, compact way to supply hi-DPI imagery. But! It only works for fixed-width images. All of the images on our crazy-quilts page are fluid, so this is the last we’ll hear about x descriptors.

    Measuring up

    Now that we’ve rewritten our crazy-quilts page using srcset and sizes, what have we gained, in terms of performance?

    Our page’s weight is now (gloriously!) responsive to browsing conditions. It varies, so we can’t represent it with a single number. I reloaded the page a bunch in Chrome and charted its weight across a range of viewport widths:

    The flat, gray line up top represents the status-quo weight of 3.5MB. The thick (1x screen) and thin (2x) green lines represent the weight of our upgraded srcset’d and size’d page at every viewport width between 320px and 1280px.

    On 2x, 320px-wide screens, we’ve cut our page’s weight by two-thirds—before, the page totaled 3.5MB; now we’re only sending 1.1MB over the wire. On 320px, 1x screens, our page is less than a tenth the weight it once was: 306kB.

    From there, the byte sizes stair-step their way up as we load larger sources to fill larger viewports. On 2x devices we take a significant jump at viewport widths of ~350px and are back to the status-quo weight after 480px. On 1x screens, the savings are dramatic; we’re saving 70 to 80 percent of the original page’s weight until we pass 960px. There, we top out with a page that’s still ~40 percent smaller than what we started with.

    These sorts of reductions—40 percent, 70 percent, 90 percent—should stop you in your tracks. We’re trimming nearly two and a half megabytes off of every Retina iPhone load. Measure that in milliseconds or multiply it by thousands of pageviews, and you’ll see what all of the fuss is about.
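    For the curious, the roughly 90 percent figure falls straight out of the measurements above (306kB versus the 3.5MB status quo):

    ```shell
    awk 'BEGIN { printf "savings: %.0f%%\n", (1 - 306 / 3500) * 100 }'
    # savings: 91%
    ```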

    Second pass: picture and art direction

    srcset if you’re lazy, picture if you’re crazy™
    Mat Marquis

    So, for images that simply need to scale, we list our sources and their pixel widths in srcset, let the browser know how wide the img will display with sizes, and let go of our foolish desire for control. But! There will be times when we want to adapt our images in ways that go beyond scaling. When we do, we need to snatch some of that source-picking control right back. Enter picture.

    Our detail images have a wide aspect ratio: 16:9. On large screens they look great, but on a phone they’re tiny. The stitching and embroidery that the details should show off are too small to make out.

    It would be nice if we could “zoom in” on small screens, presenting a tighter, taller crop.

    This kind of thing—tailoring image content to fit specific environments—is called “art direction.” Any time we crop or otherwise alter an image to fit a breakpoint (rather than simply resizing the whole thing), we’re art directing.

    If we included zoomed-in crops in a srcset, there’s no telling when they’d get picked and when they wouldn’t. Using picture and source media, we can make our wishes explicit: only load the wide, rectangular crops when the viewport is wider than 36em. On smaller viewports, always load the squares.

    <picture>
    	<!-- 16:9 crop -->
    	<source
    		media="(min-width: 36em)"
    		srcset="quilt_2/detail/large.jpg  1920w,
    		        quilt_2/detail/medium.jpg  960w,
    		        quilt_2/detail/small.jpg   480w" />
    	<!-- square crop -->
    	<source
    		srcset="quilt_2/square/large.jpg  822w,
    		        quilt_2/square/medium.jpg 640w,
    		        quilt_2/square/small.jpg  320w" />
    	<img src="quilt_2/square/medium.jpg"
    		alt="Detail of the above quilt, highlighting the embroidery and exotic stitchwork." />
    </picture>

    A picture element contains any number of source elements and one img. The browser goes over the picture’s sources until it finds one whose media attribute matches the current environment. It sends that matching source’s srcset to the img, which is still the element that we “see” on the page.

    Here’s a simpler case:

    <picture>
    	<source media="(orientation: landscape)" srcset="landscape.jpg" />
    	<img src="portrait.jpg" alt="A rad wolf." />
    </picture>

    On landscape-oriented viewports, landscape.jpg is fed to the img. When we’re in portrait (or if the browser doesn’t support picture) the img is left untouched, and portrait.jpg loads.

    This behavior can be a little surprising if you’re used to audio and video. Unlike those elements, picture is an invisible wrapper: a magical span that’s simply feeding its img a srcset.

    Another way to frame it: the img isn’t a fallback. We’re progressively enhancing the img by wrapping it in a picture.

    In practice, this means that any styles that we want to apply to our rendered image need to be set on the img, not on the picture. picture { width: 100% } does nothing. picture > img { width: 100% } does what you want.

    Here’s our crazy-quilts page with that pattern applied throughout. Keeping in mind that our aim in employing picture was to supply small-screened users with more (and more useful) pixels, let’s see how the performance stacks up:

    Not bad! We’re sending a few more bytes to small 1x screens. But for somewhat complicated reasons having to do with the sizes of our source images, we’ve actually extended the range of screen sizes that see savings at 2x. The savings on our first-pass page stopped at 480px on 2x screens, but after our second pass, they continue until we hit 700px.

    Our page now loads faster and looks better on smaller devices. And we’re not done with it yet.

    Third pass: multiple formats with source type

    The 25-year history of the web has been dominated by two bitmap formats: JPEG and GIF. It took PNGs a painful decade to join that exclusive club. New formats like WebP and JPEG XR are knocking at the door, promising developers superior compression and offering useful features like alpha channels and lossless modes. But due to img’s sadly single src, adoption has been slow—developers need near-universal support for a format before they can deploy it. No longer. picture makes offering multiple formats easy by following the same source type pattern established by audio and video:

    <picture>
    	<source type="image/svg+xml" srcset="logo.svg" />
    	<img src="logo.png" alt="RadWolf, Inc." />
    </picture>

    If the browser supports a source’s type, it will send that source’s srcset to the img.

    That’s a straightforward example, but when we layer source type-switching on top of our existing crazy-quilts page, say, to add WebP support, things get hairy (and repetitive):

    <picture>
    	<!-- 16:9 crop -->
    	<source
    		type="image/webp"
    		media="(min-width: 36em)"
    		srcset="quilt_2/detail/large.webp  1920w,
    		        quilt_2/detail/medium.webp  960w,
    		        quilt_2/detail/small.webp   480w" />
    	<source
    		media="(min-width: 36em)"
    		srcset="quilt_2/detail/large.jpg  1920w,
    		        quilt_2/detail/medium.jpg  960w,
    		        quilt_2/detail/small.jpg   480w" />
    	<!-- square crop -->
    	<source
    		type="image/webp"
    		srcset="quilt_2/square/large.webp   822w,
    		        quilt_2/square/medium.webp  640w,
    		        quilt_2/square/small.webp   320w" />
    	<source
    		srcset="quilt_2/square/large.jpg   822w,
    		        quilt_2/square/medium.jpg  640w,
    		        quilt_2/square/small.jpg   320w" />
    	<img src="quilt_2/square/medium.jpg"
    		alt="Detail of the above quilt, highlighting the embroidery and exotic stitchwork." />
    </picture>

    That’s a lot of code for one image. And we’re dealing with a large number of files now too: 12! Three resolutions, two formats, and two crops per image really add up. Everything we’ve gained in performance and functionality has come at a price paid in complexity upfront and maintainability down the road.

    Automation is your friend; if your page is going to include massive code blocks referencing numerous alternate versions of an image, you’d do well to avoid authoring everything by hand.

    So is knowing when enough is enough. I’ve thrown every tool in the spec at our example. That will almost never be prudent. Huge gains can be had by employing any one of the new features in isolation, and you should take a long, hard look at the complexities of layering them before committing to the kitchen-sink approach.

    That said, let’s take a look at what WebP can do for our quilts.

    An additional 25 to 30 percent savings on top of everything we’ve already achieved—not just at the low end, but across the board—certainly isn’t anything to sneeze at! My methodology here is in no way rigorous; your WebP performance may vary. The point is: new formats that provide significant benefits versus the JPEG/GIF/PNG status quo are already here, and will continue to arrive. picture and source type lower the barrier to entry, paving the way for image-format innovation forevermore.

    size the day

    This is the secret of perfection:

    When raw wood is carved, it becomes a tool;


    The perfect carpenter leaves no wood to be carved.

    27. Perfection, Tao Te Ching

    For years, we’ve known what’s been weighing down our responsive pages: images. Huge ones, specially catered to huge screens, which we’ve been sending to everyone. We’ve known how to fix this problem for a while too: let us send different sources to different clients. New markup allows us to do exactly that. srcset lets us offer multiple versions of an image to browsers, which, with a little help from sizes, pick the most appropriate source to load out of the bunch. picture and source let us step in and exert a bit more control, ensuring that certain sources will be picked based on either media queries or file type support.

    Together, these features let us mark up adaptable, flexible, responsive images. They let us send each of our users a source tailored to his or her device, enabling huge performance gains. Armed with a superb polyfill and an eye toward the future, developers should start using this markup now!