EW Resource

Newsfeeds

There's a huge range of newsfeeds on the Net delivering up-to-date information and content on a wide variety of subjects.

Here are just a few relating to web development.



A List Apart: The Full Feed
  • It Was Just A Thing 

    A little less than two months ago, I wrote about the most dangerous word in software development: just. A lot of assumptions hide behind that seemingly harmless word, but there’s another side to it.

    “It was just a thing we built to deploy our work to staging.”

    “It was just a little plugin we built to handle responsive tab sets.”

    “It was just a way to text a bunch of our friends at the same time.”

    Some of the best and most useful things we build have humble beginnings. Small side projects start with a sapling of an idea—something that can be built in a weekend, but will make our work a little easier, our lives a little better.

    We focus on solving a very specific problem, or fulfilling a very specific need. Once we start using the thing we’ve built, we realize its full potential. We refine our creation until it becomes something bigger and better. By building, using, and refining, we avoid the pitfalls of assumptions made by the harmful use of the word “just” that I warned about:

    Things change when something moves from concept to reality. As Dave Wiskus said on a recent episode of Debug, “everything changes when fingers hit glass.”

    But the people who build something shouldn’t be the only ones who shape its future. When Twitter was founded, it was just a way to text a bunch of friends at once. The way that people used Twitter in the early days helped determine its future. Retweets, @username mentions, and hashtags became official parts of Twitter because of those early usage patterns.

    Embrace the small, simple, focused start, and get something into people’s hands. Let usage patterns inform refinements, validate assumptions, and guide you to future success. It’s more than okay to start by building “just a thing”—in fact, I suggest it.

  • Before You Hire Designers 

    Before you hire a designer, set up the situation this person needs to be effective. Bringing any employee into an unprepared environment where they don’t have the tools or authority to succeed is unfair to them and a huge waste of your hard-earned money. It also burdens the other employees who aren’t sure what to do with this new person.

    A few years ago, I made plans with a friend for breakfast. She was late. When she finally got there, she apologized, saying she’d been cleaning up for the housecleaner.

    “Why in the world would you clean up for a housecleaner?!?” I asked.

    “So she can actually clean, you idiot.”

    This made no sense to me, but I let it go. Otherwise, we would’ve argued about it for hours. About a year later, I got busy enough with work that my house looked like it could star in an episode of Hoarders, so I hired a cleaner. After a few visits, I found myself cleaning up piles and random junk so that she could get to the stuff I actually wanted her to get to.

    I called my friend and said, “I get why you had to clean up for the cleaner now.”

    “I told you you were an idiot.”

    (My friends are great.)

    The moral of this story is you can’t drop a designer into your environment and expect them to succeed. You’ve got to clearly lay out your expectations, but you also have to set the stage so your designers come in and get to the stuff you need them to do.

    Introducing a new discipline to your workplace

    Let’s assume you don’t have a designer on staff. People have been going about their business and getting their work done, and now you’re introducing a designer. Even if your employees have been begging you to hire a designer, this creates a challenge. People are creatures of habit and comfort. As difficult as they claimed their jobs were without a designer, having one still means giving up control of things. This isn’t easy. All the complaining about having to do someone else’s job is about to turn into complaining about giving their work to someone else. People are awesome.

    A designer will absolutely change what your company produces, and they’ll also affect how your company operates. You’ll need to adjust your workflows for this new person, and be open to having them adjust those workflows once they arrive.

    Before you throw someone into the mix, sit the company down and explain why you’re hiring a designer, how the company benefits, and what the designer’s role and responsibilities are. Explain how adding this skill set to your group makes everyone’s job easier. (Including possibly going home earlier!) Thank them for going without a designer for so long. Talk to them about things that they no longer need to undertake because of the new designer. Tell them to expect some bumps as the designer gets integrated into the fold.

    Then back your designer up when those bumps occur.

    Your designer can’t do shit without support from the person up top. If their job is to go in and change the way people work, the way the product behaves, and the way people interact with each other (all of which design will do), that’s gonna ruffle a few feathers. When a colleague runs into your office and says, “The designer is changing things!” a well-placed “That’s exactly what I’m paying the designer to do” sets the perfect tone. Remember, designers aren’t out there doing it for their own well-being. They’re your representative.

    As tough as introducing a designer may be, it’s infinitely easier than introducing a designer into a workplace where a bad designer has been nesting. We’re talking industrial-sized smudge sticks. I once took a job where coworkers would walk to my desk and ask me to whip up signs for their yard sales. When I informed them that wasn’t my job, they replied that the previous designer always did that stuff. I reminded them that the previous designer got fired for not meeting his deadlines. Eventually, they stopped asking. Had I been more willing to bend to their requests, we would’ve forever established that designers are the people who make yard sale signs for coworkers.

    Clear the table of any shenanigans like that before your new designer starts. This message is much easier to deliver when it comes from you, so don’t pass it off to the new person.

    Understanding what designers are responsible for

    This may sound obvious: a designer is responsible for design, right? By design, I’m talking about not just how something looks, but also how it manifests the solution to the problem it solves. Remember that nice young designer who worked at a big company—the one who wasn’t invited to strategy meetings? By the time work got to him, the decisions were set down to the smallest details and all he did was execute. He wasn’t designing. He was executing on someone else’s design.

    In truth, he needed to assert himself. But this chapter is about you. Design is the solution to a problem, something you pay a professional to handle. A designer is, by definition, uniquely qualified to solve those problems; they’re trained to come up with solutions you may not even see. Your designer should champ at the bit to be involved in strategic discussions.

    Make sure to use your designer’s skill set completely. Make sure they’re involved in strategy discussions. Make sure they’re involved in solving the problem and not executing a solution that’s handed to them. Most of all, make sure they see this as part of their job. If they don’t, your design will only ever be as good as what people who aren’t designers think up.

    Giving designers the authority and space they need

    Just as it’s absolutely clear what authority your office manager, accountant, and engineers carry, make sure your company understands what authority your designer has. Let’s go ahead and extend the definition of authority to “things they own,” in the same way the bookkeeper owns the books and the engineer owns the code. (Yes, I get that technically you own it all. Work with me here.)

    Trust your designers. Give them the authority to make decisions they’re singularly qualified to make. Before you bring a designer into the company, decide what authority they have over parts of your workflow or product. Do they have the last call on user-interface decisions? Do they need to get input from other stakeholders? (Always a good idea.) Do they need approval from every stakeholder? (Always a political shit show. Trust me.)

    The right answer depends on the type of organization you run and the skill level of the designer. But whatever that call is, empower your designer with the maximum amount of agency to do their job well. No one tells the accountant how to do their job, but I’ve been in a hundred workplaces where people told the designer how to do theirs.

    A designer with backbone and experience won’t have any problem carving out the room they need to work, but they can’t do so if you don’t grant them the authority. Otherwise, you run the risk of bringing someone in to follow the whims of those around them. That’s not a full member of the team. That’s a glorified Xerox machine, an asset used by the rest of the company whenever they need some pixels pushed around.

    That’s how someone who’s supposed to work on your website’s UI ends up making Lost Cat flyers for Betty in HR.

    Equipping designers with the tools they need

    This should go without saying, except I once spent the first two weeks at a job spinning through a draconian requisition process to get copies of Photoshop and BBEdit, which the company considered nonessential software. Someone from IT gave me a one-hour demo on how I could harness PowerPoint to do anything I needed Photoshop for. (I know I should’ve stopped him, but at some point my annoyance faded in favor of fascination at how much he’d thought this out.)

    Like any craftsperson, your designer is only as good as their tools. Make sure they have what they need. Yes, it’s fair to ask them to justify their use. No, you don’t need to understand what everything does. Trust that they do.

    Measuring success

    How well you prepare your team for a designer, how well your designer gets along with everyone, and how professionally they behave mean exactly jack squat if your designer doesn’t succeed in their goals. Before bringing any employee on board, you should know how you’ll measure their success. Will it be hard metrics? Do you expect sales or conversions on the website to increase by a certain amount? Is the goal to deliver a big upcoming project on time and under budget?

    Your business needs vary, so I can’t give you a magical equation for design success. But I can say: whatever your success metric is, make sure your designer both knows about it and has the authority to accomplish it.

    I do have a story for you though. I took a contract-to-hire job once, and the creative director sat me down on my first day and told me that he wasn’t sure what to expect of me and how I’d fit in with the rest of the studio. (Someone didn’t get their house in order.) At the end of the contract period, he’d evaluate whether to keep me around. I was young and stupid, so I didn’t press much and decided to blend in as much as possible (rookie mistake). When my contract was up, the creative director called me into his office and said I hadn’t performed the way they’d expected. Which was odd, because neither of us really understood what had been expected. I felt shitty, wondering what I could’ve done better. And honestly, I’m sure the creative director felt shitty too, because he realized he hadn’t properly set expectations for success.

    So yeah. Don’t do that. It should never be a surprise to anyone working for you that they’re doing badly. Or doing well for that matter. Let them know what they need to do to succeed. Let them know they’re succeeding. If they’re not succeeding, help them adjust course. And finally, let them know once they’ve succeeded.

    Writing the job description

    The most important part of preparing for a designer is figuring out how your company or organization benefits from their involvement. What will you be able to do once they’re here? Picture yourselves a year in the future. What do you hope to accomplish? Write those things down. They’re the basis for the job description you’re about to write.

    Make a list of what you need this person to do. Not the technical skills they should have, but the needs you hope those skills will fulfill. Do you need branding? Interface design? Illustrations? Forms? What kind of business are you in? Is it editorial? Are you a retailer that needs a catalog designed? Don’t forget to take care of your mobile needs. Trust me, you have mobile needs. (You’ve had them since yesterday.)

    The result of this exercise may look something like this: “We need a designer with mobile experience who can do branding and interface design for complex data.” The longer that list gets, the more you’ll pay for a designer, and this exercise may help you realize that you need more than one person. A capable illustrator who can build a responsive site and understands agile workflow is a rare unicorn indeed.

    Now let’s go find us some designers!

  • Making Our Events More Inclusive For Those Under 21 (and Also Everyone Else) 

    On Saturday, Benjamin Hollway, a 16-year-old front-end developer, wrote a post about his recent experiences attending industry events. He’s been coding since he was eight, and earlier this year he was shortlisted for Netmag’s Emerging Talent category. Yet none of the people in that category are able to participate fully in the sort of activities most of us take for granted.

    Last week, Benjamin attended an event I spoke at in London. He’d saved up to buy a ticket and travel up to the conference, and after the event he followed everyone to the after party to chat about the conference and meet some of the speakers. Everyone else was allowed in, but he was turned away at the door and had to head back home early.

    This isn’t the first time he’s experienced this, and I remember far too well the same thing happening to me. Four years ago, I wrote about some of the difficulties I’d experienced as a young developer when it came to attending events. A lot of the meetups I wanted to go to were held in bars, and if there was someone checking IDs at the door, I couldn’t go.

    After parties are a really important part of a conference. They’re where we get to network, ask speakers questions about the talk they’ve just given, and generally have a good time meeting like-minded people. But so many of these after parties, and even events, are held in pubs and bars, meaning they’re completely off-limits to young people.

    I feel lucky that I live in a country where I could access most events when I turned 18 (although I have been prevented from going into others that are held in 21-or-over bars). In other countries, I wouldn’t be able to attend some events until I was 21.

    I know a lot of amazingly smart designers and developers who are under 18, and many of them are physically prevented from attending an industry event or after party after traveling all the way there and forking out hundreds of pounds of their own money to attend. The more young people we encourage to join the fold, the more we are excluding from these events.

    Holding events in age-restricted venues doesn’t just exclude those under 21. It also turns away people who don’t drink for medical or personal reasons, or because of their faith, such as Muslims. They can’t simply wait until they get older before they can attend; some of these people will never be able to attend.

    If you’re an event or meetup organizer, please don’t exclude young designers and developers by holding your event in age-restricted venues. When London Web Standards realized that young developers who wanted to go couldn’t attend, they switched to holding their events in offices, making them accessible to both young people and people who would be excluded because of their faith, or for other reasons. They were delighted when young developers started to turn up to their events.

    There are a lot more creative things to do around an event that don’t involve hanging around at a noisy bar, which is something Rachel Andrew wrote about last year:

    Photo or history walks around cities can be attractive to a lot of people in our industry and need no more organizing than someone who knows the area and can take attendees around local landmarks and interesting spots for photographs. New Adventures earlier this year had a photo walk, and a typography walk round Brighton was organized around the Ampersand conference.

    Finally, how about taking Benjamin’s suggestion and asking young people to speak at your event? They have a huge amount to offer, and will help suggest ways to make your event more open, not just to those under 18, but also to groups of people you may not have even considered.

    Oh, and if your event is open to young people, please add it to the Lanyrd list I’ve created for events open to those under 21 so that others can find it.

  • This week's sponsor: Hack Reactor 

    Hack Reactor is now Online or Onsite. Take our 12-week immersive JavaScript program from home with Hack Reactor Online.

  • Shellshock: A Bigger Threat than Heartbleed? 

    Time to update those Linux servers again. A newly discovered Linux flaw may be more pervasive, and more dangerous, than last spring’s Heartbleed.

    A newly discovered security bug in a widely used piece of Linux software, known as “Bash,” could pose a bigger threat to computer users than the “Heartbleed” bug that surfaced in April, cyber experts warned on Wednesday.

    ...

    Hackers can exploit a bug in Bash to take complete control of a targeted system, security experts said. The “Heartbleed” bug allowed hackers to spy on computers, but not take control of them.

    “Bash” Software Bug May Pose Bigger Threat Than “Heartbleed”, Re/code

    This new vulnerability, being called Shellshock, has been found in use on public servers, meaning the threat is not theoretical. A patch has been released, but according to Ars Technica, it’s unfortunately incomplete.
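
    If you administer servers, the quick check that circulated when the bug was disclosed looks like this (a minimal sketch, not a substitute for a proper security review):

    $ env x='() { :;}; echo vulnerable' bash -c "echo this is a test"
    # A vulnerable Bash prints "vulnerable" before "this is a test";
    # a patched Bash prints only the test line.
    # On Debian/Ubuntu-style systems, upgrading Bash is typically:
    $ sudo apt-get update && sudo apt-get install --only-upgrade bash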

  • Antoine Lefeuvre on The Web, Worldwide: The Culinary Model of Web Design 

    We call ourselves information architects, web designers or content strategists, among other job titles in the industry, including the occasional PHP ninja or SEO rockstar. The web does owe a lot to fields like architecture, industrial design, or marketing. I still haven’t met an interaction cook or maitre d’optimization, though. No web makers turn to chefs for inspiration, one might say.

    Well, some do. Let me take you, s’il vous plaît, to Lyon, France, where people think sliced bread is the greatest thing since the internet.

    Just a hundred miles from the web’s birthplace at CERN in Geneva lies Lyon, France’s second biggest city. It’s no internet mecca, but that doesn’t mean there are no lessons to be learned from how people make the web there. Unlike many places in the world where the latest new thing is everyone’s obsession, entrepreneurs in Lyon are quite interested in… the nineteenth century! What they’re analyzing is their city’s greatest success, its cuisine.

    If Lyon’s food scene today is one of the world’s best, even outshining Paris’s according to CNN, it’s thanks to the Mères lyonnaises movement. These “mothers” were house cooks for Lyon’s rich families who decided to emancipate themselves and launch their own start-ups: humble restaurants aiming at top-quality food, not fanciness. The movement, begun in the nineteenth century, only grew bigger in the twentieth, when the Mères passed on their skills and values to the next generation. Their most famous heir is superstar chef Paul Bocuse, who has held the Michelin three-star rating longer than any other chef, and who began as the apprentice of Mère Eugénie Brazier, the mother of modern French cooking and one of the very first three-star chefs in 1928.

    “There’s a real parallel between the ecosystem the Mères started and what we want to achieve,” says Grégory Palayer, president of the aptly named local trade association La Cuisine du Web. To recreate the Mères’ recipe for success, the toqués—the nickname, meaning both “chef’s hat” and “crazy,” given to La Cuisine du Web members—have identified its ingredients: networking, media support, funding, and transmitting skills and knowledge. Not to mention a secret plus: joie de vivre. “Parisians and Europeans are often surprised to see we can spend two hours having lunch,” says Grégory. “This is how we conduct business here!”

    Lyon’s designers too have their nineteenth-century hero in Auguste Escoffier, the celebrity chef of his age. He began his career as a kitchen boy in his uncle’s restaurant and ended up running the kitchens in London’s most luxurious hotels. Renowned as “the Chef of Kings and the King of Chefs,” Escoffier was also a serial designer: his creations include Peach Melba, Crêpe Suzette, and the Cuisine classique style. He even experimented in a culinary form of design under constraint while in the army during the 1870 Franco-Prussian War, using horse meat for ordinary meals to save scarce beef for the wounded, and inventing 1,001 recipes with turnip, the only readily available vegetable on the front lines. Escoffier did much to improve and structure his industry. He was the first head of the WACS, the chefs’ W3C, and revolutionized not only French cooking, but the way restaurants worldwide are run, by championing documentation, standardization, and professionalism.

    In his talk “Interaction Béchamel” at the Interaction 14 conference in Amsterdam, Lyon’s IxDA leader Guillaume Berry explained how the life and work of Escoffier could influence web design. Guillaume comes from a family of food lovers and makers. Himself a visual designer and an amateur cook, he is greatly inspired in his daily work by cuisine. “It’s all about quality ingredients and preparing them. I’ve realized this while chopping vegetables—a task often neglected or disliked.” The web’s raw ingredients are copy, images, videos: “Even a starred chef won’t be able to cook a proper dish with low-quality ingredients. Don’t expect a web designer to do wonders without great content.”

    Just as Escoffier took Ritz customers on a kitchen tour, Guillaume recommends explaining to your clients how their site or app has been cooked. The more open and understood our design processes are, the more their value will be recognized. Have you ever been running late and prepared dinner in a rush? I have, and it was, unsurprisingly, a disaster. So tell your clients their website is like a good meal: it takes time to make it a memorable experience.

    Looking back at other industries helps us see what’s ahead in ours. What could be the web’s answer to slow food, organic farming, or rawism? “How many interactions a day is it healthy for us to have?” asks Guillaume. He adds, “Cooks have a huge responsibility because depending on how they prepare the food they can make people sick.” Are we designers that powerful? Oh yes, and more—we destroyed the world, after all.

    No, the web industry isn’t free of junk food. When we create apps that make a smartphone obsolete after two years: junk food. When we believe email is dead and Facebook is the new communication standard: junk food. When we design only for the latest browsers and fastest connections: junk food.

    If we’re ready to move from “more” to “better,” let’s remember these simple rules from Eugénie Brazier: 1. Pick your ingredients very carefully; 2. Home-made first; 3. A flashy presentation won’t save a poor dish.

  • Getting Started With CSS Audits 

    This week I wrote about conducting CSS audits to organize your code, keeping it clean and performant—resulting in faster sites that are easier to maintain. Now that you understand the hows and whys of auditing, let’s take a look at some more resources that will help you maintain your CSS architecture. Here are some I’ve recently discovered and find helpful.

    Organizing CSS

    • Harry Roberts has put together a fantastic resource for thinking about how to write large CSS systems, CSS Guidelines.
    • Interested in making the style guide part of the audit easier? This GitHub repo includes a whole bunch of info on different generators.

    Help from task runners

    Do you like task runners such as grunt or gulp? Addy Osmani’s tutorial walks through using all kinds of task runners to find unused CSS selectors: Spring Cleaning Unused CSS Selectors.
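
    Those Grunt and Gulp plugins generally wrap the uncss library, which also ships with a standalone command-line tool; for a quick first look, a sketch like this may be all you need (index.html stands in for your own page):

    $ npm install -g uncss          # install the CLI globally
    $ uncss index.html > used.css   # write out only the CSS the page actually uses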

    Accessibility

    Are you interested in auditing for accessibility as well? (Hopefully you are!) There are tools for that, too. This article helps you audit your site for accessibility—it’s a great outline of exactly how to do it.

    Performance

    • Sitepoint takes a look at trimming down overall page weight, which would optimize your site quite a bit.
    • Google Chrome’s dev tools include a built-in audit tool, which suggests ways you could improve performance. A great article on HTML5 Rocks goes through this tool in depth.

    With these tools, you’ll be better prepared to clean up your CSS, optimize your site, and make the entire experience better for users. When talking about auditing code, many people focus on performance, which is a great benefit for all involved, but don’t forget that maintainability and speedier development time come along with a faster site.

  • Client Education and Post-Launch Success 

    What our clients do with their websites is just as important as the websites themselves. We may pride ourselves on building a great product, but it’s ultimately up to the client to see it succeed or fail. Even the best website can become neglected, underused, or messy without a little education and training.

    Too often, my company used to create amazing tools for clients and then send them out into the world without enough guidance. We’d watch our sites slowly become stale, and we’d see our strategic content overwritten with fluffy filler.

    It was no one’s fault but our own.

    As passionate and knowledgeable web enthusiasts, it’s literally our job to help our clients succeed in any way we can, even after launch. Every project is an opportunity to educate clients and build a mutually beneficial learning experience.

    Meeting in the middle

    If we want our clients to use our products to their full potential, we have to meet them in the middle. We have to balance our technical expertise with their existing processes and skills.

    At my company, Brolik, we learned this the hard way.

    We had a financial client whose main revenue came from selling in-depth PDF reports. Customers would select a report, generating an email to an employee who would manually create and email an unprotected PDF to the customer. The whole process would take about two days.

    To make the process faster and more secure, we built an advanced, password-protected portal where their customers could purchase and access only the reports they’d paid for. The PDFs themselves were generated on the fly from the content management system. They were protected even after they were downloaded and only viewable with a unique username and password generated with the PDF.

    The system itself was technically advanced and thoroughly solved our client’s needs. When the job was done, we patted ourselves on the back, added the project to our portfolio, and moved on to the next thing.

    The client, however, was generally confused by the system we’d built. They didn’t quite know how to explain it to their customers. Processes had been automated to the point where they seemed untrustworthy. After about a month, they asked us if we’d revert back to their previous system.

    We had created too large a process change for our client. We upended a large part of their business model without really considering whether they were ready for a new approach.

    From that experience, we learned not only to create online tools that complement our clients’ existing business processes, but also that we can be instrumental in helping clients embrace new processes. We now see it as part of our job to educate our clients and explain the technical and strategic thought behind all of our decisions.

    Leading by example

    We put this lesson to work on a more recent project, developing a site-wide content tagging system where images, video, and other media could be displayed in different ways based on how they were tagged.

    We could have left our clients to figure out this new system on their own, but we wanted to help them adopt it. So we pre-populated content and tags to demonstrate functionality. We walked through the tagging process with as many stakeholders as we could. We even created a PDF guide to explain the how and why behind the new system.

    In this case, our approach worked, and the client’s cumbersome media management time was significantly reduced. The difference between the outcome of the two projects was simply education and support.

    Education and support can, and usually does, take the form of setting an example. Some clients may not fully understand the benefits of a content strategy, for instance, so you have to show them results. Create relevant and well-written sample blog posts for them, and show how they can drive website traffic. Share articles and case studies that relate to the new tools you’re building for them. Show them that you’re excited, because excitement is contagious. If you’re lucky and smart enough to follow Geoff Dimasi’s advice and work with clients who align with your values, this process will be automatic, because you’ll already be invested in their success.

    We should be teaching our clients to use their website, app, content management system, or social media correctly and wisely. The more adept they are at putting our products to use, the better our products perform.

    Dealing with budgets

    Client education means new deliverables, which have to be prepared by those directly involved in the project. Developers, designers, project managers, and other team members are responsible for creating the PDFs, training workshops, interactive guides, and other educational material.

    That means more organizing, writing, designing, planning, and coding—all things we normally bill for, but now we have to bill in the name of client education.

    Take this into account at the beginning of a project. The amount of education a client needs can be a consideration for taking a job at all, but it should at least factor into pricing. Hours spent helping your client use your product are billable time that you shouldn’t give away for free.

    At Brolik, we’ve helped a range of clients—from those who have “just accepted that the Web isn’t a fad” (that’s an actual quote from 2013), to businesses that have a team of in-house developers. We consider this information and price accordingly, because it directly affects the success of the entire product and partnership. If they need a lot of education but they’re not willing to pay for it, it may be smart to pass on the job.

    Most clients actually understand this. Those who are interested in improving their business are interested in improving themselves as well. This is the foundation for a truly fulfilling and mutually beneficial client relationship. Seek out these relationships.

    It’s sometimes challenging to justify a “client education” line item in your proposals, however. If you can’t, try to at least work some wiggle room into your price. More specifically, try adding a 10 percent contingency for “Support and Training” or “Onboarding.”

    If you can’t justify a price increase at all, but you still want the job, consider factoring in a few client education hours and their opportunity cost as part of your company’s overall marketing budget. Teaching your client to use your product is your responsibility as a digital business.

    This never ends (hopefully)

    What’s better than arming your clients with knowledge and tools, pumping them up, and then sending them out into the world to succeed? Venturing out with them!

    At Brolik, we’ve started signing clients onto digital strategy retainers once their websites are completed. Digital strategy is an overarching term that covers anything and everything to grow a business online. Specifically for us, it includes audience research, content creation, SEO, search and display advertising, website maintenance, social media, and all kinds of analysis and reporting.

    This allows us to continue to educate (and learn) on an ongoing basis. It keeps things interesting—and as a bonus, we usually upsell more work.

    We’ve found that by fostering collaboration post-launch, we not only help our clients use our product more effectively and grow their business, but we also alleviate a lot of the panic that kicks in right before a site goes live. They know we’ll still be there to fix, tweak, analyze, and even experiment.

    This ongoing digital strategy concept was so natural for our business that it’s surprising it took us so long to implement it. After 10 years making websites, we’ve only offered digital strategy for the last two, and it’s already driving 50 percent of our revenue.

    It pays to be along for the ride

    The extra effort required for client education is worth it. By giving our clients the tools, knowledge, and passion they need to be successful with what we’ve built for them, we help them improve their business.

    Anything that drives their success ultimately drives ours. When the tools we build work well for our clients, they return to us for more work. When their websites perform well, our portfolios look better and live longer. Overall, when their business improves, it reflects well on us.

    A fulfilling and mutually beneficial client relationship is good for the client and good for future business. It’s an area where we can follow our passion and do what’s right, because we get back as much as we put in.

  • CSS Audits: Taking Stock of Your Code 

    Most people aren’t excited at the prospect of auditing code, but it’s become one of my favorite types of projects. A CSS audit is really detective work. You start with a site’s code and dig deeper: you look at how many stylesheets are being called, how that affects site performance, and how the CSS itself is written. Your goal is to look for ways to improve on what’s there—to sleuth out fixes to make your codebase better and your site faster.

    I’ll share tips on how to approach your own audit, along with the advantages of taking a full inventory of your CSS, and point you to various tools that can help.

    Benefits of an audit

    An audit helps you to organize your code and eliminate repetition. You don’t write any code during an audit; you simply take stock of what’s there and document recommendations to pass off to a client or discuss with your team. These recommendations ensure new code won’t repeat past mistakes. Let’s take a closer look at other benefits:

    • Reduce file sizes. A complete overview of the CSS lets you take the time to find ways to refactor the code: to clean it up and perhaps cut down on the number of properties. You can also hunt for any odds and ends, such as outdated versions of browser prefixes, that aren’t in use anymore. Getting rid of unused or unnecessary code trims down the file people have to download when they visit your site.
    • Ensure consistency with guidelines. As you audit, create documentation regarding your styles and what’s happening with the site or application. You could make a formal style guide, or you could just write out recommendations to note how different pieces of your code are used. Whatever form your documentation takes, it’ll save anyone coming onto your team a lot of time and trouble, as they can easily familiarize themselves with your site’s CSS and architecture.
    • Standardize your code. Code organization—which certainly attracts differing opinions—is essential to keeping your codebase more maintainable into the future. For instance, if you choose to alphabetize your properties, you can readily spot duplicates, because you’d end up with two sets of margin properties right next to each other. Or you may prefer to group properties according to their function: positioning, box model-related, etc. Having a system in place helps you guard against repetition.
    • Increase performance. I’ve saved the best for last. Auditing code, along with combining and zipping up stylesheets, leads to markedly faster site speeds. For example, Harry Roberts, a front-end architect in the UK who conducts regular audits, told me about a site he recently worked on:
      I rebuilt Fasetto.com with a view to improving its performance; it went from 27 separate stylesheets for a single-page site (mainly UI toolkits like Bootstrap, etc.) down to just one stylesheet (which is actually minified and inlined, to save on the HTTP request), which weighs in at just 5.4 kB post-gzip.

      This is a huge win, especially for people on slower connections—but everyone gains when sites load quickly.

    How to audit: take inventory

    Now that audits have won you over, how do you go about doing one? I like to start with a few tools that provide an overview of the site’s current codebase. You may approach your own audit differently, based on your site’s problem areas or your philosophy of how you write code (whether OOCSS or BEM). The important thing is to keep in mind what will be most useful to you and your own site.

    Once I’ve diagnosed my code through tools, I examine it line by line.

    Tools

    The first tool I reach for is Nicole Sullivan’s invaluable Type-o-matic, an add-on for Firebug that generates a JSON report of all the type styles in use across a site. As an added bonus, Type-o-matic creates a visual report as it runs. By looking at both reports, you know at a glance when to combine type styles that are too similar, eliminating unnecessary styles. I’ve found that the detail of the JSON report makes it easy to see how to create a more reusable type system.

    In addition to Type-o-matic, I run CSS Lint, an extremely flexible tool that flags a wide range of potential bugs, from missing fallback colors to properties that could be combined into shorthand for better performance. To use CSS Lint, click the arrow next to the word “Lint” and choose the options you want. I like to check for repeated properties or too many font sizes, so I always run Maintainability & Duplication along with Performance. CSS Lint then returns recommendations for changes; some may be related to known issues that will break in older browsers and others may be best practices (as the tool sees them). CSS Lint isn’t perfect. If you run it leaving every option checked, you are bound to see things in the end report that you may not agree with, like warnings for IE6. That said, this is a quick way to get a handle on the overall state of your CSS.
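
    If you prefer working outside the browser, CSS Lint is also distributed as a Node package with a command-line interface (a minimal sketch, assuming Node and npm are installed and your stylesheet is named styles.css):

    $ npm install -g csslint   # install the CLI globally
    $ csslint styles.css       # lint the stylesheet and print its warnings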

    Next, I search through the CSS to review how often I repeat common properties, like float or margin. (If you’re comfortable with the command line, type grep along with instructions and plug in something like grep “float” styles/styles.scss to find all instances of “float”.) Note any properties you may cut or bundle into other modules. Trimming your properties is a balancing act: to reduce the number of repeated properties, you may need to add more classes to your HTML, so that’s something you’ll need to gauge according to your project.
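
    For instance, here are a couple of illustrative variations on that command, assuming your stylesheets live in a styles/ directory:

    $ grep -rn "float" styles/   # show every match with its file and line number
    $ grep -rc "float" styles/   # show a per-file count of matches instead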

    I like to do this step by hand, as it forces me to walk through the CSS on my own, which in turn helps me better understand what’s going on. But if you’re short on time, or if you’re not yet comfortable with the command line, tools can smooth the way:

    • CSS Dig is an automated script that runs through all of your code to help you see it visually. A similar tool is StyleStats, where you type in a URL to survey its CSS.
    • CSS Colorguard is a brand-new tool that runs on Node and outputs a report based on your colors, so you know if any colors are too alike. This helps limit your color palette, making it easier to maintain in the future.
    • Dust-Me Selectors is an add-on for Firebug in Firefox that finds unused selectors.

    Line by line

    After you run your tools, take the time to read through the CSS; it’s worth it to get a real sense of what’s happening. For instance, comments in the code—which tools miss—may explain why some quirk persists.

    One big thing I double-check is the depth of applicability, or how far down an attribute string applies. Does your CSS rely on a lot of specificity? Are you seeing long strings of selectors, either in the style files themselves or in the output from a preprocessor? A high depth of applicability means your code will require a very specific HTML structure for styles to work. If you can scale it back, you’ll get more reusable code and speedier performance.

    Review and recommend

    Now to the fun part. Once you have all your data, you can figure out how to improve the CSS and make some recommendations.

    The recommendation document doesn’t have to be heavily designed or formatted, but it should be easy to read. Splitting it into two parts is a good idea. The first consists of your review, listing the things you’ve found. If you refer to the results of CSS Lint or Type-o-matic, be sure to include either screenshots or the JSON report itself as an attachment. The second half contains your actionable recommendations to improve the code. This can be as simple as a list, with items like “Consolidate type styles that are closely related and create mixins for use sitewide.”

    As you analyze all the information you’ve collected, look for areas where you can:

    • Tighten code. Do you have four different sets of styles for a call-out box, several similar link styles, or way too many exceptions to your standard grid? These are great candidates for repeatable modular styles. To make consolidation even easier, you could use a preprocessor like Sass to turn them into mixins or extend, allowing styles to be applied when you call them on a class. (Just check that the outputted code is sensible too.)
    • Keep code consistent. A good audit makes sure the code adheres to its own philosophy. If your CSS is written based on a particular approach, such as BEM or OOCSS, is it consistent? Or do styles veer from time to time, and are there acceptable deviations? Make sure you document these exceptions, so others on your team are aware.

    If you’re working with a client, it’s also important to explain the approaches you favor, so they understand where you’re coming from—and what things you may consider as issues with the code. For example, I prefer OOCSS, so I tend to push for more modularity and reusability; a few classes stacked up (if you aren’t using a preprocessor) don’t bother me. Making sure your client understands the context of your work is particularly crucial when you’re not on the implementation team.

    Hand off to the client

    You did it! Once you’ve written your recommendations (and taken some time to think on them and ensure they’re solid), you can hand them off to the client—be prepared for any questions they may have. If this is for your team, congratulations: get cracking on your list.

    But wait—an audit has even more rewards. Now that you’ve got this prime documentation, take it a step further: use it as the springboard to talk about how to maintain your CSS going forward. If the same issues kept popping up throughout your code, document how you solved them, so everyone knows how to proceed in the future when creating new features or sections. You may turn this document into a style guide. Another thing to consider is how often to revisit your audit to ensure your codebase stays squeaky clean. The timing will vary by team and project, but set a realistic, regular schedule—this is a key part of the auditing process.

    Conducting an audit is a vital first step to keeping your CSS lean and mean. It also helps your documentation stay up to date, allowing your team to have a good handle on how to move forward with new features. When your code is structured well, it’s more performant—and everyone benefits. So find the time, grab your best sleuthing hat, and get started.

  • Rian van der Merwe on A View from a Different Valley: Work Life Imbalance 

    I’m old enough to remember when laptops entered the workforce. It was an amazing thing. At first only the select few could be seen walking around with their giant black IBMs and silver Dells. It took a few years, but eventually every new job came with the question we all loved to hear: “desktop or laptop?”

    I was so happy when I got my first laptop at work. “Man,” I thought, “now I can work anywhere, any time!” It was fun for a while, until I realized that now I could work anywhere, any time. Slowly our office started to reflect this newfound freedom. Work looked less and less like work, and more and more like home. Home offices became a big thing, and it’s now almost impossible to distinguish between home offices of famous designers and the workspaces (I don’t think we even call them “offices” any more) of most startups.

    Work and life: does it blend?

    There is a blending of work and life that woos us with its promise of barbecues at work and daytime team celebrations at movie theaters, but we’re paying for it in another way: a complete eradication of the line between home life and work life. “Love what you do,” we say. “Get a job you don’t want to take a vacation from,” we say—and we sit back and watch the retweets stream in.

    I don’t like it.

    I don’t like it for two reasons.

    It makes us worse at our jobs

    There’s plenty of research that shows when employers place strict limits on messaging, employees are happier and enjoy their work more. And productivity isn’t affected negatively at all. Clive Thompson’s article about this for Mother Jones is a great overview of what we know about the handful of experiments that have been done to research the effects of messaging limits.

    But that’s not even the whole story. It’s not just that constantly thinking about work makes us more stressed, it’s also that our fear of doing nothing—of not being productive every second of the day—is hurting us as well (we’ll talk about side projects another time). There’s plenty of research about this as well, but let’s stick with Jessica Stillman’s Bored at Work? Good. It’s a good overview of what scientists have found on the topic of giving your mind time to rest. In short, being idle tells your brain that it’s in need of something different, which stimulates creative thinking. So it’s something to be sought out and cherished—not something to be shunned.

    Sometimes when things clear away and you’re not watching anything and you’re in your car and you start going, oh no, here it comes, that I’m alone, and it starts to visit on you, just this sadness. And that’s why we text and drive. People are willing to risk taking a life and ruining their own because they don’t want to be alone for a second because it’s so hard.

    Louis C. K.

    It teaches that boundaries are bad

    The second problem I have with our constant pursuit of the productivity train is that it teaches us that setting boundaries to spend time with our friends and family = laziness. I got some raised eyebrows at work recently when I declined an invitation to watch a World Cup game in a conference room. But here’s the thing. If I watch the World Cup game with a bunch of people at work today, guess what I have to do tonight? I have to work to catch up, instead of spending time with my family. And that is not ok with me.

    I have a weird rule about this. Work has me—completely—between the hours of 8:30 a.m. and 6:00 p.m. It has 100 percent of my attention. But outside of those hours I consider it part of being a sane and good human to give my kids a bath, chat to my wife, read, and reflect on the day that’s past and the one that’s coming—without the pressure of having to be online all the time. I swear it makes me a better (and more productive) employee, but I can’t shake the feeling that I shouldn’t be writing this down because you’re just going to think I’m lazy.

    But hey, I’m going to face my fear and just come right out and say it: I try not to work nights. There. That felt good.

    It doesn’t always work out, and of course there are times when a need is pressing and I take care of it at night. I don’t have a problem with that. But I don’t sit and do email for hours every night. See, the time I spend with people is what gives my work meaning. I do what I do for them—for the people in my life, the people I know, and the people I don’t. If we never spend time away from our work, how can we understand the world and the people we make things for?

    Of course, the remaking of the contemporary tech office into a mixed work-cum-leisure space is not actually meant to promote leisure. Instead, the work/leisure mixing that takes place in the office mirrors what happens across digital, social and professional spaces. Work has seeped into our leisure hours, making the two tough to distinguish.

    Kate Losse, Tech aesthetics

    Permission to veg out

    So I guess this column is my attempt to give you permission to do nothing every once in a while. Not to be lazy, or not do your job. But to take the time you need to get better at what you do, and enjoy it a lot more.

    As this column evolves, I think this is what I’ll be talking about a lot. How to make the hours we have at work count more. How to think of what we do not as the tech business but the people business. How to give ourselves permission to experience the world around us and get inspiration for our work from that. How to be a flâneur: wandering around with eyes wide open to inspiration.

  • Awkward Cousins 

    As an industry, we’re historically terrible at drawing lines between things. We try to segment devices based on screen size, but that doesn’t take into account hardware functionality, form factor, and usage context, for starters. The laptop I’m writing this on has the same resolution as a 1080p television. They’d be lumped into the same screen-size–dependent groups, but they are two totally different device classes, so how do we determine what goes together?

    That’s a simple example, but it points to a larger issue. We so desperately want to draw lines between things, but there are often too many variables to make those lines clean.

    Why, then, do we draw such strict lines between our roles on projects? What does the area of overlap between a designer and front-end developer look like? A front- and back-end developer? A designer and back-end developer? The old thinking of defined roles is certainly loosening up, but we still have a long way to go.

    The chasm between roles that is most concerning is the one between web designers/developers and native application designers/developers. We often choose a camp early on and stick to it, which is a mindset that may have been fueled by the false “native vs. web” battle a few years ago. It was positioned as an either-or decision, and hybrid approaches were looked down upon.

    The two camps of creators are drifting farther and farther apart, even as the products are getting closer and closer. John Gruber best described the overlap that users see:

    When I’m using Tweetbot, for example, much of my time in the app is spent reading web pages rendered in a web browser. Surely that’s true of mobile Facebook users, as well. What should that count as, “app” or “web”?

    I publish a website, but tens of thousands of my most loyal readers consume it using RSS apps. What should they count as, “app” or “web”?

    The people using the things we build don’t see the divide as harshly as we do, if at all. More importantly, the development environments are becoming more similar, as well. Swift, Apple’s brand new programming language for iOS and Mac development, has a strong resemblance to the languages we know and love on the web, and that’s no accident. One of Apple’s top targets for Swift, if not the top target, is the web development community. It’s a massive, passionate, and talented pool of developers who, largely, have not done iOS or Mac work—yet.

    As someone who spans the divide regularly, it’s sad to watch these two communities keep at arm’s length like awkward cousins at a family reunion. We have so much in common—interests, skills, core values, and a ton of technological ancestry. The difference between the things we build is shrinking in the minds of our shared users, and the ways we build those things are aligning. I dream of the day when we get over our poorly drawn lines and become the big, happy community I know we can be.

    At the very least, please start reading each other’s blogs.

  • Watch: A New Documentary About Jeffrey Zeldman 
    You keep it by giving it away.
    Jeffrey Zeldman

    It’s a philosophy that’s always guided us at A List Apart: that we all learn more—and are more successful—when we share what we know with anyone who wants to listen. And it comes straight from our publisher, Jeffrey Zeldman.

    For 20 years, he’s been sharing everything he can with us, the people who make websites—from advice on table layouts in the ’90s to Designing With Web Standards in the 2000s to educating the next generation of designers today.

    Our friends at Lynda.com just released a documentary highlighting Jeffrey’s two decades of designing, organizing, and most of all sharing on the web. You should watch it.

    Jeffrey Zeldman: 20 years of Web Design and Community from lynda.com.

  • Git: The Safety Net for Your Projects 

    I remember January 10, 2010, rather well: it was the day we lost a project’s complete history. We were using Subversion as our version control system, which kept the project’s history in a central repository on a server. And we were backing up this server on a regular basis—at least, we thought we were. The server broke down, and then the backup failed. Our project wasn’t completely lost, but all the historic versions were gone.

    Shortly after the server broke down, we switched to Git. I had always seen version control as torturous; it was too complex and not useful enough for me to see its value, though I used it as a matter of duty. But once we’d spent some time on the new system, I began to understand just how helpful Git could be. Since then, it has saved my neck in many situations.

    During the course of this article, I’ll walk through how Git can help you avoid mistakes—and how to recover if they’ve already happened.

    Every teammate is a backup

    Since Git is a distributed version control system, every member of our team who has the project cloned (or “checked out,” if you’re coming from Subversion) automatically has a backup on his or her disk. This backup contains the latest version of the project, as well as its complete history.

    This means that should a developer’s local machine or even our central server ever break down again (and the backup not work for any reason), we’re up and running again in minutes: any local repository from a teammate’s disk is all we need to get a fully functional replacement.
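
    Recovering can be as simple as cloning straight from a colleague’s machine (a sketch with made-up names; alice@dev-box and the path are stand-ins for your own setup):

    $ git clone alice@dev-box:/home/alice/projects/website.git
    # Pulls the entire repository over SSH, complete history included.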

    Branches keep separate things separate

    When my more technical colleagues told me about how “cool” branching in Git was, I wasn’t bursting with joy right away. First, I have to admit that I didn’t really understand the advantages of branching. And second, coming from Subversion, I vividly remembered it being a complex and error-prone procedure. With those bad memories, I was anxious about working with branches and therefore tried to avoid them whenever I could.

    It took me quite a while to understand that branching and merging work completely differently in Git than in most other systems—especially regarding its ease of use! So if you learned the concept of branches from another version control system (like Subversion), I recommend you forget your prior knowledge and start fresh. Let’s start by understanding why branches are so important in the first place.

    Why branches are essential

    Back in the days when I didn’t use branches, working on a new feature was a mess. Essentially, I had the choice between two equally bad workflows:

    (a) I already knew that creating small, granular commits with only a few changes was a good version control habit. However, if I did this while developing a new feature, every commit would mingle my half-done feature with the main code base until I was done. It wasn’t very pleasant for my teammates to have my unfinished feature introduce bugs into the project.

    (b) To avoid getting my work-in-progress mixed up with other topics (from colleagues or myself), I’d work on a feature in my separate space. I would create a copy of the project folder that I could work with quietly—and only commit my feature once it was complete. But committing my changes only at the end produced a single, giant, bloated commit that contained all the changes. Neither my teammates nor I could understand what exactly had happened in this commit when looking at it later.

    I slowly understood that I had to make myself familiar with branches if I wanted to improve my coding.

    Working in contexts

    Any project has multiple contexts where work happens; each feature, bug fix, experiment, or alternative of your product is actually a context of its own. It can be seen as its own “topic,” clearly separated from other topics.

    If you don’t separate these topics from each other with branching, you will inevitably increase the risk of problems. Mixing different topics in the same context:

    • makes it hard to keep an overview—and with a lot of topics, it becomes almost impossible;
    • makes it hard to undo something that proved to contain a bug, because it’s already mingled with so much other stuff;
    • doesn’t encourage people to experiment and try things out, because they’ll have a hard time getting experimental code out of the repository once it’s mixed with stable code.

    Using branches gave me the confidence that I couldn’t mess up. In case things went wrong, I could always go back, undo, start fresh, or switch contexts.

    Branching basics

    Branching in Git actually only involves a handful of commands. Let’s look at a basic workflow to get you started.

    To create a new branch based on your current state, all you have to do is pick a name and execute a single command on your command line. We’ll assume we want to start working on a new version of our contact form, and therefore create a new branch called “contact-form”:

    $ git branch contact-form
    

    Using the git branch command without a name specified will list all of the branches we currently have (and the “-v” flag provides us with a little more data than usual):

    $ git branch -v
    
    Git screen showing the current branches of contact-form.

    You might notice the little asterisk on the branch named “master.” This means it’s the currently active branch. So, before we start working on our contact form, we need to make this our active context:

    $ git checkout contact-form
    

    Git has now made this branch our current working context. (In Git lingo, this is called the “HEAD branch”). All the changes and every commit that we make from now on will only affect this single context—other contexts will remain untouched. If we want to switch the context to a different branch, we’ll simply use the git checkout command again.
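    By the way, creating a branch and switching to it right away is such a common sequence that the checkout command can do both in a single step via its “-b” flag:

    $ git checkout -b contact-form
    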

    In case we want to integrate changes from one branch into another, we can “merge” them into the current working context. Imagine we’ve worked on our “contact-form” feature for a while, and now want to integrate these changes into our “master” branch. All we have to do is switch back to this branch and call git merge:

    $ git checkout master
    $ git merge contact-form
    

    Using branches

    I would strongly suggest that you use branches extensively in your day-to-day workflow. Branches are one of the core concepts that Git was built around. They are extremely cheap and easy to create, and simple to manage—and there are plenty of resources out there if you’re ready to learn more about using them.
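    Part of what keeps branches easy to manage is that cleaning up after yourself is a single command. Once a feature branch has been merged, the “-d” flag deletes it, and Git will politely refuse if the branch still contains unmerged work:

    $ git branch -d contact-form
    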

    Undoing things

    There’s one thing that I’ve learned as a programmer over the years: mistakes happen, no matter how experienced people are. You can’t avoid them, but you can have tools at hand that help you recover from them.

    One of Git’s greatest features is that you can undo almost anything. This gives me the confidence to try out things without fear—because, so far, I haven’t managed to really break something beyond recovery.

    Amending the last commit

    Even if you craft your commits very carefully, it’s all too easy to forget to add a change or mistype the message. With the --amend flag of the git commit command, Git allows you to change the very last commit, and it’s a very simple fix to execute. For example, if you forgot to add a certain change and also made a typo in the commit subject, you can easily correct this:

    $ git add some/changed/files
    $ git commit --amend -m "The message, this time without typos"
    

    There’s only one thing you should keep in mind: you should never amend a commit that has already been pushed to a remote repository. As long as you respect this rule, the “amend” option is a great little helper for fixing the last commit.

    (For more detail about the amend option, I recommend Nick Quaranto’s excellent walkthrough.)
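    One variation worth knowing: if the commit message was fine and you only forgot to include a change, the --no-edit flag amends the last commit while reusing its existing message:

    $ git add some/changed/files
    $ git commit --amend --no-edit
    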

    Undoing local changes

    Changes that haven’t been committed are called “local.” All the modifications that are currently present in your working directory are “local” uncommitted changes.

    Discarding these changes can make sense when your current work is… well… worse than what you had before. With Git, you can easily undo local changes and start over with the last committed version of your project.

    If it’s only a single file that you want to restore, you can use the git checkout command:

    $ git checkout -- file/to/restore
    

    Don’t confuse this use of the checkout command with switching branches (see above). If you use it with two dashes and (separated with a space!) the path to a file, it will discard the uncommitted changes in a given file.

    On a bad day, however, you might even want to discard all your local changes and restore the complete project:

    $ git reset --hard HEAD
    

    This will replace all of the files in your working directory with the last committed revision. Just as with using the checkout command above, this will discard the local changes.

    Be careful with these operations: since local changes haven’t been checked into the repository, there is no way to get them back once they are discarded!
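    If you suspect you might want those changes back one day, stashing is a gentler alternative: it also restores the last committed state, but parks your local changes on a stack from which they can be reapplied later:

    $ git stash
    $ git stash pop
    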

    Undoing committed changes

    Of course, undoing things is not limited to local changes. You can also undo certain commits when necessary—for example, if you’ve introduced a bug.

    Basically, there are two main commands to undo a commit:

    (a) git reset

    Illustration showing how the `git reset` command works.

    The git reset command really turns back time. You tell it which version you want to return to and it restores exactly this state—undoing all the changes that happened after this point in time. Just provide it with the hash ID of the commit you want to return to:

    $ git reset --hard 2be18d9
    

    The --hard option is the easiest and cleanest approach, but it also wipes away all local changes that you might still have in your working directory. So, before doing this, make sure there aren’t any local changes you’ve set your heart on.
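    One small consolation: commits discarded with git reset are usually not gone for good. Git keeps them around for a while, and the reflog can take you back. Assuming the reset was the last thing you did, HEAD@{1} points at the state just before it:

    $ git reflog
    $ git reset --hard HEAD@{1}
    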

    (b) git revert

    Illustration showing how the `git revert` command works.

    The git revert command is used in a different scenario. Imagine you have a commit that you don’t want anymore—but the commits that came afterwards still make sense to you. In that case, you wouldn’t use the git reset command because it would undo all those later commits, too!

    The revert command, however, only reverts the effects of a certain commit. It doesn’t remove any commits, as git reset does. Instead, it creates a new commit; this new commit introduces changes that are just the opposite of the commit being reverted. For example, if you deleted a certain line of code, revert will create a new commit that reintroduces exactly that line.

    To use it, simply provide it with the hash ID of the commit you want reverted:

    $ git revert 2be18d9
    

    Finding bugs

    When it comes to finding bugs, I must admit that I’ve wasted quite some time stumbling in the dark. I often knew that it used to work a couple of days ago—but I had no idea where exactly things went wrong. It was only when I found out about git bisect that I could speed up this process a bit. With the bisect command, Git provides a tool that helps you find the commit that introduced a problem.

    Imagine the following situation: we know that our current version (tagged “2.0”) is broken. We also know that a couple of commits ago (our version “1.9”), everything was fine. The problem must have occurred somewhere in between.

    Illustration showing the commits between working and broken versions.

    This is already enough information to start our bug hunt with git bisect:

    $ git bisect start
    $ git bisect bad
    $ git bisect good v1.9
    

    After starting the process, we told Git that our current commit contains the bug and therefore is “bad.” We then also informed Git which previous commit is definitely working (as a parameter to git bisect good).

    Git then restores our project in the middle between the known good and known bad conditions:

    Illustration showing that the bisect begins between the versions.

    We now test this version (for example, by running unit tests, building the app, deploying it to a test system, etc.) to find out if this state works—or already contains the bug. As soon as we know, we tell Git again—either with git bisect bad or git bisect good.

    Let’s assume we said that this commit was still “bad.” This effectively means that the bug must have been introduced even earlier—and Git will again narrow down the commits in question:

    Illustration showing how additional bisects will narrow the commits further.

    This way, you’ll find out very quickly where exactly the problem occurred. Once you know this, you need to call git bisect reset to finish your bug hunt and restore the project’s original state.
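    And if your test is scriptable, Git can drive the whole hunt for you. Assuming a (hypothetical) script named run-tests.sh that exits with a non-zero status when the bug is present, a single command will bisect the entire range automatically:

    $ git bisect run ./run-tests.sh    # run-tests.sh is your own, hypothetical test script
    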

    A tool that can save your neck

    I must confess that my first encounter with Git wasn’t love at first sight. In the beginning, it felt just like my other experiences with version control: tedious and unhelpful. But with time, the practice became intuitive, and the tool earned my trust and confidence.

    After all, mistakes happen, no matter how much experience we have or how hard we try to avoid them. What separates the pro from the beginner is preparation: having a system in place that you can trust in case of problems. It helps you stay on top of things, especially in complex projects. And, ultimately, it helps you become a better professional.

  • Running Code Reviews with Confidence 

    Growing up, I learned there were two kinds of reviews I could seek out from my parents. One parent gave reviews in the form of a shower of praise. The other parent, the one with a degree from the Royal College of Art, would put me through a design crit. Today the reviews I seek are for my code, not my horse drawings, but it continues to be a process I both dread and crave.

    In this article, I’ll describe my battle-tested process for conducting code reviews, highlighting the questions you should ask during the review process as well as the necessary version control commands to download and review someone’s work. I’ll assume your team uses Git to store its code, but the process works much the same if you’re using any other source control system.

    Completing a peer review is time-consuming. In the last project where I introduced mandatory peer reviews, the senior developer and I estimated that it doubled the time to complete each ticket. The reviews introduced more context-switching for the developers, and were a source of increased frustration when it came to keeping the branches up to date while waiting for a code review.

    The benefits, however, were huge. Coders gained a greater understanding of the whole project through their reviews, reducing silos and making onboarding easier for new people. Senior developers had better opportunities to ask why decisions were being made in the codebase that could potentially affect future work. And by adopting an ongoing peer review process, we reduced the amount of time needed for human quality assurance testing at the end of each sprint.

    Let’s walk through the process. Our first step is to figure out exactly what we’re looking for.

    Determine the purpose of the proposed change

    Our code review should always begin in a ticketing system, such as Jira or GitHub. It doesn’t matter if the proposed change is a new feature, a bug fix, a security fix, or a typo: every change should start with a description of why the change is necessary, and what the desired outcome will be once the change has been applied. This allows us to accurately assess when the proposed change is complete.

    The ticketing system is where you’ll track the discussion about the changes that need to be made after reviewing the proposed work. From the ticketing system, you’ll determine which branch contains the proposed code. Let’s pretend the ticket we’re reviewing today is 61524—it was created to fix a broken link in our website. It could just as easily be a refactoring, or a new feature, but I’ve chosen a bug fix for the example. No matter what the nature of the proposed change is, having each ticket correspond to only one branch in the repository will make it easier to review, and close, tickets.

    Set up your local environment and ensure that you can reproduce what is currently the live site—complete with the broken link that needs fixing. When you apply the new code locally, you want to catch any regressions or problems it might introduce. You can only do this if you know, for sure, the difference between what is old and what is new.

    Review the proposed changes

    At this point you’re ready to dive into the code. I’m going to assume you’re working with Git repositories, on a branch-per-issue setup, and that the proposed change is part of a remote team repository. Working directly from the command line is a good universal approach, and allows me to create copy-paste instructions for teams regardless of platform.

    To begin, update your local list of branches.

    git fetch
    

    Then list all available branches.

    git branch -a
    

    A list of branches will be displayed to your terminal window. It may appear something like this:

    * master
    remotes/origin/master
    remotes/origin/HEAD -> origin/master
    remotes/origin/61524-broken-link
    

    The * denotes the name of the branch you are currently viewing (or have “checked out”). Lines beginning with remotes/origin are references to branches we’ve downloaded. We are going to work with a new, local copy of branch 61524-broken-link.

    When you clone your project, you’ll have a connection to the remote repository as a whole, but you won’t have a read-write relationship with each of the individual branches in the remote repository. You’ll make an explicit connection as you switch to the branch. This means if you need to run the command git push to upload your changes, Git will know which remote repository you want to publish your changes to.

    git checkout --track origin/61524-broken-link
    

    Ta-da! You now have your own copy of the branch for ticket 61524, which is connected (“tracked”) to the origin copy in the remote repository. You can now begin your review!
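    A small aside: in reasonably recent versions of Git, if the branch name you check out matches exactly one remote branch, a plain checkout will create the tracking branch for you:

    git checkout 61524-broken-link
    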

    First, let’s take a look at the commit history for this branch with the git log command.

    git log master..
    

    Sample output:

    Author: emmajane 
    Date: Mon Jun 30 17:23:09 2014 -0400
    
    Link to resources page was incorrectly spelled. Fixed.
    
    Resolves #61524.
    

    This gives you the full log message of all the commits that are in the branch 61524-broken-link, but are not also in the master branch. Skim through the messages to get a sense of what’s happening.
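    If the branch contains many commits, the --oneline flag condenses each of them to a single line, which makes skimming easier:

    git log master.. --oneline
    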

    Next, take a brief gander through the commit itself using the diff command. This command shows the difference between two snapshots in your repository. You want to compare the code on your checked-out branch to the branch you’ll be merging “to”—which conventionally is the master branch.

    git diff master
    

    How to read patch files

    When you run the command to output the difference, the information will be presented as a patch file. Patch files are ugly to read. You’re looking for lines beginning with + or -. These are lines that have been added or removed, respectively. Scroll through the changes using the up and down arrows, and press q to quit when you’ve finished reviewing. If you need an even more concise comparison of what’s happened in the patch, consider modifying the diff command to list the changed files, and then look at the changed files one at a time:

    git diff master --name-only
    git diff master <filename>
    

    Let’s take a look at the format of a patch file.

    diff --git a/about.html b/about.html
    index a3aa100..a660181 100644
    --- a/about.html
    +++ b/about.html
    @@ -48,5 +48,5 @@
     (2004-05)
    
    - A full list of <a href="emmajane.net/events">public 
    + A full list of <a href="http://emmajane.net/events">public 
     presentations and workshops</a> Emma has given is available
    

    I tend to skim past the metadata when reading patches and just focus on the lines that start with - or +. This means I start reading at the line immediately following @@. There are a few lines of context provided leading up to the changes. These lines are indented by one space each. The changed lines of code are then displayed with a preceding - (line removed), or + (line added).

    Going beyond the command line

    Using a Git repository browser, such as gitk, allows you to get a slightly better visual summary of the information we’ve looked at to date. The version of Git that Apple ships with does not include gitk—I used Homebrew to re-install Git and get this utility. Any repository browser will suffice, though, and there are many GUI clients available on the Git website.

    gitk
    

    When you run the command gitk, a graphical tool will launch from the command line. An example of the output is given in the following screenshot. Click on each of the commits to get more information about it. Many ticket systems will also allow you to look at the changes in a merge proposal side-by-side, so if you’re finding this cumbersome, click around in your ticketing system to find the comparison tools they might have—I know for sure GitHub offers this feature.

    Screenshot of the gitk repository browser.

    Now that you’ve had a good look at the code, jot down your answers to the following questions:

    1. Does the code comply with your project’s identified coding standards?
    2. Does the code limit itself to the scope identified in the ticket?
    3. Does the code follow industry best practices in the most efficient way possible?
    4. Has the code been implemented in the best possible way according to all of your internal specifications? It’s important to separate your preferences and stylistic differences from actual problems with the code.

    Apply the proposed changes

    Now is the time to start up your testing environment and view the proposed change in context. How does it look? Does your solution match what the coder thinks they’ve built? If it doesn’t look right, do you need to clear the cache, or perhaps rebuild the Sass output to update the CSS for the project?

    Now is the time to also test the code against whatever test suite you use.

    1. Does the code introduce any regressions?
    2. Does the new code perform as well as the old code? Does it still fall within your project’s performance budget for download and page rendering times?
    3. Are the words all spelled correctly, and do they follow any brand-specific guidelines you have?

    Depending on the context for this particular code change, there may be other obvious questions you need to address as part of your code review.

    Do your best to create the most comprehensive list of everything you can find wrong (and right) with the code. It’s annoying to get dribbles of feedback from someone as part of the review process, so we’ll try to avoid “just one more thing” wherever we can.

    Prepare your feedback

    Let’s assume you’ve now got a big juicy list of feedback. Maybe you have no feedback, but I doubt it. If you’ve made it this far in the article, it’s because you love to comb through code as much as I do. Let your freak flag fly and let’s get your review structured in a usable manner for your teammates.

    For all the notes you’ve assembled to date, sort them into the following categories:

    1. The code is broken. It doesn’t compile, introduces a regression, it doesn’t pass the testing suite, or in some way actually fails demonstrably. These are problems which absolutely must be fixed.
    2. The code does not follow best practices. You have some conventions, the web industry has some guidelines. These fixes are pretty important to make, but they may have some nuances which the developer might not be aware of.
    3. The code isn’t how you would have written it. You’re a developer with battle-tested opinions, and you know you’re right, you just haven’t had the chance to update the Wikipedia page yet to prove it.

    Submit your evaluation

    Based on this new categorization, you are ready to engage in passive-aggressive coding. If the problem is clearly a typo and falls into one of the first two categories, go ahead and fix it. Obvious typos don’t really need to go back to the original author, do they? Sure, your teammate will be a little embarrassed, but they’ll appreciate you having saved them a bit of time, and you’ll increase the efficiency of the team by reducing the number of round trips the code needs to take between the developer and the reviewer.

    If the change you are itching to make falls into the third category: stop. Do not touch the code. Instead, go back to your colleague and get them to describe their approach. Asking “why” might lead to a really interesting conversation about the merits of the approach taken. It may also reveal limitations of the approach to the original developer. By starting the conversation, you open yourself to the possibility that just maybe your way of doing things isn’t the only viable solution.

    If you needed to make any changes to the code, they should be absolutely tiny and minor. You should not be making substantive edits in a peer review process. Make the tiny edits, and then add the changes to your local repository as follows:

    git add .
    git commit -m "[#61524] Correcting <list problem> identified in peer review."
    

    You can keep the message brief, as your changes should be minor. At this point you should push the reviewed code back up to the server for the original developer to double-check and review. Assuming you’ve set up the branch as a tracking branch, it should just be a matter of running the command as follows:

    git push
    

    Update the issue in your ticketing system as is appropriate for your review. Perhaps the code needs more work, or perhaps it was good as written and it is now time to close the issue.

    Repeat the steps in this section until the proposed change is complete, and ready to be merged into the main branch.

    Merge the approved change into the trunk

    Up to this point you’ve been comparing a ticket branch to the master branch in the repository. This main branch is referred to as the “trunk” of your project. (It’s a tree thing, not an elephant thing.) The final step in the review process will be to merge the ticket branch into the trunk, and clean up the corresponding ticket branches.

    Begin by updating your master branch to ensure you can publish your changes after the merge.

    git checkout master
    git pull origin master
    

    Take a deep breath, and merge your ticket branch back into the main repository. As written, the following command will not create a new commit in your repository history. The commits will simply shuffle into line on the master branch, making git log --graph appear as though a separate branch never existed. If you would like to maintain the illusion of a past branch, simply add the parameter --no-ff to the merge command, which will make it clear, via the graph history and a new commit message, that you have merged a branch at this point. Check with your team to see what’s preferred.

    git merge 61524-broken-link
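
    If your team prefers the explicit merge commit described above, the same merge with the flag applied looks like this:

    git merge --no-ff 61524-broken-link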
    

    The merge will either fail, or it will succeed. If there are no merge errors, you are ready to share the revised master branch by uploading it to the central repository.

    git push
    

    If there are merge errors, the original coders are often better equipped to figure out how to fix them, so you may need to ask them to resolve the conflicts for you.

    Once the new commits have been successfully integrated into the master branch, you can delete the old copies of the ticket branches both from your local repository and on the central repository. It’s just basic housekeeping at this point.

    git branch -d 61524-broken-link
    git push origin --delete 61524-broken-link
    

    Conclusion

    This is the process that has worked for the teams I’ve been a part of. Without a peer review process, it can be difficult to address problems in a codebase without blame. With it, the code becomes much more collaborative; when a mistake gets in, it’s because we both missed it. And when a mistake is found before it’s committed, we both breathe a sigh of relief that it was found when it was.

    Regardless of whether you’re using Git or another source control system, the peer review process can help your team. Peer-reviewed code might take more time to develop, but it contains fewer mistakes, and has a strong, more diverse team supporting it. And, yes, I’ve been known to learn the habits of my reviewers and choose the most appropriate review style for my work, just like I did as a kid.

  • Rachel Andrew on the Business of Web Dev: Getting to the Action 

    Freelancers and self-employed business owners can choose from a huge number of conferences to attend in any given year. There are hundreds of industry podcasts, a constant stream of published books, and a never-ending supply of sites all giving advice. It is very easy to spend a lot of valuable time and money just attending, watching, reading, listening and hoping that somehow all of this good advice will take root and make our business a success.

    However, all the good advice in the world won’t help you if you don’t act on it. While you might leave that expensive conference feeling great, did your attendance create a lasting change to your business? I was thinking about this subject while listening to episode 14 of the Working Out podcast, hosted by Ashley Baxter and Paddy Donnelly. They were talking about following through, and how it is possible to “nod along” to good advice but never do anything with it.

    If you have ever been sent to a conference by an employer, you may have been expected to report back. You might even have been asked to present to your team on the takeaway points from the event. As freelancers and business owners, we don’t have anyone making us consolidate our thoughts in that way. It turns out that the way I work gives me a fairly good method of knowing which things are bringing me value.

    Tracking actionable advice

    I’m a fan of the Getting Things Done technique, and live by my to-do lists. I maintain a Someday/Maybe list in OmniFocus into which I add items that I want to do or at least investigate, but that aren’t a project yet.

    If a podcast is worth keeping on my playlist, there will be items entered linking back to certain episodes. Conference takeaways might be a link to a site with information that I want to read. It might be an idea for an article to write, or instructions on something very practical such as setting up an analytics dashboard to better understand some data. The first indicator of a valuable conference is how many items I add during or just after the event.

    Having a big list of things to do is all well and good, but it’s only one half of the story. The real value comes when I do the things on that list, and can see whether they were useful to my business. Once again, my GTD lists can be mined for that information.

    When tickets go on sale for that conference again, do I have most of those to-do items still sitting in Someday/Maybe? Is that because, while they sounded like good ideas, they weren’t all that relevant? Or, have I written a number of blog posts or had several articles published on themes that I started considering off the back of that conference? Did I create that dashboard, and find it useful every day? Did that speaker I was introduced to go on to become a friend or mentor, or someone I’ve exchanged emails with to clarify a topic I’ve been thinking about?

    By looking back over my lists and completed items, I can start to make decisions about the real value to my business and life of the things I attend, read, and listen to. I’m able to justify the ticket price, time, and travel costs by making that assessment. I can feel confident that I’m not spending time and money just to feel as if I’m moving forward, yet gaining nothing tangible to show for it.

    A final thought on value

    As entrepreneurs, we have to make sure we are spending our time and money on things that will give us the best return. All that said, it is important to make time in our schedules for those things that we just enjoy, and in particular those things that do motivate and inspire us. I don’t think that every book you read or event you attend needs to result in a to-do list of actionable items.

    What we need as business owners, and as people, is balance. We need to be able to see that the things we are doing are moving our businesses forward, while also making time to be inspired and refreshed to get that actionable work done.

    Footnotes

    1. Have any favorite hacks for getting maximum value from conferences, workshops, and books? Tell us in the comments!
  • 10 Years Ago in ALA: Pocket Sized Design 

    The web doesn’t do “age” especially well. Any blog post or design article more than a few years old gets a raised eyebrow—heck, most people I meet haven’t read John Allsopp’s “A Dao of Web Design” or Jeffrey Zeldman’s “To Hell With Bad Browsers,” both as relevant to the web today as when they were first written. Meanwhile, I’ve got books on my shelves older than I am; most of my favorite films came out before I was born; and my iTunes library is riddled with music that’s decades, if not centuries, old.

    (No, I don’t get invited to many parties. Why do you ask oh I get it)

    So! It’s probably easy to look at “Pocket-Sized Design,” a lovely article by Jorunn Newth and Elika Etemad that just turned 10 years old, and immediately notice where it’s beginning to show its age. Written at a time when few sites were standards-compliant, and even fewer still were mobile-friendly, Newth and Etemad were urging us to think about life beyond the desktop. And when I first re-read it, it’s easy to chuckle at the points that feel like they’re from another age: there’s plenty of talk of screens that are “only 120-pixels wide”; of inputs driven by stylus, rather than touch; and of using the now-basically-defunct handheld media type for your CSS. Seems a bit quaint, right?

    And yet.

    Looking past a few of the details, it’s remarkable how well the article’s aged. Modern users may (or may not) manually “turn off in-line image loading,” but they may choose to use a mobile browser that dramatically compresses your images. We may scoff at the idea of someone browsing with a stylus, but handheld video game consoles are impossibly popular when it comes to browsing the web. And while there’s plenty of excitement in our industry for the latest versions of iOS and Android, running on the latest hardware, most of the web’s growth is happening on cheaper hardware, over slower networks (PDF), and via slim data plans—so yes, 10 years on, it’s still true that “downloading to the device is likely to be [expensive], the processors are slow, and the memory is limited.”

    In the face of all of that, what I love about Newth and Etemad’s article is just how sensible their solutions are. Rather than suggesting slimmed-down mobile sites, or investing in some device detection library, they take a decidedly standards-focused approach:

    Linearizing the page into one column works best when the underlying document structure has been designed for it. Structuring the document according to this logic ensures that the page organization makes sense not only in Opera for handhelds, but also in non-CSS browsers on both small devices and the desktop, in voice browsers, and in terminal-window browsers like Lynx.

    In other words, by thinking about the needs of the small screen first, you can layer on more complexity from there. And if you’re hearing shades of mobile first and progressive enhancement here, you’d be right: they’re treating their markup—their content—as a foundation, and gently layering styles atop it to make it accessible to more devices, more places than ever before.

    So, no: we aren’t using @media handheld or display: none for our small screen-friendly styles—but I don’t think that’s really the point of Newth and Etemad’s essay. Instead, they’re putting forward a process, a framework for designing beyond the desktop. What they’re arguing is for a truly device-agnostic approach to designing for the web, one that’s as relevant today as it was a decade ago.

    Plus ça change, plus c’est la même chose.

  • Dependence Day: The Power and Peril of Third-Party Solutions 

    “Why don’t we just use this plugin?” That’s a question I started hearing a lot in the heady days of the 2000s, when open-source CMSes were becoming really popular. We asked it optimistically, full of hope about the myriad solutions only a download away. As the years passed, we gained trustworthy libraries and powerful communities, but the graveyard of crufty code and abandoned services grew deep. Many solutions were easy to install, but difficult to debug. Some providers were eager to sell, but loath to support.

    Years later, we’re still asking that same question—only now we’re less optimistic and even more dependent, and I’m scared to engage with anyone smart enough to build something I can’t. The emerging challenge for today’s dev shop is knowing how to take control of third-party relationships—and when to avoid them. I’ll show you my approach, which is to ask a different set of questions entirely.

    A web of third parties

    I should start with a broad definition of what it is to be third party: If it’s a person and I don’t compensate them for the bulk of their workload, they’re third party. If it’s a company or service and I don’t control it, it’s third party. If it’s code and my team doesn’t grasp every line of it, it’s third party.

    The third-party landscape is rapidly expanding. GitHub has grown to almost 7 million users, and the WordPress plugin repo is approaching 1 billion downloads. Many of these solutions are easy for clients and competitors to implement; meanwhile, I’m still in the lab debugging my custom code. The idea of selling original work seems oddly…old-fashioned.

    Yet with so many third-party options to choose from, there are more chances than ever to veer off-course.

    What could go wrong?

    At a meeting a couple of years ago, I argued against using an external service to power a search widget on a client project. “We should do things ourselves,” I said. Not long after this, on the very same project, I argued in favor of using a third party to consolidate RSS feeds into a single document. “Why do all this work ourselves,” I said, “when this problem has already been solved?” My inconsistency was obvious to everyone. Being dogmatic about not using a third party is no better than flippantly jumping in with one, and I had managed to do both at once!

    But in one case, I believed the third party was worth the risk. In the other, it wasn’t. I just didn’t know how to communicate those thoughts to my team.

    I needed, in the parlance of our times, a decision-making framework. To that end, I’ve been maintaining a collection of points to think through at various stages of engagement with third parties. I’ll tour through these ideas using the search widget and the RSS digest as examples.

    The difference between a request and a goal

    This point often reveals false assumptions about what a client or stakeholder wants. In the case of the search widget, we began researching a service that our client specifically requested. Fitted with ajax navigation, full-text searching, and automated crawls to index content, it seemed like a lot to live up to. But when we asked our clients what exactly they were trying to do, we were surprised: they were entirely taken by the typeahead functionality; the other features were of very little perceived value.

    In the case of the RSS “smusher,” we already had an in-house tool that took an array of feed URLs and looped through them in order, outputting x posts per feed in some bespoke format. My first reaction was defensive: were they too good for our beloved multi-feed widget? But actually, the client had a distinctly different and worthwhile vision: they wanted x results from their array of sites in total, and they wanted them ordered by publication date, not grouped by site. I conceded.

    It might seem like an obvious first step, but I have seen projects set off in the wrong direction because the end goal is unknown. In both our examples now, we’re clear about that and we’re ready to evaluate solutions.

    To dev or to download

    Before deciding to use a third party, I find that I first need to examine my own organization, often in four particular ways: strengths, weaknesses, betterment, and mission.

    Strengths and weaknesses

    The search task aligned well with our strengths because we had good front-end developers and were skilled at extending our CMS. So when asked to make a typeahead search, we felt comfortable betting on ourselves. Had we done it before? Not exactly, but we could think through it.

    At the same time, backend infrastructure was a weakness for our team. We’d had a lot of turnover among our sysadmins, and at times it felt like we weren’t equipped to hire that sort of talent. As I was thinking through how we might build a feed-smusher of our own, I felt like I was tempting a weak underbelly. Maybe we’d have to set up a cron job to poll the desired URLs, grab feed content, and store that on our servers. Not rocket science, but cron tasks in particular were an albatross for us.

    Betterment of the team

    When we set out to achieve a goal for a client, it’s more than us doing work: it’s an opportunity for our team to better themselves by learning new skills. The best opportunities for this are the ones that present challenging but attainable tasks, which create incremental rewards. Some researchers cite this effect as a factor in gaming addiction. I’ve felt this myself when learning new things on a project, and those are some of my favorite work moments ever. Teams appreciate this and there is an organizational cost in missing a chance to pay them to learn. The typeahead search project looked like it could be a perfect opportunity to boost our skill level.

    Organizational mission

    If a new project aligns well with our mission, we’re going to resell it many times. It’s likely that we’ll want our in-house dev team to iterate on it, tailoring it to our needs. Indeed, we’ll have the budget to do so if we’re selling it a lot. No one had asked us for a feed-smusher before, so it didn’t seem reasonable to dedicate an R&D budget to it. In contrast, several other clients were interested in more powerful site search, so it looked like it would be time well spent.

    We’ve now clarified our end goals and we’ve looked at how these projects align with our team. Based on that, we’re doing the search widget ourselves, and we’re outsourcing the feed-smusher. Now let’s look more closely at what happens next for both cases.

    Evaluating the unknown

    The frustrating thing about working with third parties is that the most important decisions take place when we have the least information. But there are some things we can determine before committing. Familiarity, vitality, extensibility, branding, and Service Level Agreements (SLAs) are all observable from afar.

    Familiarity: is there a provider we already work with?

    Although we’re going to increase the number of third-party dependencies, we’ll try to avoid increasing the number of third-party relationships.

    Working with a known vendor has several potential benefits: they may give us volume pricing. Markup and style are likely to be consistent between solutions. And we just know them better than we’d know a new service.

    Vitality: will this service stick around?

    The worst thing we could do is get behind a service, only to have it shut down next month. A service with high vitality will likely (and rightfully) brag about enterprise clients by name. If it’s open source, it will have a passionate community of contributors. On the other hand, it could be advertising a shutdown. More often, it’s somewhere in the middle. Noting how often the service is updated is a good starting point in determining vitality.

    Extensibility: can this service adapt as our needs change?

    Not only do we have to evaluate the core service, we have to see how extensible it is by digging into its API. If a service is extensible, it’s more likely to fit for the long haul.

    APIs can also present new opportunities. For example, imagine selecting an email-marketing provider with an API that exposes campaign data. This might allow us to build a dashboard for campaign performance in our CMS—a unique value-add for our clients, and a chance to keep our in-house developers invested and excited about the service.

    Branding: is theirs strong, or can you use your own?

    White-labeling is the practice of reselling a service with your branding instead of that of the original provider. For some companies, this might make good sense for marketing. I tend to dislike white-labeling. Our clients trust us to make choices, and we should be proud to display what those choices are. Either way, you want to ensure you’re comfortable with the brand you’ll be using.

    SLAs: what are you getting, beyond uptime?

    For client-side products, browser support is a factor: every external dependency represents another layer that could abandon older browsers before we’re ready. There’s also accessibility. Does this new third party support users with accessibility needs to the degree that we require? Perhaps most important of all is support. Can we purchase a priority support plan that offers fast and in-depth help?

    In the case of our feed-smusher service, there was no solution that ran the table. The most popular solution actually had a shutdown notice! There were a couple of smaller providers available, but we hadn’t worked with either before. Browser support and accessibility were moot since we’d be parsing the data and displaying it ourselves. The uptime concern was also diminished because we’d be sure to cache the results locally. Anyway, with viable candidates in hand, we can move on to more productive concerns than dithering between two similar solutions.

    Relationship maintenance

    If someone else is going to do the heavy lifting, I want to assume as much of the remaining burden as possible. Piloting, data collection, documentation, and in-house support are all valuable opportunities to buttress this new relationship.

    As exciting as this new relationship is, we don’t want to go dashing out of the gates just yet. Instead, we’ll pilot the service with a few carefully chosen clients before unleashing it any further. Cull suggestions from team members to find good candidates for piloting, aiming for a mix of edge cases and the norm.

    If the third party happens to collect data of any kind, we should also have an automated way to import a copy of it—not just as a backup, but also as a cached version we can serve to minimize latency. If we are serving a popular dependency from a CDN, we want to send a local version if that call should fail.

    If our team doesn’t have a well-traveled directory of provider relationships, the backstory can get lost. Let a few months pass, throw in some personnel turnover, and we might forget why we even use a service, or why we opted for a particular package. Everyone on our team should know where and how to learn about our third-party relationships.

    We don’t need every team member to be an expert on the service, yet we don’t want to wait for a third-party support staff to respond to simple questions. Therefore, we should elect an in-house subject-matter expert. It doesn’t have to be a developer. We just need somebody tasked with monitoring the service at regular intervals for API changes, shutdown notices, or new features. They should be able to train new employees and route more complex support requests to the third party.

    In our RSS feed example, we knew we’d read their output into our database. We documented this relationship in our team’s most active bulletin, our CRM software. And we made managing external dependencies a primary part of one team member’s job.

    DIY: a third party waiting to happen?

    Stop me if you’ve heard this one before: a prideful developer assures the team that they can do something themselves. It’s a complex project. They make something and the company comes to rely on it. Time goes by and the in-house product is doing fine, though there is a maintenance burden. Eventually, the developer leaves the company. Their old product needs maintenance, no one knows what to do, and since it’s totally custom, there is no such thing as a community for it.

    Once you decide to build something in-house, how can you prevent that work from devolving into a resented, alien dependency? 

    • Consider pair-programming. What better way to ensure that multiple people understand a product, than to have multiple people build it?
    • “Job-switch Tuesdays.” When feasible, we have developers switch roles for an entire day. Literally, in our ticketing system, it’s as though one person is another. It’s a way to force cross-training without doubling the hours needed for a task.
    • Hold code reviews before new code is pushed. This might feel slightly intrusive at first, but that passes. If it’s not readable, it’s not deployable. If you have project managers with a technical bent, empower them to ask questions about the code, too.
    • Bring moldy code into light by displaying it as phpDoc, JSDoc, or similar.
    • Beware the big. Create hourly estimates in Fibonacci increments. As a project gets bigger, so does its level of uncertainty. The Fibonacci steps are biased against under-budgeting, and also provide a cue to opt out of projects that are too difficult to estimate. In that case, it’s likely better to toe-in with a third party instead of blazing into the unknown by yourself.

    All of these considerations apply to our earlier example, the typeahead search widget. Most germane is the provision to “beware the big.” When I say “big,” I mean that relative to what usually works for a given team. In this case, it was a deliverable that felt very familiar in size and scope: we were being asked to extend an open-source CMS. If instead we had been asked to make a CMS, alarms would have gone off.

    Look before you leap, and after you land

    It’s not that third parties are bad per se. It’s just that the modern web team strikes me as a strange place: not only do we stand on the shoulders of giants, we do so without getting to know them first—and we hoist our organizations and clients up there, too.

    Granted, there are many things you shouldn’t do yourself, and it’s possible to hurt your company by trying to do them—NIH is a problem, not a goal. But when teams err too far in the other direction, developers become disenfranchised, components start to look like spare parts, and clients pay for solutions that aren’t quite right. Using a third party versus staying in-house is a big decision, and we need to think hard before we make it. Use my line of questions, or come up with one that fits your team better. After all, you’re your own best dependency.

  • One Step Ahead: Improving Performance with Prebrowsing 

    We all want our websites to be fast. We optimize images, create CSS sprites, use CDNs, cache aggressively, and gzip and minimize static content. We use every trick in the book.

    But we can still do more. If we want faster outcomes, we have to think differently. What if, instead of leaving our users to stare at a spinning wheel, waiting for content to be delivered, we could predict where they wanted to go next? What if we could have that content ready for them before they even ask for it?

    We tend to see the web as a reactive model, where every action causes a reaction. Users click, then we take them to a new page. They click again, and we open another page. But we can do better. We can be proactive with prebrowsing.

    The three big techniques

    Steve Souders coined the term prebrowsing (from predictive browsing) in one of his articles late last year. Prebrowsing is all about anticipating where users want to go and preparing the content ahead of time. It’s a big step toward a faster and less visible internet.

    Browsers can analyze patterns to predict where users are going to go next, and start DNS resolution and TCP handshakes as soon as users hover over links. But to get the most out of these improvements, we can enable prebrowsing on our web pages, with three techniques at our disposal:

    • DNS prefetching
    • Resource prefetching
    • Prerendering

    Now let’s dive into each of these separately.

    DNS prefetching

    Whenever we know our users are likely to request a resource from a different domain than our site, we can use DNS prefetching to warm the machinery for opening the new URL. The browser can pre-resolve the DNS for the new domain ahead of time, saving several milliseconds when the user actually requests it. We are anticipating, and preparing for an action.

    Modern browsers are very good at parsing our pages, looking ahead to pre-resolve all necessary domains ahead of time. Chrome goes as far as keeping an internal list with all related domains every time a user visits a site, pre-resolving them when the user returns (you can see this list by navigating to chrome://dns/ in your Chrome browser). However, sometimes access to new URLs may be hidden behind redirects or embedded in JavaScript, and that’s our opportunity to help the browser.

    Let’s say we are downloading a set of resources from the domain cdn.example.com using a JavaScript call after a user clicks a button. Normally, the browser would have to resolve the DNS at the time of the click, but we can speed up the process by including a dns-prefetch directive in the head section of our page:

    <link rel="dns-prefetch" href="http://cdn.example.com">
    

    Doing this informs the browser of the existence of the new domain, and it will combine this hint with its own pre-resolution algorithm to start a DNS resolution as soon as possible. The entire process will be faster for the user, since we are shaving off the time for DNS resolution from the operation. (Note that browsers do not guarantee that DNS resolution will occur ahead of time; they simply use our hint as a signal for their own internal pre-resolution algorithm.)

    But exactly how much faster will pre-resolving the DNS make things? In your Chrome browser, open chrome://histograms/DNS and search for DNS.PrefetchResolution. You’ll see a table like this:

    Histogram for DNS.PrefetchResolution

    This histogram shows my personal distribution of latencies for DNS prefetch requests. On my computer, for 335 samples, the average time is 88 milliseconds, with a median of approximately 60 milliseconds. Shaving 88 milliseconds off every request our website makes to an external domain? That’s something to celebrate.

    But what happens if the user never clicks the button to access the cdn.example.com domain? Aren’t we pre-resolving a domain in vain? We are, but luckily for us, DNS prefetching is a very low-cost operation; the browser will need to send only a few hundred bytes over the network, so the risk incurred by a preemptive DNS lookup is very low. That being said, don’t go overboard when using this feature; prefetch only domains that you are confident the user will access, and let the browser handle the rest.

    Look for situations that might be good candidates to introduce DNS prefetching on your site:

    • Resources on different domains hidden behind 301 redirects
    • Resources accessed from JavaScript code
    • Resources for analytics and social sharing (which usually come from different domains)

    DNS prefetching is currently supported on IE11, Chrome, Chrome Mobile, Safari, Firefox, and Firefox Mobile, which makes this feature widespread among current browsers. Browsers that don’t currently support DNS prefetching will simply ignore the hint, and DNS resolution will happen in a regular fashion.

    Resource prefetching

    We can go a little bit further and predict that our users will open a specific page in our own site. If we know some of the critical resources used by this page, we can instruct the browser to prefetch them ahead of time:

    <link rel="prefetch" href="http://cdn.example.com/library.js">
    

    The browser will use this instruction to prefetch the indicated resources and store them on the local cache. This way, as soon as the resources are actually needed, the browser will have them ready to serve.

    Unlike DNS prefetching, resource prefetching is a more expensive operation; be mindful of how and when to use it. Prefetching resources can speed up our websites in ways we would never get by merely prefetching new domains—but if we abuse it, our users will pay for the unused overhead.

    Let’s take a look at the average response size of some of the most popular resources on a web page, courtesy of the HTTP Archive:

    Chart of average response size of web page resources

    On average, prefetching a script file (as we are doing in the example above) will cause 16kB to be transmitted over the network (not including the size of the request itself). This means we save 16kB of download time, plus server response time, which is amazing—provided the user later accesses the file. If the user never accesses the file, we actually made the entire workflow slower by introducing an unnecessary delay.

    If you decide to use this technique, prefetch only the most important resources, and make sure they are cacheable by the browser. Images, CSS, JavaScript, and font files are usually good candidates for prefetching, but HTML responses are not since they aren’t cacheable.

    Here are some situations where, due to the likelihood of the user visiting a specific page, you can prefetch resources ahead of time:

    • On a login page, since users are usually redirected to a welcome or dashboard page after logging in
    • On each page of a linear questionnaire or survey workflow, where users are visiting subsequent pages in a specific order
    • On a multi-step animation, since you know ahead of time which images are needed on subsequent scenes

    Resource prefetching is currently supported on IE11, Chrome, Chrome Mobile, Firefox, and Firefox Mobile. (To determine browser compatibility, you can run a quick browser test on prebrowsing.com.)

    Prerendering

    What about going even further and asking for an entire page? Let’s say we are absolutely sure that our users are going to visit the about.html page in our site. We can give the browser a hint:

    <link rel="prerender" href="http://example.com/about.html">
    

    This time the browser will download and render the page in the background ahead of time, and have it ready for the user as soon as they ask for it. The transition from the current page to the prerendered one will be instantaneous.

    Needless to say, prerendering is the most risky and costly of these three techniques. Misusing it can cause major bandwidth waste—especially harmful for users on mobile devices. To illustrate this, let’s take a look at this chart, also courtesy of the HTTP Archive:

    Graph of total transfer size and total requests to render a web page

    In June of this year, the average number of requests to render a web page was 96, with a total size of 1,808kB. So if your user ends up accessing your prerendered page, then you’ve hit the jackpot: you’ll save the time of downloading almost 2,000kB, plus server response time. But if you’re wrong and your user never accesses the prerendered page, you’ll make them pay a very high cost.

    When deciding whether to prerender entire pages ahead of time, consider that Google prerenders the top results on its search page, and Chrome prerenders pages based on the historical navigation patterns of users. Using the same principle, you can detect common usage patterns and prerender target pages accordingly. You can also use it, just like resource prefetching, on questionnaires or surveys where you know users will complete the workflow in a particular order.
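    As a rough sketch of that idea, the hint can be injected only when our own heuristic suggests the user is likely to continue; every name and path here is hypothetical:

    <script>
      // Hypothetical heuristic: the user arrived from step one of a
      // survey, so step two is very likely to be visited next.
      var likelyNext = document.referrer.indexOf('/survey/step-1') !== -1;
      if (likelyNext) {
        var hint = document.createElement('link');
        hint.rel = 'prerender';
        hint.href = 'http://example.com/survey/step-2.html';
        document.head.appendChild(hint);
      }
    </script>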

    At this time, prerendering is only supported on IE11, Chrome, and Chrome Mobile. Neither Firefox nor Safari has added support for this technique yet. (And as with resource prefetching, you can check prebrowsing.com to test whether this technique is supported in your browser.)

    A final word

    Sites like Google and Bing are using these techniques extensively to make search instant for their users. Now it’s time for us to go back to our own sites and take another look. Can we make our experiences better and faster with prefetching and prerendering?

    Browsers are already working behind the scenes, looking for patterns in our sites to make navigation as fast as possible. Prebrowsing builds on that: we can combine the insight we have on our own pages with further analysis of user patterns. By helping browsers do a better job, we speed up and improve the experience for our users.

  • Valediction 

    When I first met Kevin Cornell in the early 2000s, he was employing his illustration talent mainly to draw caricatures of his fellow designers at a small Philadelphia design studio. Even in that rough, dashed-off state, his work floored me. It was as if Charles Addams and my favorite Mad Magazine illustrators from the 1960s had blended their DNA to spawn the perfect artist.

    Kevin would deny that label, but artist he is. For there is a vision in his mind, a way of seeing the world, that is unlike anyone else’s—and he has the gift to make you see it too, and to delight, inspire, and challenge you with what he makes you see.

    Kevin was part of a small group of young designers and artists who had recently completed college and were beginning to establish careers. Others from that group included Rob Weychert, Matt Sutter, and Jason Santa Maria. They would all go on to do fine things in our industry.

    It was Jason who brought Kevin on as house illustrator during the A List Apart 4.0 brand overhaul in 2005, and Kevin has worked his strange magic for us ever since. If you’re an ALA reader, you know how he translates the abstract web design concepts of our articles into concrete, witty, and frequently absurd situations. Above all, he is a storyteller—if pretentious designers and marketers haven’t sucked all the meaning out of that word.

    For nearly 10 years, Kevin has taken our well-vetted, practical, frequently technical web design and development pieces, and elevated them to the status of classic New Yorker articles. Tomorrow he publishes his last new illustrations with us. There will never be another like him. And for whatever good it does him, Kevin Cornell has my undying thanks, love, and gratitude.

  • My Favorite Kevin Cornell 

    After 200 issues—yes, two hundred—Kevin Cornell is retiring from his post as A List Apart’s staff illustrator. Tomorrow’s issue will be the last one featuring new illustrations from him.

    Sob.

    For years now, we’ve eagerly awaited Kevin’s illustrations each issue, opening his files with all the patience of a kid tearing into a new LEGO set.

    But after nine years and more than a few lols, it’s time to give Kevin’s beautifully deranged brain a rest.

    We’re still figuring out what comes next for ALA, but while we do, we’re sending Kevin off the best way we know how: by sharing a few of our favorite illustrations. Read on for stories from ALA staff, past and present—and join us in thanking Kevin for his talent, his commitment, and his uncanny ability to depict seemingly any concept using animals, madmen, and circus figures.

    Of all the things I enjoyed about working on A List Apart, I loved anticipating the reveal: seeing Kevin’s illos for each piece, just before the issue went live. Every illustration was always a surprise—even to the staff. My favorite, hands-down, was his artwork for “The Discipline of Content Strategy,” by Kristina Halvorson. In 2008, content was web design’s “elephant in the room” and Kevin’s visual metaphor nailed it. In a drawing, he encapsulated thoughts and feelings many had within the industry but were unable to articulate. That’s the mark of a master.

    —Krista Stevens, Editor-in-chief, 2006–2012

    In the fall of 2011, I submitted my first article to A List Apart. I was terrified: I didn’t know anyone on staff. The authors’ list read like a who’s who of web design. The archives were intimidating. But I had ideas, dammit. I hit send.

    I told just one friend what I’d done. His eyes lit up. “Whoa. You’d get a Kevin Cornell!” he said.

    Whoa indeed. I might get a Kevin Cornell?! I hadn’t even thought about that yet.

    Like Krista, I fell in love with Kevin’s illustration for “The Discipline of Content Strategy”—an illustration that meant the world to me as I helped my clients see their own content elephants. The idea of having a Cornell of my own was exciting, but terrifying. Could I possibly write something worthy of his illustration?

    Months later, there it was on the screen: little modular sandcastles illustrating my article on modular content. I was floored.

    Now, after two years as ALA’s editor in chief, I’ve worked with Kevin through dozens of issues. But you know what? I’m just as floored as ever.

    Thank you, Kevin, you brilliant, bizarre, wonderful friend.

    —Sara Wachter-Boettcher, Editor-in-chief

    It’s impossible for me to choose a favorite of Kevin’s body of work for ALA, because my favorite Cornell illustration is the witty, adaptable, humane language of characters and symbols underlying his years of work. If I had to pick a single illustration to represent the evolution of his visual language, I think it would be the hat-wearing nested egg with the winning smile that opened Andy Hagen’s “High Accessibility is Effective Search Engine Optimization.” An important article but not, perhaps, the juiciest title A List Apart has ever run…and yet there’s that little egg, grinning in his slightly dopey way.

    If my memory doesn’t fail me, this is the second appearance of the nested Cornell egg—we saw the first a few issues before in Issue 201, where it represented the nested components of an HTML page. When it shows up here, in Issue 207, we realize that the egg wasn’t a cute one-off, but the first syllable of a visual language that we’ll see again and again through the years. And what a language! Who else could make semantic markup seem not just clever, but shyly adorable?

    A wander through the ALA archives provides a view of Kevin’s changing style, but something visible only backstage was his startlingly quick progression from reading an article to sketching initial ideas in conversation with then-creative director Jason Santa Maria to turning out a lovely miniature—and each illustration never failed to make me appreciate the article it introduced in a slightly different way. When I was at ALA, Kevin’s unerring eye for the important detail as a reader astonished me almost as much as his ability to give that (often highly technical, sometimes very dry) idea a playful and memorable visual incarnation. From the very first time his illustrations hit the A List Apart servers he’s shared an extraordinary gift with its readers, and as a reader, writer, and editor, I will always count myself in his debt.

    —Erin Kissane, Editor-in-chief, contributing editor, 1999–2009

    So much of what makes Kevin’s illustrations work are the gestures. The way the figure sits a bit slouched, but still perched on gentle tippy toes, determinedly occupied pecking away on his phone. With just a few lines, Kevin captures a mood and moment anyone can feel.

    —Jason Santa Maria, Former creative director

    I’ve had the pleasure of working with Kevin on the illustrations for each issue of A List Apart since we launched the latest site redesign in early 2013. By working, I mean replying to his email with something along the lines of “Amazing!” when he sent over the illustrations every couple of weeks.

    Prior to launching the new design, I had to go through the backlog of Kevin’s work for ALA and do the production work needed for the new layout. This bird’s eye view gave me an appreciation of the ongoing metaphorical world he had created for the magazine—the birds, elephants, weebles, mad scientists, ACME products, and other bits of amusing weirdness that breathed life into the (admittedly, sometimes) dry topics covered.

    If I had to pick a favorite, it would probably be the illustration that accompanied the unveiling of the redesign, A List Apart 5.0. The shoe-shine man carefully working on his own shoes was the perfect metaphor for both the idea of design as craft and the back-stage nature of the profession—working to make others shine, so to speak. It was a simple and humble concept, and I thought it created the perfect tone for the launch.

    —Mike Pick, Creative director

    So I can’t pick one favorite illustration that Kevin’s done. I just can’t. I could prattle on about this, that, or that other one, and tell you everything I love about each of ’em. I mean, hell: I still have a print of the illustration he did for my very first ALA article. (The illustration is, of course, far stronger than the essay that follows it.)

    But his illustration for James Christie’s excellent “Sustainable Web Design” is a perfect example of everything I love about Kevin’s ALA work: how he conveys emotion with a few deceptively simple lines; the humor he finds in contrast; the occasional chicken. Like most of Kevin’s illustrations, I’ve seen it whenever I reread the article it accompanies, and I find something new to enjoy each time.

    It’s been an honor working alongside your art, Kevin—and, on a few lucky occasions, having my words appear below it.

    Thanks, Kevin.

    —Ethan Marcotte, Technical editor

    Kevin’s illustration for Cameron Koczon’s “Orbital Content” is one of the best examples I can think of to show off his considerable talent. Those balloons are just perfect: vaguely reminiscent of cloud computing, but tethered and within arm’s reach, and evoking the fun and chaos of carnivals and county fairs. No other illustrator I’ve ever worked with is as good at translating abstract concepts into compact, visual stories. A List Apart won’t be the same without him.

    —Mandy Brown, Former contributing editor

    Kevin has always had what seems like a preternatural ability to take an abstract technical concept and turn it into a clear and accessible illustration.

    My favorite pieces are the ones he did for the third anniversary of the original “Responsive Web Design” article…the web’s first “responsive” illustration? Try squishing your browser here to see it in action—Ed

    —Tim Murtaugh, Technical director

    I think it may be impossible for me to pick just one illustration of Kevin’s that I really like. Much like trying to pick your one favorite album or that absolutely perfect movie, picking a true favorite is simply folly. You can whittle down the choices, but it’s guaranteed that the list will be sadly incomplete and longer (much longer) than one.

    If held at gunpoint, however ridiculous that sounds, and asked which of Kevin’s illustrations is my favorite, close to the top of the list would definitely be “12 Lessons for Those Afraid of CSS Standards.” It’s just so subtle, and yet so pointed.

    What I personally love the most about Kevin’s work is the overall impact it can have on people seeing it for the first time. It has become commonplace within our ranks to hear the phrase, “This is my new favorite Kevin Cornell illustration” with the publishing of each issue. And rightly so. His wonderfully simple style (which is also deceptively clever and just so smart) paired with the fluidity that comes through in his brush work is magical. Case in point for me would be his piece for “The Problem with Passwords” which just speaks volumes about the difficulty and utter ridiculousness of selecting a password and security question.

    We, as a team, have truly been spoiled by having him in our ranks for as long as we have. Thank you Kevin.

    —Erin Lynch, Production manager

    The elephant was my first glimpse at Kevin’s elegantly whimsical visual language. I first spotted it, a patient behemoth being studied by nonplussed little figures, atop Kristina Halvorson’s “The Discipline of Content Strategy,” which made no mention of elephants at all. Yet the elephant added to my understanding: content owners from different departments focus on what’s nearest to them. The content strategist steps back to see the entire thing.

    When Rachel Lovinger wrote about “Content Modelling,” the elephant made a reappearance as a yet-to-be-assembled, stylized elephant doll. The unflappable elephant has also been the mascot of product development at the hands of a team trying to construct it from user research, strutted its stuff as curated content, enjoyed the diplomatic guidance of a ringmaster, and been impersonated by a snake to tell us that busting silos is helped by a better understanding of others’ discourse conventions.

    The delight in discovering Kevin’s visual rhetoric doesn’t end there. With doghouses, birdhouses, and fishbowls, Kevin speaks of environments for users and workers. With owls he represents the mobile experience and smartphones. With a team arranging themselves to fit into a group photo, he makes the concept of responsive design easier to grasp.

    Not only has Kevin trained his hand and eye to produce the gestures, textures, and compositions that are uniquely his, but he has trained his mind to speak in a distinctive visual language—and he can do it on deadline. That is some serious mastery of the art.

    —Rose Weisburd, Columns editor

  • Measure Twice, Cut Once 

    Not too long ago, I had a few rough days in support of a client project. The client had a big content release, complete with a media embargo and the like. I woke up on the day of the launch, and things were bad. I was staring straight into a wall of red.

    A response and downtime report

    Thanks to the intrinsic complexity of software engineering, these situations happen—I’ve been through them before, and I’ll certainly be through them again. While the particulars change, there are two guiding principles I rely on when I find myself looking up that hopelessly tall cliff of red.

    You can’t be at the top of your game while stressed and nervous about the emergency, so unless there’s an obvious, quick-to-deploy resolution, you need to give yourself some cover to work.

    What that means will be unique to every situation, but as strange as it may sound, don’t dive into work on the be-all and end-all solution right off the bat. Take a few minutes to find a way to provide a bit of breathing room for you to build and implement the long-term solution in a stable, future-friendly way.

    Ideally, the cover you’re providing shouldn’t affect the users too much. Consider beefing up your caching policies to lighten the load on your servers as much as possible. If there’s any functionality that is particularly taxing on your hardware and isn’t mission critical, disable it temporarily. Even if keeping the servers alive means pressing a button every 108 minutes like you’re Desmond from Lost, do it.

    After you’ve got some cover, work the problem slowly and deliberately. Think solutions through two or three times to be sure they’re the right course of action.

    With the pressure eased, you don’t have to rush through a cycle of building, deploying, and testing potential fixes. Rushing leads to oversight of important details, and typically, that cycle ends the first time a change fixes (or seemingly fixes) the issue, which can lead to sloppy code and weak foundations for the future.

    If the environment doesn’t allow you to ease the pressure enough to work slowly, go ahead and cycle your way to a hacky solution. But don’t forget to come back and work the root issue, or else temporary fixes will pile up and eat away at your system’s architecture like a swarm of termites.

    Emergencies often require more thought and planning than everyday development, so be sure to give yourself the necessary time. Reactions alone may patch an issue, but thoughtfulness can solve it.


  • How We Read 

    I want you to think about what you’re doing right now. I mean really think about it. As your eyes move across these lines and funnel information to your brain, you’re taking part in a conversation I started with you. The conveyance of that conversation is the type you’re reading on this page, but you’re also filtering it through your experiences and past conversations. You’re putting these words into context. And whether you’re reading this book on paper, on a device, or at your desk, your environment shapes your experience too. Someone else reading these words may go through the same motions, but their interpretation is inevitably different from yours.

    This is the most interesting thing about typography: it’s a chain reaction of time and place with you as the catalyst. The intention of a text depends on its presentation, but it needs you to give it meaning through reading.

    Type and typography wouldn’t exist without our need to express and record information. Sure, we have other ways to do those things, like speech or imagery, but type is efficient, flexible, portable, and translatable. This is what makes typography not only an art of communication, but one of nuance and craft, because like all communication, its value falls somewhere on a spectrum between success and failure.

    The act of reading is beautifully complex, and yet, once we know how, it’s a kind of muscle memory. We rarely think about it. But because reading is so intrinsic to every other thing about typography, it’s the best place for us to begin. We’ve all made something we wanted someone else to read, but have you ever thought about that person’s reading experience?

    Just as you’re my audience for this book, I want you to look at your audience too: your readers. One of design’s functions is to entice and delight. We need to welcome readers and convince them to sit with us. But what circumstances affect reading?

    Readability

    Just because something is legible doesn’t mean it’s readable. Legibility means that text can be interpreted, but that’s like saying tree bark is edible. We’re aiming higher. Readability combines the emotional impact of a design (or lack thereof) with the amount of effort it presumably takes to read. You’ve heard of TL;DR (too long; didn’t read)? Length isn’t the only deterrent to reading; poor typography is one too. To paraphrase Stephen Coles, the term readability doesn’t ask simply, “Can you read it?” but “Do you want to read it?”

    Each decision you make could potentially hamper a reader’s understanding, causing them to bail and update their Facebook status instead. Don’t let your design deter your readers or stand in the way of what they want to do: read.

    Once we bring readers in, what else can we do to keep their attention and help them understand our writing? Let’s take a brief look at what the reading experience is like and how design influences it.

    The act of reading

    When I first started designing websites, I assumed everyone read my work the same way I did. I spent countless hours crafting the right layout and type arrangements. I saw the work as a collection of the typographic considerations I made: the lovingly set headlines, the ample whitespace, the typographic rhythm (fig 1.1). I assumed everyone would see that too.

    A normal paragraph of text
    Fig 1.1: A humble bit of text. But what actually happens when someone reads it?

    It’s appealing to think that’s the case, but reading is a much more nuanced experience. It’s shaped by our surroundings (am I in a loud coffee shop or otherwise distracted?), our availability (am I busy with something else?), our needs (am I skimming for something specific?), and more. Reading is not only informed by what’s going on with us at that moment, but also governed by how our eyes and brains work to process information. What you see and what you’re experiencing as you read these words is quite different.

    As our eyes move across the text, our minds gobble up the type’s texture—the sum of the positive and negative spaces inside and around letters and words. We don’t linger on those spaces and details; instead, our brains do the heavy lifting of parsing the text and assembling a mental picture of what we’re reading. Our eyes see the type and our brains see Don Quixote chasing a windmill.

    Or, at least, that’s what we hope. This is the ideal scenario, but it depends on our design choices. Have you ever been completely absorbed in a book and lost in the passing pages? Me too. Good writing can do that, and good typography can grease the wheels. Without getting too scientific, let’s look at the physical process of reading.

    Saccades and fixations

    Reading isn’t linear. Instead, our eyes perform a series of back and forth movements called saccades, or lightning-fast hops across a line of text (fig 1.2). Sometimes it’s a big hop; sometimes it’s a small hop. Saccades help our eyes register a lot of information in a short span, and they happen many times over the course of a second. A saccade’s length depends on our proficiency as readers and our familiarity with the text’s topic. If I’m a scientist and reading, uh, science stuff, I may read it more quickly than a non-scientist, because I’m familiar with all those science-y words. Full disclosure: I’m not really a scientist. I hope you couldn’t tell.

    Paragraph showing saccades or the movement our eyes make as we read a line of text
    Fig 1.2: Saccades are the leaps that happen in a split second as our eyes move across a line of text.

    Between saccades, our eyes stop for a fraction of a second in what’s called a fixation (fig 1.3). During this brief pause we see a couple of characters clearly, and the rest of the text blurs out like ripples in a pond. Our brains assemble these fixations and decode the information at lightning speed. This all happens on reflex. Pretty neat, huh?

    Paragraph showing the fixations or stopping points our eyes make as we read a paragraph
    Fig 1.3: Fixations are the brief moments of pause between saccades.

    The shapes of letters and the shapes they make when combined into words and sentences can significantly affect our ability to decipher text. If we look at an average line of text and cover the top halves of the letters, it becomes very difficult to read. If we do the opposite and cover the bottom halves, we can still read the text without much effort (fig 1.4).

    Paragraph showing how the upper halves of letters are still readable to the human eye
    Fig 1.4: Though the letters’ lower halves are covered, the text is still mostly legible, because much of the critical visual information is in the tops of letters.

    This is because letters generally carry more of their identifying features in their top halves. The sum of each word’s letterforms creates the word shapes we recognize when reading.

    Once we start to subconsciously recognize letters and common words, we read faster. We become more proficient at reading under similar conditions, an idea best encapsulated by type designer Zuzana Licko: “Readers read best what they read most.”

    It’s not a hard and fast rule, but close. The more foreign the letterforms and information are to us, the more slowly we discern them. If we traveled back in time to the Middle Ages with a book typeset in a super-awesome sci-fi font, the folks from the past might have difficulty with it. But here in the future, we’re adept at reading that stuff, all whilst flying around on hoverboards.

    For the same reason, we sometimes have trouble deciphering someone else’s handwriting: their letterforms and idiosyncrasies seem unusual to us. Yet we’re pretty fast at reading our own handwriting (fig 1.5).

    Three paragraphs of handwritten text
    Fig 1.5: While you’re very familiar with your own handwriting, reading someone else’s (like mine!) can take some time to get used to.

    There have been many studies on the reading process, with only a bit of consensus. Reading acuity depends on several factors, starting with the task the reader intends to accomplish. Some studies show that we read in word shapes—picture a chalk outline around an entire word—while others suggest we decode things letter by letter. Most findings agree that ease of reading relies on the visual feel and precision of the text’s setting (how much effort it takes to discern one letterform from another), combined with the reader’s own proficiency.

    Consider a passage set in all capital letters (fig 1.6). You can become adept at reading almost anything, but most of us aren’t accustomed to reading lots of text in all caps. Compared to the normal sentence-case text, the all-caps text feels pretty impenetrable. That’s because the capital letters are blocky and don’t create much contrast between themselves and the whitespace around them. The resulting word shapes are basically plain rectangles (fig 1.7).

    Paragraph illustrating the difficulty of reading text in all caps
    Fig 1.6: Running text in all caps can be hard to read quickly when we’re used to sentence case.
    Paragraph showing how words are recognizable by the shapes they form
    Fig 1.7: Our ability to recognize words is affected by the shapes they form. All-caps text forms blocky shapes with little distinction, while mixed-case text forms irregular shapes that help us better identify each word.

    Realizing that the choices we make in typefaces and typesetting have such an impact on the reader was eye-opening for me. Small things like the size and spacing of type can add up to great advantages for readers. When they don’t notice those choices, we’ve done our job. We’ve gotten out of their way and helped them get closer to the information.

    Stacking the deck

    Typography on screen differs from print in a few key ways. Readers deal with two reading environments: the physical space (and its lighting) and the device. A reader may spend a sunny day at the park reading on their phone. Or perhaps they’re in a dim room reading subtitles off their TV ten feet away. As designers, we have no control over any of this, and that can be frustrating. As much as I would love to go over to every reader’s computer and fix their contrast and brightness settings, this is the hand we’ve been dealt.

    The best solution to unknown unknowns is to make our typography perform as well as it can in all situations, regardless of screen size, connection, or potential lunar eclipse. We’ll look at some methods for making typography as sturdy as possible later in this book.

    It’s up to us to keep the reading experience unencumbered. At the core of typography is our audience, our readers. As we look at the building blocks of typography, I want you to keep those readers in mind. Reading is something we do every day, but we can easily take it for granted. Slapping words on a page won’t ensure good communication, just as mashing your hands across a piano won’t make for a pleasant composition. The experience of reading and the effectiveness of our message are determined by both what we say and how we say it. Typography is the primary tool we use as designers and visual communicators to speak.


  • The Most Dangerous Word In Software Development 

    “Just put it up on a server somewhere.”

    “Just add a favorite button to the right side of the item.”

    “Just add [insert complex option here] to the settings screen.”

    Usage of the word “just” points to a lot of assumptions being made. A few months ago, Brad Frost shared some thoughts on how the word applies to knowledge.

    “Just” makes me feel like an idiot. “Just” presumes I come from a specific background, studied certain courses in university, am fluent in certain technologies, and have read all the right books, articles, and resources.

    He points out that learning is never as easy as it is made to seem, and he’s right. But there is a direct correlation between the amount of knowledge you’ve acquired and the danger of the word “just.” The more you know, the bigger the problems you solve, and the bigger the assumptions hiding behind the word.

    Take the comment, “Just put it up on a server somewhere.” How many times have we heard that? But taking a side project running locally and deploying it on real servers requires time, money, and hard work. Some tiny piece of software somewhere will probably be the wrong version, and will need to be addressed. The system built locally probably isn’t built to scale perfectly.

    “Just” implies that all of the thinking behind a feature or system has been done. Even worse, it implies that all of the decisions that will have to be made in the course of development have already been discovered—and that’s never the case.

    Things change when something moves from concept to reality. As Dave Wiskus said on a recent episode of Debug, “everything changes when fingers hit glass.”

    The favorite button may look fine on the right side, visually, but it might be in a really tough spot to touch. What about when favoriting isn’t the only action to be taken? What happens to the favorite button then?

    Even once favoriting is built and in testing, it should be put through its paces again. In use, does favoriting provide enough value to warrant its existence? After all, “once that feature’s out there, you’re stuck with it.”

    When you hear the word “just” being thrown around, dig deep into that statement and find all of the assumptions made within it. Zoom out and think slow.

    Your product lives and dies by the decisions discovered between ideation and creation, so don’t just put it up on a server somewhere.