A List Apart

Articles for people who make web sites.

Breaking the Deadlock Between User Experience and Developer Experience

Thu, 09/13/2018 - 07:01

In early 2013, less than 14% of all web traffic came from mobile devices; today, that number has grown to 53%. In other parts of the world the difference is even more staggering: in African countries, more than 64% of web traffic is from mobile devices; in India, nearly 78% of traffic is mobile. This is a big deal, because all 248 million new internet users in 2017 lived outside the United States.

And while internet connections are getting faster, there are still dozens of countries that access the web at speeds of less than 2 Mbps. Even in developed nations, people on mobile devices see spotty coverage, flaky wifi connections, and coverage interruptions (like train tunnels or country roads).

This means we can no longer talk about user experience (UX) without including performance as a first-class requirement. A Google study found that 53% of mobile users abandon a page if it takes longer than three seconds to load—and none of us are willing to lose half our traffic, right?

User experience and performance are already aligned—in theory

User experience designers and researchers lay a solid foundation for building modern web apps. By thinking about who the user is, what they’re trying to accomplish, and what environments they might be in when using our app, we already spot several performance necessities: a commuter, for example, will be accessing the app from their phone or a low-speed public wifi connection with spotty coverage.

For that type of user, we know to focus on a fast load time—remember, three seconds or more and we’ll lose half our visitors—in addition to an experience that works well, even on unstable connections. And since downloading huge files will also take a long time for this user, reducing the amount of code we ship becomes necessary as well.

UX and performance have issues in practice

My sister loves dogs. Once, as a kid, she attack-hugged our dog and loved it so hard that it panicked and bit her.

The web community’s relationship with UX is not unlike my sister’s with dogs: we’re trying so hard to love our users that we’re making them miserable.

Our efforts to measure and improve UX are packed with tragically ironic attempts to love our users: we try to find ways to improve our app experiences by bloating them with analytics, split testing, behavioral analysis, and Net Promoter Score popovers. We stack plugins on top of third-party libraries on top of frameworks in the name of making websites “better”—whether it’s something misguided, like adding a carousel to appease some executive’s burning desire to get everything “above the fold,” or something truly intended to help people, like a support chat overlay. Often the net result is a slower page load, a frustrating experience, and/or (usually “and”) a ton of extra code and assets transferred to the browser.

The message we appear to be sending is, “We care so much about your experience as a user that we’re willing to grind it to a halt so we can ask you about it, and track how you use the things we build!”

Making it worse by trying to make it better

We’re not adding this bloat because we’re intentionally trying to ruin the experience for our users; we’re adding it because it’s made up of tools that solve hard development problems and save us from reinventing the wheel.

When we add these tools, we’re still trying to improve the experience, but we’ve now shifted our focus to a different user: developers. There’s a large ecosystem of products and tools aimed toward making developers’ lives easier, and it’s common to roll up these developer-facing tools under the term developer experience, or DX.

Stacking tools upon tools may solve our problems, but it’s creating a Jenga tower of problems for our users. This paradox—that the steps we take to make it easier to help our users are inadvertently making the experience worse for them—leads to what Nicole Sullivan calls a “deadlock between developer experience [and] user experience.”

This tweet by Nicole Sullivan inspired this article.

Developer experience goes beyond the tech stack

Let’s talk about cooking experience (CX). When I’m at home, I enjoy cooking. (Stick with me; I have a point.) I have a cast-iron skillet, a gas range, and a prep area that I have set up just the way I like it. And if you’re fortunate enough to find yourself at my table for a weekend brunch, you’re in for one of the most delicious breakfast sandwiches of your life.

However, when I’m traveling, I hate cooking. The cookware in Airbnbs is always cheap IKEA pans and dull knives and cooktops with uneven heat, and I don’t know where anything is. Whenever I try to cook in these environments, the food comes out edible, but it’s certainly not great.

It might be tempting to say that if I need my own kitchen to produce an edible meal, I’m just not a great cook. But really, the high-quality tools and well-designed environment in my kitchen at home create a better CX, which in turn leads to my spending more time focused on the food and less time struggling with my tools.

In the low-quality kitchens, the bad CX means I’m unable to focus on cooking, because I’m spending too much time trying to manage the hot spots in the pan or searching the drawers and cabinets for a utensil.

Good developer experience is having the freedom to forget

Like in cooking, if our development tools are well-suited to the task at hand, we can do excellent work without worrying about the underlying details.

When I wrote my first lines of HTML and CSS, I used plain old Notepad. No syntax highlighting, autocorrect, or any other assistance available. Just me, a reference book, and a game of Where’s Waldo? to find the tag I’d forgotten to close. The experience was slow, frustrating, and painful.

Today, I use an editor that not only offers syntax highlighting but also auto-completes my variable names, formats my code, identifies potential problems, helps me debug my code as I type, and even lets me share my current editing session with a coworker to get help debugging a problem. An enormous number of incremental improvements now exist that let us forget about the tiny details, instead letting us focus on the task at hand. These tools aim to make the right thing the easy thing, leading developers to follow best practices by default because our tools are designed to do the right thing on our behalf.

It’s hard to overstate the impact that modern development environments have had on my productivity.

And that’s just my editor.

UX and DX are at odds with each other

There is no one-size-fits-all way to build an app, but most developer tools are built with a one-size-fits-all approach. To make this work, most tools are built to solve one thing in a general purpose way, such as date management or cryptography. This, of course, necessitates stacking multiple tools together to achieve our goals. From a DX standpoint, this is amazing: we can almost always find an open source solution to problems that aren’t ultra-specific to the project we’re working on.

However, stacking a half-dozen tools to improve our DX harms the UX of our apps. Add a few kilobytes for this tool, a few more for that tool, and before we know it we’re shipping mountains of code. In today’s front-end landscape, it’s not uncommon to see apps shipping multiple megabytes of JavaScript—just open the Network tab of your browser’s developer tools, and softly weep as you notice your favorite sites dump buckets of JavaScript into your browser.

I left forbes.com open for thirty minutes. It sent 3,273 requests and loaded 10 MB of JavaScript.

In addition to making pages slower to download, scripts put strain on our users’ devices. For someone on a low-powered phone (a cheap smartphone or an older iPhone, for example), the download time is only the first barrier to viewing the app; after downloading, the device has to parse all that JavaScript. As an example, 1 MB of JavaScript takes roughly six seconds to parse on a Samsung Galaxy Note II.

On a 3G connection, adding 1 MB of JavaScript can mean adding ten or more seconds to your app’s download-and-parse time. That’s bad UX.

Patching the holes in our UX comes at a price

Of course, we can solve some of these problems. We can manually optimize our apps by loading only the pieces we actually use. We can find lightweight copies of libraries to reduce our overall bundle size. We can add performance budgets, tests, and other checks to alert us if the codebase starts getting too large.
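For example, a dynamic import can keep a heavy dependency out of the initial bundle and load it only on demand. Here’s a minimal sketch, assuming a bundler like webpack that code-splits dynamic imports (the module and function names are hypothetical):

async function renderChart(canvas, data) {
  // The charting code is fetched only when a chart is actually rendered,
  // so it never weighs down the initial page load.
  const { drawLineChart } = await import('./heavy-charting-module.js'); // hypothetical module
  drawLineChart(canvas, data);
}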

But now we’re adding audits, writing bespoke code to manage the foundation of our apps, and moving into the uncharted, unsupported territory of wiring unrelated tooling together—which means we can’t easily find help online. And once we step outside the known use cases for a given abstraction, we’re on our own.

Once we find ourselves building and managing bespoke solutions to our problems, many of the DX benefits we were previously enjoying are lost.

Good UX often necessitates bad DX

There are a number of frameworks that exist to help developers get up and running with almost no overhead. We’re able to start building an app without first needing to learn all the boilerplate and configuration work that goes into setting up the development environment. This is a popular approach to front-end development—often referred to as “zero-config” to signify how easy it is to get up and running—because it removes the need to start from scratch. Instead of spending our time setting up the foundational code that doesn’t really vary between projects, we can start working on features immediately.

This is true in the beginning, but our app will likely step outside the defined use cases eventually. And then we’re plunged into a world of configuration tuning, code transpilers, browser polyfills, and development servers—even for seasoned developers, this can be extremely overwhelming.

Each of these tools on its own is relatively straightforward, but trying to learn how to configure a half-dozen new tools just so you can start working is a very real source of fatigue and frustration. As an example, here’s how it feels to start a JavaScript project from scratch in 2018:

  • install Node and npm;
  • use npm to install Yarn;
  • use Yarn to install React, Redux, Babel (and 1–5 Babel plugins and presets), Jest, ESLint, webpack, and PostCSS (plus plugins);
  • write configuration files for Babel, Jest, ESLint, webpack, and PostCSS;
  • write several dozen lines of boilerplate code to set up Redux;
  • and finally start doing things that are actually related to the project’s requirements.
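To make that concrete, here’s a minimal sketch of just one of the configuration files from the list above—a babel.config.js in Babel 7 syntax. A real project typically needs more presets and plugins than this:

// babel.config.js: the smallest useful version of one file from the list above.
module.exports = {
  presets: ['@babel/preset-env', '@babel/preset-react'],
};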

This can add up to entire days spent setting up boilerplate code that is nearly identical between projects. Starting with a zero-config option gets us up and running much faster, but it also immediately throws us into the deep end if we ever need to do something that isn’t a standard use case.

And while the open source developers who maintain these abstractions do their best to meet the needs of everyone, if we start looking at improving the UX of our individual apps, there’s a high likelihood that we’ll find ourselves off the beaten path, buried up to our elbows in Byzantine configuration files, cursing the day we chose web development as a career.

Someone always pays the cost

On the surface, it might look like this is just the job: web developers are paid to deliver a good UX, so we should just suck it up and suffer through the hard parts of development. Unfortunately, this doesn’t pan out in practice.

Developers are stretched thin, and most companies can’t afford to hire a specialist in accessibility, performance, and every other area that might affect UX. Even a seasoned developer with a deep understanding of her stack would likely struggle to run a full UX audit on every piece of an average web app. There are too many things to do and never enough time to do it all. That’s a recipe for trouble, and it results in things falling through the cracks.

Under time pressure, this gets worse. Developers cut corners by shipping code that’s buggy with // FIXME oh god I'm so sorry attached. They de-prioritize UX concerns—for example, making sure screen reader users can, you know, read things—as something “to revisit later.” They make decisions in the name of hitting deadlines and budgets that, ultimately, force our users to pay the cost of our DX.

Developers do the best they can with the available time and tools, but when there’s a trade-off to be made between UX and DX, the cost all too often rolls downhill to the users, due more to a lack of time and resources than to negligence.

How do we break the deadlock between DX and UX?

While it’s true that someone always pays the cost, there are ways to approach both UX and DX that keep the costs low or—in best-case scenarios—allow developers to pay the cost once, and reap the DX benefits indefinitely without any trade-offs in the resulting UX.

Understand the cost of an outstanding user experience

In any given project, we should use the ideal UX as our starting point. This ideal UX should be built from user research, lo-fi testing, and an iterative design process so we can be sure it’s actually what our users want.

Once we know what the ideal UX is, we should start mapping UX considerations to technical tasks. This is the process of breaking down abstract concepts like “feels fast” into concrete metrics: how can we measure that a UX goal has been met?
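As a sketch of what that conversion can look like, the three-second figure cited earlier can become a concrete check against the browser’s Performance API (the budget number below is just that threshold, not a standard):

const [nav] = performance.getEntriesByType('navigation');
const budgetMs = 3000; // the three-second threshold cited earlier

if (nav && nav.domInteractive > budgetMs) {
  // The page became interactive later than our "feels fast" goal allows.
  console.warn(`Interactive at ${Math.round(nav.domInteractive)}ms; budget is ${budgetMs}ms`);
}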

By converting our UX goals into measurable outcomes, we can start to get an idea of the impact, both from a UX and a DX perspective.

A prioritization matrix to determine whether the benefits outweigh the cost of a particular task. Source: Nielsen Norman Group

From a planning perspective, we can get an idea of which tasks will have the largest impact on UX, and which will require the highest level of effort from the developers. This helps us understand the costs and the relative trade-offs: if something is high-effort and low-impact, maybe it’s OK to let the users pay that cost. But if something will have a high impact on UX, it’s probably not a good idea to skip it in favor of good DX.

Consider the cost when choosing solutions

Once we’re able to understand the relative cost and trade-offs of a given task, we can start to analyze it in detail. We already know how hard the problem is to solve, so we can start looking at the how of solving it. In general terms, there are three major approaches to solving a problem:

  • Invent your own solution from scratch.
  • Research what the smartest people in the community are doing, and apply their findings to a custom solution.
  • Leverage the collective efforts of the open source community by using a ready-made solution.

Each category comes with trade-offs, and knowing whether the costs outweigh the benefits for any given problem requires working through the requirements of the project. Without a clear map of what’s being built—and what it will cost to build it—any decisions made about tools are educated guesses at best.

When to invent your own solution

Early in my stint as a front-end architect at IBM, I led a project to roll out a GraphQL layer for front-end teams to rapidly build apps in our microservice-based architecture. We started with open source tools, but at the time nothing existed to solve the particular challenges we were facing. We ended up building GrAMPS, which we open sourced in late 2017, to scratch our particular itch.

In this situation, building something custom was our lowest-cost option: we knew that GraphQL would solve a critical problem for us, but no tools existed for running GraphQL in a microservice architecture. The cost of moving away from microservices was prohibitively high, and the cost of keeping things the way they were wasn’t manageable in the long term. Spending the time to create the tooling we needed paid dividends through increased productivity and improved DX for our teams.

The caveat in this story, though, is that IBM is a rare type of company that has deep pockets and a huge team. Letting a team of developers work full-time to create low-level tooling—tools required just to start working on the actual goal—is rarely feasible.

And while the DX improved for teams that worked with the improved data layer we implemented, the DX for our team as we built the tools was pretty rough.

Sometimes the extra effort and risk are worth it long-term, but as Eric Lee says, every line of code you write is a liability, not an asset. Before choosing to roll a custom solution, give serious thought to whether you have the resources to manage that liability.

When to apply research and lessons from experts in the field

A step further up the tooling ladder, we’re able to leverage and implement the research of industry experts. We’re not inventing solutions anymore; we’re implementing the solutions designed by the foremost experts in a given field.

With a little research, we have access to industry best practices for accessibility thanks to experts like Léonie Watson and Marcy Sutton; for web standards via Jeffrey Zeldman and Estelle Weyl; for performance via Tim Kadlec and Addy Osmani.

By leveraging the collective knowledge of the web’s leading experts, we get to not only learn what the current best practices are but also become experts ourselves by implementing those best practices.

But the web moves fast, and for every solution we have time to research and implement, a dozen more will see major improvements. Keeping up with best practices becomes a thankless game of whack-a-mole, and even the very best developers can’t keep up with the entire industry’s advancements. This means that while we implement the latest techniques in one area of our app, other areas will go stale, and technical debt will start to pile up.

While learning all of these new best practices feels really great, the DX of implementing those solutions can be pretty rough—in many cases making the cost higher than a given team can afford.

Continued learning is an absolutely necessary part of being a web developer—we should always be working to learn and improve—but it doesn’t scale if it’s our only approach to providing a great UX. To paraphrase Jem Young, we have to look at the trade-offs, and we should make the decision that improves the team’s DX. Our goal is to make the team more productive, and we need to know where to draw the line between understanding the down-and-dirty details of each piece of our app and shipping a high-quality experience to our users in a reasonable amount of time.

To put it another way: keeping up with industry best practices is an excellent tool for weighing the trade-offs between building in-house solutions or using an existing tool, but we need to make peace with the fact that there’s simply no way we can keep up with everything happening in the industry.

When to use off-the-shelf solutions

While it’s overwhelming to try to keep up with the rapidly changing front-end landscape, the ever-evolving ecosystem of open source tools is also an incredible source of prepaid DX.

There are dozens of incredibly smart, incredibly passionate people working to solve problems on the web, and many of those solutions are open source. This gives developers like you and me unprecedented access to prepaid solutions: the community has already paid the cost, so you and I can deliver an amazing UX without giving up our DX.

This class of tooling was designed with both UX and DX in mind. As best practices evolve, each project has hundreds of contributors working together to ensure that these tools are always using the best possible approach. And each has generated an ecosystem of tutorials, examples, articles, and discussion to make the DX even better.

By taking advantage of the collective expertise of the web community, we’re able to sidestep all the heartache and frustration of figuring these things out; the open source community has prepaid the cost on our behalf. We can enjoy a dramatically improved DX, confident that many of the hardest parts of creating good UX are taken care of already.

The trade-off—because there is always at least one—is that we need to accept and work within the assumptions and constraints of these frameworks to get the great DX. As soon as we step outside the happy path, we’re on our own again. Before adopting any solution—whether it’s open source, SaaS, or bespoke—it’s important to have a thorough understanding of what we’re trying to accomplish and to compare and contrast that understanding to the goals and limitations of a proposed tool. Otherwise we’re running a significant risk: that today’s DX improvements will become tomorrow’s technical debt.

If we’re willing to accept that trade-off, we find ourselves in a great position: we get to confidently ship apps, knowing that UX is a first-class consideration at every level of our stack, and we get to work in an environment that’s optimized to give our teams an incredible DX.

Deadlock is a (solvable) design problem

It’s tempting to frame UX and DX as opposing forces in a zero-sum game: for one to get better, the other needs to get worse. And in many apps, that certainly appears to be the case.

DX at the expense of UX is a design problem. If software is designed to make developers’ lives easier without considering the user, it’s no wonder that problems arise later on. If the user’s needs aren’t considered at the core of every decision, we see problems creep in: instead of recognizing that users will abandon our sites on mobile if they take longer than three seconds to load, our projects end up bloated and take twice that long to load on 4G—and even longer on 3G. We send hundreds of kilobytes of bloat, because optimizing images or removing unused code is tedious. Simply put: we get lazy, and our users suffer for it.

Similarly, if a team ignores its tools and focuses only on delivering great UX, the developers will suffer. Arduous quality assurance checklists full of manual processes can ensure that the UX of our projects is top-notch, but it’s a slog that creates a terrible, mind-numbing DX for the teams writing the code. In an industry full of developers who love to innovate and create, cumbersome checklists tend to kill employee engagement, which is ultimately bad for the users, the developers, and the whole company.

But if we take a moment at the outset of our projects to consider both sides, we’re able to spot trade-offs, and make intelligent design decisions before problems emerge. We can treat both UX and DX as first-class concerns, and prevent putting them at odds with each other—or, at least, we can minimize the trade-offs when conflicts happen. We can provide an excellent experience for our users while also creating a robust suite of tools and frameworks that make development enjoyable and maintainable for the entire lifespan of the project.

Whether we do that by choosing existing tools to take work off our plates, by spending an appropriate amount of time properly planning custom solutions, or some combination thereof, we can make a conscious effort to make smart design decisions, so we can keep users and developers happy.

Design with Difficult Data

Thu, 09/06/2018 - 07:21

You’ve been asked to design a profile screen for a mobile or web app. It will need to include an avatar, a name, a job title, and a location. You fire up Sketch or Figma. Maybe you pull out your drafting pencil or head straight to markup and CSS.

What’s your go-to fake name?

Regardless of your choice in tools, you’re probably going to end up with some placeholder data. Are you the type that uses your own name, or do you conjure up your old friend, Mr. Lorem Ipsum? Maybe you have a go-to fake name, like Sophia J. Placeholder.

For me, it’s Nuno Bettencourt. Or Nuno Duarte Gil Mendes Bettencourt, more formally. Nuno played guitar in the subtly named early-’90s band Extreme. To the younger among you, he was a touring musician with Rihanna. None of that matters for our purposes here today, except that he has a fairly long name.

It may not seem like it matters what you put in for a placeholder name. It won’t end up in the final product—it’s just a variable. Well, it does matter. The text you start with will subtly influence your approach to layout and style. It may limit the scope of options you allow yourself to consider, or more dangerously, obscure actual limits that you’ll run into later.

A few obvious solutions may spring to mind: use a long placeholder name; use real data in your design. While these are a good start, it’s worth exploring more deeply how these and other practices can both improve your design process and help produce more durable products.

It’s more than just fake names

This is about more than just fake names. It’s also fake addresses! Fake headlines! Fake photos! When we design around limited data, the limitations bleed into our designs.

The inability to deal with long strings of text is the most basic and maybe most common way components can fail when coming in contact with real data. You thought the tab would be labelled “Settings”? Well, now it’s called “Application Preferences.” Oh, and the product launches tomorrow.
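One defensive habit is deciding up front how text should degrade when it runs long. A small CSS sketch (the class name is hypothetical):

/* Truncate an unexpectedly long label with an ellipsis instead of
   letting it break the layout. */
.tab-label {
  max-width: 12em;
  overflow: hidden;
  text-overflow: ellipsis;
  white-space: nowrap;
}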

Length is just one of many ways that real text and data can strain a weak design. You may also encounter unanticipated line-breaks or even text that’s too short. Beware of the following areas where we tend to cheat with easy placeholder data.

Account profile photos

Don’t expect people to have a studio-quality self-portrait with a solid white background (and be suspicious of those who do!). Many people may not be interested in uploading a photo for their account at all. Others may try to squeeze their much-too-wide company logo into that little square or circular area. You can’t account for all possible data, but if you incorporate some of these less-than-visually-ideal cases early in your design process, your output will be that much more resilient.

Thumbnails for videos and photos

Not all thumbnails will be in the aspect ratio you’ve anticipated. Some might include text or images that clash unexpectedly with the surrounding page. A common issue I’ve seen is a nicely designed home page with a company logo prominently displayed at the top. Then, the video arrives and the default poster image for the video also includes the company logo. Now your beautiful home page layout looks like a logo salad.

Wild variations in amounts

Watch for elements containing lists where the amount of items in those lists may vary significantly. Imagine a layout with cards where each card includes a set of tags. One card may have three tags while another may have twenty-five. Tabular data can also suffer both aesthetically and in legibility when one particular cell varies wildly in length from the others.

Missing elements

You may create a nice layout for your site header that scales beautifully from your phone to your 27” display. Then you discover it needs a Support menu item—but there’s no room! I often start a wireframe by compiling two lists. The first contains the goals a visitor to this screen needs to accomplish. The second has the elements that need to live on this screen. Be sure to include all of the elements—from the primary content to advertisements, and down to a privacy link in the footer. It’s particularly easy to spot a site that was designed without accounting for the advertisements it includes.

Viewport sizes

Beyond placeholder data, we have a tendency to present our designs at the most flattering viewport sizes. Or rather, we design our layouts to look best at the sizes we choose for our mockups—particularly when we design with tools built around fixed frame sizes. In the neglected troughs of responsive design, we find two common pitfalls: the stretched mobile layout and the squished desktop layout.

The stretched mobile layout

The squished desktop layout

Flexible design can be more accessible design

We can’t spend endless amounts of our time (and our clients’ money) on every edge case imaginable. But we can be more mindful of the influence of the canvas on which we create, the tools we use, and the data we design around.

It’s necessary to focus attention and testing on the ways in which your site will most commonly be accessed. Things don’t have to be, and never will be, perfect at every screen size. Letting go of control and embracing this fluidity is part of designing for the web.

Designing with flexibility can also make your design more accessible. Those with vision impairments (which is most of us at some point in our lives) may browse with a customized minimum font size. Others may browse at a zoom level that triggers responsive layouts intended for mobile devices, even on a large desktop browser.

Avoid the disappointing reveal

There are enough factors that can already contribute to clients and stakeholders having unrealistic expectations and being disappointed by the eventual implementation. Don’t add to this potential mismatch of expectations by showing designs that look flawless, only to have the client review them in the harsh light of real data.

While you may need to convince people of the merits of your design, you’ll only set yourself up for failure if you choose to showcase an unrealistic design. Instead, indulge initially by showing the layout with ideal data. Then show how durable and flexible the design is by showing variations with difficult data. This helps people understand not only your design but also the value of your process and expertise.

When I was a kid, I distinctly remember a door-to-door vacuum salesman jumping on a vacuum cleaner to demonstrate the durability of his product. We didn’t need a new vacuum (the immediate flaw in the whole door-to-door model), but the image stuck with me. Jump on your designs! Throw them against the wall! Fill them with garbage and show how well they hold up.

For example, when showing a design to a client, show them how it adapts to various viewport widths and default font sizes. Showing a client how their site responds to browser sizes can also help them let go of the need to polish designs solely for the particular device and size they happen to use. If you’ve got a robust way of dealing with long names on a profile page, show it off! This can help your client understand that there is a whole other dimension of work (and time, and money) beyond what’s visible in a static screenshot.

Garbage in, gold out?

The old computer science adage reads, “garbage in, garbage out.” Instead, aim for “garbage in, hrm … not bad.” A better adage to lean on may be Postel’s law, also known as the robustness principle: “Be conservative in what you do, be liberal in what you accept from others.” If you imagine your evil twin trying to pick apart your design, how would they break it? Maybe squish the browser to a narrow size, and enter some unusually long headlines (garbage in). Your design should respond nicely to the narrow width, and gracefully reduce the font size of particularly long headlines (gold out).

With practice, you can internalize some of this process. You’ll come to know instinctively what pitfalls come with a given visual design, much in the same way that experts in accessibility or internationalization learn to quickly spot the common pitfalls that limit the universality of designs. While our intuition can help us, it can also trick us—be sure to test, and see how real people work with your product.

Even as you do hone your ability to anticipate and avoid common mistakes, your mind will constantly be pulling toward the path of least resistance. Like endurance athletes training at high altitude, runners exercising with ankle weights, or pro baseball players taking practice swings with weighted bats, we must continue to artificially increase the strain on our work.

Real data isn’t good enough

Much has been written on the benefits of designing with real data. My colleague Daniel Burka writes:

Try not to gloss over complexity. Design work in the real world is pretty hard. If you design a fake graph, put in realistic data. If you fake redesign a site, … don’t just magically remove an ad unit. If you create a sexy fake login screen, don’t forget to include a way to recover lost passwords or usernames. … Write real copy. Lorem ipsum is for amateurs.

Daniel is right—especially when it comes to interface elements where the meaning of the text is inextricable from the function. When it comes to design elements that may display widely variable contents (profile photos or names, for example), you can do better than using real data. Go beyond realistic data. Get difficult data.

If you are able to pull in real data, dig through it for the worst cases. If you can handle the worst, the common cases will be a breeze.

When redesigning an existing screen, take advantage of the current and historical data available. Dig into the extremes of the data, finding the longest and shortest titles. If you’re designing with thumbnails of photos or videos, grab a random set of real thumbnails and throw away those you know are easy to design around.

When you don’t have existing data, and even when you do, create difficult examples. Write headlines that push up to and beyond the limits of what the screen can accommodate. Create thumbnail images that have their own built-in border or shadow, and see how they clash with what you’ve got in place.
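It can help to keep a small set of deliberately awkward values on hand for exactly this purpose. A sketch, where every value is invented for testing:

// Deliberately difficult placeholder names: short, long, unbreakable, non-Latin.
const difficultNames = [
  'Al',
  'Nuno Duarte Gil Mendes Bettencourt',
  'Wolfeschlegelsteinhausenbergerdorff',
  '王秀英',
];

// Stretch a headline past its expected length to probe for breakage.
const stretchHeadline = (text, factor = 2) => (text + ' ').repeat(factor).trim();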

Sometimes difficult data can (and should) be fixed

While your design should be as robust as possible, you may sometimes turn up edge cases that needn’t be so. In designing a list page with a client, we looked at their complete archive of data to see how the length of the item titles varied. The shortest title was 8 characters, and the longest was over 320, but only a handful were over 80.

We worked with the client to create a design that catered to the maximum 80-character titles. Some editorial surgery was then performed on those few outliers to get them in under the limit. They ended up being better titles as a result.

When dealing with content that is managed by your company, team, or client, it is also worth codifying the practices into a style guide. You needn’t spend all of your energy designing around difficult data that’s coming from down the hall.

Internationalization

I’ve had the privilege of working with teams at Mozilla, where pages are translated into as many as eighty languages. With such broad localization efforts, we learned to build page layouts and designs that supported both non-Latin character sets and languages with right-to-left text direction.

Supporting both left-to-right and right-to-left languages requires more than just allowing text strings to reverse. The entire visual structure of your layout and design needs to be able to flip horizontally. Rather than being a frustrating limitation, you’ll find this and other similar constraints will help you develop design superpowers.

In anticipation of the longer text strings in languages like German, some designers developed a process where Latin text is generated at twice the length of the source text. The W3C has a handy article on common length ratios across languages.
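A rough version of that doubling process is easy to automate. A sketch (the bracket and padding characters are arbitrary markers, which also make unlocalized strings easy to spot):

// Pad each string to roughly twice its length to simulate longer translations.
function pseudoLocalize(text, ratio = 2) {
  const padding = '·'.repeat(Math.ceil(text.length * (ratio - 1)));
  return `[${text}${padding}]`;
}

pseudoLocalize('Settings'); // "[Settings········]"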

Capitalization can also be problematic in some locales—especially when forced with CSS. If your design really relies on text-transform: uppercase or text-transform: lowercase, either revisit the design to be more flexible, or handle capitalization in the source text (rather than via CSS) so a localization team can maintain control over capitalization.

MDN is a great resource for more on designing for localization.

Beware of your own cultural blindness when it comes to placeholder data during the design process. Design cheating often affects those least like yourself.

Whenever possible, design with difficult data

Much has been written (and should be read) about how our tools can help us design with real data. With modern design and prototyping tools, HTML/CSS/JS prototypes, and even static mockups, we only cheat ourselves if we aren’t pushing our designs to the breaking point.

There’s always a balance to strike between making something quick and over-building. As with all things in design and on the web, it depends. It depends on the data, the audience, the project, and the goals.

Schedule and budget are the common excuses for not delivering more robust design components. Especially on larger projects, though, learning to incorporate more difficult data into your early design process can save you time in the long run.

Like that long-distance runner who improves by training in the thin air of high altitudes, by building with difficult data from the very beginning, you’ll become a stronger designer. You’ll be more aware of where and how your design may break, and be better able to communicate your process and decisions.

 


Conversational Semantics

Thu, 08/30/2018 - 08:00

As Alexa, Cortana, Siri, and even customer support chat bots become the norm, we have to start carefully considering not only how our content looks but how it could sound. We can—and should—use HTML and ARIA to make our content structured, sensible, and most importantly, meaningful.

Content, confined

Most bots and digital assistants work from specially-coded data sets, APIs, and models, but there are more than 4.5 billion pages of content on the web, trapped, in many cases, within our websites. Articles, stories, blog posts, educational materials, books, and marketing messages—all on the web, but in many cases unusable in a non-visual context. A few projects—search spiders most notably—are working to turn our messy, unstructured web pages into something usable. But we can do more—a lot more—to facilitate that and enable our web pages to be more usable by both real people and the computers that power voice-based user experiences.

Let’s release our content from the screen and empower it to go anywhere and everywhere. We can help it find its way into virtual assistants and other voice-response technologies—and even voiceless chat bots—without having to code and re-code that content over and over into multiple, redundant formats. We can even enable our users to actively engage with our content by filling in forms and manipulating widgets on the web purely via voice. It’s all possible, but we need to start by taking a long, hard look at our markup.

Consider this em element:

I’m <em>really</em> happy to see you.

Sure, it is visually rendered as italics, but it also adds emphasis to the content within. HTML is chock full of elements that are useful for conveying meaning, nuance, and relationships. Being aware of them enables us to author more expressive documents. Ignoring them can undermine the usability of the content we’re marking up. When we create a web page, we need to be mindful of the conversation we are creating with our customers in the process, and choose elements with intent and care.

One of the best indicators for how HTML will make it into our virtual assistants is another assistive technology: screen readers. Not only do screen readers do as their name implies, they also enable users to rapidly navigate a page in various ways, and provide mechanisms that translate visual design constructs—proximity, proportion, etc.—into useful information. At least they do when documents are authored thoughtfully.

So, let’s jump in and look at some solid examples of how we can both create more meaningful documents and empower them to be more usable in “headless” UIs.

Powerful phrases

We’ll start by looking at what are called “phrasing” elements. The emphasis you saw earlier is an example of this element type. We used to call them “inline” elements because, by default, they are visibly displayed as inline text. But “phrasing” is a much more accurate description of the role they play in our web pages, because, well, they mark up phrases.

We saw this example earlier:

I’m <em>really</em> happy to see you.

Here, the word “really” is marked for emphasis. I’m unaware of any current speech synthesizer that audibly emphasizes text like we do, but it’s still early days in the grand scheme of things. I’m sure it’ll happen—there’s been a lot of focus on building more human-sounding voices—and it could sound something like this:

     

Audio example: Mimicking Emphasis using speechSynthesis
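For the curious, here’s a rough sketch of how that might be approximated today with the speechSynthesis API; the rate and pitch values are guesses, not a standard:

// Approximate emphasis by slowing down and raising the pitch of one word.
// speechSynthesis queues utterances, so the three parts play in order.
function speakWithEmphasis(before, emphasized, after) {
  const stressed = new SpeechSynthesisUtterance(emphasized);
  stressed.rate = 0.8;  // a little slower
  stressed.pitch = 1.5; // a little higher
  [new SpeechSynthesisUtterance(before), stressed, new SpeechSynthesisUtterance(after)]
    .forEach((utterance) => speechSynthesis.speak(utterance));
}

speakWithEmphasis("I'm", 'really', 'happy to see you.');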

Sometimes emphasis is not enough. When we want to indicate that content is vital for our customers to pay attention to, the strong element is the right way to go. “Strong” means “of strong importance.”

Please fill out the form below to contact us. <strong>All fields are required.</strong>

Visually, em and strong are displayed as italics (as mentioned previously) and bold, respectively.

I’m really happy to see you.
Please fill out the form below to contact us. All fields are required.

Now we also have the i and b elements, which are rendered exactly the same as em and strong, respectively. In the early days of the web, that led many of us—myself included—to believe they were interchangeable. And with b and i being shorter to write, they proliferated on the web. Semantically, however, the i and b elements are quite different from their doppelgängers.

The i element is similar to the emphasis element, but more generic. It is used to indicate an alternate voice or mood. It could be used to indicate sarcasm, idiomatic remarks, and shifts in language.

It's a terrible movie and it made $200 million. <i>Go figure!</i>

She is admired for her energy and <i lang="fr">joie de vivre</i>.

In the latter example, you might also notice that I’ve indicated that the phrase “joie de vivre” is in another language—French—using the lang attribute. This attribute lets the digital assistant know it may want to shift its pronunciation.

     

Audio example: Supporting Language Shifts in speechSynthesis
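In code, that shift might look something like this sketch, which hands the French phrase to a French voice via the utterance’s lang property:

const english = new SpeechSynthesisUtterance('She is admired for her energy and');
const french = new SpeechSynthesisUtterance('joie de vivre');
french.lang = 'fr-FR'; // request a French voice, if one is available

speechSynthesis.speak(english);
speechSynthesis.speak(french);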

Admittedly, replicating this using the speechSynthesis API is still a little rough, but with time, this too will no doubt improve.

The b element is used for content that should be set apart—or “stylistically offset”—from the surrounding text. It does not indicate that the phrase is of any greater importance though. I like to use it for names of people and products. Keywords would be another option. Books, films, and other media have their own element, which I’ll get to in a moment.

For 12 years and running, over 100,000 companies have adopted the <b>Basecamp</b> way of working. Not just tried, but signed up, said “ah-ha!”, and never looked back. There’s nothing else like <b>Basecamp</b>.

Functionally, the b element is a lot like a span—generic phrasing content albeit with a shorter tag.

Since I mentioned movies and books, I’ll quickly bring up the cite element, which is for the title of cited or referenced works.

I wrote the book <cite>Adaptive Web Design</cite>. If you like this article, you’ll find in-depth information about semantics (and a whole lot more) in there.

Specialized syntax

HTML has other specialized phrasing constructs, such as abbr for abbreviations and acronyms. Traditionally, we recommended using title to provide an expansion:

<abbr title="Hypertext Markup Language">HTML</abbr> is the standard markup language for creating web pages and web applications.

Sadly—as with many things on the web—black hat SEO practices involving title spurred screen readers to ignore the attribute altogether. Visual browsers do still provide tooltips, so they’re not completely useless, but given that screen readers don’t pay attention to the title attribute currently, it’s pretty unlikely they will be surfaced by a virtual assistant.

To be honest, it’s best to avoid title altogether. For the purposes of absolute clarity, you should introduce and explain important abbreviations and acronyms the first time they are used. There’s even an element that signals a defining context: dfn.

<dfn id="dfn-html">Hypertext Markup Language (HTML)</dfn> is the standard markup language for creating web pages and web applications.

For more technical writing, the kbd and code elements can be quite useful. They indicate keys a user might need to press and words and phrases that are used in writing software or coding documents:

Press <kbd>Tab</kbd> to move from link to link within a document. The <code>kbd</code> element is used to indicate keyboard key names.

Then there’s the span element, which is used for generic phrases, as I noted earlier. It’s a meaningless element, so it will not be spoken any differently by default.

There is <span>nothing particularly interesting</span> in this sentence.

There are more phrasing elements, but these are the ones you’re most likely to want in most projects.

Clear connections

Links are also phrasing elements, but I want to call them out specifically because they provide a much richer set of options for fine-tuning how our users interact with our pages.

The primary way we use links is to connect related content. It’s incredibly important to choose meaningful words and phrases as link text. Links that read generically like “click here” and “read more” are not terribly useful, especially when the text of every link is being read out to you—which is a key way headless UI users skim web pages. Make it clear where you are linking. Restructure sentences if you need to in order to provide good link text.

If you are drawn to “read more” style links for their brevity, you can have your cake and eat it too by including non-visible text within a link. This gives you brief, uniform links from a visual standpoint, but also lets you provide context in headless scenarios. Here’s an example from my site’s navigation. I’ve broken it up across a few lines to make it a little easier to follow:

<a href="/speaking-engagements/"> <b class="hidden">A List of My</b> Speaking <b class="hidden">Engagements</b> </a>

Within the link, I have two b elements classified as “hidden.” In my CSS, I hide the content within them from sighted users, but I hide them in a way that they remain available to assistive technology. So a sighted user will only see “speaking,” but a screen reader or digital assistant will read “a list of my speaking engagements.”
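The CSS behind a class like that typically follows the common “visually hidden” pattern. A sketch (the exact rules can vary):

/* Visually hidden, but still exposed to screen readers. */
.hidden {
  position: absolute;
  width: 1px;
  height: 1px;
  margin: -1px;
  padding: 0;
  overflow: hidden;
  clip: rect(0, 0, 0, 0);
  border: 0;
}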

You could also offer an expansion with aria-label on the anchor element. If that “aria-” bit in aria-label looks weird to you, it comes from the Accessible Rich Internet Applications (ARIA) spec, an ongoing effort to map complex operating-system-like UI constructs into accessible ones. I chose the hidden text route to give myself the flexibility to display the hidden content in certain scenarios.

Some of you may be wondering why I didn’t bring up aria-label when I mentioned the abbr element. It seems like a good fit, and the ARIA spec currently allows the attribute on abbr elements. The issue isn’t the spec, but rather the reality that the info in aria-label isn’t always exposed by browsers or sought out by assistive technology on elements like abbr. With good reason, they’ve been much more focused on exposing aria-label (and its kin) on interactive elements, landmarks, and widgets.

It’s worth noting that hidden text in links can cause issues for folks who rely on a combination of screens and dictation software to interact with their computers. If the link text that’s displayed does not match the actual link text in the markup, a user saying the visible link text—like the word “Speaking” in the case of my site’s navigation—won’t actually activate the link. It’s also worth reiterating the importance of quality link text; don’t use aria-label to paper over poorly-worded links or unnecessary redundancy like “read more.”

We can also use links to reference content within the current document or even at a specifically-identified position in another document:

To illustrate the concept of layering styles, perhaps it’s best to start at the beginning: with no style applied. <a href="#figure-3-3">Figure 3.3</a> shows the lodging article in Safari with only the default browser styles applied. … <figure id="figure-3-3"> … </figure>

At the tail end of this code sample, we have a figure element that is referenced elsewhere in the document. Rather than leaving it up to the reader to find “Figure 3.3,” we can use a fragment identifier to jump the reader directly to the reference. Adding a unique id attribute to each important element in your design makes it easy for you—or others—to link directly to them.

As with the i element example I shared earlier, you can inform your readers about the language of a linked page using hreflang:

<a href="…" hreflang="es"><i lang="es"> <b class="hidden">Lea esta página en</b> español </i></a>

That’s Spanish for “read this page in Spanish,” and the link points to a Spanish-language translation of the page. The hidden content approach is in use here, too, with sighted users only seeing “español.”

You can indicate the kind of content being linked to, using the type attribute:

<a href="giant.mp4" type="video/mp4">Download this movie</a>

And we also have the download keyword, which informs the browser that the file in question should be downloaded rather than presented. Again, a simple attribute that makes a simple HTML document capable of doing so much more:

<a href="giant.mp4" type="video/mp4" download>Download this movie</a>

When encountering this type of link in a voice context, your digital assistant could prompt you to save the file to a connected storage account, like Dropbox. That’s pretty cool, but it’s worth noting that browsers will ignore the download attribute on cross-origin links for security purposes. Unfortunately that means you can’t use this approach to download files from your Content Delivery Network (CDN).

Anchor elements also support non-web “pseudo” protocols. Two of the most common examples are “mailto:” for email links and “tel:” for phone numbers, but “sms:” and “webcal:” are also common.

<a href="mailto:mail@domain.com">Send me an email</a> <a href="tel:18009346489">Call Comcast Customer Service</a>

Some operating systems (and browsers) allow installed apps to register custom protocols that can provide access to in-app functionality. A word of caution though: unrecognized protocols may prompt the user to search for an application that can use it.

All of this phrasing content is great, but I’ve spent a good deal of time in the weeds. Let’s pull back a bit and look at documents themselves.

Sound structure

As you’re no doubt aware, headless UIs place a greater cognitive load on our users. It’s hard to keep track of where you are in an interface when you can’t see it. It can also be challenging to move around when you can’t gather information about the interface based on visual cues. The more complex an interface is, the more challenging this becomes.

The same is true in visual interfaces, which is why “mobile first” thinking encourages us to focus each page on a single task. This reduces the noise and raises the signal. But most web pages are the antithesis of clear and straightforward. As our screens grew, we found more stuff to fill the space: sharing links, related content, cross-promotions, and so on. Sometimes it’s easy to lose sight of the actual content.

To combat this, screen readers provide numerous mechanisms that enable users to gather information about the UI and move through it efficiently. One of the most common involves moving the focus caret from one interactive element to another. Traditionally that movement is done via the keyboard Tab key, but it’s also possible via voice using keywords like “next” and “previous.” In most documents, users are moving from link to link. This is why it’s so important to offer informative link text.

<p>This twist is what <a href="https://en.wikipedia.org/wiki/John_Harsanyi">John Harsanyi</a>—an early game theorist—refers to as the “<a href="https://en.wikipedia.org/wiki/Veil_of_ignorance">Veil of Ignorance</a>,” and what Rawls found, time and time again, was that individuals participating in the experiment would gravitate toward creating the most egalitarian societies.</p>

It’s worth noting that form elements—buttons, inputs, etc.—are also part of the default tab order of a web page.

Elements that would not traditionally be focusable can be included in the tab order by adding a tabindex attribute with a value of “0” (zero) to them. This ensures critical interface components are not accidentally bypassed by users who are skimming an interface by tabbing. Incidentally, it can also give sighted users keyboard control over scrollable elements.
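For example, a long code sample in a scrollable region can be made keyboard-focusable like so (a sketch; the class name is hypothetical):

<div class="scrollable-code" tabindex="0"> … </div>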

Another mode of document traversal is browsing by heading. The various heading levels in HTML create a natural document outline, and assistive technologies can enable users to skim content using these headings:

<h1>This is the title of the page</h1> … <h2>This titles a section</h2> … <h3>This titles a subsection</h3> … etc.

Since only the contents of the heading elements are read out in this mode, it’s best to avoid cutesy marketing phrases, and stick to summarizing the contents of a section.

More recently, document “landmarks” have come along, providing quick access to key parts of the page. Landmark elements were first introduced as part of ARIA. Using the role attribute, you can define the function of specific regions of a page. Consider the following:

<div id="nav"> <ul> <li> <a href="/about/"><b class="hidden">A Bit </b>About<b class="hidden"> Me</b></a> </li> … </ul> </div>

In this example, the navigation list is sitting in a div with an id of “nav.” While that’s a meaningful identifier for the purposes of styling, scripting, and anchoring, the div is not actually exposed to assistive technology as navigation. Adding a role of “navigation”, however, makes that function explicit:

<div id="nav" role="navigation"> <ul> <li> <a href="/about/"><b class="hidden">A Bit </b>About<b class="hidden"> Me</b></a> </li> … </ul> </div>

There are numerous role values that qualify as landmarks:

  • banner
  • navigation
  • search
  • main
  • complementary
  • contentinfo

Landmarks also give users the opportunity to jump directly to a location within an interface, which is incredibly helpful. In a voice context, a user might be able to ask their digital assistant to “read me the navigation for this page” or “search for wooden baby toys,” and the assistant could use these landmarks to quickly respond to those commands.

It’s worth noting that most of these landmarks have equivalent HTML elements. This is because HTML5 and ARIA were being developed at the same time, and both were looking to address the same limitations of the web. Here’s a rundown of ARIA landmark roles with HTML equivalents:

  • banner - header (when it isn’t nested inside article, aside, main, nav, or section)
  • navigation - nav
  • search - no direct HTML equivalent
  • main - main
  • complementary - aside
  • contentinfo - footer (when it isn’t nested inside article, aside, main, nav, or section)

Each HTML5 element shown here is automatically assigned its corresponding ARIA role by modern browsers and is recognized by modern assistive technologies. However, in older browser and assistive technology combinations, the automatic role assignment may not happen. That’s why it’s not uncommon to see nav elements with a “navigation” role or similar, even though validators will flag it as unnecessary.

One last bit I want to touch on before I wrap up is the div element.

<div> This is simply a generic division of content. </div>

We often employ a div when we want to group some elements together. That’s fine, but div is a meaningless element that adds nothing to the interface in terms of context. By contrast, other organizational elements do add value to a page:

  • p - a paragraph; a voice synthesizer will naturally pause between them
  • ol - a list of items whose order matters
  • ul - a list of items whose order doesn’t matter
  • li - an item in a list
  • dl - a list of terms and their associated descriptions
  • dt - a term described within a description list
  • dd - a description of a term (or terms) in a description list
  • blockquote - a long piece of quoted content
  • figure - referenced content (images, tables, etc.)
  • figcaption - the caption for a figure

Some of these are among the elements categorized as “flow” content. At a higher level, there are numerous organizational elements to choose from:

  • article - a piece of content that can stand on its own
  • section - a section of a document or article
  • header - preamble content for a document, article, or section
  • footer - supplementary information for a document, article, or section
  • main - the primary content of a document
  • nav - navigational content
  • aside - complementary content

There are a ton of meaningful elements out there that can enable our digital assistants to do more for our customers. And the more we use them, the more useful our assistants become, and the more powerful our users feel. For instance, using article and heading elements can enable voice commands like “Read me the top three headlines in the New York Times today” without involving any sort of specialized data feed.
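
For instance, a home page marked up along these lines (a sketch with placeholder headlines) gives an assistant everything it needs to find and read the top stories:

<main>
  <article>
    <h2>First headline</h2>
    …
  </article>
  <article>
    <h2>Second headline</h2>
    …
  </article>
  <article>
    <h2>Third headline</h2>
    …
  </article>
</main>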

A generic div gets you none of these benefits.

Create conversations

HTML is a truly robust and expressive language that is often overlooked and undervalued, but it has the incredible potential to nurture conversations with our users without requiring a lot of effort on our part. Simply taking the time to code web pages well will enable our sites to speak to our customers like they speak to each other. Thinking about how our sites are experienced as headless interfaces now will set the stage for more natural interactions between the real world and the digital one.

Coding with Clarity: Part II

Thu, 08/23/2018 - 07:05

As any developer who works with other developers can attest, if code is unclear, problems occur. In Part I of this series, I went over some principles to improve clarity in our code to prevent problems that can arise from unclear code. As our apps get larger, clarity becomes even more important, and we need to take extra care to ensure that our code is easy to read, understand, and modify or extend. This article discusses some more-advanced principles related to object-oriented programming (OOP) to improve clarity in larger apps.

Note: Though the principles in this article are applicable to a variety of programming languages, the examples pull from object-oriented JavaScript. If you’re not familiar with this, read my first article to get up to speed, as well as to find some other resources to help improve your understanding of object-oriented programming.

The Law of Demeter

Imagine you’re an office manager at an apartment complex. The end of the month comes and the rent is due. You go through the drop box in the office and find checks from most of your tenants. But among the neatly-folded checks is a messy note on a scrap of paper that instructs you to unlock apartment 309, open the top drawer of the dresser on the left side of the bed, and remove the money from the tenant’s wallet. Oh, and don’t let the cat out! If you’re thinking that’s ridiculous, yeah, you’re right. To get the rent money each month, you shouldn’t be required to know how a tenant lays out their apartment and where they store their wallet. It’s just as ridiculous when we write our code this way.

The Law of Demeter, or principle of least knowledge, states that a unit of code should require only limited knowledge of other code units and should only talk to close friends. In other words, your class should not have to reach several levels deep into another class to accomplish what it needs to. Instead, each class should provide abstractions that make its internal data available to the rest of the application.

(Note: the Law of Demeter is a specific application of loose coupling, which I talk about in my first article.)

As an example, let’s say we have a class for a department in your office. It includes various bits of information, including a manager. Now, let’s say we have another bit of code that wants to email one of these managers. Without the Law of Demeter, here’s how that function might look:

function emailManager(department) {
  const managerFirstName = department.manager.firstName;
  const managerLastName = department.manager.lastName;
  const managerFullName = `${managerFirstName} ${managerLastName}`;
  const managerEmail = department.manager.email;
  sendEmail(managerFullName, managerEmail);
}

Very tedious! And on top of that, if anything changes with the implementation of the manager in the Department class, there’s a good chance this will break. What we need is a level of abstraction to make this function’s job easier.

We can add this method to our Department class:

getManagerEmailObj: function() {
  return {
    firstName: this.manager.firstName,
    lastName: this.manager.lastName,
    fullName: `${this.manager.firstName} ${this.manager.lastName}`,
    email: this.manager.email
  };
}

With that, the first function can be rewritten as this:

function emailManager(department) {
  let emailObj = department.getManagerEmailObj();
  sendEmail(emailObj.fullName, emailObj.email);
}

This not only makes the function much cleaner and easier to understand, but it makes it easier to update the Department class if needed (although that can also be dangerous, as we’ll discuss later). You won’t have to hunt down every place that tries to access the class’s internal information; you just update the internal method.

Setting up our classes to enforce this can be tricky. It helps to draw a distinction between traditional OOP objects and data structures. Data structures should expose data and contain no behavior. OOP objects should expose behavior and limit access to data. In a language like C++, these are traditionally kept separate (plain structs for data, classes for behavior), and you explicitly choose one or the other. In JavaScript, the lines are blurred a bit because the object type is used for both.

Here’s a data structure in JavaScript:

let Manager = {
  firstName: 'Brandon',
  lastName: 'Gregory',
  email: 'brandon@myurl.com'
};

Note how the data is easily accessible. That’s the whole point. However, if we want to expose behavior, per best practice, we’d want to hide the data using internal variables on a class:

class Manager {
  constructor(options) {
    let firstName = options.firstName;
    let lastName = options.lastName;
    this.setFullName = function(newFirstName, newLastName) {
      firstName = newFirstName;
      lastName = newLastName;
    };
    this.getFullName = function() {
      return `${firstName} ${lastName}`;
    };
  }
}

Now, if you’re thinking that’s unnecessary, you’re correct in this case—there’s not much point to having getters and setters in a simple object like this one. Where getters and setters become important is when internal logic is involved:

class Department {
  constructor(options) {
    // Some other properties
    let manager = options.manager;
    this.changeManager = function(newManager) {
      if (checkIfManagerExists(newManager)) {
        manager = newManager;
        // AJAX call to update the manager in the database
      }
    };
    this.getManager = function() {
      if (checkIfUserHasClearance()) {
        return manager;
      }
    };
  }
}

This is still a small example, but you can see how the getter and setter here are doing more than just obfuscating the data. We can attach logic and validation to these methods that consumers of a Department object shouldn’t have to worry about. And if the logic changes, we can change it on the getter and setter without finding and changing every bit of code that tries to get and set those properties. Even if there’s no internal logic when you’re building your app, there’s no guarantee that you won’t need it later. You don’t have to know what you’ll need in the future; you just have to leave space so you can add it later. Limiting access to data in an object that exposes behavior gives you that buffer in case the need arises.

As a general rule, if your object exposes behavior, it’s an OOP object, and it should not allow direct access to the data; instead, it should provide methods to access it safely, as in the above example. However, if the point of the object is to expose data, it’s a data structure, and it should not also contain behavior. Mixing these types muddies the water in your code and can lead to some unexpected (and sometimes dangerous) uses of your object’s data, as other functions and methods may not be aware of all of the internal logic needed for interacting with that data.

The interface segregation principle

Imagine you get a new job designing cars for a major manufacturer. Your first task: design a sports car. You immediately sit down and start sketching a car that’s designed to go fast and handle well. The next day, you get a report from management, asking you to turn your sports car into a sporty minivan. Alright, that’s weird, but it’s doable. You sketch out a sporty minivan. The next day, you get another report. Your car now has to function as a boat as well as a car. Ridiculous? Well, yes. There’s no way to design one vehicle that meets the needs of all consumers. Similarly, depending on your app, it can be a bad idea to code one function or method that’s flexible enough to handle everything your app could throw at it.

The interface segregation principle states that no client should be forced to depend on methods it does not use. In simpler terms, if your class has a plethora of methods and only a few of them are used by each user of the object, it makes more sense to break up your object into several more focused objects or interfaces. Similarly, if your function or method contains several branches to behave differently based on what data it receives, that’s a good sign that you need different functions or methods rather than one giant one.

One big warning sign for this is flags that get passed into functions or methods. Flags are Boolean variables that significantly change the behavior of the function if true. Take a look at the following function:

function addPerson(person, isManager) {
  if (isManager) {
    // add manager
  } else {
    // add employee
  }
}

In this case, the function is split up into two mutually exclusive branches—there’s no way both will run in the same call. It makes more sense to break this up into separate functions, since we already know whether the person is a manager when we call it.
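
Split into two focused functions, the flag disappears entirely. Here’s a minimal sketch, keeping the same placeholder bodies as above:

function addManager(person) {
  // add manager
}

function addEmployee(person) {
  // add employee
}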

That’s a simplified example. An example closer to the actual definition of the interface segregation principle would be if a module contained numerous methods for dealing with employees and separate methods for dealing with managers. In this case, it makes much more sense to split the manager methods off into a separate module, even if the manager module is a child class of the employee module and shares some of the properties and methods.

Please note: flags are not automatically evil. A flag can be fine if you’re using it to trigger a small optional step while most of the functionality remains the same in both cases. What we want to avoid is using flags to create “clever” code that’s harder to use, edit, and understand. Complexity can be fine as long as you’re gaining something from it. But if you’re adding complexity and there’s no significant payoff, think about why you’re coding it that way.
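
As an example of a flag that’s probably fine, here’s a sketch where the core behavior is identical in both cases and the flag only toggles a small optional step (savePersonToDatabase and sendEmail are hypothetical helpers):

function savePerson(person, notify) {
  savePersonToDatabase(person); // the core behavior, same in both cases
  if (notify) {
    sendEmail(person.email, 'Your record was saved.'); // small optional step
  }
}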

Unnecessary dependencies can also happen when developers try to implement features they think they might need in the future. There are a few problems with this. One, there’s a considerable cost to pay now in both development time and testing time for features that won’t be used now—or possibly at all. Two, it’s unlikely that the team will know enough about future requirements to adequately prepare for the future. Things will change, and you probably won’t know how things will change until phase one goes out into production. You should write your functions and methods to be open to extension later, but be careful about trying to guess what the future holds for your codebase.

Adhering to the interface segregation principle is definitely a balancing act, as it’s possible to go too far with abstractions and have a ridiculous number of objects and methods. This, ironically, causes the same problem: added complexity without a payoff. There’s no hard rule to keep this in check—it’s going to depend on your app, your data, and your team. But there’s no shame in keeping things simple if making them complex does not help you. In fact, that’s usually the best route to go.

The open/closed principle

Many younger developers don’t remember the days before web standards changed development. (Thanks, Jeffrey Zeldman, for making our lives easier!) It used to be that whenever a new browser was released, it had its own interpretation of things, and developers had to scramble to find out what was different and how it broke all of their websites. There were articles and blog posts written quickly about new browser quirks and how to fix them, and developers had to drop everything to implement those fixes before clients noticed that their websites were broken. For many of the brave veterans of the first browser war, this wasn’t just a nightmare scenario—it was part of our job. As bad as that sounds, it’s easy for our code to do the same thing if we’re not careful about how we modify it.

The open/closed principle states that software entities (classes, modules, functions, etc.) should be open for extension but closed for modification. In other words, your code should be written in such a way that it’s easy to add new functionality while you disallow changing existing functionality. Changing existing functionality is a great way to break your app, often without realizing it. Just like browsers rely on web standards to keep new releases from breaking our sites, your code needs to rely on its own internal standards for consistency to keep your code from breaking in unexpected ways.

Let’s say your codebase has this function:

function getFullName(person) {
  return `${person.firstName} ${person.lastName}`;
}

A pretty simple function. But then, there’s a new use case where you need just the last name. Under no circumstances should you modify the above function like so:

function getFullName(person) {
  return {
    firstName: person.firstName,
    lastName: person.lastName
  };
}

That solves your new problem, but it modifies existing functionality and will break every bit of code that was using the old version. Instead, you should extend functionality by creating a new function:

function getLastName(person) {
  return person.lastName;
}

Or, if we want to make it more flexible:

function getNameObject(person) {
  return {
    firstName: person.firstName,
    lastName: person.lastName
  };
}

This is a simple example, but it’s easy to see how modifying existing functionality can cause major problems. Even if you’re able to locate every call to your function or method, they all have to be tested—the open/closed principle helps to reduce testing time as well as unexpected errors.

So what does this look like on a larger scale? Let’s say we have a function to grab some data via an XMLHttpRequest and do something with it:

function request(endpoint, params) {
  const xhr = new XMLHttpRequest();
  // For a GET request, the parameters belong in the query string
  xhr.open('GET', `${endpoint}?${params}`, true);
  xhr.onreadystatechange = function() {
    if (xhr.readyState == 4 && xhr.status == 200) {
      // Do something with the data
    }
  };
  xhr.send();
}

request('https://myapi.com', 'id=91');

That’s great if you’re always going to be doing the same thing with that data. But how many times does that happen? If we do anything else with that data, coding the function that way means we’ll need another function to do almost the same thing.

What would work better would be to code our request function to accept a callback function as an argument:

function request(endpoint, params, callback) {
  const xhr = new XMLHttpRequest();
  // For a GET request, the parameters belong in the query string
  xhr.open('GET', `${endpoint}?${params}`, true);
  xhr.onreadystatechange = function() {
    if (xhr.readyState == 4 && xhr.status == 200) {
      callback(xhr.responseText);
    }
  };
  xhr.send();
}

const defaultAction = function(responseText) {
  // Do something with the data
};

const alternateAction = function(responseText) {
  // Do something different with the data
};

request('https://myapi.com', 'id=91', defaultAction);
request('https://myapi.com', 'id=42', alternateAction);

With the function coded this way, it’s much more flexible and useful to us, because it’s easy to add in new functionality without modifying existing functionality. Passing a function as a parameter is one of the most useful tools we have in keeping our code extensible, so keep this one in mind when you’re coding as a way to future-proof your code.

Keeping it clear

Clever code that increases complexity without improving clarity helps nobody. The bigger our apps get, the more clarity matters, and the more we have to plan to make sure our code is clear. Following these guidelines helps improve clarity and reduce overall complexity, leading to fewer bugs, shorter timelines, and happier developers. They should be a consideration for any complex app.

Thanks

A special thanks to Zell Liew of Learn JavaScript for lending his technical oversight to this article. Learn JavaScript is a great resource for moving your JavaScript expertise from beginner to advanced, so it’s worth checking out to further your knowledge!

Make Something Great: Become an Open Source Contributor

Thu, 08/16/2018 - 07:03

My first contribution to Bootstrap was a tiny line of CSS. It was a no-brainer to merge, but the feeling of seeing that bit of code in the project’s codebase was unreal and addictive.

You may think that open source is not for you. After all, it has always been a developer-dominant ecosystem. But code is by no means the only thing a piece of software is made of. Open source is first and foremost about community. Whether you’re a designer, developer, writer, doctor, or lawyer, there are many paths to the open source world.

Learn what you need to know to set out on your journey, from first steps to becoming a core contributor. It might change your career.

It’s OK if you don’t code

Developers think about their work logically. They break problems down into solvable pieces to make things work. They will devote themselves to crafting an API or a data structure, and optimize those solutions for performance and reusability. Unfortunately, this deconstruction often results in a Swiss Army knife of an interface, with a design that reflects the underlying data structures and APIs available.

Diversity is what can take open source from where it is to where it could be. Una Kravets, “Open Source Design: A Call to Arms”

This is why the open source community needs you: not only diversity of perspective, but also diversity of gender, location, culture, and social background. Together these become greater than the sum of their parts.

Designers

Most people who contribute to an open source project are also users of the software. But designers look at the project from a different perspective. Their job is to defend users, especially those who aren’t able to contribute to the project but still need the software. They make sure that everyone working on the project understands users’ needs and stays focused on them as the community makes decisions.

Writers

Let’s face it: writing is really hard! Designers and developers are usually bad at it. But it’s so valuable to an open source community, where members have to collaborate and communicate remotely, asynchronously, and, more often than not, in a non-native language.

Documentation, especially on open source projects, is rarely up-to-date. It’s worse when it involves the documentation meant for contributors. Information for getting started with a project frequently has gaps, with important information missing.

Also, like developers who dedicate themselves to different pieces of a software project, different types of writers can contribute to different pieces of a project’s messaging. They can team up with designers and subject matter experts to write copy for user interfaces, landing pages, or help documentation.

Geertjan Wielenga was a technical writer in the NetBeans community. Through his documentation, articles, and getting-started guides, he helped thousands of Java developers navigate their way around the project. His contributions had a profound impact, and he became the most acclaimed person in the community.

Without communication, you have no community. What you write may be the reason why someone decides to get involved. It can make the difference between someone feeling welcome or feeling lost. Your contribution as a writer is invaluable.

Developers that don’t want to code

Coding is optional; even software developers don’t always code. There’s administrative work too! Replying to issues, reviewing contributions, and helping users on forums, chats, Reddit, or Stack Overflow are as important to the success of the project as writing code.

Subject matter experts

Participation in open source projects is by no means limited to software engineers, designers, and writers. Lawyers, other engineers, and even medical doctors and other specialists can find a place to apply their knowledge too.

So if you thought open source projects were just for developers, think again. There is a place for you and every single contribution is important.

Why bother?

In 2013, Jay Balunas, the cofounder of AeroGear, a small open source project, saw that more than 85% of its Android code was written by a single developer: Daniel Passos. Jay had received some funding, so he reached out, offering him a job on the spot. But Daniel turned it down.

Why would someone turn down a paid position and want to continue working for free? Passos lived thousands of miles away, in Rio de Janeiro. He also didn’t speak any English.

Not about to lose a great developer that had already proven his worth, Jay solved the problem. He made the position remote, and sent an English teacher to Daniel’s house every week.

This story may sound too good to be true. But this may describe the careers of more people than you think—people who did not start out contributing to open source ever expecting anything in return. They would probably describe their experience starting out as a labor of love.

Getting a job offer shouldn’t be your only motivation to contribute to an open source project. If it is, you’ll likely be frustrated with the results.

Working for free

You may have a problem with working for free, especially when there seems to be plenty of well-paid work to go around. Why should you work in a vulnerable environment with total strangers, without ever receiving compensation?

If you are in your early twenties, willing to work all night for the love of this industry, and have few pressing expenses, then building up your professional reputation on open source projects and sharing your ideas is a great thing to do. It’s how we all got started, how I and the majority of my peers found our voices. Rachel Andrew, “The High Price of Free”

On a professional level, among the biggest assets you have are your connections. But not everyone lives in a major tech industry area. Not everyone can attend industry conferences or participate in hackathons. The open source community opens a network of passionate and talented people from around the world. To become part of it, you don’t have to worry about whiteboarding exercises, interviews, or whether you have a degree from the right university.

But you may be disappointed if you contribute to an open source project just to get a job. Open source is volunteer work, just like helping other not-for-profit and community organizations that need people in order to stay open and reach as many people as possible. It should be approached from a place of wanting to give back to your community and contributing to a worthy cause.

Still, good employees are hard to find, and it’s often not a question of a person’s technical skills. Many companies today require applicants to participate in a months-long interview process, and complete hours of coding and design challenges that are unpaid, are unrecognized, and become the company’s property. In the case of Daniel Passos, by contributing to a project over time, he was able to demonstrate what he was capable of building, how he collaborated with others, and how passionate he was. This let him get past job requirements that aren’t related to the work but that are used to deny qualified job applicants all the time. This results in people who pay it forward: Daniel has since mentored many people in the community, including me.

As a contributor, you will be able to experiment and play with bleeding-edge techniques at a scale that you would hardly find on a personal project or in a hackathon scenario. It’s also an opportunity to continue working with technologies that you might not get to use anymore in other work. And if you have been away from making things for a long time, an open source project is a great way to get back on track.

Last but not least, it’s hard to explain with words the feeling you get when your name appears on a project. The positive feedback loop of being part of something larger than yourself is what makes open source addictive. Just ask what happened when a couple of people took over the blogging tool b2 when it was abandoned by its creator.

Finding your community

If all of this sounds good to you, it’s time to find your people. Start by taking a look at what you like and use. Ask yourself what problems you would like to solve. If you are passionate about something, there is probably a community around it.

If you enjoy working with a particular technology, you have options spanning the entire programming realm. For example, if you are like me and enjoy working with CSS, you can contribute to projects like Bulma, Bootstrap, Tachyons, Tailwind CSS, or Foundation, or design systems like Primer, PatternFly, or Lightning, among many others. GitHub has a great open source explorer you can use to find a group.

If you would like to work on use cases that you don’t get to in your day job, like healthcare software, for example, you can find lists of active projects by area and see what kind of contributions they need. OpenMRS is a great example of a project that benefits people outside the industry and that would never be successful with millions of developers but no designers, writers, or subject matter experts.

Respect

Brian Leathem, a notable open source developer, describes working on an open source project as being like working behind a glass wall. Every single action you take will be visible, transparent, and recorded. This makes you vulnerable, but a healthy community will make you feel welcome and comfortable. Check your project’s code of conduct before you contribute. Never tolerate harassment, bullying of any kind, or unkindness. If it happens and the members don’t act swiftly to enforce their code of conduct, they don’t deserve you.

Communication

Having said that, it’s essential to have a thick skin. Frame any criticism you will receive as a learning opportunity. When interacting with others, commit to setting aside ego for something bigger. Be humble, stay positive, have good arguments, and remain open-minded.

Being able to connect with people who are very different from you will be critical. You may collaborate with someone from a part of the world where communication styles and customs are different. For example, some cultures expect you to be very assertive if you care deeply about something. This is very different from cultures where it’s impolite to disagree in an open forum.

Trust

As you take your first steps, you might notice that thriving open source communities are supported by people who trust each other. Learn to trust and to be trusted by showing what you are able to do, admitting when you are wrong or don’t know something, and letting people do their work. Approach your first contributions from a place of humility. Once you gain an understanding of how people like to work with one another, you will be able to make a bigger dent with bolder contributions.

My very first contribution was to AeroGear, a small open source project. I downloaded the codebase to my computer, made my design changes, zipped the files, and sent an email to the community mailing list.

To say that the community had trouble understanding the improvements I had made to the user experience would be an understatement. I felt terribly lost, and a little rejected. I really wanted to become a part of this open source project, but I didn’t know where to begin. So I asked for help, and the community had endless patience with me, even when I destroyed the repository a few times.

The toolbox

To participate in an open source project, you will need to shed any fears you may have of using the command line and working with version control. However, many open source projects are hosted on GitHub, where you might be able to avoid some of this if you do not code and are posting sketches, making changes to copy, or writing documentation.

The command line

Level Up Tutorials has a great video series about the command line if you are a visual learner. Their Command Line Basics #1 video is a good place to start. If you prefer to read, Remy Sharp’s Working the Command Line is excellent.

Git

For getting started with version control, GitHub has a great step-by-step guide to Git. There are many Git desktop apps like Sourcetree, GitHub Desktop, or GitKraken that will help you visualize what Git does. I still highly recommend becoming familiar with the Git command-line tool. It’s a steeper learning curve, but you’ll get a return on your investment.
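
To give you a feel for the workflow, here’s a typical command-line contribution flow (the repository URL and branch name are placeholders):

# Copy your fork of the project to your machine
git clone https://github.com/you/project.git
cd project

# Create a branch for your change
git checkout -b fix-readme-typo

# ...edit files, then stage and commit your work...
git add README.md
git commit -m "Fix typo in README"

# Push the branch, then open a pull request on GitHub
git push origin fix-readme-typo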

Communication channels

Every community has its communication channels. There is almost always a mailing list where the most important decisions are made. GitHub’s Issues feature is used for contribution issue tracking. Forums are common for user discussions.

Chat among contributors has traditionally been on IRC, but Slack, Rocket.Chat, and Gitter have become more popular, including for user discussions. Find out where your community hangs out, and get to know its members.

Making your first contribution

The harder part of getting started with open source is finding a community and becoming familiar with how it operates and communicates. If you have cleared that hurdle, you are more than ready to begin contributing. Start small, and be nice.

Look at the issues for a small task you feel comfortable doing. On some projects they are tagged as “help wanted” or “good first issue.” Documentation is also a great place to start. Go through the “getting started” guides and see if they make sense to a newcomer, like you. Can you improve them, make them more clear? Look for a typo or a grammar mistake. Contributions like these are easy to merge and are a perfect starting point.

If you want to contribute to a project in ways other than working on the code, these issues are good ways to introduce yourself and what you can do. For example, if you are a designer, a project will sometimes, but not always, be looking for UI designs. But in most cases, even on projects with very little UI, like a utility or a service, there will be usability problems that need solutions. By pointing out unclear information and offering quick solutions, you can start to demonstrate both your expertise and your passion.

Sometimes, changes or further explanation will be requested. Other times you’ll break things, and that’s OK. I once sent a pull request that messed up the border radius of Bootstrap buttons. I hadn’t tested the result. Mark Otto, the leader of the project, took the time to write a comment explaining where I made a mistake and how I might fix it. He didn’t have to do that; I should have known better. The gesture and the respect for my time as a contributor made me want to help the project even more.

Leveling up

Here is a secret: you don’t need to make a ton of commits to become a top contributor. React is probably the most active open source project today, and to become a Top 100 React contributor, you only need to merge five commits. They can even be five typos that you’ve fixed in the docs! And you can make an even greater impact in smaller communities with that level of contribution.

Commit to contribute

If you value the idea of open source, you are worthy of contributing to a project, earning recognition, and being a respected member of a community. If you have different expertise, experience, or points of view about a project, we need you even more. At the end of the day, without people contributing to the community, the web will not remain open and free.

Rachel Andrew goes on to write about how she’s seen people of her generation taking a step back, as she started to feel the pressure of the finite amount of time she has. Pioneers of the modern web like her paid it forward. Can you?

What is Typesetting?

Thu, 08/09/2018 - 07:02

A note from the editors: We’re pleased to share an excerpt from Chapter 1 of Tim Brown’s Flexible Typesetting, from A Book Apart.

Typesetting is the most important part of typography, because most text is meant to be read, and typesetting involves preparing text for reading.

You’re already great at typesetting. Think about it. You choose good typefaces. You determine font sizes and line spacing. You decide on the margins that surround text elements. You set media query breakpoints. All of that is typesetting.

Maybe you’re thinking, But Tim, I am a font muggins. Help me make better decisions! Relax. You make better decisions than you realize. Some people will try to make you feel inferior; ignore them. Your intuition is good. Practice, and your skills will improve. Make a few solid decisions; then build on them. I’ll help you get started.

In this chapter, I’ll identify the value of typesetting and its place within the practice of typography. I’ll talk about pressure, a concept I use throughout this book to explain why typeset texts sometimes feel awkward or wrong. I’ll also discuss how typesetting for the web differs from traditional typesetting.

Why does typesetting matter?

Typesetting shows readers you care. If your work looks good and feels right, people will stick around—not only because the typography is comfortable and familiar, but also because you show your audience respect by giving their experience your serious attention (Fig 1.1).

Fig 1.1: Glance at these two screenshots. Which one would you rather read? Which publisher do you think cares more about your experience?

Sure, you could buy the “it” font of the moment (you know, the font all the cool people are talking about). You could use a template that promises good typography. You could use a script that spiffs up small typographic details. None of these things is necessarily bad in and of itself.

But when you take shortcuts, you miss opportunities to care about your readers, the text in your charge, and the practice of typography, all of which are worthwhile investments. Spending time on these things can feel overwhelming, but the more you do it, the easier and more fun it becomes. And you can avoid feeling overwhelmed by focusing on the jobs type does.

Imagine yourself in a peaceful garden. You feel the soft sun on your arms, and take a deep breath of fresh, clean air. The smell of flowers makes you feel happy. You hear honeybees hard at work, water trickling in a nearby brook, and birds singing. Now imagine that this garden needs a website, and you’re trying to find the right typeface.

Sorry to spoil the moment! But hey, if you do this right, the website could give people the same amazing feeling as sitting in the garden itself.

If you’re anything like me, your first instinct will be to recall sensations from the imaginary garden and look for a typeface with shapes that evoke similar sensations. But this is not a good way to choose among thousands upon thousands of fonts, because it’s too easy to end up with typefaces that—as charming as they may seem at first—don’t do their jobs. You’ll get disappointed and go right back to relying on shortcuts.

Finding typefaces that are appropriate for a project, and that evoke the right mood, is easier and more effective if you know they’re good at the jobs you need them to do. The trick is to eliminate type that won’t do the job well (Fig 1.2).

Fig 1.2: Hatch, a typeface by Mark Caneso, is fun to use large, but not a good choice for body text.

Depending on the job, some typefaces work better than others—and some don’t work well at all. Detailed, ornate type is not the best choice for body text, just as traditional text typefaces are not great for signage and user interfaces. Sleek, geometric fonts can make small text hard to read. I’ll come back to this at the beginning of Chapter 3.

Considering these different jobs helps you make better design decisions, whether you’re selecting typefaces, tending to typographic details, or making text and layout feel balanced. We’ll do all of that in this book.

Typesetting covers type’s most important jobs

Typesetting, or the act of setting type, consists of typographic jobs that form the backbone of a reading experience: body text (paragraphs, lists, subheads) and small text (such as captions and asides). These are type’s most important jobs. The other parts of typography—which I call arranging and calibrating type—exist to bring people to the typeset text, so they can read and gather information (Fig 1.3).

Fig 1.3: Think of these typographic activities as job categories. In Chapter 3, we’ll identify the text blocks in our example project and the jobs they need to do.

Let’s go over these categories of typographic jobs one by one. Setting type well makes it easy for people to read and comprehend textual information. It covers jobs like paragraphs, subheads, lists, and captions. Arranging type turns visitors and passersby into readers, by catching their attention in an expressive, visual way. It’s for jobs like large headlines, titles, calls to action, and “hero” areas. Calibrating type helps people scan and process complicated information, and find their way, by being clear and organized. This is for jobs like tabular data, navigation systems, infographics, math, and code.

Arranging and calibrating type, and the jobs they facilitate, are extremely important, but I won’t spend much time discussing them in this book except to put them in context and explain where in my process I usually give them attention. They deserve their own dedicated texts. This book focuses specifically on setting type, for several reasons.

First, typesetting is critical to the success of our projects. Although the decisions we make while typesetting are subtle almost to the point of being unnoticeable, they add up to give readers a gut feeling about the work. Typesetting lays a strong foundation for everything else.

It also happens to be more difficult than other parts of typography. Good type for typesetting is harder to find than good type for other activities. Good typesetting decisions are harder to make than decisions about arranging type or calibrating type.

Furthermore, typesetting can help us deeply understand the web’s inherent flexibility, which responsive web design has called attention to so well. The main reason I make a distinction between typesetting, arranging type, and calibrating type is because these different activities each require text to flex in different ways.

In sum, typesetting matters because it is critical for readers, it supports other typographic activities, the difficult decisions informing it take practice, and its nature can help us understand flexibility and responsiveness on the web. A command of typesetting makes us better designers.

Why do some websites feel wrong?

It’s not hard to find websites that just feel, well, sort of wrong. They’re everywhere. The type they use is not good, the font size is too small (or too big), lines of text are too long (or comically short), line spacing is too loose or too tight, margins are either too small or way too big, and so on (Fig 1.4).

Fig 1.4: Some typesetting just looks wrong. Why? Keep reading.

It’s logical to think that websites feel wrong because, somewhere along the line, a typographer made bad decisions. Remember that a type designer is someone who makes type; a typographer is someone who uses type to communicate. In that sense, we are all typographers, even if we think of what we do as designing, or developing, or editing.

For more than 500 years, the job of a typographer has been to decide how text works and looks, and over those years, typographers have made some beautiful stuff. So if some websites feel wrong, it must be because the typographers who worked on them were inexperienced, or lazy, or had no regard for typographic history. Right?

Except that even the best typographers, who have years of experience, who have chosen a good typeface for the job at hand, who have made great typesetting decisions, who work hard and respect tradition—even those people can produce websites that feel wrong. Websites just seem to look awful in one way or another, and it’s hard to say why. Something’s just not quite right. In all likelihood, it’s the typesetting. Specifically, websites feel wrong when they put pressure on typographic relationships.

Typographic relationships

Have you ever chosen a new font for your blog template, or an existing project, and instinctively adjusted the font size or line spacing to make it feel better?

Fig 1.5: Replacing this theme’s default font with Kepler made the text seem too small. Size and line-spacing adjustments felt necessary.

Those typesetting adjustments help because the typeface itself, as well as its font size, measure (a typographic term for the length of lines of text), and line spacing all work together to make a text block feel balanced. (We’ll return to text blocks in more detail in Chapter 3.) This balance is something we all instinctively notice; when it’s disrupted, we sense pressure.

But let’s continue for a moment with this example of choosing a new font. We sense pressure every time we choose a new font. Why? Because each typeface is sized and positioned in unique ways by its designer (Fig 1.6).

Fig 1.6: Glyphs are sized and positioned within a font’s em box. When we set a font size, we are sizing the em box—not the glyph inside it.

In Chapter 2, we’ll take a closer look at glyphs, which are instances of one or more characters. For now, suffice it to say that glyphs live within a bounding box called the em box, which is a built-in part of a font file. Type designers decide how big, small, narrow, or wide glyphs are, and where they are positioned, within this box. The em box is what becomes our CSS-specified font size—it maps to the CSS content area.

So when we select a new typeface, the visible font size of our text block—the chunk of text to which we are applying styles—often changes, throwing off its balance. This means we need to carefully adjust the font size and then the measure, which depends on both the typeface and the font size. Finally, we adjust line spacing, which depends on the typeface, font size, and measure. I’ll cover how to fine-tune all of these adjustments in Chapter 4.
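
In CSS terms, those adjustments map to a handful of properties on the text block. Here’s a minimal sketch (the font family and values are placeholders, not recommendations):

.text-block {
  font-family: "New Typeface", Georgia, serif; /* the typeface */
  font-size: 1.125rem; /* adjusted for the new em box */
  max-width: 34em;     /* the measure, which depends on typeface and size */
  line-height: 1.5;    /* line spacing, which depends on all of the above */
}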

Making so many careful adjustments to one measly text block seems quite disruptive, doesn’t it? Especially because the finest typographic examples in history—the work we admire, the work that endures—command a compositional balance. Composition, of course, refers to a work of art or design in its entirety. Every text block, every shape, every space in a composition relates to another. If one text block is off-kilter, the whole work suffers.

I’m sure you can see where I’m headed with this. The web puts constant pressure on text blocks, easily disrupting their balance in myriad ways.

Pressure

There are no “correct” fonts, font sizes, measures, or line heights. But relationships among these aspects of a text block determine whether reading is easier or harder. Outside forces can apply pressure to a balanced, easy-to-read text block, making the typesetting feel wrong, and thus interfering with reading.

We just discussed how choosing a new typeface introduces pressure. The same thing happens when our sites use local fonts that could be different for each reader, or when webfonts fail to load and our text is styled with fallback fonts. Typefaces are not interchangeable. When they change, they cause pressure that we have to work hard to relieve.

We also experience pressure when the font size changes (Fig 1.7). Sometimes, when we’re designing sites, we increase font size to better fill large viewports—the viewing area on our screens—or decrease it to better fit small ones. Readers can even get involved, by increasing or decreasing font size themselves to make text more legible. When font size changes, we have to consider whether our typeface, measure, and line spacing are still appropriate.

Fig 1.7: Left: a balanced text block. Right: a larger font size causes pressure.

Changes to the width of our text block also introduce pressure (Fig 1.8). When text blocks stretch across very wide screens, or are squeezed into very narrow viewports, the entire composition has to be reevaluated. We may find that our text blocks need new boundaries, or a different font size, or even a different typeface, to make sure they maintain a good internal balance—and feel right for the composition. (This may seem fuzzy right now, but it will become clearer in Chapters 5 and 6, I promise.)
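
As a rough sketch of what that reevaluation can look like in CSS (the breakpoint and values are purely illustrative):

.text-block {
  font-size: 1rem;
  max-width: 30em;
}

@media (min-width: 60em) {
  .text-block {
    font-size: 1.125rem; /* slightly larger type for the larger viewport */
    max-width: 36em;     /* a wider, but still bounded, measure */
  }
}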

Fig 1.8: Left: a balanced text block. Right: a narrower measure causes pressure.

We also experience pressure when we try to manage white space without considering the relationships in our text blocks (Fig 1.9). When we predetermine our line height with a baseline grid, or when we adjust the margins that surround text as if they were part of a container into which text is poured rather than an extension of the balance in the typesetting, we risk destroying relationships among compositional white spaces—not only the white spaces in text blocks (word spacing, line spacing), but also the smaller white spaces built into our typefaces. These relationships are at risk whenever a website flexes, whenever a new viewport size comes along.

Fig 1.9: Left: a balanced text block. Right: looser line spacing causes pressure.

Typesetting for the web can only be successful if it relieves inevitable pressures like these. The problem is that we can’t see all of the pressures we face, and we don’t yet have the means (the words, the tools) to address what we can see. Yet our natural response, based on centuries of typographic control, is to try to make better decisions.

But on the web, that’s like trying to predict the weather. We can’t decide whether to wear a raincoat a year ahead of time. What we can do is get a raincoat and be ready to use it under certain conditions. Typographers are now in the business of making sure text has a raincoat. We can’t know when it’ll be needed, and we can’t force our text to wear it, but we can make recommendations based on conditional instructions.

For the first time in hundreds of years, because of the web, the role of the typographer has changed. We no longer decide; we make suggestions. We no longer choose typefaces, font size, line length, line spacing, and margins; we prepare and instruct text to make those choices for itself. We no longer determine page shape and quality; we respond to our readers’ contexts and environments.

These changes may seem like a weakness compared to the command we have always been able to exercise. But they are in fact an incredible strength, because they mean that typeset text has the potential to fit everyone just right. In theory, at least, the web is universal.

The primary design principle underlying the web’s usefulness and growth is universality. Tim Berners-Lee

We must now practice a universal typography that strives to work for everyone. To start, we need to acknowledge that typography is multidimensional, relative to each reader, and unequivocally optional.

Read the rest of this chapter and more when you buy the book!

Fixing Variable Scope Issues with ECMAScript 6

Thu, 08/02/2018 - 07:07

Variable scope has always been tricky in JavaScript, particularly when compared to more structured languages like C and Java. For years, there wasn’t much talk about it because we had few options for really changing it. But ECMAScript 6 introduced some new features to help give developers more control of variable scope. Browser support is pretty great and these features are ready to use for most developers today. But which to choose? And what, exactly, do they do?

This article spells out what these new features are, why they matter, and how to use them. If you’re ready to take more control over variable scope in your projects or just want to learn the new way of doing things, read on.

Variable scope: a quick primer

Variable scope is an important concept in programming, but it can confuse some developers, especially those new to programming. Scope is the area in which a variable is known. Take a look at the following code:

var myVar = 1;

function setMyVar() {
  myVar = 2;
}

setMyVar();
console.log(myVar);

What does the console log read? Not surprisingly, it reads 2. The variable myVar is defined outside of any function, meaning it’s defined in the global scope. Consequently, every function here will know what myVar is. In fact, even functions in other files that are included on the same page will know what this variable is.

Now consider the following code:

function setMyVar() {
  var myVar = 2;
}

setMyVar();
console.log(myVar);

All we did was move where the variable was declared. So what does the console log read now? Well, it throws a ReferenceError because myVar is not defined. That’s because the var declaration here is function-level, making the scope extend only within the function (and any potential functions nested in it), but not beyond. If we want a variable’s scope to be shared by two or more functions on the same level, we need to define the variable one level higher than the functions.

Here’s the tricky thing: most websites and apps don’t have all of the code written by one developer. Most will have several developers touching the code, as well as third-party libraries and frameworks thrown into the mix. And even if it’s just one developer, it’s common to pull JavaScript in from several places. Because of this, it’s generally considered bad practice to define a variable in the global scope—you never know what other variables other developers will be defining. There are some workarounds to share variables among a group of functions—most notably, the module pattern and IIFEs in object-oriented JavaScript, although encapsulating data and functions in any object will accomplish this. But variables with scopes larger than necessary are generally problematic.
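
As a quick sketch of one such workaround, here’s an IIFE (immediately invoked function expression) that shares a variable between two functions without touching the global scope:

(function() {
  var counter = 0; // shared by the two functions below, invisible outside

  function increment() {
    counter++;
  }

  function logCounter() {
    console.log(counter);
  }

  increment();
  logCounter(); // 1
})();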

The problem with var

Alright, so we’ve got a handle on variable scope. Let’s get into something more complex. Take a look at the following code:

function varTest() {
  for (var i = 0; i < 3; i++) {
    console.log(i);
  }
  console.log(i);
}

varTest();

What are the console logs? Well, inside the loop, you get the iteration variable as it increments: 0, 1, 2. After that, the loop ends and we move on. Now we try to reference that same variable outside of the for loop it was created in. What do we get?

The console log reads 3 because the var statement is function-level. If you define a variable using var, the entire function will have access to it, no matter where it is defined in that function.

This can get problematic when functions become more complex. Take a look at the following code:

function doSomething() {
  var myVar = 1;
  if (true) {
    var myVar = 2;
    console.log(myVar);
  }
  console.log(myVar);
}

doSomething();

What are the console logs? 2 and 2. We define a variable equal to 1, and then try to redefine the same variable inside the if statement. Since those two exist in the same scope, we can’t define a new variable, even though that’s obviously what we want, and the first variable we set is overwritten inside the if statement.

That right there is the biggest shortcoming with var: its scope is too large, which can lead to unintentional overwriting of data, and other errors. Large scope often leads to sloppy coding as well—in general, a variable should only have as much scope as it needs and no more. What we need is a way to declare a variable with a more limited scope, allowing us to exercise more caution when we need to.

Enter ECMAScript 6.

New ways to declare variables

ECMAScript 6 (a new set of features baked into JavaScript, also known as ES6 or ES2015) gives us two new ways to define variables with a more limited scope: let and const. Both give us block-level scope, meaning scope can be contained within blocks of code like for loops and if statements, giving us more flexibility in choosing how our variables are scoped. Let’s take a look at both.

Using let

The let statement is simple: it’s mostly like var, but with limited scope. Let’s revisit that code sample from above, replacing var with let:

function doSomething() {
  let myVar = 1;
  if (true) {
    let myVar = 2;
    console.log(myVar);
  }
  console.log(myVar);
}

doSomething();

In this case, the console logs would read 2 and 1. This is because an if statement defines a new scope for a variable declared with let—the second variable we declare is actually a separate entity than the first one, and we can set both independently. But that doesn’t mean that nested blocks like that if statement are completely cut off from higher-level scopes. Observe:

function doSomething() {
  let myVar = 1;
  if (true) {
    console.log(myVar);
  }
}

doSomething();

In this case, the console log would read 1. The if statement has access to the variable we created outside of it and is able to log that. But what happens if we try to mix scopes?

function doSomething() {
  let myVar = 1;
  if (true) {
    console.log(myVar);
    let myVar = 2;
    console.log(myVar);
  }
}

doSomething();

You might think that first console log would read 1, but it actually throws a ReferenceError, telling us that myVar is not defined or initialized for that scope. (The terminology varies across browsers.) JavaScript variables are hoisted in their scope—if you declare a variable within a scope, JavaScript reserves a place for it even before you declare it. How that variable is reserved differs between var and let.

console.log(varTest);
var varTest = 1;

console.log(letTest);
let letTest = 2;

In both cases here, we’re trying to use a variable before it’s defined. But the console logs behave differently. The first one, using a variable later declared with var, will read undefined, which is an actual variable type. The second one, using a variable later defined with let, will throw a ReferenceError and tell us that we’re trying to use that variable before it’s defined/initialized. What’s going on?

Before executing, JavaScript will do a quick read of the code and see if any variables will be defined, and hoist them within their scope if they are. Hoisting reserves that space, even if the variable exists in the parent scope. Variables declared with var will be auto-initialized to undefined within their scope, even if you reference them before they’re declared. The big problem is that undefined doesn’t always mean you’re using a variable before it’s defined. Look at the following code:

var var1;
console.log(var1);

console.log(var2);
var var2 = 1;

In this case, both console logs read undefined, even though different things are happening. Variables that are declared with var but have no value will be assigned a value of undefined; but variables declared with var that are referenced within their scope before being declared will also return undefined. So if something goes wrong in our code, we have no indication which of these two things is happening.

Variables defined with let are reserved in their block, but until they’re defined, they go into the Temporal Dead Zone (TDZ)—they can’t be used and will throw an error, but JavaScript knows exactly why and will tell you.

let var1;
console.log(var1);

console.log(var2);
let var2 = 1;

In this case, the first console log reads undefined, but the second throws a ReferenceError, telling us the variable hasn’t been defined/initialized yet.

So, using var, if we see undefined, we don’t know if the variable has been defined and just doesn’t have a value, or if it hasn’t been defined yet in that scope but will be. Using let, we get an indication of which of these things is happening—much more useful for debugging.

Using const

The const statement is very similar to let, but with one major exception: it does not allow you to change the value once initialized. (Some more complex types, like Object and Array, can be modified, but can’t be replaced. Primitive types, like Number and String, cannot change at all.) Take a look at the following code:

let mutableVar = 1;
const immutableVar = 2;

mutableVar = 3;
immutableVar = 4;

That code will run fine until the last line, which throws a TypeError for assignment to a constant variable. Variables defined with const will throw this error almost any time you try to reassign one, although object mutation can cause some unexpected results.
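
To make that exception concrete, here’s a quick sketch of mutation versus reassignment (the lines that would throw are commented out so the snippet runs):

const list = [1, 2, 3];
list.push(4); // fine: this mutates the array rather than replacing it
// list = [5, 6]; // TypeError: Assignment to constant variable.

const config = { debug: false };
config.debug = true; // fine: this mutates a property
// config = {}; // TypeError: Assignment to constant variable.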

As a JavaScript developer, you might be wondering what the big deal is about immutable variables. Constant variables are new to JavaScript, but they’ve been a part of languages like C and Java for years. Why so popular? They make us think about how our code is working. There are some cases where changing a variable can be harmful to the code, like when doing calculations with pi or when you have to reference a certain HTML element over and over:

const myButton = document.querySelector('#my-button');

If our code depends on that reference to that specific HTML element, we should make sure it can’t be reassigned.

But the case for const goes beyond that. Remember our best practice of only giving variables the scope they need and no more. In that same line of thought, we should only give variables the mutability they need and no more. Zell Liew has written much more on the subject of immutable variables, but the bottom line is that making variables immutable makes us think more about our code and leads to cleaner code and fewer surprises.

When I was first starting to use let and const, my default option was let, and I would use const only if reassignment would cause harm to the code. But after learning more about programming practices, I changed my mind on this. Now, my default option is const, and I use let only if reassignment is necessary. That forces me to ask if reassignment for a variable is really necessary—most of the time, it’s not.

Is there a case for var?

Since let and const allow for more careful coding, is there a case for var anymore? Well, yes. There are a few cases where you’d want to use var over the new syntax. Give these careful consideration before switching over to the new declarations.

Variables for the masses

Variables declared with var do have one thing that the others don’t, and it’s a big one: universal browser support. 100% of browsers support var. Support is pretty great for both let and const, but you have to consider how differently browsers handle JavaScript they don’t understand vs. CSS they don’t understand.

If a browser doesn’t support a CSS feature, most of the time that’s just going to mean a display bug. Your site may not look the same as in a supporting browser, but it’s most likely still usable. If you use let and a browser doesn’t support it, that JavaScript will not work. At all. With JavaScript being such an integral part of the web today, that can be a major problem if you’re aiming to support old browsers in any way.

Most support conversations pose the question, “What browsers do we want to deliver an optimal experience for?” When you’re dealing with a site containing core functionality that relies on let and const, you’re essentially asking the question, “What browsers do we want to ban from using our site?” This should be a different conversation than deciding whether you can use display: flex. For most websites, there won’t be enough users of non-supporting browsers to worry about. But for major revenue-generating sites or sites where you’re paying for traffic, this can be a serious consideration. Make sure your team has accepted that risk before proceeding.

If you need to support really old browsers but want to use let and const (and other new, ES6 constructs), one solution is to use a JavaScript transpiler like Babel to take care of this for you. With Babel, you can write modern JavaScript with new features and then compile it into code that’s supported by older browsers.
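As a rough sketch of what that compilation step does (actual output varies with your Babel version and configuration), block-scoped declarations get rewritten into older syntax. Simple cases pass through almost unchanged; it’s patterns like let inside loops with closures that force the transpiler to emit renamed variables and helper code, which is where the extra bulk comes from:

// What you write
const limit = 10;
let count = 0;

// Roughly what a transpiler emits for older browsers
var limit = 10;
var count = 0;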

Sound too good to be true? Well, there are some caveats. The resulting code is much more verbose than you’d write on your own, so you end up with a much larger file than necessary. Also, once you commit to a transpiler, that codebase is going to be stuck with that solution for a while. Even if you’re writing valid ECMAScript 6 for Babel, dropping Babel later will mean testing your code all over again, and that’s a hard sell for any project team when you have a version that’s working perfectly already. When’s the next time you’re going to rework that codebase? And when is that IE8 support not going to matter anymore? It might still be the best solution for the project, but make sure you’re comparing those two timelines.

And for the next trick ...

There is one more thing var can do that the others can’t. This is a niche case, but let’s say you have a situation like this:

var myVar = 1;

function myFunction() {
  var myVar = 2; // Oops! We need to reference the original myVar!
}

So we defined myVar in the global scope, but later lost that reference because we defined it in a function, yet we need to reference the original variable. This might seem silly, because you can ordinarily just pass the first variable into the function or rename one of them, but there may be some situations where your level of control over the code prevents this. Well, var can do something about that. Check it out:

var myVar = 1;

function myFunction() {
  var myVar = 2;
  console.log(myVar); // 2
  console.log(window.myVar); // 1
}

When a variable is defined on the global scope using var, it automatically attaches itself to the global window object—something let and const don’t do. This feature helped me out once in a situation where a build script validated JavaScript before concatenating files together, so a reference to a global variable in another file (that would soon be concatenated into the same file upon compilation) threw an error and prevented compilation.

That said, relying on this feature often leads to sloppy coding. This problem is most often solved with greater clarity and smaller margin of error by attaching variables to your own object:

let myGlobalVars = {};
let myVar = 1;
myGlobalVars.myVar = myVar;

function myFunction() {
  let myVar = 2;
  console.log(myVar); // 2
  console.log(myGlobalVars.myVar); // 1
}

Yes, this requires an extra step, but it reduces confusion in working around something you’re not really supposed to be doing anyway. Nonetheless, there may be times when this feature of var is useful. Try to find a cleaner workaround before resorting to this one, though.

Which do I use?

So how do you choose? What’s the priority for using these? Here’s the bottom line.

First question: are you supporting IE10 or really old versions of other browsers in any way? If the answer is yes, and you don’t want to go with a transpiler solution, you need to choose var.

If you’re free to use the features that are new in ES6, start by making every variable a const. If a variable needs to be reassigned (and try to write your code so it doesn’t), switch it to let.
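Here’s what that habit looks like in a small sketch (the names are made up). Everything starts as const, and only the value that genuinely changes gets let:

const maxRetries = 3; // never reassigned, so const
let attempts = 0;     // incremented below, so let

while (attempts < maxRetries) {
  attempts += 1;
}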

Scoping for the future

ECMAScript 6 statements like let and const give us more options for controlling variable scope in our websites and apps. They make us think about what our code is doing, and support is great. Give it careful consideration, of course, but coding with these declarations will make your codebase more stable and prepare it for the future.

Webmentions: Enabling Better Communication on the Internet

Thu, 07/19/2018 - 07:00

Over 1 million Webmentions will have been sent across the internet since the specification was made a full Recommendation by the W3C—the standards body that guides the direction of the web—in early January 2017. That number is rising rapidly, and in the last few weeks I’ve seen a growing volume of chatter on social media and the blogosphere about these new “mentions” and the people implementing them.

So what are Webmentions and why should we care?

While the technical specification published by the W3C may seem incomprehensible to most, it’s actually a straightforward and extremely useful concept with a relatively simple implementation. Webmentions help to break down some of the artificial walls being built within the internet and so help create a more open and decentralized web. There is also an expanding list of major web platforms already supporting Webmentions either natively or with easy-to-use plugins (more on this later).

Put simply, Webmention is a (now) standardized protocol that enables one website address (URL) to notify another website address that the former contains a reference to the latter. It also allows the latter to verify the authenticity of the reference and include its own corresponding reference in a reciprocal way. In order to understand what a big step forward this is, a little history is needed.

The rise of @mentions

By now most people are familiar with the ubiquitous use of the “@” symbol in front of a username, which originated on Twitter and became known as @mentions and @replies (read “at mentions” and “at replies”). For the vast majority, this is the way that one user communicates with other users on the platform, and over the past decade these @mentions, with their corresponding notification to the receiver, have become a relatively standard way of communicating on the internet.

Tweet from Wiz Khalifa

Many other services also use this type of internal notification to indicate to other users that they have been referenced directly or tagged in a post or photograph. Facebook allows it, so does Instagram. Google+ has a variant that uses + instead of @, and even the long-form article platform Medium, whose founder Ev Williams also co-founded Twitter, quickly joined the @mentions party.

The biggest communications problem on the internet

If you use Twitter, your friend Alice only uses Facebook, your friend Bob only uses his blog on WordPress, and your pal Chuck is over on Medium, it’s impossible for any one of you to @mention another. You’re all on different and competing platforms, none of which interoperate to send these mentions or notifications of them. The only way to communicate in this way is if you all join the same social media platforms, resulting in the average person being signed up to multiple services just to stay in touch with all their friends and acquaintances.

Given the issues of privacy and identity protection, different use cases, the burden of additional usernames and passwords, and the time involved, many people don’t want to do this. Possibly worst of all, your personal identity on the internet can end up fragmented like a Horcrux across multiple websites over which you have little, if any, control.

Imagine if AT&T customers could only speak to other AT&T customers and needed a separate phone, account, and phone number to speak to friends and family on Verizon. And still another to talk to friends on Sprint or T-Mobile. The massive benefit of the telephone system is that if you have a telephone and service (from any one of hundreds or even thousands of providers worldwide), you can potentially reach anyone else using the network. Surely, with a basic architecture based on simple standards, links, and interconnections, the same should apply to the internet?

The solution? Enter Webmentions!

As mentioned earlier, Webmentions allow notifications between web addresses. If both sites are set up to send and receive them, the system works like this:

  1. Alice has a website where she writes an article about her rocket engine hobby.
  2. Bob has his own website where he writes a reply to Alice’s article. Within his reply, Bob includes the permalink URL of Alice’s article.
  3. When Bob publishes his reply, his publishing software automatically notifies Alice’s server that her post has been linked to by the URL of Bob’s reply.
  4. Alice’s publishing software verifies that Bob’s post actually contains a link to her post and then (optionally) includes information about Bob’s post on her site; for example, displaying it as a comment.

A Webmention is simply an @mention that works from one website to another!
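Under the hood, the notification in step 3 is just an HTTP POST. The specification has the sender discover the receiver’s Webmention endpoint (advertised via an HTTP Link header or a link element with rel="webmention") and then post two form-encoded URLs to it. Here’s a minimal JavaScript sketch, with made-up URLs standing in for Alice’s and Bob’s sites:

// Bob’s publishing software notifying Alice’s site (all URLs are hypothetical)
const source = 'https://bob.example/replies/rocket-engines'; // Bob’s reply
const target = 'https://alice.example/posts/rocket-engines'; // Alice’s article

fetch('https://alice.example/webmention', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: new URLSearchParams({ source, target }),
});

Alice’s software then fetches the source URL and verifies that it really links to the target before displaying anything, which is what makes the mechanism resistant to spoofed mentions.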

If she chooses, Alice can include the full text of Bob’s reply—along with his name, photo, and his article’s URL (presuming he’s made these available)—as a comment on her original post. Any new readers of Alice’s article can then see Bob’s reply underneath it. Each can carry on a full conversation from their own websites and in both cases display (if they wish) the full context and content.

Using Webmentions, both sides can carry on a conversation where each is able to own a copy of the content and provide richer context.

User behaviors with Webmentions are a little different than they are with @mentions on Twitter and the like in that they work between websites in addition to within a particular website. They enable authors (of both the original content and the responses) to own the content, allowing them to keep a record on the web page where it originated, whether that’s a website they own or the third-party platform from which they chose to send it.

Interaction examples with Webmention

Webmentions certainly aren’t limited to creating or displaying “traditional” comments or replies. With the use of simple semantic microformats classes and a variety of parsers written in numerous languages, one can explicitly post bookmarks, likes, favorites, RSVPs, check-ins, listens, follows, reads, reviews, issues, edits, and even purchases. The result? Richer connections and interactions with other content on the web and a genuine two-way conversation instead of a mass of unidirectional links. We’ll take a look at some examples, but you can find more on the IndieWeb wiki page for Webmention alongside some other useful resources.

Marginalia

With Webmention support, one could architect a site to allow inline marginalia and highlighting similar to Medium.com’s relatively well-known functionality. With the clever use of URL fragments, which are well supported in major browsers, there are already examples of people who use Webmentions to display word-, sentence-, or paragraph-level marginalia on their sites. After all, aren’t inline annotations just a more targeted version of comments?

An inline annotation on the post “Hey Ev, what about mentions?,” in which Medium began to roll out their @mention functionality.

Reads

As another example, and something that could profoundly impact the online news business, I might post a link on my website indicating I’ve read a particular article on, say, The New York Times. My site sends a “read” Webmention to the article, where a facepile or counter showing the number of read Webmentions received could be implemented. Because of the simplified two-way link between the two web pages, there is now auditable proof of interaction with the content. This could similarly work with microinteractions such as likes, favorites, bookmarks, and reposts, resulting in a clearer representation of the particular types of interaction a piece of content has received. Compared to an array of nebulous social media mini-badges that provide only basic counters, this is a potentially more valuable indicator of a post’s popularity, reach, and ultimate impact.

Listens

Building on the idea of using reads, one could extend Webmentions to the podcasting or online music sectors. Many platforms are reasonably good at providing download numbers for podcasts, but it is far more difficult to track the number of actual listens. This can have a profound effect on the advertising market that supports many podcasts. People can post about what they’re actively listening to (either on their personal websites or via podcast apps that could report the percentage of the episode listened to) and send “listen” Webmentions to pages for podcasts or other audio content. These could then be aggregated for demographics on the back end or even shown on the particular episode’s page as social proof of the podcast’s popularity.

For additional fun, podcasters or musicians might use Webmentions in conjunction with media fragments and audio or video content to add timecode-specific, inline comments to audio/video players to create an open standards version of SoundCloud-like annotations and commenting.

SoundCloud allows users to insert inline comments that dovetail with specific portions of audio.

Reviews

Websites selling products or services could also accept review-based Webmentions that include star-based ratings scales as well as written comments with photos, audio, or even video. Because Webmentions are a two-way protocol, the reverse link to the original provides an auditable path to the reviewer and the opportunity to assess how trustworthy their review may be. Of course, third-party trusted sites might also accept these reviews, so that the receiving sites can’t easily cherry-pick only positive reviews for display. And because the Webmention specification includes the functionality for editing or deletion, the original author has the option to update or remove their reviews at any time.

Getting started with Webmentions

Extant platforms with support

While the specification has only recently become a broad recommendation for use on the internet, there are already an actively growing number of content management systems (CMSs) and platforms that support Webmentions, either natively or with plugins. The simplest option, requiring almost no work, is a relatively new and excellent social media service called Micro.blog, which handles Webmentions out of the box. CMSs like Known and Perch also have Webmention functionality built in. Download and set up the open source software and you’re ready to go.

If you’re working with WordPress, there’s a simple Webmention plugin that will allow you to begin using Webmentions—just download and activate it. (For additional functionality when displaying Webmentions, there’s also the recommended Semantic Linkbacks plugin.) Other CMSs like Drupal, ProcessWire, Elgg, Nucleus CMS, Craft, Django, and Kirby also have plugins that support the standard. A wide variety of static site generators, like Hugo and Jekyll, have solutions for Webmention technology as well. More are certainly coming.

If you can compose basic HTML on your website, Aaron Parecki has written an excellent primer on “Sending Your First Webmention from Scratch.”

A weak form of Webmention support can be bootstrapped for Tumblr, WordPress.com, Blogger, and Medium with help from the free Bridgy service, but the user interface and display would obviously be better if they were supported fully and natively.

As a last resort, if you’re using Tumblr, WordPress.com, Wix, Squarespace, Ghost, Joomla, Magento, or any of the other systems without Webmention, file tickets asking them to support the standard. It only takes a few days of work for a reasonably experienced developer to build support, and it substantially improves the value of the platform for its users. It also makes them first-class decentralized internet citizens.

Webmentions for developers

If you’re a developer or a company able to hire a developer, it is relatively straightforward to build Webmentions into your CMS or project, even potentially open-sourcing the solution as a plugin for others. For anyone familiar with the old specifications for pingback or trackback, you can think of Webmentions as a major iteration of those systems, but with easier implementation and testing, improved performance and display capabilities, and decreased spam vulnerabilities. Because the specification supports editing and deleting Webmentions, it provides individuals with more direct control of their data, which is important in light of new laws like GDPR.

In addition to reading the specification, as mentioned previously, there are multiple open source implementations already written in a variety of languages that you can use directly or as examples. There is also a test suite, as well as pre-built services like Webmention.io, Telegraph, mention-tech, and webmention.herokuapp.com, that can be quickly leveraged.

Maybe your company allows employees to spend 20% of their time on non-specific projects, as Google does. If so, I’d encourage you to take the opportunity to build Webmention support for one or more platforms—let’s spread the love and democratize communication on the web as fast as we can!

And if you already have a major social platform but don’t want to completely open up to sending and receiving Webmentions, consider using Webmention functionality as a simple post API. I could easily see services like Twitter, Mastodon, or Google+ supporting the receiving of Webmentions, combined with a simple parsing mechanism to allow Webmention senders to publish syndicated content on their platform. There are already several services like IndieNews, with Hacker News-like functionality, that allow posting to them via Webmention.

If you have problems or questions, I’d recommend joining the IndieWeb chat room online via IRC, web interface, Slack, or Matrix to gain access to further hints, pointers, and resources for implementing a particular Webmention solution.

The expansion of Webmentions

The big question many will now have is: will the traditional social media walled gardens like Facebook, Twitter, and Instagram support the Webmention specification?

At present, they don’t, and many may never do so. After all, locking you into their services is enabling them to leverage your content and your interactions to generate income. However, I suspect that if one of the major social platforms enabled sending/receiving Webmentions, it would dramatically disrupt the entire social space.

In the meantime, if your site already has Webmentions enabled, then congratulations on joining the next revolution in web communication! Just make sure you advertise the fact by using a button or badge. You can download a copy here.

Order Out of Chaos: Patterns of Organization for Writing on the Job

Thu, 07/05/2018 - 07:18

A few years ago, a former boss of mine emailed me out of the blue and asked for a resource that would help him and his colleagues organize information more effectively. Like a dutiful friend, I sent him links to a few articles and the names of some professional writing books. And I qualified my answer with that dreaded disclaimer: “Advice varies widely depending on the situation.” Implication: “You’ll just have to figure out what works best for you. So, good luck!”

In retrospect, I could have given him a better answer. Much like the gestalt principles of design that underpin so much of what designers do, there are foundational principles and patterns of organization that are relevant to any professional who must convey technical information in writing, and you can adapt these concepts to bring order out of chaos whether or not you’re a full-time writer.

Recognize the primary goals: comprehension and performance

Not long after I wrote my response, I revisited a book I’d read in college: Technical Editing, by Carolyn D. Rude. In my role as a technical writer, I reference the book every now and then for practical advice on revising software documentation. This time, as I reviewed the chapter on organization, I realized that Rude explained the high-level goals and principles better than any other author I’d read up to that point.

In short, she says that whether you are outlining a procedure, describing a product, or announcing a cool new feature, a huge amount of writing in the workplace is aimed at comprehension (here’s what X is and why you should care) and performance (here’s how to do X). She then suggests that editors choose from two broad kinds of order to support these goals: content-based order and task-based order. The first refers to structures that guide readers from major sections to more detailed sections to facilitate top-down learning; the second refers to structures of actions that readers need to carry out. Content-based orders typically start with nouns, whereas task-based orders typically begin with verbs.

Content-Based Order Example

Product Overview

  • Introduction
  • Features
    • Feature 1
    • Feature 2
    • Feature n
  • Contact
  • Support

Task-Based Order Example

User Guide (WordPress)

  • Update your title and tagline
  • Pick a theme you love
  • Add a header or background
  • Add a site icon
  • Add a widget

Of course, not all writing situations fall neatly into these buckets. If you were to visit Atlassian’s online help content, you would see a hybrid of content-based topics at the first level and task-based topics within them. The point is that as you begin to think about your organization, you should ask yourself:

  • Which of the major goals of organization (comprehension or performance) am I trying to achieve?
  • And which broad kind of order will help me best achieve those goals?

This is still pretty abstract, so let’s consider the other principles from Carolyn Rude, but with a focus on how a writer rather than an editor should approach the task of organization.1

Steal like an organizer: follow pre-established document structures

In his book Steal Like an Artist, Austin Kleon argues that smart artists don’t actually create anything new but rather collect inspiring ideas from specific role models, and produce work that is profoundly shaped by them.

“If we’re free from the burden of trying to be completely original,” he writes, “we can stop trying to make something out of nothing, and we can embrace influence instead of running away from it.”

The same principle applies to the art of organization. To “steal like an organizer” means to look at what other people have written and to identify and follow pre-established structures that may apply to your situation. Doing so not only saves time and effort but also forces you to remember that your audience may already expect a particular pattern—and experience cognitive dissonance if they don’t get it.

You are probably familiar with more pre-established structures than you think. News reports follow the inverted pyramid. Research reports often adhere to some form of the IMRAD structure (Introduction, Methodology, Results, and Discussion). Instruction manuals typically have an introductory section followed by tasks grouped according to the typical sequence a user would need to follow. Even troubleshooting articles tend to have a standard structure of Problem, Cause, and Solution.

All this may sound like common sense, and yet many writers entirely skip this process of adapting pre-made structures. I can understand the impulse. When you face a blank screen, it feels simpler to capture the raw notes and organize it all later. That approach can certainly help you get into the flow, but it may also result in an ad hoc structure that fails to serve readers who are less familiar with your material.

Instead, when you begin the writing process, start by researching available templates or pre-made structures that could support your situation. Standard word processors and content management systems already contain some good templates, and it’s easy to search for others online. Your fellow writers and designers are also good resources. If you’re contributing to a series of documents at your organization, you should get familiar with the structure of that series and learn how to work within it. Or you can do some benchmarking and steal some ideas from how other companies structure similar content.

My team once had to do our own stealing for a major project that affected about half our company. We needed to come up with a repeatable structure for standard operating procedures (SOPs) that any employee could use to document a set of tasks. Knowing SOPs to be a well-established genre, we found several recommended structures online and in books, and came up with a list of common elements. We then decided which ones to steal and arranged them into a sequence that best suited our audience. We made out like bandits.

Structural SOP elements we found, with our assessment of each:

  • Overview: Steal
  • Roles Involved: Steal
  • Dependencies: Steal
  • Estimated Level of Effort: Nah, too hard to calculate and maintain.
  • Process Diagram: Meh, kind of redundant, not to mention a lot of work. No thanks.
  • Tasks: Steal
  • Task n: Steal
  • Task n Introduction: Steal
  • Task n Responsibility: Steal
  • Task n Steps: Steal
  • See Also: Steal

But what if there is no pre-established pattern? Or what if a pattern exists, but it’s either too simple or too complex for what you’re trying to accomplish? Or what if it’s not as user-friendly as you would like?

There may indeed be cases where you need to develop a mostly customized structure, which can be daunting. But fear not! That’s where the other principles of organization come in.

Anticipate your readers’ questions (and maybe even talk to them)

Recently I had an extremely frustrating user experience. While consulting some documentation to learn about a new process, I encountered a series of web pages that gave no introduction and dove straight into undefined jargon and acronyms that I had never heard of. When I visited related pages to get more context, I found the same problem. There was no background information for a newbie like me. The writers failed in this case to anticipate my questions and instead assumed a great deal of prior knowledge.

Don’t make this mistake when you design your structure. Like a journalist, you need to answer the who, what, where, when, how, and why of your content, and then incorporate the answers in your structure. Anticipate common questions, such as “What is this? Where do I start? What must I know? What must I do?” This sort of critical reflection is all the more important when organizing web content, because users will almost certainly enter and exit your pages in nonlinear, unpredictable ways.

If possible, you should also meet with your readers, and gather information about what would best serve them. One simple technique you could try is to create a knowledge map, an annotated matrix of sorts that my team once built after asking various teams about their information priorities. On the left axis, we listed categories of information that we thought each team needed. Along the top axis, we listed a column for each team. We then gave team representatives a chance to rank each category and add custom categories we hadn’t included. (You can learn more about the process we followed in this video presentation.)

A knowledge map my team created after asking other teams which categories of information were most important to them.

The weakness of this approach is that it doesn’t reveal information that your audience doesn’t know how to articulate. To fill in this gap, I recommend running a few informal usability tests. But if you don’t have the time for that, building a knowledge map is better than not meeting with your readers at all, because it will help you discover structural ideas you hadn’t considered. Our knowledge map revealed multiple categories that were required across almost all teams—which, in turn, suggested a particular hierarchy and sequence to weave into our design.

Go from general to specific, familiar to new

People tend to learn and digest information best by going from general to specific, and familiar to new. By remembering this principle, which is articulated in the schema theory of learning, you can better conceptualize the structure you’re building. What are the foundational concepts of your content? They should appear in your introductory sections. What are the umbrella categories under which more detailed categories fall? The answer should determine which headings belong at the top and subordinate levels of your hierarchy. What you want to avoid is presenting new ideas that don’t flow logically from the foundational concepts and expectations that your readers bring to the table.

Consider the wikiHow article “How to Create a Dungeons and Dragons Character.” It begins by defining what Dungeons and Dragons is and explaining why you need to create a character before you can start playing the game.

Writers at wikiHow help readers learn by starting with general concepts before moving on to specifics.

The next section, “Part 1: Establishing the Basics,” guides the reader into subsequent foundational steps, such as deciding which version of the game to follow and printing out a character sheet. Later sections (“Selecting a gender and race,” “Choosing a class,” and “Calculating ability scores”) expand on these concepts to introduce more specific, unfamiliar ideas in an incremental fashion, leading readers up a gentle ramp into new territory.

Use conventional patterns to match structure to meaning

Within the general-to-specific/familiar-to-new framework, you can apply additional patterns of organization that virtually all humans understand. Whereas the pre-established document structures above are usually constructed for particular use cases or genres, other conventional patterns match more general mental models (or “schemas,” as the schema theory so elegantly puts it) that we use to make sense of the world. These patterns include chronological, spatial, comparison-contrast, cause-effect, and order of importance.

Chronological

The chronological pattern reveals time or sequence. It’s appropriate for things like instructions, process flows, progress reports, and checklists. In the case of instructions, the order of tasks on a page often implies (or explicitly states) the “proper” or most common sequence for a user to follow. The wikiHow article above, for example, offers a recommended sequence of tasks for beginner players. In the case of progress reports, the sections may be ordered according to the periods of time in which work was done, as in this sample outline from the book Reporting Technical Information, by Kenneth W. Houp et al.:

Beginning

  • Introduction
  • Summary of work completed

Middle

  • Work completed
    • Period 1 (beginning and end dates)
      • Description
      • Cost
    • Period 2 (beginning and end dates)
      • Description
      • Cost
  • Work remaining
    • Period 3 (or remaining periods)
      • Description of work to be done
      • Expected cost

End

  • Evaluation of work in this period
  • Conclusions and recommendations

The principles of organization listed in this article are in fact another example of the chronological pattern. As Carolyn Rude points out in her book, the principles are arranged as a sort of methodology to follow. Try starting at the top of the list and work your way down. You may find it to be a useful way to produce order out of the chaos before you.

Spatial

The spatial pattern refers to top-to-bottom, left-to-right structures of organization. This is a good pattern if you need to describe the components of an interface or a physical object.

Take a look at the neighbor comparison graph below, which is derived from a sample energy efficiency solution offered by Oracle Utilities. Customers who see this graph would most likely view it from top to bottom and left to right.

A neighbor comparison graph that shows a customer how they compare with their neighbors in terms of energy efficiency.

A detailed description of this feature would then describe each component in that same order. Here’s a sample outline:

  • Feature name
    • Title
    • Bar chart
      • Efficient neighbors
      • You
      • Average neighbors
    • Date range
    • Performance insight
      • Great
      • Good
      • Using more than average
    • Energy use insight
    • Comparison details (“You’re compared with 10 homes within 6 miles …”)

Comparison-contrast

The comparison-contrast pattern helps users weigh options. It’s useful when reporting the pros and cons of different decisions or comparing the attributes of two or more products or features. You see it often when you shop online and need to compare features and prices. It’s also a common pattern for feasibility studies or investigations that list options along with upsides and downsides.

Cause-effect

The cause-effect pattern shows relationships between actions and reactions. Writers often use it for things like troubleshooting articles, medical diagnoses, retrospectives, and root cause analyses. You can move from effect to cause, or cause to effect, but you should stick to one direction and use it consistently. For example, the cold and flu pages at Drugs.com follow a standard cause-effect pattern that incorporates logical follow-up sections such as “Prevention” and “Treatment”:

  • What Is It? (This section defines the illness and describes possible “causes.”)
  • Symptoms (This section goes into the “effects” of the illness.)
  • Diagnosis
  • Expected Duration
  • Prevention
  • Treatment
  • When to Call a Professional
  • Prognosis

For another example, see the “Use parallel structure for parallel sections” section below, which shows what a software troubleshooting article might look like.

Order of importance

The order of importance pattern organizes sections and subsections of content according to priority or significance. It is common in announcements, marketing brochures, release notes, advice articles, and FAQs.

The order of importance pattern is perhaps the trickiest one to get right. As Carolyn Rude says, it’s not always clear what the most important information is. What should come in the beginning, middle, and end? Who decides? The answers will vary according to the author, audience, and purpose.

When writing release notes, for example, my team often debates which software update should come first, because we know that the decision will underscore the significance of that update relative to the others. FAQs by definition are focused on which questions are most common and thus most important, but the exact order will depend on what you perceive as being the most frequent or the most important for readers to know. (If you are considering writing FAQs, I recommend this great advice from technical writer Lisa Wright.)

Other common patterns

Alphabetical order is a common pattern that Rude doesn’t mention in detail but that you may find helpful for your situation. To use this pattern, you would simply list sections or headings based on the first letter of the first word of the heading. For example, alphabetical order is used frequently to list API methods in API documentation sites such as those for Flickr, Twitter, and Java. It is also common in glossaries, indexes, and encyclopedic reference materials where each entry is more or less given equal footing. The downside of this pattern is that the most important information for your audience may not appear in a prominent, findable location. Still, it is useful if you have a large and diverse set of content that defies simple hierarchies and is referenced in a non-linear, piecemeal fashion.

Group related material

Take a look at the lists below. Which do you find easier to scan and digest?

  1. Settle on a version of D&D.
  2. Print a character sheet, if desired.
  3. Select a gender and race.
  4. Choose a class.
  5. Name your character.
  6. Identify the main attributes of your character.
  7. Roll for ability scores.
  8. Assign the six recorded numbers to the six main attributes.
  9. Use the “Point Buy” system, alternatively.
  10. Generate random ability scores online.
  11. Record the modifier for each ability.
  12. Select skills for your character.
  13. List your character’s feats.
  14. Roll for your starting gold.
  15. Equip your character with items.
  16. Fill in armor class and combat bonuses.
  17. Paint a picture of your character.
  18. Determine the alignment of your character.
  19. Play your character in a campaign.

Part 1: Establishing the Basics

  1. Settle on a version of D&D.
  2. Print a character sheet, if desired.
  3. Select a gender and race.
  4. Choose a class.
  5. Name your character.

Part 2: Calculating Ability Scores

  1. Identify the main attributes of your character.
  2. Roll for ability scores.
  3. Assign the six recorded numbers to the six main attributes.
  4. Use the “Point Buy” system, alternatively.
  5. Generate random ability scores online.
  6. Record the modifier for each ability.

Part 3: Equipping Skills, Feats, Weapons, and Armor

  1. Select skills for your character.
  2. List your character’s feats.
  3. Roll for your starting gold.
  4. Equip your character with items.
  5. Fill in armor class and combat bonuses.

Part 4: Finishing Your Character

  1. Paint a picture of your character.
  2. Determine the alignment of your character.
  3. Play your character in a campaign.

(Source: wikiHow: How to Create a Dungeons and Dragons Character.)

If you chose the second list, that is probably because the writers relied on a widely used organizational technique: grouping.

Grouping is the process of identifying meaningful categories of information and putting information within those categories to aid reader comprehension. Grouping is especially helpful when you have a long, seemingly random list of information that could benefit from an extra layer of logical order. An added benefit of grouping is that it may reveal where you have gaps in your content or where you have mingled types of content that don’t really belong together.

To group information effectively, first analyze your content and identify the discrete chunks of information you need to convey. Then tease out which chunks fall within similar conceptual buckets, and determine what intuitive headings or labels you can assign to those buckets. Writers do this when creating major and minor sections within a book or printed document. For online content, grouping is typically done at the level of articles or topics within a web-based system, such as a wiki or knowledge base. The Gmail Help Center, for example, groups topics within categories like “Popular articles,” “Read & organize emails,” and “Send emails.”

It’s possible to go overboard here. Too many headings in a short document or too many topics in a small help system can add unnecessary complexity. I once faced the latter scenario when I reviewed a help system written by one of my colleagues. At least five of the topics were so short that it made more sense to merge them together on a single page rather than forcing the end user to click through to separate pages. I’ve also encountered plenty of documents that contain major section headings with only one or two sentences under them. Sometimes this is fine; you may need to keep those sections for the sake of consistency. But it’s worth assessing whether such sections can simply be merged together (or conversely, whether they should be expanded to include more details).

Because of scenarios like these, Carolyn Rude recommends keeping the number of groupings to around seven, give or take a few—though, as always, striking the right balance ultimately depends on your audience and purpose, as well as the amount of information you have to manage.

Use parallel structure for parallel sections

One of the reasons Julius Caesar’s phrase “I came, I saw, I conquered” still sticks in our memory after thousands of years is the simple fact of parallelism. Each part of the saying follows a distinct, repetitive grammatical form that is easy to recall.

Parallelism works in a similar manner with organization. By using a consistent and repetitive structure across types of information that fit in the same category, you make it easier for your readers to navigate and digest your content.

Imagine you’re writing a troubleshooting guide in which all the topics follow the same basic breakdown: Problem Title, Problem, Cause, Solution, and See Also. In this case, you should make sure that each topic includes those same headings, in the exact same hierarchy and sequence, and using the exact same style and formatting. This kind of parallelism delivers a symmetry that reduces the reader’s cognitive load and clarifies the relationships of each part of your content. Deviations from the pattern not only cause confusion but can undermine the credibility of the content.

Do This

ABC Troubleshooting Guide

  • Introduction
  • Problem 1 Title
    • Problem
    • Cause
    • Solution
    • See Also
  • Problem 2 Title
    • Problem
    • Cause
    • Solution
    • See Also
  • Problem 3 Title
    • ...
Don’t Do This

ABC Troubleshooting Guide

  • Introduction
  • Problem 1 Title
    • Problem
    • Root causes
    • How to Fix it
    • Advanced Tips and tricks
    • Related
  • Problem 2 title
    • Issue
    • Steps to Fix
    • Why did this happen, and how can I avoid it next time?
    • See also
  • Problem 3 title
    • ...

This last principle is probably the easiest to grasp but may be the most difficult to enforce, especially if you are managing contributions from multiple authors. Templates and style guides are useful here because they invite authors to provide standard inputs, but you will still need to watch the content like a hawk to squash the inconsistencies that inevitably emerge.

Conclusion

In one sense, my response to my former boss was accurate. Given the endless variety of writing situations, there is no such thing as a single organization solution. But saying that “advice varies widely depending on the situation” doesn’t tell the whole story. There are flexible patterns and principles that can guide you in finding, customizing, and creating structures for your goals.

The key thing to remember is that structure affects meaning. The sequence of information, the categories you use, the emphasis you imply through your hierarchy—all of these decisions impact how well your audience understands what you write. Your ideal structure should therefore reinforce what you mean to say.

Footnotes

  • 1. The principles in this article are based on the same ones that Carolyn Rude outlines in chapter 17, pp. 289–296, of the third edition of her book. I highly recommend it for anyone who’s interested in gaining an in-depth understanding of editing. The book is now in its fifth edition and includes an additional author, Angela Eaton. See Technical Editing (Fifth Edition) for details. The examples and illustrations used in this article are derived from a variety of other sources, including my own work.

Your Emails (and Recipients) Deserve Better Context

Thu, 06/28/2018 - 07:05

Email communication is an integral part of the user experience for nearly every web application that requires a login. It’s also one of the first interactions the user has after signing up. Yet too often both the content and context of these emails is treated as an afterthought (at best), with the critical parts that users see first—sender name and email, subject, and preheader—largely overlooked. Your users, and the great application you’ve just launched, deserve better.

A focus on recipient experience

Designing and implementing a great email recipient experience is difficult. And by the time it comes to the all-important context elements (name, subject, and so on), it’s commonly left up to the developer to simply fill something in and move on. That’s a shame, because these elements play an outsized role in the email experience, being not only the first elements seen but also the bits recipients use to identify emails when searching through their archives. Given the frequency with which they touch users, it really is time we started spending a little more effort to fine-tune them.

The great news is that despite the constraints imposed on these elements, they’re relatively easy to improve, and they can have a huge impact on engagement, open rates, and recipient satisfaction. When they all work together, sender name and email, subject, and preheader provide a better experience for your recipients.

So whether you’re a developer stuck fixing such oversights and winging it, or on the design or marketing team responsible for making the decisions, use the following guide to improve your recipient’s experience. And, if possible, bring it up with your whole team so it’s always a specific requirement in the future.

Details that matter

As they say, the devil is in the details, and these details matter. Let’s start with a quick example that highlights a few common mistakes.

In the image below, the sender is unnecessarily repeated within the subject, wasting key initial subject characters, while the subjects themselves are all exactly the same. This makes it difficult to tell one email from the next, and the preview content doesn’t help much either since the only unique information it provides is the date (which is redundant alongside the email’s time stamp). The subject copy could be more concise as well—“Payment Successfully Processed” is helpful, but it’s a bit verbose.

Avoid redundancy and make your sender name, subject, and preheaders work together. Periscope repeats the sender name, and doesn’t provide unique or relevant information in the subject or preheader.

Outside of the sender and the dates on the emails, there’s not much useful information until you open the email itself. Fortunately, none of these things are particularly difficult to fix. Weather Underground provides a great example of carefully crafted emails. The subject conveys the most useful information without even requiring the recipient to open the email. In addition, their strategic use of emojis helps complement that information with a very rich, yet judicious, use of subject-line space.

Weather Underground does a great job with the sender and even front-loads the subject with the most valuable bit of information. The date is included, but it’s at the end of the subject.

Weather Underground also makes use of Gmail Inbox Actions to provide a direct link to the key information online without needing a recipient to open the email to follow a link. Gmail Inbox Actions require some extra work to set up and only work in Gmail, but they can be great if you’re sending high volumes of email.

Both scenarios involve recurring emails with similar content from one to the next, but the difference is stark. With just a little effort and fine-tuning, the resulting emails are much more useful to the recipients. Let’s explore how this is done.

Emphasizing unique content for recurring emails

With the earlier examples, both organizations are sending recurring emails, but by focusing on unique subject lines, Weather Underground’s emails are much more helpful. Recurring emails like invoices may not contain the most glamorous content, but you still have an opportunity to make each one unique and informative.

Instead of a generic “You have a new invoice” notification, you can surface important or unique information like the invoice total, the most expensive products or services, or the due date.

By surfacing the most important or unique information from the content of the email, there’s additional context to help the recipient know whether they need to act or not. It also makes it easier to find a specific invoice when searching through emails in the future.

Clarifying the sender

Who (or what) is sending this email? Is it a person? Is it automated? Do I want to hear from them? Do I trust them? Is this spam? These questions and more automatically run through our heads whenever we see an email, and the sender information provides the first clue when we start processing our inbox. Just as for caller ID on incoming phone calls, recognition and trust both play a role. As Joanna Wiebe said in an interview with Litmus, “If the from name doesn’t sound like it’s from someone you want to hear from, it doesn’t matter what the subject line is.” This can be even more critical on mobile devices where the sender name is the most prominent element.

The first and most important step is to explicitly specify a name. You don’t want the recipient’s email client choosing what to display based on the email address alone. For instance, if you send emails from “alerts@example.com” (with no name specified), some clients will display “alerts” as the name, and others will display “alerts@example.com.” With the latter, it just feels rough around the edges. In either case, the experience is less than ideal for the sender.

Without a name specified, email clients may use the username portion of an email address or truncate longer email addresses, making the name portion incomplete or less helpful to recipients.

The technical implementation may vary depending on your stack, but at the simplest level, correct implementation is all in the string formatting. Let’s look at “Jane Doe <email@example.com>” as an example. “Jane Doe” is the name, and the email is included after the name and surrounded by angle brackets. It’s a small technical detail, but it makes a world of difference to recipients.
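Here’s that string formatting as a quick JavaScript sketch (the name and address are placeholders; most mail libraries accept the from value in exactly this shape):

const senderName = 'Acme Billing';
const senderEmail = 'billing@example.com';
const from = `${senderName} <${senderEmail}>`; // "Acme Billing <billing@example.com>"

One caveat: if the display name contains special characters such as commas, it’s safest to wrap it in double quotes.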

But what name should we show? This depends on the type of email, so you’ll want to consider the sender for each email independently. For example, with a receipt or invoice you may want to use “Acme Billing.” But with a comment notification, it may be more informative for recipients if you use the commenter’s name, such as “Jane Doe via AcmeApp.” Depending on the context, you could use “with” or “from” as well, but those have an extra character, so I’ve found “via” to be the shortest and most semantically accurate option.

Similarly, if your business entity or organization name is different from your product name, you should use the name that will be most familiar to your recipients.

Recipients aren’t always familiar with the names of corporate holding companies, so make sure to use the company or product name that will be most familiar to the recipient. In the above cases, while “Jane Doe” may have made the comment, the email isn’t directly from her, so it’s best to add something like “via Acme Todos” to make it clear that it was sent on Jane’s behalf. In the case of “Support,” the name doesn’t clarify which product it refers to. Since users could have a variety of emails from “Support” for different products, it fails to provide important context.

Avoiding contact confusion

In the case where you use someone’s name—like with the “Jane Doe via AcmeApp” example above—it’s important to add a reference to the app name. Since the email isn’t actually from Jane, it’s inaccurate to represent that it’s from Jane Doe directly. This can be confusing for users, but it can also create problems with address books. If you use just “Jane Doe,” your sending email address can be accidentally added to the recipient’s address book in association with Jane’s entry. Then, when they go to email Jane later, they may unwittingly send an email to “notifications@acme.com” instead of Jane. That could lead to some painful missed emails and miscommunication. The other reason is that it’s simply helpful for the recipient to know the source of the email. It’s not just from Jane, it’s from Jane via your application.

You’ll also want to put yourself in your recipient’s shoes and carefully consider whether a name is recognizable to your recipient. For example, if your corporate entity name and product name aren’t the same, recipients will be much less likely to recognize the sender if you use the name of your corporate entity. So make sure to use the product name that will be most familiar to the recipient. Similarly, you’ll want to avoid using generic names that could be from any company. For example, use “Acme Billing” instead of just “Billing,” so the recipient can quickly and easily identify your product.

Finally, while names are great, the underlying sending address can be just as important. In many ways, it’s the best attribute for recipients to use when filtering and organizing their inbox, and using unique email addresses or aliases for different categories of emails makes this much easier. There’s a fine line, but the simplest way to do this is to group emails into three categories: billing, support, and activity/actions. You may be able to use more, like notifications, alerts, or legal, but remember that the more you create, the more you’ll have to keep track of.
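As a sketch of that grouping (the addresses and names here are hypothetical), a simple lookup keeps each category of email tied to one consistent sender identity:

// One dedicated sender identity per category of email
const senders = {
  billing: 'Acme Billing <billing@example.com>',
  support: 'Acme Support <support@example.com>',
  activity: 'Acme Notifications <activity@example.com>',
};

const from = senders.billing; // used when sending an invoice or receipt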

Also, keep the use of subdomains to a minimum. By consistently only sending transactional email like password resets, receipts, order updates, and other similar emails from your primary domain, users learn to view any emails from other domains as suspicious. It may seem like a minor detail, but these bits of information add up to create important signals for recipients. It is worth noting, however, that you should use a different address, and ideally a separate subdomain, for your bulk marketing emails. This helps Gmail and other inbox providers understand the type of email coming from each source, which in turn helps ensure the domain reputation for your bulk marketing emails—which is traditionally lower—doesn’t affect delivery of your more critical transactional email.

Subject line utility

Now that recipients have clearly identifiable and recognizable sender information, it’s time to think about the subjects of your emails. Since we’ve focused on transactional emails in the examples used so far, we’ll similarly focus on the utility of your subject line content rather than the copywriting. You can always use copywriting to improve the subject, but with transactional emails, utility comes first.

The team at MailChimp has studied data about subject lines extensively, and there are a few key things to know about subjects. First, the presence of even a single word can have a meaningful impact on open rates. A 2015 report by Adestra had similar findings. Words and phrases like “thank you,” “monthly,” and “thanks” see higher engagement than words like “subscription,” “industry,” and “report,” though different words will have different impacts depending on your industry, so you’ll still need to test and monitor the results. Personalization can also have an impact, but remember, personalization isn’t just about using a person’s name. It can be information like location, previous purchases, or other personal data. Just remember that it’s important to be tasteful, judicious, and relevant.

The next major point from MailChimp is that subject line length doesn’t matter. Or, rather, it doesn’t matter directly. After studying 6 billion emails, they found “little or no correlation between performance and subject length.” That said, when line length is considered as one aspect of your overall subject content, it can be used to help an email stand out. Clarity and utility are more important than brevity, but when used as a component to support clarity and utility, brevity can help.

One final point from the Adestra report is that open rates aren’t everything. Regardless of whether someone opens an email, the words and content of your subject line leave an impression. So even if a certain change doesn’t affect your open rates, it can still have a far-reaching impact.

Clearing out redundancy

The most common mistake with subjects is including redundant information. If you’ve carefully chosen the sender name and email address, there’s no need to repeat the sender name in the subject, and the characters could be better applied to telling the recipient additional useful information. Dates are a bit of a gray area, but in many cases, the email’s time stamp can suffice for handling any time-based information. On the other hand, when the key dates don’t correlate to when the email was sent, it can be helpful to include the relevant date information in the subject.

In examples like these, after the sender, there’s no new or useful information displayed, and some form of the company name is repeated several times. Even the preheader is neglected, leaving the email client to use alternate text from the logo.

With the subject of your application emails, you’ll also want to front-load the most important content to prevent it from being cut off. For instance, instead of “Your Invoice for May 2018,” you could rewrite that as “May 2018 Invoice.” Since your sender is likely “Acme Billing,” the recipient already knows it’s about billing, so the month and year are the most important part of the subject. However, “May 2018 Invoice” is a bit terse, so you may want to add something at the end to make it more friendly.
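
For example (the exact wording here is only illustrative):

    Before: Your Invoice for May 2018
    After:  May 2018 Invoice (thanks for being a customer!)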

Next, in situations where time stamps are relevant, avoid relying on relative dates or times. Phrases like “yesterday,” “last week,” or “two hours ago” don’t age well with email since you never know when someone will receive or read it. Similarly, when someone goes to search their email archives, relative dates aren’t helpful. If you must use relative dates, look for opportunities to add explicit dates or time stamps to add clarity.

With regularly occurring emails like reports or invoices, strive to give each message a unique subject. If every report has the subject “Your Monthly Status Report,” they can run together in a list of emails that all have the same subject. It can also make them more difficult to search later on. The same goes for invoices and receipts. Yes, invoice numbers and order numbers are technically unique, but they aren’t particularly helpful. Make sure to include useful content to help identify each email individually. Whether that’s the date, the total value, a list of the most expensive items, or all three, it’s easier on recipients when they can identify the contents of an email without having to open it. While open rates are central to measuring marketing emails, transactional emails are all about usefulness, so open rates don’t correlate as directly with their success.

There’s a case to be made that in some contexts a great transactional email doesn’t need to be opened at all for it to be useful. The earlier Weather Underground example does an excellent job communicating the key information without requiring recipients to open it. And while the subject is the best place for key content, some useful content can also be displayed using a preheader.

Making the most of preheaders

If you’re not familiar with the preheader, you can think of it as a convenient name for the content at the beginning of an email. It’s simply a way of acknowledging and explicitly suggesting the text that email clients should show in the preview pane for an email. While there’s no formal specification for preheaders, and different email clients will handle them differently, they’re still widely displayed. Campaign Monitor has a great write-up with in-depth advice on making the most of your preheaders.

Most importantly, well-written and useful preheaders of 40–50 characters have been shown to increase overall engagement, particularly if they deliver a concise call to action. A study by Yes Lifecycle Marketing (sign-up required) points out that preheader content is especially important on mobile devices, where subjects are truncated and the preheader can act as a sort of extended subject.

If the leading content in your email is a logo or other image, email clients will often use the alternate text for the image as the preview text. Since “Acme Logo” isn’t very helpful, it’s best to include a short summary of text at the beginning of your email. Sometimes this short summary text can interfere with the design of your email, so it’s not uncommon for the design to accommodate some visually muted—but still readable—text at the beginning. Or, as long as you’re judicious, in most cases you can safely hide preheader text entirely by using the display: none CSS declaration. Abusing this could get you caught in spam filters, but for the most part, inbox providers seem to focus on the content that is hidden rather than the fact that it’s hidden.
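
Here’s a minimal sketch of that approach; the copy is illustrative, and the inline display: none style is the technique described above:

    <body>
      <!-- Preheader: suggested preview text for inbox clients; hidden from the rendered email -->
      <div style="display: none;">
        Your May 2018 invoice is ready. Total due: $29.00.
      </div>
      <!-- Without the hidden text above, clients may fall back to this alt text -->
      <img src="logo.png" alt="Acme Logo">
      ...
    </body>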

If you’re not explicitly specifying your preheader text, there’s a good chance email clients will use content that at best is less than useful and at worst makes a bad impression.

If your email can be designed and written such that the first content encountered is the useful content for previews, then you’re all set. In the case of receipts, invoices, or activity summaries, that’s not always easy. In those cases, a short text-based summary of the content makes a good preheader.

Context element interplay

The rules outlined above are great guidelines, but remember that rules are there to be broken (well, sometimes …). As long as you understand the big picture, sender, subject, and preheader can still work together effectively even if some of those rules are bent. A bit. For example, if you ensure that you have relevant and unique content in your preheader for the preview, you may be able to get away with using the same subject for each recurring email. Alternatively, there may be cases where you need to repeat the sender name in the subject.

The key is that when you’re crafting these elements, make sure you’re looking at how they work together. Sometimes a subject can be shortened by moving some content into the preheader. Alternatively, you may be able to use a more specific sender to reduce the need for a word or two in the subject. The application of these guidelines isn’t black and white. Simply being aware of the recipient’s experience is the most important factor when crafting the elements they’ll see in preview panes.

Finally, a word on monitoring and testing

Simple changes to the sender, subject, and preheader can significantly impact open rates and recipient experience. One critical thing to remember, however, is that even when an improvement seems like a guaranteed winner, monitoring and testing things like open rates and click rates is critical to validate any changes made. And since these elements can either play against each other or work together, it’s best to test combinations and view all three elements holistically.

The value of getting this right really is in the details. Despite their tendency to be overlooked, taking the time to craft helpful sender names and addresses, subject lines, and preheaders can drastically improve the experience for your email recipients. It’s a small investment that’s definitely worth your time.

Discovery on a Budget: Part III

Thu, 06/21/2018 - 06:59

Sometimes we have the luxury of large budgets and deluxe research facilities, and sometimes we’ve got nothing but a research question and the determination to answer it. Throughout the “Discovery on a Budget” series we have discussed strategies for conducting discovery research with very few resources but lots of creativity. In part 1 we discussed the importance of a clearly defined problem hypothesis and started our affordable research with user interviews. Then, in part 2, we discussed competing hypotheses and “fake-door” A/B testing when you have little to no traffic. Today we’ll conclude the series by considering the pitfalls of the most tempting and seemingly affordable research method of all: surveys. We will also answer the question “when are you done with research and ready to build something?”

A quick recap on Candor Network

Throughout this series I’ve used a budget-conscious, and fictitious, startup called Candor Network as my example. Like most startups, Candor Network started simply as an idea:

I bet end-users would be willing to pay directly for a really good social networking tool. But there are lots of big unknowns behind that idea. What exactly would “really good” mean? What are the critical features? And what would be the central motivation for users to try yet another social networking tool? 

To kick off my discovery research, I created a hypothesis based on my own personal experience: that a better social network tool would be one designed with mental health in mind. But after conducting a series of interviews, I realized that people might be more interested in a social network that focused on data privacy as opposed to mental health. I captured this insight in a second, competing hypothesis. Then I launched two corresponding “fake door” landing pages for Candor Network so I could A/B test both ideas.

For the past couple of months I’ve run an A/B test between the two landing pages where half the traffic goes to version A and half to version B. In both versions there is a short, two-question survey. To start our discussion today, we will take a more in-depth look at this seemingly simple survey, and analyze the results of the A/B test.

Surveys: Proceed with caution

Surveys are probably the most used, but least useful, research tool. It is ever so tempting to say, “let’s run a quick survey” when you find yourself wondering about customer desires or user behavior. Modern web-based tools have made surveys incredibly quick, cheap, and simple to run. But as anyone who has ever tried running a “quick survey” can attest, they rarely, if ever, provide the insight you are looking for.

In the words of Erika Hall, surveys are “too easy.” They are too easy to create, too easy to disseminate, and too easy to tally. This inherent ease masks the survey’s biggest flaw as a research method: it is far, far too easy to create biased, useless survey questions. And when you run a survey littered with biased, useless questions, you either (1) realize that your results are not reliable and start all over again, or (2) proceed with the analysis and make decisions based on biased results. If you aren’t careful, a survey can be a complete waste of time, or worse, lead you in the wrong direction entirely.

However, sometimes a survey is the only method at your immediate disposal. You might be targeting a user group that is difficult to reach through other convenience- or “guerilla”-style means (think of products that revolve around taboo or sensitive topics—it’s awfully hard to spring those conversations on random people you meet in a coffee shop!). Or you might work for a client that is reluctant to help locate research participants in any way beyond sending an email blast with a survey link. Whatever the case may be, there are times when a survey is the only step forward you can take. If you find yourself in that position, keep the following tips in mind.

Tip 1: Try to stick to questions about facts, not opinions

If you were building a website for ordering dog food and supplies, a question like “how many dogs do you own?” can provide key demographic information not available through standard analytics. It’s the sort of question that works great in a short survey. But if you need to ask “why did you decide to adopt a dog in the first place?” then you’re much better off with a user interview.

If you try asking any kind of “why” question in a survey, you will usually end up with a lot of “I don’t know” and otherwise blank responses. This is because people are, in general, not willing to write an essay on why they’ve made a particular choice (such as choosing to adopt a dog) when they’re in the middle of doing something (like ordering pet food). However, when people schedule time for a phone call, they are more than willing to talk about the “whys” behind their decisions. In short, people like to talk about their opinions but are generally too busy (or too lazy) to write about them. Save the why questions for later (and see Tip 5).

Tip 2: Avoid asking about the future

People live in the present, and only dream about the future. There are a lot of things outside of our control that affect what we will buy, eat, wear, and do in the future. Also, sometimes the future selves we imagine are more aspirational than factual. For example, if you were to ask a random group of people how many times they plan to go to the gym next month, you might be (not so) surprised to see that their prediction is significantly higher than the actual number. It is much better to ask “how many times did you go to the gym this week?” as an indicator of general gym attendance than to ask about any future plans.

I asked a potentially problematic, future-looking question in the Candor Network landing page survey:

How much would you be willing to pay, per year, for Candor Network?

  • Would not pay anything
  • $1
  • $5
  • $10
  • $15
  • $20
  • $25
  • $30
  • Would pay more

In this question, I’m asking participants to think about how much money they would like to spend in the future on a product that doesn’t exist yet. This question is problematic for a number of reasons, but the main issue is that people, in general, don’t know how they really feel about pricing until the exact moment they are poised to make a purchase. Relying on this question to, say, develop my income projections for an investor pitch would be unwise to say the least. (I’ll discuss what I actually plan to do with the answers to this question in the next tip.)

Tip 3: Know how you are going to analyze responses before you launch the survey

A lot of times, people will create and send out a survey without thinking through what they are going to do with the results once they are in hand. Depending on the length and type of survey, the analysis could take a significant amount of time. Also, if you were hoping to answer some specific questions with the survey data, you’ll want to make sure you’ve thought through how you’ll arrive at those answers. I recommend that while you are drafting survey questions, you also simultaneously draft an analysis plan.

In your analysis plan, think about what you are ultimately trying to learn from each survey question. How will you know when you’ve arrived at the answer? If you are doing an A/B test like I am, what statistical analysis should you run to see if there is a significant difference between the versions? You should also think about what the numbers will look like and what kinds of graphs or tables you will need to build. Ultimately, you should try to visualize what the data will look like before you gather it, and plan accordingly.

For example, when I created the two survey questions on the Candor Network landing pages, I created a short analysis plan for each. Here is what those plans looked like:

Analysis plan for question 1: “How much would you be willing to pay per year for Candor Network?”

Each response will go into one of two buckets:

  • Bucket 1: said they would not pay any money;
  • and Bucket 2: said they might pay some money.

Everyone who answered “Would not pay anything” goes in Bucket 1. Everyone else goes in Bucket 2. I will interpret every response that falls into Bucket 2 as an indicator of general interest (and I’m not going to put any value on the specific answer selected). To see whether any difference in response between landing page A and B is statistically significant (i.e., attributable to more than just chance), I will use a chi-square test. (Side note: There are a number of different statistical tests we could use in this scenario, but I like chi-square because of its simplicity. It is a test that’s easy for non-statisticians to run and understand, and it errs on the conservative side.)

Analysis plan for question 2: “Would you like to be a beta tester or participate in future research?”

The question only has two possible responses: “yes” and “no.” I will interpret every “yes” response as an indicator of general interest in the idea. Again, a chi-square test will show if there is a significant difference between the two landing pages. 

Tip 4: Never rely on a survey by itself to make important decisions

Surveys are hard to get right, and even when they are well made, the results are often approximations of what you really want to measure. However, if you pair a survey with a series of user interviews or contextual inquiries, you will have a richer, more thorough set of data to analyze. In the social sciences, this is called triangulation. If you use multiple methods to triangulate and study the same phenomenon, you will get a richer, more complete picture. This leads me to my final tip …

Tip 5: End every survey with an opportunity to participate in future research

There have been many times in my career when I have launched surveys with only one objective in mind: to gather the contact information of potential study participants. In cases like these, the survey questions themselves are not entirely superfluous, but they are certainly secondary to the main research objective. Shortly after the survey results have been collected, I will select and email a few respondents, inviting them to participate in a user interview or usability study. If I planned on continuing Candor Network, this is absolutely what I would do.

Finally, the results

According to Google Optimize, there were a total of 402 sessions in my experiment. Of those sessions, 222 saw version A and 180 saw version B. Within the experiment, I tracked how often the “submit” button on the survey was clicked, and Google Optimize tells me “no clear leader was found” on that measure of engagement. Roughly an equal number of people from each condition submitted the survey.

Here is a breakdown of the number of sessions and survey responses each condition received:

                     Version A:              Version B:
                     better mental health    privacy and data security    Total
Sessions             222                     180                          402
Survey responses     76                      68                           144

When we look at the actual answers to the survey questions, we start to get some more interesting results.

             Bucket 1:                  Bucket 2:
             would not pay any money    might pay some money
Version A    25                         51
Version B    14                         54

Breakdown of question 1, “How much would you be willing to pay per year for Candor Network?”

Plugging these figures into my favorite chi-square calculator, I get the following values: chi-square = 2.7523, p = 0.097113. In general, bigger chi-square values indicate greater differences between the groups. And the p-value is less than 0.1, which suggests that the result is marginally significant (i.e., the result is probably not due to random chance). This gives me a modest indicator that respondents in group B, who saw the “data secure” version of the landing page, are more likely to fall into the “might pay some money” bucket.
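
If you’d rather script the test than rely on an online calculator, here’s a minimal sketch in JavaScript (the function name is mine); it computes the same chi-square statistic for a two-by-two table, without the Yates continuity correction, and leaves the p-value lookup to a chi-square table with one degree of freedom:

    // Chi-square statistic for a 2x2 table of observed counts.
    // a, b = Version A (bucket 1, bucket 2); c, d = Version B (bucket 1, bucket 2).
    function chiSquare2x2(a, b, c, d) {
      const n = a + b + c + d;
      const observed = [[a, b], [c, d]];
      const rows = [a + b, c + d]; // row totals
      const cols = [a + c, b + d]; // column totals
      let chi2 = 0;
      for (let i = 0; i < 2; i++) {
        for (let j = 0; j < 2; j++) {
          // Expected count for a cell: (row total * column total) / grand total
          const expected = (rows[i] * cols[j]) / n;
          chi2 += (observed[i][j] - expected) ** 2 / expected;
        }
      }
      return chi2;
    }

    console.log(chiSquare2x2(25, 51, 14, 54).toFixed(4)); // "2.7523"; p ≈ 0.0971 at 1 degree of freedom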

And when we look at the breakdown and chi-square calculation of question two, we see similar results.

             No    Yes
Version A    24    52
Version B    13    55

Breakdown of question 2, “Would you like to be a beta tester or participate in future research?”

The chi-square = 2.9189, and p = 0.087545. Again, I have a modest indicator that respondents in group B are more likely to say yes to participating in future research. (If you’d like to learn more about how to run and interpret chi-square tests, the Interaction Design department at the University of California, San Diego has provided a great video tutorial.)

How do we know when it’s time to move on?

I wish I could provide you with a formula for calculating the exact moment when the research is done and it’s time to move on to prototyping, but I’m afraid no such formula exists. There is no definitive way to determine how much research is enough. Every round of research teaches you something new, but you are always left with more questions. As the saying often attributed to Albert Einstein goes, “the more I learn, the more I realize how much I don’t know.”

However, with experience you come to recognize certain hallmarks that indicate it’s time to move on. Erika Hall, in her book Just Enough Research, described it as feeling a “satisfying click.” She says, “[O]ne way to know you’ve done enough research is to listen for the satisfying click. That’s the sound of the pieces falling into place when you have a clear idea of the problem you need to solve and enough information to start working on a solution.” (Just Enough Research, p. 36.)

When it comes to building a product on a budget, you may also want to consider that research is relatively cheap compared to the cost of design and development. The rule I tend to follow is this: continue conducting discovery research until the questions you really want answered can only be answered by putting something in front of users. That is, wait to build something until you absolutely have to. Learn as much as you can about your target market and user base until the only way forward is to put some sketches on paper.

With Candor Network, I’m not quite there yet. There is still plenty of runway to cover in the research cycle. Now that I know that data privacy is a more motivating reason to consider paying for a social networking tool, I need to work out what other features will be essential. In the next round of research, I could do think-aloud studies and ask participants to give me a tour of their Facebook and other social media pages. Or I could continue with more interviews, but recruit from a different source and reach a broader demographic of participants. Regardless of the exact path I choose to take from here, the key is to focus on what the requirements would be for the ultra-private, data-secure social network that users would value.

A few parting words

Discovery research helps us learn more about the users we want to help and the problems they need a solution for. It doesn’t have to be expensive either, and it definitely isn’t something that should be omitted from the development cycle. By starting with a problem hypothesis and conducting multiple rounds of research, we can ultimately save time and money. We can move from gut instincts and personal experiences to a tested hypothesis. And when it comes time to launch, we’ll know it’s from a solid foundation of research-backed understanding.

Recommended reading

If you’re testing the waters on a new idea and want to jump into some (budget-friendly) discovery research, here are some additional resources to help you along:


The Problem with Patterns

Thu, 06/14/2018 - 07:01

It started off as an honest problem with a brilliant solution. As the ways we use the web continue to grow and evolve, we, as its well-intentioned makers and stewards, needed something better than making simple collections of pages over and over again.

Design patterns, component libraries, or even style guides have become the norm for organizations big and small. Having reusable chunks of UI aids consistency and usability for users, and it lends familiarity and efficiency to designers. This in turn frees up designers’ time to focus on bigger problems, like solving for their users’ needs. In theory.

The use of design patterns, regardless of their scope or complexity, should never stifle creativity or hold back design progress. In order to achieve what they promise, they should be adaptable, flexible, and scalable. A good design pattern is undeterred by context, and most importantly, is unobtrusive. Again, in theory.

Before getting further into the weeds, let’s define what is meant by the term pattern here. You’re probably wondering what the difference is between all the different combinations of the same handful of words being used in the web community.

Initially, design patterns were small pieces of a user interface, like buttons and error messages.

Buttons and links from Co-op

Design patterns go beyond the scope and function of a style guide, which deals more with documenting how something should look, feel, or work. Type scales, design principles, and writing style are usually found within the bounds of a style guide.

More recently, the scope of design patterns has expanded as businesses and organizations look to work more efficiently and consistently, especially if it involves a group or family of products and services. Collections of design patterns are then commonly used to create reusable components of a larger scope, such as account sign-up, purchase checkout, or search. This is most often known as the component library.

Tabs from BBC Global Experience Language (GEL)

The final evolution of all these is known as a design system (or a design language). This encompasses the comprehensive set of design standards, documentation, and principles. It includes the design patterns and components to achieve those standards and adhere to those principles. More often than not, a design system is still used day-to-day by designers for its design patterns or components.

The service design pattern

A significant reason why designing for the web has changed so irrevocably is that more and more products and services now live on it. This is why service design is becoming much more widely valued and sought after in the industry.

Service patterns—unlike all of the above patterns, which focus on relatively small and compartmentalized parts of a UI—go above and beyond. They aim to incorporate an entire task or chunk of a user’s journey. For example, a credit card application can be represented by some design patterns or components, but the process of submitting an application to obtain a credit card is a service pattern.

Pattern for GOV.UK start pages

If thinking in terms of an analogy like atomic design, service patterns don’t fit any one category (atoms, molecules, organisms, etc.). For example, a design pattern for a form can be described as a molecule. It does one thing and does it well. This is the beauty of a good design pattern—it can be taken without context and used effectively across a variety of situations.

Service design patterns attempt to combine the goals of both design patterns and components by creating a reusable task. In theory.

So, what’s the problem?

The design process is undervalued

Most obvious misuses of patterns are easy to avoid with good documentation, but do patterns actually result in better-designed products and services?

Having a library of design components can sometimes give the impression that all the design work has been completed. Designers or developers can revert to using a library as clip art to create “off-the-shelf” solutions. Projects move quickly into development.

Although patterns do help teams hesitate less and build things in shorter amounts of time, it is how and why a group of patterns and components are stitched together that results in great design.

For example, when designing digital forms, using button and input fields patterns will improve familiarity and consistency, without a doubt. However, there is no magic formula for the order in which questions on a form should be presented or for how to word them. To best solve for a user’s needs, an understanding of their goals and constraints is essential.

Patterns can even cause harm when they’re applied without considering a user’s context and the bearing it may have on their decision-making process.

For example, if a user will likely be filling out a form under stress (this can be anything from using a weak connection, to holding a mobile phone with one hand, to being in a busy airport), an interface should prioritize minimizing cognitive load over the number of steps or clicks needed to complete it. This decision architecture cannot be predetermined using patterns.

Break up tasks into multiple steps to reduce cognitive load

Patterns don’t start with user needs

Components and service patterns have a tendency to serve the needs of the business or organization, not the user.

Pattern               Service                  User need                     Organization need
Apply for something   Get a fishing license    Enjoy the outdoors            Keep rivers clean; generate income
Apply for something   Apply for a work visa    Work in a different country   Check eligibility
Create an account     Online bank account      Save money                    Security; fraud prevention
Create an account     Join a gym               Lose weight                   Capture customer information
Register              Register to vote         Make my voice heard           Check eligibility
Register              Online shopping          Find my order                 Security; marketing

If you are simply designing a way to apply for a work visa, having form field and button patterns is very useful. But any meaningful testing sessions with users will speak to how confident they felt in obtaining the necessary documents to work abroad, not whether they could simply locate a “submit” button.

User needs are conflated with one another

Patterns are also sometimes a result of grouping together user needs, essentially creating a set of fictional users that in reality do not exist. Users usually have one goal that they want to achieve efficiently and effectively. Assembling a group of user needs can result in a complex system trying to be everything to everyone.

For example, when creating a design pattern for registering users to a service across a large organization, the challenge can very quickly move from:

“How can I check the progress of my application?”
“Can I update or change my delivery address?”
“Can I quickly repeat or renew an application?”

to:

“How can we get all the details we need from users to allow them to register for an account?”

The individual user needs are forgotten and replaced with a combined assumed need to “register for an account” in order to “view a dashboard.” In this case, the original problem has even been adapted to suit the design pattern instead of the other way around. 

Outcomes are valued over context

Even if they claim to address user context, the success of a service pattern might still be measured through an end result, output, or outcome. Situations, reactions, and emotions are still overlooked.

Take mass transit, for example. When the desired outcome is to get from Point A to Point B, we may find that a large number of users need to get there quickly, especially if they’re headed home from work. But we cannot infer from this need that the most important goal of transportation is speed. Someone traveling alone at night or in unfamiliar surroundings may place greater importance on safety or need more guidance and reassurance from the service.

Sometimes, service patterns cannot solve complex human problems like these. More often than not, an over-reliance on outcome-focused service patterns just defeats the purpose of building any empathy during the design process.

For example, date pickers tend to follow a similar pattern across multiple sectors, including transport, leisure, and healthcare. Widely-used patterns like this are intuitive and familiar to most users.

This does not mean that the same date picker pattern can be used seamlessly in any service. If a user is trying to book an emergency doctor appointment, the same patterns seen above are suddenly much less effective. Being presented with a full calendar of options is no longer helpful because choice is no longer the most valuable aspect of the service. The user needs to quickly see the first available appointment with minimal choices or distractions.

Digital by default

Because patterns are built for reuse, they sometimes encourage us to use them without much question, particularly assuming that digital technology is the solution.

A service encompasses everything a user needs to complete their goal. By understanding the user’s entire journey, we start to uncover their motivations and can begin to think about new, potentially non-digital ways to solve their problems.

For example, the Canadian Immigration Service receives more than 5.2 million inquiries a year by email or phone from people looking for information about applications.

One of the most common reasons behind the complaints was the time it took to complete an application over the phone. Instead of just taking this data and speeding up the process with a digital form, the product team focused on understanding the service’s users and their reasons behind their reactions and behaviors.

For example, calls received were often bad-tempered, despite callers being greeted by a recorded message informing them of the length of time it could take to process an application, and advising them against verbally abusing the staff. 

The team found that users were actually more concerned with the lack of information than they were with the length of time it took to process their application. They felt confused, lost, and clueless about the immigration process. They were worried they had missed an email or letter in the mail asking for missing documentation.

In response to this, the team decided to change the call center’s greeting, setting the tone to a more positive and supportive one. Call staff also received additional training and began responding to questions even if the application had not reached its standard processing time.

The team made sure to not define the effectiveness of the design by how short new calls were. Although the handling time for each call went up by 16 percent, follow-up calls dropped by a whopping 30 percent in fewer than eight weeks, freeing up immigration agents’ time to provide better quality information to callers.

Alternatives to patterns

As the needs of every user are unique, every service is also unique. To design a successful service you need to have an in-depth understanding of its users, their motivations, their goals, and their situations. While there are numerous methodologies to achieve this, a few key ones follow:

Framing the problem

Use research or discovery phases to unearth the real issues with the existing service or process. Contextual research sessions can help create a deeper understanding of users, which helps to ensure that the root cause of a problem is being addressed, not just the symptoms.

Journey maps

Journey maps are used to create a visual representation of a service through the eyes of the user. Each step a user takes is recorded against a timeline along with a series of details including:

  • how the user interacts with the service;
  • how the service interacts with the user;
  • the medium of communication;
  • the user’s emotions;
  • and service pain points.
Service teams, not product teams

Setting up specialist pattern or product teams creates a disconnect with users. There may be common parts to user journeys, such as sign-up or on-boarding, but having specialist design teams will ultimately not help an organization meet user (and therefore business) needs. Teams should consider taking an end-to-end, service approach.

Yes                   No
Mortgage service      Registration; Application
Passports service     Registration; Application
Tax-return service    Registration; Submit Information

Assign design teams to a full service rather than discrete parts of it

Be open and inclusive

Anyone on a wider team should be able to contribute to or suggest improvements to a design system or component library. If applicable, people should also be able to prune away patterns that are unnecessary or ineffective. This enables patterns to grow and develop in the most fruitful way.

Open-sourcing pattern libraries, like the ones managed by a11yproject.com or WordPress.org, is a good way to keep structure and process in place while still allowing people to contribute. The transparent and direct review process characteristic of the open-source spirit can also help reduce friction.

Across larger organizations, this can be harder to manage, and the time commitment can contradict the intended benefits. Still, some libraries, such as the Carbon Design System, exist and are open to suggestions and feedback.

In summary

A design pattern library can range from being thorough, trying to cover all the bases, to politely broad, so as to not step on the toes of a design team. But patterns should never sacrifice user context for efficiency and consistency. They should reinforce the importance of the design process while helping an organization think more broadly about its users’ needs and its own goals. Real-world problems are rarely solved with out-of-the-box solutions. Even in service design.

Orchestrating Experiences

Thu, 06/07/2018 - 07:04

A note from the editors: It’s our pleasure to share this excerpt from Chapter 2 (“Pinning Down Touchpoints”) of Orchestrating Experiences: Collaborative Design for Complexity by Chris Risdon and Patrick Quattlebaum, available now from Rosenfeld Media.

If you embrace the recommended collaborative approaches in your sense-making activities, you and your colleagues should build good momentum toward creating better and valuable end-to-end experiences. In fact, the urge to jump into solution mode will be tempting. Take a deep breath: you have a little more work to do. To ensure that your new insights translate into the right actions, you must collectively define what is good and hold one another accountable for aligning with it.

Good, in this context, means the ideas and solutions that you commit to reflect your customers’ needs and context while achieving organizational objectives. It also means that each touchpoint harmonizes with others as part of an orchestrated system. Defining good, in this way, provides common constraints to reduce arbitrary decisions and nudge everyone in the same direction.

How do you align an organization to work collectively toward the same good? Start with some common guidelines called experience principles.

A Common DNA

Experience principles are a set of guidelines that an organization commits to and follows from strategy through delivery to produce mutually beneficial and differentiated customer experiences. Experience principles represent the alignment of brand aspirations and customer needs, and they are derived from understanding your customers. In action, they help teams own their part (e.g., a product, touchpoint, or channel) while supporting consistency and continuity in the end-to-end experience. Figure 6.1 presents an example of a set of experience principles.

Figure 6.1: Example set of experience principles. Courtesy of Adaptive Path

Experience principles are not detailed standards that everyone must obey to the letter. Standards tend to produce a rigid system, which curbs innovation and creativity. In contrast, experience principles inform the many decisions required to define what experiences your product or service should create and how to design for individual, yet connected, moments. They communicate in a few memorable phrases the organizational wisdom for how to meet customers’ needs consistently and effectively. For example, look at the following:   

  • Paint me a picture.
  • Have my back.
  • Set my expectations.
  • Be one step ahead of me.
  • Respect my time.
Experience Principles vs Design Principles
Orchestrating experiences is a team sport. Many roles contribute to defining, designing, and delivering products and services that result in customer experiences. For this reason, the label experience—rather than design—better reflects the value of principles that inform and guide the organization. Experience principles are outcome oriented; design principles are process oriented. Everyone should follow and buy into them, not just designers.
Patrick Quattlebaum

Experience principles are grounded in customer needs, and they keep collaborators focused on the why, what, and how of engaging people through products and services. They keep critical insights and intentions top of mind, such as the following:

  • Mental Models: How part of an experience can help people have a better understanding, or how it should conform to their mental model.
  • Emotions: How part of an experience should support the customer emotionally, or directly address their motivations.
  • Behaviors: How part of an experience should enable someone to do something they set out to do better.
  • Target: The characteristics to which an experience should adhere.
  • Impact: The outcomes and qualities an experience should engender in the user or customer.
Focusing on Needs to Differentiate
Many universal or heuristic principles exist to guide design work. There are visual design principles, interaction design principles, user experience principles, and any number of domain principles that can help define the best practices you apply in your design process. These are lessons learned over time that have a broader application and can be relied on consistently to inform your work across even disparate projects.

It’s important to reinforce that experience principles specific to your customers’ needs provide contextual guidelines for strategy and design decisions. They help everyone focus on what’s appropriate to specific customers with a unique set of needs, and your product or service can differentiate itself by staying true to these principles. Experience principles shouldn’t compete with best practices or universal principles, but they should be honored as critical inputs for ensuring that your organization’s specific value propositions are met.
Chris Risdon

Playing Together

Earlier, we compared channels and touchpoints to instruments and notes played by an orchestra, but in the case of experience principles, it’s more like jazz. While each member of a jazz ensemble is given plenty of room to improvise, all players understand the common context in which they are performing and carefully listen and respond to one another (see Figure 6.2). They know the standards of the genre backward and forward, and this knowledge allows them to be creative individually while collectively playing the same tune.

Figure 6.2: Jazz ensembles depend upon a common foundation to inspire improvisation while working together to form a holistic work of art. Photo by Roland Godefroy, License

Experience principles provide structure and guidelines that connect collaborators while giving them room to be innovative. As with a time signature, they ensure alignment. Similar to a melody, they provide a foundation that encourages supportive harmony. Like musical style, experience principles provide boundaries for what fits and what doesn’t.

Experience principles challenge a common issue in organizations: isolated soloists playing their own tune to the detriment of the whole ensemble. While still leaving plenty of room for individual improvisation, they ask a bunch of solo acts to be part of the band. This structure provides a foundation for continuity in the resulting customer journey, but doesn’t overengineer consistency and predictability, which might prevent delight and differentiation. Stressing this balance of designing the whole while distributing effort and ownership is a critical stance to take to engender cross-functional buy-in.

To get broad acceptance of your experience principles, you must help your colleagues and your leadership see their value. This typically requires crafting specific value propositions and education materials for different stakeholders to gain broad support and adoption. Piloting your experience principles on a project can also help others understand their tactical use. When approaching each stakeholder, consider these common values:

  • Defining good: While different channels and media have their specific best practices, experience principles provide a common set of criteria that can be applied across an entire end-to-end experience.
  • Decision-making filter: Throughout the process of determining what to do strategically and how to do it tactically, experience principles ensure that customers’ needs and desires are represented in the decision-making process.
  • Boundary constraints: Because these constraints represent the alignment of brand aspiration and customer desire, experience principles can filter out ideas or solutions that don’t reinforce this alignment.
  • Efficiency: Used consistently, experience principles reduce ambiguity and the resultant churn when determining what concepts should move forward and how to design them well.
  • Creativity inspiration: Experience principles are very effective in sparking new ideas that you can be more confident will map back to customer needs. (See Chapter 8, “Generating and Evaluating Ideas.”)
  • Quality control: Through the execution lifecycle, experience principles can be used to critique touchpoint designs (i.e., the parts) to ensure that they align to the greater experience (i.e., the whole).

Pitching and educating aside, your best bet for creating good experience principles that get adopted is to avoid creating them in a black box. You don’t want to spring your experience principles on your colleagues as if they were commandments from above to follow blindly. Instead, work together to craft a set of principles that everyone can follow energetically.

Identifying Draft Principles

Your research into the lives and journeys of customers will produce a large number of insights. These insights are reflective. They capture people’s current experiences—such as their met and unmet needs, how they frame the world, and their desired outcomes. To craft useful and appropriate experience principles, you must turn these insights inside out to project what future experiences should be.

When You Can’t Do Research (Yet)
If you lack strong customer insights (and the support or time to gather them), it’s still valuable to craft experience principles with your colleagues. The process of creating them provides insight into the various criteria that people are using to make decisions. It also sheds light on what your collaborators believe are the most important customer needs to meet. While these guidelines won’t be as sound as research-driven principles, your team can align around them to inform and critique your collective work—and then build the case for gathering the insights that will produce better experience principles.
Patrick Quattlebaum

From the Bottom Up

The leap from insights to experience principles will take several iterations. While you may be able to rattle off a few candidates based on your research, it’s well worth the time to follow a more rigorous approach in which you work from the bottom (individual insights) to the top (a handful of well-crafted principles). Here’s how to get started:

  • Reassemble your facilitators and experience mappers, as they are closest to what you learned in your research.
  • Go back to the key insights that emerged from your discovery and research. These likely have been packaged in maps, models, research reports, or other artifacts. You can also go back to your raw data if needed.
  • Write each key insight on a sticky note. These will be used to spark a first pass at potential principles.
  • For each insight, have everyone take a pass individually at articulating a principle derived from just that insight. You can use sticky notes again or a quarter sheet of 8.5” x 11” (A6) paper as a template to give people a little more structure (see Figure 6.3).
Figure 6.3: A simple template to generate insight-level principles quickly.
  • At this stage, you should coach participants to avoid finding the perfect words or a pithy way to communicate a potential principle. Instead, focus on getting the core lesson learned from the insight and what advice you would give others to guide product or service decisions in the future. Table 6.1 shows a couple of examples of what a good first pass looks like.
  • At this stage, don’t be a wordsmith. Work quickly to reframe your insights from something you know (“Most people don’t want to…”) to what should be done to stay true to this insight (“Make it easy for people…”).
  • Work your way through all the insights until everyone has a principle for each one.
Table 6.1: From insights to draft principles

Insight: Most people don’t want to do their homework first. They want to get started and learn what they need to know when they need to know it.
Principle: Make it easy for people to dive in and collect knowledge when it’s most relevant.

Insight: Everyone believes their situation (financial, home, health) is unique and reflects their specific circumstances, even if it’s not true.
Principle: Approach people as they see themselves: unique people in unique situations.

Finding Patterns

You now have a superset of individual principles from which a handful of experience principles will emerge. Your next step is to find the patterns within them. You can use affinity mapping to identify principles that speak to a similar theme or intent. As with any clustering activity, this may take a few iterations until you feel that you have mutually exclusive categories. You can do this in just a few steps:

  • Select a workshop participant to present the principles one by one, explaining the intent behind each one.
  • Cycle through the rest of the group, combining like principles and noting where principles conflict with one another. As you cluster, the dialogue the group has is as important as where the principles end up.
  • Once things settle down, you and your colleagues can take a first pass at articulating a principle for each cluster. A simple half sheet (8.5” x 4.25” or A5) template can give some structure to this step. Again, don’t get too precious with every word yet (see Figure 6.4). Get the essence down so that you and others can understand and further refine it with the other principles.
  • You should end up with several mutually exclusive categories with a draft principle for each.
Designing Principles as a System

No experience principle is an island. Each should be understandable and useful on its own, but together your principles should form a system. Your principles should be complementary and reinforcing. They should be able to be applied across channels and throughout your product or service development process. See the following “Experience Principles Refinement Workshop” for tips on how to critique your principles to ensure that they work together as a complete whole.

The Cult of the Complex

Thu, 05/31/2018 - 07:15

‘Tis a gift to be simple. Increasingly, in our line of work, ‘tis a rare gift indeed.

In an industry that extols innovation over customer satisfaction, and prefers algorithm to human judgement (forgetting that every algorithm has human bias in its DNA), perhaps it should not surprise us that toolchains have replaced know-how.

Likewise, in a field where young straight white dudes take an overwhelming majority of the jobs (including most of the management jobs) it’s perhaps to be expected that web making has lately become something of a dick measuring competition.

It was not always this way, and it needn’t stay this way. If we wish to get back to the business of quietly improving people’s lives, one thoughtful interaction at a time, we must rid ourselves of the cult of the complex. Admitting the problem is the first step in solving it.

And the div cries Mary

In 2001, more and more of us began using CSS to replace the non-semantic HTML table layouts with which we’d designed the web’s earliest sites. I soon noticed something about many of our new CSS-built sites. I especially noticed it in sites built by the era’s expert backend coders, many of whom viewed HTML and CSS as baby languages for non-developers.

In those days, whether from contempt for the deliberate, intentional (designed) limitations of HTML and CSS, or ignorance of the HTML and CSS framers’ intentions, many code jockeys who switched from table layouts to CSS wrote markup consisting chiefly of divs and spans. Where they meant list item, they wrote span. Where they meant paragraph, they wrote div. Where they meant level two headline, they wrote div or span with a classname of h2, or, avoiding even that tragicomic gesture toward document structure, wrote a div or span with verbose inline styling. Said div was followed by another, and another. They bred like locusts, stripping our content of structural meaning.

As an early adopter and promoter of CSS via my work in The Web Standards Project (kids, ask your parents), I rejoiced to see our people using the new language. But as a designer who understood, at least on a basic level, how HTML and CSS were supposed to work together, I chafed.

Cry, the beloved font tag

Everyone who wrote the kind of code I just described thought they were advancing the web merely by walking away from table layouts. They had good intentions, but their executions were flawed. My colleagues and I here at A List Apart were thus compelled to explain a few things.

Mainly, we argued that HTML consisting mostly of divs and spans and classnames was in no way better than table layouts for content discovery, accessibility, portability, reusability, or the web’s future. If you wanted to build for people and the long term, we said, then simple, structural, semantic HTML was best—each element deployed for its intended purpose. Don’t use a div when you mean a p.
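
As a sketch of the contrast (the markup is invented for illustration):

    <!-- Markup that mimics structure without meaning -->
    <div class="h2">Latest articles</div>
    <span class="item">Why semantics matter</span>

    <!-- Semantic HTML: each element deployed for its intended purpose -->
    <h2>Latest articles</h2>
    <ul>
      <li>Why semantics matter</li>
    </ul>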

This basic idea, and I use the adjective advisedly, along with other equally rudimentary and self-evident concepts, formed the basis of my 2003 book Designing With Web Standards, which the industry treated as a revelation, when it was merely common sense.

The message messes up the medium

When we divorce ideas from the conditions under which they arise, the result is dogma and misinformation—two things the internet is great at amplifying. Somehow, over the years, in front-end design conversations, the premise “don’t use a div when you mean a p” got corrupted into “divs are bad.”

A backlash in defense of divs followed this meaningless running-down of them—as if the W3C had created the div as a forbidden fruit. So, let’s be clear. No HTML element is bad. No HTML element is good. A screwdriver is neither good nor bad, unless you try to use it as a hammer. Good usage is all about appropriateness.

Divs are not bad. If no HTML5 element is better suited to an element’s purpose, divs are the best and most appropriate choice. Common sense, right? And yet.

Somehow, the two preceding simple sentences are never the takeaway from these discussions. Somehow, over the years, a vigorous defense of divs led to a defiant (or ignorant) overuse of them. In some strange way, stepping back from a meaningless rejection of divs opened the door to gaseous frameworks that abuse them.

Note: We don’t mind the abuse of divs so much. After all, they are not living things. We are not purists. It’s the people who use the stuff we design who suffer from our uninformed or lazy over-reliance on these div-ridden, gassy tools. And that suffering is what we protest. Div-ridden, overbuilt frameworks stuffed with mystery meat offer the developer tremendous power and agility, but that power comes at a price your users pay: a hundred tons of stuff your project likely doesn’t need, yet you force your users to download it anyway. And that bloat is not the only problem. For who knows what evil lurks in someone else’s code?

Two cheers for frameworks

If you entered web design and development in the past ten years, you’ve likely learned and may rely on frameworks. Most of these are built on meaningless arrays of divs and spans—structures no better than the bad HTML we wrote in 1995, however more advanced the resulting pages may appear. And what keeps the whole monkey-works going? JavaScript, and more JavaScript. Without it, your content may not render. With it, you may deliver more services than you intended to.

There’s nothing wrong with using frameworks to quickly whip up and test product prototypes, especially if you do that testing in a non-public space. And theoretically, if you know what you’re doing, and are willing to edit out the bits your product doesn’t need, there’s nothing wrong with using a framework to launch a public site. Notice the operative phrases: if you know what you’re doing, and are willing to edit out the bits your product doesn’t need.

Alas, many new designers and developers (and even many experienced ones) feel like they can’t launch a new project without dragging in packages from NPM, or Composer, or whatever, with no sure idea what the code therein is doing. The results can be dangerous. Yet here we are, training an entire generation of developers to build and launch projects with untrusted code.

Indeed, many designers and developers I speak with would rather dance naked in public than admit to posting a site built with hand-coded, progressively enhanced HTML, CSS, and JavaScript they understand and wrote themselves. For them, it’s a matter of job security and viability. There’s almost a fear that if you haven’t mastered a dozen new frameworks and tools each year (and by mastered, I mean used), you’re slipping behind into irrelevancy. HR folks who write job descriptions listing the ten thousand tool sets you’re supposed to know backwards and forwards to qualify for a junior front-end position don’t help the situation.

CSS is not broken, and it’s not too hard

As our jerry-built contraptions, lashed together with fifteen layers of code we don’t understand and didn’t write ourselves, start to buckle and hiss, we blame HTML and CSS for the faults of developers. This fault-finding gives rise to ever more complex cults of specialized CSS, with internecine sniping between cults serving as part of their charm. New sects spring up, declaring CSS is broken, only to splinter as members disagree about precisely which way it’s broken, or which external technology not intended to control layout should be used to “fix” CSS. (Hint: They mostly choose JavaScript.)

Folks, CSS is not broken, and it’s not too hard. (You know what’s hard? Chasing the ever-receding taillights of the next shiny thing.) But don’t take my word for it. Consider:

CSS Grid is here; it’s logical and fairly easy to learn. You can use it to accomplish all kinds of layouts that used to require JavaScript and frameworks, plus new kinds of layout nobody’s even tried yet. That kind of power requires some learning, but it’s good learning, the kind that stimulates creativity, and its power comes at no sacrifice of semantics, or performance, or accessibility. Which makes it web technology worth mastering.
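To make that concrete, here is a rough sketch (the selector is mine, invented for illustration) of a fluid, responsive grid of cards, the sort of layout that once demanded a framework or viewport-measuring JavaScript:

/* As many columns as will fit, each at least 15em wide,
   sharing the leftover space equally. No media queries,
   no framework, no JavaScript. */
.cards {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(15em, 1fr));
  grid-gap: 1.5em;
}

And if a browser doesn’t understand Grid, it simply ignores these declarations and renders the cards in normal flow: progressive enhancement at no extra cost.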

The same cannot be said for our deluge of frameworks and alternative, JavaScript-based platforms. As a designer who used to love creating web experiences in code, I am baffled and numbed by the growing preference for complexity over simplicity. Complexity is good for convincing people they could not possibly do your job. Simplicity is good for everything else.

Keep it simple, smarty

Good communication strives for clarity. Design is at its most brilliant when it appears most obvious—most simple. The question for web designers should never be “How complex can we make it?” But that’s what it has become. Just as, in pursuit of “delight,” we forget the true joy reliable, invisible interfaces can bring, so too, in chasing job security, do we pile on the platform requirements, forgetting that design is about solving business and customer problems … and that baseline skills never go out of fashion. As ALA’s Brandon Gregory, writing elsewhere, explains:

I talk with a lot of developers who list Angular, Ember, React, or other fancy JavaScript libraries among their technical skills. That’s great, but can you turn that mess of functions the junior developer wrote into a custom extensible object that we can use on other projects, even if we don’t have the extra room for hefty libraries? Can you code an image slider with vanilla JavaScript so we don’t have to add jQuery to an older website just for one piece of functionality? Can you tell me what recursion is and give me a real-world example?

from “I interview web developers. Here’s how to impress me.”
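In that spirit, here is a minimal sketch of the jQuery-free slider Gregory mentions. The markup it expects (a container with the id slider holding a handful of img elements) is my assumption, not his:

// A bare-bones image slider in vanilla JavaScript; no jQuery required.
// Assumes markup like <div id="slider"> containing several <img> elements.
var slides = document.querySelectorAll('#slider img');
var current = 0;

function showSlide(index) {
  // Show only the slide at the given index; hide the rest.
  for (var i = 0; i < slides.length; i++) {
    slides[i].style.display = (i === index) ? 'block' : 'none';
  }
}

function nextSlide() {
  current = (current + 1) % slides.length;
  showSlide(current);
}

showSlide(current);           // start on the first image
setInterval(nextSlide, 4000); // advance every four seconds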

Growing pains

There’s a lot of complexity to good design. Technical complexity. UX complexity. Challenges of content and microcopy. Performance challenges. This has never been and never will be an easy job.

Simplicity is not easy—not for us, anyway. Simplicity means doing the hard work that makes experiences appear seamless—the sweat and torture-testing and failure that eventually, with enough effort, yields experiences that seem to “just work.”

Nor, in lamenting our industry’s turn away from basic principles and resilient technologies, am I suggesting that CDNs and Git are useless. Or wishing that we could go back to FTP—although I did enjoy the early days of web design, when one designer could do it all. I’m glad I got to experience those simpler times.

But I like these times just fine. And I think you do, too. Our medium is growing up, and it remains our great privilege to help shape its future while creating great experiences for our users. Let us never forget how lucky we are, nor, in chasing the ever-shinier, lose sight of the people and purpose we serve.